Med-PaLM 2: Will This Google Research Help You Increase Your Healthspan?

Executive Summary Before we dive into Med-PaLM 2, let’s learn about the underlying technology, PaLM 2: 👇 What Is PaLM 2? Google’s PaLM 2 is a next-generation language model with enhanced multilingual understanding, reasoning, and coding capabilities. The Advent of PaLM 2 We recently published an article about the fact that Google actually developed the … Read more

Top 10 Python Libraries to Create Your Telegram Bot Easily (GitHub)

As a Python developer interested in building Telegram bots for pure fun and enjoyment, I bring you my curated list of top Python libraries that streamline bot creation. 1. python-telegram-bot The python-telegram-bot library is one of the most straightforward ways to build bots for the Telegram app. Easy installation: 👇 pip install python-telegram-bot --upgrade It … Read more

Ten Sustainable Career Paths in the Post-AI Economy

In this exploration, I’ve handpicked ten resilient careers for you. These jobs aren’t just surviving in the wake of AIβ€”they’re thriving, proving that the human touch remains invaluable in our evolving digital world. Let’s dive in and discover how these roles are carving out a sustainable future in the post-AI economy. Short Overview Job Short … Read more

Towards Reverse Engineering Matplotlib Code From Images

I tried a few helpful applications of Google Bard’s image recognition capabilities for coders. I don’t know about you, but I often see beautiful plots (e.g., in research papers or data science reports) and wonder how I could recreate them. Well, Google Bard to the rescue! ✅ Reverse Engineer Exponential Plot in 2D First, let’s … Read more

No, GPT-4 Doesn’t Get Worse Over Time (FUD Debunked)

There has been a lot of drama on Twitter about the new Stanford and UC Berkeley collaboration paper titled “How Is ChatGPT’s Behavior Changing over Time?” (source) The paper’s authors selectively provide examples where newer versions of GPT-4 seem to perform “worse” than older versions and make “formatting mistakes”. The first evaluation they provide is the … Read more

Llama 2: How Meta’s Free Open-Source LLM Beats GPT-4!

Meta (formerly Facebook) has released Llama 2, a new large language model (LLM) that is trained on 40% more training data and has twice the context length compared to its predecessor, Llama. Llama 2 is open-source, so researchers and hobbyists can build their own applications on top of it. Llama 2 is trained on a … Read more

The AI Fact Generator: A Tapping Party for Your Brain

This AI Fact Generator will randomly generate a fact about artificial intelligence, machine learning, or data science. Click the button to see a new fact! 👇 Want to learn more about AI, ML, and DS? Join our free email academy with 150,000+ readers and download my simple coding cheat sheet. You’ll … Read more

Google Bard Extracts Code From Images (PNG, JPEG, WebP)

Consider the following screenshot of a code snippet: Let’s say I want to extract the code from the image. I don’t want to type it out. What can I do? I just discovered that Google Bard has sufficient image recognition skills that you can upload images and ask it to extract the code from the … Read more

50 Activities a Humanoid Bot Could Do [Table]

I just completed an article on the possible stock price for Tesla if the Optimus bot reaches world domination. See here: πŸ“ˆ Recommended: Tesla Bot Valuation: $15,625 per Share Possible? To support the argument, here is the table of 50 activities that a humanoid robot could do: Activity Hourly Rate ($) Difficulty Required Skills Energy … Read more

Tesla Bot Valuation: $15,625 per Share Possible?

I’ll present my personal valuation thesis for the Tesla stock in this article, assuming the Optimus program will succeed. My optimism caused me to put ~80% of my stock investment portfolio into TSLA. Learning From the Smartphone Market 200 million people decide to buy an iPhone every year (source): 1.5 … Read more

Microsoft Scales LLMs to a Mind-Boggling 1B (!) Token Context 🀯

The paper “LongNet: Scaling Transformers to 1,000,000,000 tokens” presents a machine learning breakthrough, particularly in handling and analyzing large amounts of text data. Simply put, this paper is about a new model called LongNet that can understand and analyze really long strings of text – up to 1 billion “tokens”, i.e., words or word pieces, at … Read more

How to Run Large Language Models (LLMs) in Your Command Line?

LLM is a command-line utility and Python library for interacting with large language models. In the latest release (v0.5), it offers support for self-hosted language models through plugins. Installation LLM can be installed using pip, pipx, or Homebrew. The syntax for each is as follows: Key Features You can see the possible models by running … Read more
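The excerpt cuts off before the install commands, so as a quick sketch: the `llm` tool supports all three installers mentioned above, and a prompt can then be run straight from the terminal (the exact model listing output depends on your installed version and plugins):

```shell
# Install the `llm` CLI -- pick whichever installer you prefer:
pip install llm
# pipx install llm
# brew install llm

# Configure an API key for the default (OpenAI) backend:
# llm keys set openai

# Then run a prompt directly from the command line:
# llm "Five creative names for a pet pelican"
```

After installation, `llm --help` shows the full set of subcommands.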