The Open-Source Ecosystem Outruns Tech Giants: A Shift in AI Landscape

A leaked document titled “We Have No Moat, And Neither Does OpenAI” is causing ripples within the AI community.

Though its origin is associated with Google, the document stands out for its in-depth analysis and intriguing argument; its content is profoundly thought-provoking regardless of whether it truly originates from Google.

πŸ’‘ Quote: β€œBut the uncomfortable truth is, we aren’t positioned to win this arms race and neither is OpenAI. While we’ve been squabbling, a third faction has been quietly eating our lunch.”

The document contends that the open-source community, with projects such as LLaMA, AutoGPT, BabyAGI, and MPT-7B, to name just a few, is already outpacing tech giants such as Google and OpenAI in the race to develop ever-more-capable language models.

This article expands on the leaked Google document and a recent interview with developer and AI researcher Simon Willison.

The Open-Source Leap

Recent developments suggest a significant acceleration within the open-source community.

In the span of a couple of months, open-source models such as Meta’s LLaMA, Stanford Alpaca, and Vicuna have demonstrated the pace and quality of open-source advancements. Though these models may not yet match the capabilities of GPT-3.5 or GPT-4, they are closing in.

πŸ§‘β€πŸ’» Quick History Lesson: Facebook Lama, released on March 15th, started the trend by exhibiting capabilities parallel to ChatGPT. This was followed by Stanford Alpaca, which demonstrated a leap in quality by fine-tuning on Lama. Soon after, Vicuna emerged as a remarkable model that could run on regular hardware, including mobile phones.

The document argues that Google and OpenAI’s strategy of retraining their giant models from scratch every few months may be less effective than the open-source community’s approach of fine-tuning smaller models. Fine-tuning a smaller model on a regular laptop can be done in a couple of hours, dramatically accelerating the pace of innovation.
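To make this concrete, here is a minimal sketch of what such a laptop-scale fine-tuning run can look like using the Hugging Face transformers library. The model and dataset names are illustrative placeholders (any small causal language model and instruction dataset would do); this is not code from the leaked document.

```python
# Hypothetical sketch: fine-tuning a small open model on an instruction
# dataset. Model and dataset names are placeholder choices.
from datasets import load_dataset
from transformers import (AutoModelForCausalLM, AutoTokenizer,
                          DataCollatorForLanguageModeling, Trainer,
                          TrainingArguments)

model_name = "EleutherAI/pythia-160m"        # small enough for a laptop
tokenizer = AutoTokenizer.from_pretrained(model_name)
tokenizer.pad_token = tokenizer.eos_token    # causal LMs often lack a pad token
model = AutoModelForCausalLM.from_pretrained(model_name)

# A small slice of an instruction-tuning dataset keeps the run short.
data = load_dataset("tatsu-lab/alpaca", split="train[:1000]")
data = data.map(lambda b: tokenizer(b["text"], truncation=True, max_length=512),
                batched=True, remove_columns=data.column_names)

trainer = Trainer(
    model=model,
    args=TrainingArguments(output_dir="finetune-out", num_train_epochs=1,
                           per_device_train_batch_size=4),
    train_dataset=data,
    data_collator=DataCollatorForLanguageModeling(tokenizer, mlm=False),
)
trainer.train()   # a run like this finishes in hours, not weeks
```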

πŸ§‘β€πŸ’» There’s a Cambrian explosion of open-source software in LLM research. Nobody can stop it. Nobody can keep up with it. It will create new genetic mutations that far outpace the models created by BigTech. As a big corporation, you need to be aware that there will always be more talent outside your organization than within.

Open vs. Closed Ecosystems

The shift has led to a debate over the effectiveness of open versus closed ecosystems. Emad Mostaque, CEO of Stability AI, has suggested that closed-source alternatives can learn from open-source solutions. The leaked document goes a step further, suggesting that the open-source community may now be moving faster than its closed-source counterparts.

The open-source ecosystem can accommodate countless researchers worldwide innovating in parallel, often on the very problems tech giants are trying to solve. This makes competing with open-source models an increasingly difficult task for the closed ecosystem.

On-device Models and Future Possibilities

Another remarkable point of discussion is the potential of running these models on everyday devices:

πŸ€– Example: MiniGPT-4: The Latest Breakthrough in Language Generation Technology

The open-source Vicuna 13B, derived from Meta’s LLaMA, can run directly in the browser using WebGPU. Such a feat was unimaginable until recently. With the availability of various model-shrinking techniques, including quantization and pruning, powerful AI models can operate on devices that people carry in their pockets.
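To illustrate one of these shrinking techniques, here is a toy sketch of post-training weight quantization in plain NumPy: weights are stored as 8-bit integers plus a single scale factor, cutting memory fourfold compared to 32-bit floats. Real systems (such as llama.cpp’s 4-bit formats) are more sophisticated, but the core idea is the same.

```python
import numpy as np

def quantize_int8(w: np.ndarray):
    """Symmetric post-training quantization: int8 weights + one float scale."""
    scale = np.abs(w).max() / 127.0                  # largest weight maps to +/-127
    q = np.clip(np.round(w / scale), -127, 127).astype(np.int8)
    return q, scale

def dequantize(q: np.ndarray, scale: float) -> np.ndarray:
    """Recover an approximate float matrix at compute time."""
    return q.astype(np.float32) * scale

w = np.random.randn(1024, 1024).astype(np.float32)   # a toy weight matrix
q, scale = quantize_int8(w)
print(w.nbytes // q.nbytes)                          # 4x memory reduction
print(np.abs(w - dequantize(q, scale)).max())        # small rounding error
```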

🀯 Imagine putting LLMs into every object in your environment so that every object in your everyday life becomes intelligent in an instant. And as the open-source research continues, the intelligence (read: problem-solving capability) of our environment sits on an exponentially increasing curve.

Despite the compelling progress of the open-source community, Alessio Fanelli cautions that the race isn’t solely about creating the best model but also about building the best product around it. He argues that the cost of running infrastructure for these models at a large scale can still be prohibitive.

Nevertheless, the AI landscape is undeniably evolving. Open-source models are reshaping the industry by democratizing access to AI, stimulating faster innovation, and potentially transforming the competitive dynamics among tech giants.

The Impact of Low-Rank Adaptation (LoRA) and How It Reshapes AI

LoRA is an optimization technique in which, instead of retraining an entire model, the pretrained weights are frozen and the weight update is approximated by the product of two much smaller low-rank matrices; only those small matrices are trained.

This results in significant savings of computational resources and memory: the bulk of the model stays frozen while a small, manageable set of parameters is fine-tuned. The method has been leveraged heavily in the Stable Diffusion community, where it is a popular way of training models on particular concepts and styles.
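Here is a minimal PyTorch sketch of the idea, simplified from the approach described in the LoRA paper (not the production implementation found in libraries like Hugging Face’s peft): the pretrained weight matrix is frozen, and only the two small low-rank factors are trained.

```python
import torch
import torch.nn as nn

class LoRALinear(nn.Module):
    """A frozen linear layer plus a trainable low-rank update: W x + (B A) x."""
    def __init__(self, base: nn.Linear, r: int = 8, alpha: float = 16.0):
        super().__init__()
        self.base = base
        for p in self.base.parameters():
            p.requires_grad_(False)              # freeze the pretrained weights
        # Low-rank factors: B (out x r) times A (r x in) approximates the update.
        self.A = nn.Parameter(torch.randn(r, base.in_features) * 0.01)
        self.B = nn.Parameter(torch.zeros(base.out_features, r))  # zero initial update
        self.scaling = alpha / r

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.base(x) + (x @ self.A.T @ self.B.T) * self.scaling

layer = LoRALinear(nn.Linear(4096, 4096))
trainable = sum(p.numel() for p in layer.parameters() if p.requires_grad)
total = sum(p.numel() for p in layer.parameters())
print(f"trainable: {trainable:,} of {total:,} ({trainable / total:.2%})")
```

Because only the low-rank factors are trained (well under 1% of the parameters in this example), the resulting adapter can be shared as a tiny file and stacked on top of a common base model, which is precisely the dynamic the leaked document highlights.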

This advancement has spurred a discourse around the defensibility of AI giants like Google and their hold on the AI ecosystem. The emerging consensus is that the open-source model offers numerous advantages.

Google possesses moats such as Gmail, Google Docs, and Google Calendar, but not in AI and LLM research, where it is trailing the open-source space. The talent drain from Google to open-source projects has raised serious concerns about Google’s ability to retain its leading position.

A critical advantage of open-source models is the collaborative development they allow. Bazaars, not cathedrals! With a universally agreed-upon base model, multiple adaptations can be built on top of it, increasing compatibility and functionality.

This has further implications for the development of newer models and architectures: even if a new design proves superior, the extensive work and investment that have gone into existing architectures like the transformer may deter a switch.

Data management is another crucial aspect, as data loops will become increasingly important for improving models. Data is less of a bottleneck thanks to projects like RedPajama, which is building an open reproduction of LLaMA’s training data so that alternatives can be trained entirely on openly licensed data. This supports the idea that large corporations with considerable resources may no longer have the upper hand they once did: smaller groups or individuals can now build on existing models, making the AI field far more democratic.

Mojo – A New Programming Language for AI

Mojo is a new programming language announced by Modular, the company of LLVM and Swift creator Chris Lattner.

Notable for its optimization capabilities, Mojo is designed as a superset of Python: it aims for full compatibility with Python while adding performance-oriented features.

One of the main benefits of Mojo is that it can hyper-optimize Python code without requiring developers to reprogram everything from scratch. In a demonstration of its capabilities, a few simple tweaks in Mojo amplified Python’s matrix-multiplication performance by roughly 2,000x, showcasing the extraordinary possibilities this language has to offer.
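For context, the baseline in benchmarks like that one is typically a naive pure-Python matrix multiplication along these lines (a hypothetical re-creation, not Modular’s actual demo code). Mojo’s pitch is that you speed such code up incrementally, by adding type annotations, vectorization, and parallelism, rather than rewriting it in C++.

```python
# Naive pure-Python matmul: the slow baseline such demos start from.
# In Mojo, the same logic can be incrementally annotated with static types
# and parallelized without leaving Python-like syntax.
def matmul(a, b):
    n, k, m = len(a), len(b), len(b[0])
    c = [[0.0] * m for _ in range(n)]
    for i in range(n):
        for j in range(m):
            s = 0.0
            for p in range(k):
                s += a[i][p] * b[p][j]   # triple loop: O(n*k*m) interpreted steps
            c[i][j] = s
    return c
```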

Mojo seems to have successfully combined the ease of high-level languages with deep performance optimization. Jeremy Howard, co-founder of fast.ai, has expressed immense excitement about Mojo, going as far as calling it the “biggest programming language advancement in decades”.

Mojo’s incremental adoption potential makes it a game-changer for large enterprises and researchers with extensive Python backgrounds. However, since the language is still in its initial stages, its adoption path and full potential remain to be seen.

This shift towards optimization beyond just models is critical. With model improvements becoming more incremental, there is a greater focus on better runtimes, languages, and tooling. The advent of Mojo represents a promising development in programming-language innovation, focusing on practical performance gains rather than ever-more-powerful type systems and functional features.

While the AI community waits for Mojo’s full launch, the discussion also brought up the possibility of Facebook (now Meta) open-sourcing the weights of its LLaMA model, which could have massive implications for open-source innovation in the field.

As the focus of AI evolution shifts towards runtime optimization and advanced languages, it will be interesting to observe how new developments like Mojo reshape the landscape of high-performance computing and AI.

πŸš€ Recommended: Did ChatGPT Just Kill Freelancing? 😡