I Tried Berkeley’s 🦍 Gorilla Large Language Model


UC Berkeley researchers just published a new paper and website, 🦍 Gorilla: Large Language Model Connected with Massive APIs, that essentially translates English-language queries into API calls.

To keep it simple, here’s my layman’s explanation of what the model is providing to you:

Input: An English language query.
Output: An API call (=code) that’s most relevant to your query.

In other words, Gorilla tackles a specific coding problem: issuing API calls correctly, without the argument errors and hallucinated parameters you've probably encountered when asking LLMs such as ChatGPT for code.

UC Berkeley’s Gorilla LLM is a large language model that is augmented with APIs from Torch Hub, TensorFlow Hub, and HuggingFace.

It is designed to translate English into API calls and outperforms GPT-4, ChatGPT, and Claude at this task.

The integration of the retrieval system with Gorilla demonstrates the potential for LLMs to use tools more accurately, keep up with frequently updated documentation, and consequently increase the reliability and applicability of their outputs.
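To make the retrieval idea concrete, here's my own illustration of the setup, not Gorilla's actual code: the retrieved API documentation snippet is appended to the user query before the combined prompt is sent to the model. The function name and the doc snippet are made up for the example.

```python
# Illustrative sketch (not Gorilla's implementation): retrieval-aware
# prompting appends the retrieved API doc to the user's query so the
# model can ground its API call in up-to-date documentation.

def build_retrieval_prompt(query: str, retrieved_doc: str) -> str:
    """Combine the user query with a retrieved API documentation snippet."""
    return (
        f"{query}\n"
        f"Use this API documentation for reference: {retrieved_doc}"
    )

prompt = build_retrieval_prompt(
    "Classify the sentiment of a movie review.",
    "transformers.pipeline('sentiment-analysis') -> returns a Pipeline",
)
print(prompt)
```

Because the documentation travels with the prompt at inference time, the model can keep up with frequently updated APIs without being retrained.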

Gorilla’s code, model, data, and demo are available at https://gorilla.cs.berkeley.edu (source: arXiv.org).

I tried it with a couple of queries in the provided notebook, and it looks promising, although the output is a bit cryptic:

# initialization see https://colab.research.google.com/drive/1DEBPsccVLF_aUnmD0FwPeHFrtdC0QIUP
# ...

prompt = "I would like to create an app that finds popular threads on Reddit."
print(get_gorilla_response(prompt, model="gorilla-7b-th-v0"))


{'domain': 'Classification',
 'api_call': "model = torch.hub.load('pytorch/fairseq', 'roberta.large', pretrained=True)",
 'api_provider': 'PyTorch',
 'explanation': 'Load the pretrained RoBERTa model from PyTorch Hub, specifically the RoBERTa model with a 300M parameter count, which can be fine-tuned for Reddit thread classification.',
 'code': "import torch\nroberta = torch.hub.load('pytorch/fairseq', 'roberta.large', pretrained=True)"}
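If you only care about the runnable snippet, the 'code' field is the useful part. Here's my own post-processing sketch (not part of Gorilla): a greedy regex pulls that field out of the dict-like string the notebook prints, which works even though the raw response mixes quoting styles.

```python
import re

# My own post-processing (not part of Gorilla): extract the runnable
# 'code' snippet from the dict-like string printed by the demo notebook.
raw = (
    "{'domain': 'Classification', "
    "'api_call': \"model = torch.hub.load('pytorch/fairseq', "
    "'roberta.large', pretrained=True)\", "
    "'api_provider': 'PyTorch', "
    "'code': 'import torch\\nroberta = torch.hub.load("
    "'pytorch/fairseq', 'roberta.large', pretrained=True)'}"
)

# Greedy match up to the closing '} so embedded quotes don't cut it short.
snippet = re.search(r"'code': '(.*)'\}", raw).group(1).replace("\\n", "\n")
print(snippet)
```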

You can try it yourself in the Colab notebook linked above.

The project website also provides an example demo video.

You can use the Gorilla LLM just like any other LLM and even integrate it in meta tools such as Langchain or Auto-GPT.
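Here's a hedged sketch of what such an integration might look like: the hosted demo speaks an OpenAI-style chat API, so a meta-tool could target it simply by pointing a standard client at a different base URL. The code below only assembles the request payload; the base URL is a placeholder, not the real demo endpoint.

```python
# Sketch of routing a meta-tool's request to Gorilla via an OpenAI-style
# chat endpoint. This only builds the payload; the api_base is a
# placeholder and the model name is taken from the demo notebook.

def gorilla_request(prompt: str,
                    model: str = "gorilla-7b-th-v0",
                    api_base: str = "http://<gorilla-demo-host>/v1") -> dict:
    """Assemble an OpenAI-style chat-completion payload for Gorilla."""
    return {
        "api_base": api_base,
        "model": model,
        "messages": [{"role": "user", "content": prompt}],
    }

payload = gorilla_request(
    "I would like to create an app that finds popular threads on Reddit."
)
print(payload["model"])
```

The appeal of this design is that any tool already speaking the OpenAI chat format needs no new code path: swapping the base URL is enough.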

👉 Recommended: Auto-GPT vs Langchain

In fact, I believe this model may be a good fit for integration into a meta-model, so that API calls just work seamlessly and you won't even realize you're using Gorilla. If you're like me, you probably don't want to maintain hundreds of task-specific LLMs for all your coding tasks anyway.

If you're just a user, you can safely ignore this model: its specific superpowers will be absorbed into the overall progress of the field, and your API calls will automagically get better.

If you're an LLM researcher, however, it's worth diving deeper into the paper's mechanism for integrating API documentation into the LLaMA model via fine-tuning.

💡 Recommended: Choose the Best Open-Source LLM with This Powerful Tool

OpenAI Glossary Cheat Sheet (100% Free PDF Download) 👇

Finally, check out our free cheat sheet on OpenAI terminology; many Finxters have told me they love it! ♥️

💡 Recommended: OpenAI Terminology Cheat Sheet (Free Download PDF)