6 Easiest Ways to Get Started with Llama2: Meta’s Open AI Model

With the unexpected release of its advanced language model, LLaMA 2, Meta has opened up new avenues for commercial and research applications. Unlike its predecessor, this second iteration of the model is (primarily) free and open-source.

πŸ§‘β€πŸ’» Recommended: Llama 2: How Meta’s Free Open-Source LLM Beats GPT-4!

The model weights and source code that form the foundation of LLaMA 2 are readily available to download and use at no cost.

Here’s one of Llama2’s creations:

Haha, this is fun. I’d agree, but probably it’s just hallucinating, i.e., making up something I want to hear. 😊

πŸ™… Restrictions: Platforms boasting over 700 million monthly active users would need to acquire a license. Additionally, the model cannot be employed to augment or refine other extensive language models.

LLaMA 2 is Meta’s refined version of its initial 65-billion-parameter large language model, LLaMA, which was only available for non-commercial research purposes. The new version was trained on 40% more data, and fine-tuned versions with parameter counts ranging from 7 billion to 70 billion are available.

Here are six ways you can start using LLaMA 2:

Method 1: Download Llama 2 from Meta

You can access any of the Llama 2 models directly from Meta’s platform, but you must first provide personal details and agree to the community license agreement and acceptable use policy.

Following this, you will receive an email with a unique download URL, valid for 24 hours, to use with the download script from Meta’s GitHub repository.

Method 2: Download Llama 2 from Hugging Face

Alternatively, you can download the models from Hugging Face, a collaborative platform for machine learning models and datasets, after receiving access approval from Meta. For users based in the US, Amazon Web Services provides access via SageMaker JumpStart.
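Once your Hugging Face account has been granted access, you can load one of the chat models with the transformers library. Here is a minimal sketch, assuming you have run huggingface-cli login and have the transformers and accelerate packages installed:

```python
# Minimal sketch: load the 7B chat model from Hugging Face with transformers.
# Assumes Meta has approved your access request for the gated repo and that
# you are logged in (huggingface-cli login) or pass a token explicitly.
from transformers import AutoTokenizer, AutoModelForCausalLM

model_id = "meta-llama/Llama-2-7b-chat-hf"  # gated repo; access approval required

tokenizer = AutoTokenizer.from_pretrained(model_id)
# device_map="auto" places the weights on GPU if available (needs accelerate).
model = AutoModelForCausalLM.from_pretrained(model_id, device_map="auto")

prompt = "Explain list comprehensions in Python in one sentence."
inputs = tokenizer(prompt, return_tensors="pt").to(model.device)
outputs = model.generate(**inputs, max_new_tokens=100)
print(tokenizer.decode(outputs[0], skip_special_tokens=True))
```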

Method 3: Integrate LLaMA 2 on Microsoft Azure

As a part of a partnership with Microsoft, LLaMA 2 is also accessible via the Azure cloud computing platform. Azure subscribers can find LLaMA 2 in the Azure AI model catalog and build applications with Microsoft’s added safety features and content filtering tools.
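After deploying Llama 2 from the model catalog, you consume it like any other Azure ML online endpoint. The sketch below is hypothetical: the endpoint URL, key, and payload format are placeholders — copy the real values and request schema from the “Consume” tab of your deployed endpoint:

```python
# Hypothetical sketch: call a Llama 2 endpoint deployed from the Azure AI
# model catalog. URL, key, and payload below are placeholders -- replace
# them with the values shown for your own deployment in Azure ML Studio.
import requests

ENDPOINT_URL = "https://<your-endpoint>.inference.ml.azure.com/score"  # placeholder
API_KEY = "<your-endpoint-key>"  # placeholder

payload = {"input_data": {"input_string": ["What is the Finxter mission?"]}}

response = requests.post(
    ENDPOINT_URL,
    json=payload,
    headers={"Authorization": f"Bearer {API_KEY}"},
)
print(response.json())
```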

Method 4: Execute LLaMA 2 using Replicate’s API

Replicate, a platform that enables running machine learning models with limited coding knowledge, offers Llama 2 trial prompts. It facilitates fine-tuning and executing models in the cloud without the need for setting up GPUs. Think of it as “AI inference as a service”.

Just for fun, I asked it to create a poem on Finxter. See how Llama2 created a beautiful and accurate πŸ§‘β€πŸ’» poem on Finxter: πŸ‘‡

Guides to run the models through APIs using Node.js, Python, or HTTP are provided.
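In Python, a call via Replicate’s client can be as short as the sketch below. The model slug is an assumption — check the current Llama 2 listing on replicate.com — and you need a REPLICATE_API_TOKEN set in your environment:

```python
# Minimal sketch using Replicate's Python client (pip install replicate).
# Requires REPLICATE_API_TOKEN to be set in the environment.
import replicate

output = replicate.run(
    "meta/llama-2-70b-chat",  # assumed slug; verify on replicate.com
    input={"prompt": "Write a two-line poem about Python."},
)
# The model streams tokens; join them into the full response.
print("".join(output))
```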

For those lacking coding skills but curious about LLaMA 2’s capabilities, there are simpler options.

Method 5: Engage with LLaMA 2 via online chat

VC firm Andreessen Horowitz has set up a LLaMA 2 chatbot at llama2.ai, an independent demo that lets non-technical users interact with the AI. As with ChatGPT, users can submit questions or text-generation requests and switch between ‘balanced’, ‘creative’, and ‘precise’ chat modes.

Note that this demo is built with Streamlit; you could set up your own Streamlit app in a day or so and hook it up to a Llama 2 backend, as sketched below.
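Here is a rough sketch of what such a one-page app could look like, using Replicate for inference (any of the backends from the methods above would work); the model slug is again an assumption:

```python
# Rough sketch of a minimal Streamlit front end for Llama 2.
# Save as app.py and run with: streamlit run app.py
# Assumes REPLICATE_API_TOKEN is set in the environment.
import streamlit as st
import replicate

st.title("My Llama 2 Chat")

prompt = st.text_input("Ask Llama 2 something:")

if prompt:
    output = replicate.run(
        "meta/llama-2-70b-chat",  # assumed slug; verify on replicate.com
        input={"prompt": prompt},
    )
    st.write("".join(output))
```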

See here for a quick guide on Streamlit:

πŸ’‘ Recommended: How I Created an Audiobook App with Streamlit

Method 6: Test LLaMA 2’s chat features with Perplexity.ai

Perplexity.ai, a chatbot designed to function like a search engine, has its own LLaMA chatbot at llama.perplexity.ai. Here, users can switch between the 13-billion-parameter and 7-billion-parameter models to compare results.