Welcome back to part 2, where we will be implementing the ChatGPT API into our project.
This is part 2 of the following series:
- Part 0: Giggle – Creating a Joke Search Engine from Scratch with ChatGPT (0/6)
- Part 1: Giggle GPT Joke Search Engine – Basic Setup (1/6)
- Part 2: Giggle GPT Joke Search Engine – Implementing the ChatGPT API (2/6)
- Part 3: Giggle GPT Joke Search Engine – Django ORM & Data Saving (3/6)
- Part 4: Giggle GPT Joke Search Engine – Separate History Route & Delete Items (4/6)
- Part 5: Giggle GPT Joke Search Engine – Implementing Pagination (5/6)
- Part 6: Giggle GPT Joke Search Engine – ChatGPT Productivity & LLM Security (6/6)
Full 6-video course with downloadable PDF certificates: Giggle – Creating a Joke Search Engine from Scratch with ChatGPT
First of all, make sure your virtual environment is active, as described at the start of part 1.
Linux (bash):
$ source venv/bin/activate
Windows:
$ venv\Scripts\activate (runs the .bat file)
You should see (venv) in your terminal prompt.
Next, create an account at https://chat.openai.com/ if you do not have one yet; you can sign up quickly using your Google account.
Once you have created an account and are logged in, go to https://platform.openai.com/account/api-keys, click 'Create new secret key', and copy your new API key somewhere safe. Anyone with this key can use the API on your account and the cost will be added to your bill, so if you ever have a paid account a leaked key is bad news!
Note that while this is in essence a paid service, new accounts automatically get a $5 free trial, which is more than enough for our purposes; if you ever want to build a public-facing project, you will need to take the cost into account.
Each query uses a very small amount of money. We will be using the gpt-3.5-turbo LLM, which costs 0.2 cents ($0.002) per 1000 tokens (more on tokens later). Free alternatives are becoming more and more readily available, but for this project the $5 trial is plenty.
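To get a feel for these numbers, here is a quick back-of-the-envelope calculation (the 88-token request size is just an illustrative figure):

```python
# Rough cost estimate for gpt-3.5-turbo at $0.002 per 1000 tokens.
PRICE_PER_1K_TOKENS = 0.002  # in dollars


def estimate_cost(total_tokens):
    """Return the approximate dollar cost for a number of tokens."""
    return total_tokens / 1000 * PRICE_PER_1K_TOKENS


# A short joke request plus its response might use around 88 tokens.
cost = estimate_cost(88)
print(f"${cost:.6f}")  # a tiny fraction of a cent

# Even thousands of queries barely dent the $5 free trial.
print(f"Jokes per $5: {int(5 / cost)}")
```

As you can see, a single joke costs well under a cent, which is why the trial credit goes such a long way.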
API key safety measures
How then do we use our API key in the code safely while making sure no one else can see it, even if we potentially want to share the source code in the future?
Before we start working with any API we need to follow basic safety practices when working with personal passwords, sensitive data, or API keys. NEVER EXPOSE PASSWORDS OR KEYS IN YOUR CODE. Not even temporarily. Ahem… I apologize for shouting there!
Start by creating a '.gitignore' file in your base directory (the same level as the manage.py file), like below.
giggle (folder)
    project (folder)
    manage.py
    .gitignore
Open this .gitignore file and insert the following lines.
.env
venv/
*.sqlite3
__pycache__/
This is for Git version control exclusion. If you don't know much about version control yet, don't worry about it for now. Basically, we are telling Git to exclude these files if we ever upload or share the code via a version control platform like GitHub.
It will exclude any file named '.env', which is where we will store our API key in a moment. We also added 'venv/' to exclude the folder containing our virtual environment, '*.sqlite3' to exclude the database, and '__pycache__/' folders, which contain cache files rather than source code and need not be included if we ever share this code via Git.
Create another file in the main directory, in the same folder as the .gitignore file, this time with the name '.env'. Inside this .env file, insert your API key using the following syntax.
CHATGPT_API_KEY=123lalalasuperdupersecretapikeygoeshere
This file can only hold key-value pairs, one per line, and make sure not to use spaces anywhere. This is where we put sensitive data to keep it out of our source code repositories.
This file now holds our secret API key and will be automatically excluded if we upload our code to version control (because anyone who downloads your code should use their own API key).
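To demystify what the python-decouple package (which we install below) will do for us, here is a minimal, purely illustrative sketch of parsing such a key=value file by hand; the real library also handles comments, type casting, and fallback defaults:

```python
# Illustrative only: a tiny .env-style parser to show the key=value idea.
# (In the project we will use the python-decouple package instead of this.)
def parse_env(text):
    """Parse KEY=VALUE lines into a dictionary, skipping malformed lines."""
    values = {}
    for line in text.splitlines():
        line = line.strip()
        if not line or "=" not in line:
            continue  # skip blank or malformed lines
        key, _, value = line.partition("=")
        values[key] = value
    return values


env = parse_env("CHATGPT_API_KEY=123lalalasuperdupersecretapikeygoeshere")
print(env["CHATGPT_API_KEY"])
```

The point is simply that the file maps names to secret values, and our code looks the values up by name.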
ChatGPT
💡 ChatGPT is an LLM. A large language model is a type of artificial intelligence algorithm that uses deep learning techniques and huge amounts of data to summarize, generate, and predict new content. It has basically copy-pasted half the internet into its AI brain, and will try to come up with the next logical response to whatever you ask it to do.
ChatGPT breaks your input down into smaller pieces called tokens, which are just small chunks of text. They help the model understand and process language.
There’s a limit to the number of tokens the model can handle, and longer conversations take more tokens and time to process. The more tokens you use the higher the cost, but unless you run a massive website with many many users the cost will be limited and your free trial account will go a long way.
Our aim is to get ChatGPT to generate jokes for us based on the user's input. To achieve this, we will first give ChatGPT some instructions on what we want it to do. Go to https://chat.openai.com and start a new chat so we can try out the basic prompt setup for this project. Sometimes you will have to play around with the exact wording of your query and state very precisely what you want, to make sure you get exactly that.
It's quite literal, like coding. If you want a short reply, say so; if you want a response at a grade 2 reading level that is easy for children to read, say so. The model will not infer such details unless they are explicitly stated.
Let's plop our setup into the ChatGPT message box (you can play around with it or make your own variation if you like).
I will give you a subject each time for all the following prompts. You will return to me a joke, but it should not be too long (4 lines at most). Please do not provide an introduction like "Here's a joke for you" but get straight into the joke. Also the joke has to make logical sense, but word jokes are allowed.
So, we’re telling it in advance what to do the next time we prompt it. ChatGPT will respond something like:
Sure, I can provide you with short jokes based on the subjects you give me. Please go ahead and give me the first subject.
Don't worry about this; this initial response will not be generated in our API call later. Go ahead and type in a fun subject for the joke, like 'jelly beans'.
Why did the jelly bean go to school? To become a Smartie!
ChatGPT really likes word jokes, so yeah, you’ll be seeing a lot of those. You can experiment with different models and creativity settings later if you want, but for now, let’s get ChatGPT into our project.
Implementing the ChatGPT API into our Django application
First we need to install two Python modules.
$ pip install openai
$ pip install python-decouple
The first is the official openai API Python module.
The python-decouple package reads the .env variables. If you were deploying on a real server, you would set the secret values as environment variables on the server. For our development environment, we will use this module instead, which lets us easily retrieve our API key and other secret values from the .env file.
Remember, the .env file is excluded in .gitignore, so your private keys will not be uploaded to GitHub if you upload your code. Again, if you're not familiar with Git and version control yet, don't worry about it for now. Just start the good habit of never putting sensitive keys or information in your code. Your future self will thank you for it!
Ok, so now let's go and talk to the API. Inside your giggle folder, create a new file named apis.py.
giggle (folder)
    admin.py
    apis.py (create this file)
    apps.py
    ...
Open this new apis.py file and place the following code inside.
from decouple import config
import openai
These are just the basic imports of the two Python libraries we installed earlier. Now we have to set our API key in the openai module.
openai.api_key = config("CHATGPT_API_KEY")

JOKE_SETUP = "I will give you a subject each time for all the following prompts. You will return to me a joke, but it should not be too long (4 lines at most). Please do not provide an introduction like 'Here's a joke for you' but get straight into the joke."
Note that we use a function called config and pass in the name of our variable. This will load the environment variable containing your key from the .env file you created.
If someone else downloads your code (which will not include the .env file), they simply make a .env file of their own and paste their own key inside. We also put the setup prompt into a separate variable as it is quite long.
Now, below the preceding code, we will use the openai module to make an API call.
result = openai.ChatCompletion.create(
    model="gpt-3.5-turbo",
    n=1,
    temperature=0.4,
    max_tokens=200,
    messages=[
        {"role": "user", "content": JOKE_SETUP},
        {"role": "user", "content": "Why do birds always poop on my car?"},
    ],
)
print(result)
We use openai.ChatCompletion to create a new API call. The model parameter defines which model we use; we will start with gpt-3.5-turbo. The n parameter is the number of responses we want to generate, which is 1.
The temperature setting is a value between 0 and 2 that determines how deterministic the model is. What the LLM tries to do is predict which token should come next based on the previous tokens.
If we set the temperature to 0, it will play it extremely safe and generate the exact same joke for the same input every single time. If we set it higher, it will also consider lower-probability tokens, adding more creativity and variety to its responses.
If we go too high, it will start generating gibberish, because too many 'less likely to be correct' tokens are chosen. Setting this all the way to 2 will just endlessly generate nonsense like:
Michaelangelo to his apprentice> Raphael, we near musselage entire-day-bo's mushroom+parsley and pile on only delussulous crispures at trohpomore-charigo!
So yeah… Be sure not to overdo this setting if you want an actual response!
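To build some intuition, here is a toy sketch (not OpenAI's actual implementation) of how temperature reshapes token probabilities: candidate scores are divided by the temperature before being turned into probabilities, so low temperatures sharpen the distribution toward the top token and high temperatures flatten it.

```python
import math


def token_probabilities(scores, temperature):
    """Toy softmax-with-temperature over candidate token scores."""
    scaled = [s / temperature for s in scores]
    total = sum(math.exp(s) for s in scaled)
    return [math.exp(s) / total for s in scaled]


scores = [2.0, 1.0, 0.5]  # made-up scores for three candidate tokens

low = token_probabilities(scores, temperature=0.2)
high = token_probabilities(scores, temperature=2.0)

print(max(low))   # close to 1.0: the top token is picked almost every time
print(max(high))  # much lower: unlikely tokens now get a real chance
```

This is why a low temperature gives you the same safe joke over and over, and a very high one drifts into nonsense.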
The max_tokens parameter is a limit on how many tokens we want to use; 200 will do fine as we will generate relatively short jokes.
Finally, messages is a list of dictionaries. For each message we have to provide the role (which will be 'user') and the content.
We provide a dictionary for each prompt, so we pass in one dictionary with the initial setup and a second one with the user's query. We could keep appending additional messages if the results depended on earlier conversation, but for our purposes a single setup message combined with the user's chosen topic is all we need for every API call.
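If we ever did want a multi-turn conversation, we could keep appending dictionaries to the list. A hypothetical sketch (our project only ever needs the setup plus one query):

```python
# Hypothetical sketch: building up a messages list for a multi-turn chat.
JOKE_SETUP = "I will give you a subject each time for all the following prompts."


def build_messages(setup, queries):
    """Start with the setup prompt, then append one dict per user query."""
    messages = [{"role": "user", "content": setup}]
    for query in queries:
        messages.append({"role": "user", "content": query})
    return messages


messages = build_messages(JOKE_SETUP, ["jelly beans", "bears"])
print(len(messages))  # 3: the setup plus two queries
```

Each dictionary has the same shape the API expects: a role and the content for that turn.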
Making the call
Now let's make an API call by running this simple file. Make sure your Django server is not running (close it using ctrl+C), or open a second terminal window.
$ python giggle/apis.py
You will see something like the following in your terminal.
{
  "choices": [
    {
      "finish_reason": "stop",
      "index": 0,
      "message": {
        "content": "Why did the race car driver quit? He was tired of driving in circles.",
        "role": "assistant"
      }
    }
  ],
  "created": 1683458600,
  "id": "chatcmpl-7DWm0ch2wfOX78DdGd1MTpjXdqaBj",
  "model": "gpt-3.5-turbo-0301",
  "object": "chat.completion",
  "usage": {
    "completion_tokens": 16,
    "prompt_tokens": 72,
    "total_tokens": 88
  }
}
Note that API responses are generally in JSON (JavaScript Object Notation), which has to be parsed into an object before use, but the openai module has already done this for us, so we can use the result straight away. Go ahead and change the print(result) statement at the bottom of apis.py as follows:
print(result.choices[0].message.content)
Now if you run the file again (don't forget to save it), it will print only the response we need.
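As a side note, this is what parsing raw JSON text looks like with Python's standard library; the openai module performs the equivalent step for us internally (the response string below is a trimmed-down example, not the full API payload):

```python
import json

# A trimmed-down example of the raw JSON text an API might return.
raw = '{"choices": [{"message": {"content": "To become a Smartie!"}}]}'

# json.loads turns the JSON string into nested Python dicts and lists.
parsed = json.loads(raw)
print(parsed["choices"][0]["message"]["content"])
```

Once parsed, we can walk the nested structure with normal indexing, exactly as we just did with result.choices[0].message.content.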
Error handling and creating a function
There is a rate limit of 3 requests per minute on the API when using the free trial account. To prevent this error from crashing our server or otherwise causing trouble, we have to catch it using a try/except block.
We also want to import this functionality into our views.py file later, so we should wrap the code inside a function we can import from other files. This gives us the following code.
from decouple import config
import openai

openai.api_key = config("CHATGPT_API_KEY")

JOKE_SETUP = "I will give you a subject each time for all the following prompts. You will return to me a joke, but it should not be too long (4 lines at most). Please do not provide an introduction like 'Here's a joke for you' but get straight into the joke."


def get_giggle_result(query):
    try:
        result = openai.ChatCompletion.create(
            model="gpt-3.5-turbo",
            n=1,
            temperature=0.4,
            max_tokens=200,
            messages=[
                {"role": "user", "content": JOKE_SETUP},
                {"role": "user", "content": query},
            ],
        )
        return result["choices"][0]["message"]["content"]
    except openai.error.RateLimitError as error:
        print(error)
        return 1
As you can see, we now have a function that takes a query and tries to make the API call using the JOKE_SETUP defined up top plus the user's query.
If successful, it returns only the response text, extracted from inside the result object. If an exception is thrown, we catch it, print it to the console, and return the number 1, which we use as an error code.
We can test for this error code later to communicate the problem to the user.
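One possible shape for that later check, as a hypothetical sketch (we will wire up the real user-facing error handling in a later part):

```python
# Hypothetical sketch: checking the return value of get_giggle_result.
def describe_result(result):
    """Turn the function's return value into something we could show the user."""
    if result == 1:
        # Error code 1 means the rate limit was hit and no joke came back.
        return "Too many requests, please wait a moment and try again."
    return result


print(describe_result(1))
print(describe_result("Why did the jelly bean go to school? To become a Smartie!"))
```

The caller never has to know about the exception; it just checks for the error code.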
Now that we have a function we can use, let's head back over to giggle > views.py to implement it, changing the code as follows.
from django.shortcuts import render
from .apis import get_giggle_result


def giggle_search(request):
    context = {}
    if request.method == 'POST':
        query = request.POST.get('query')
        response = get_giggle_result(query)
        context['query'] = query
        context['response'] = response
    return render(request, './giggle/giggle_search.html', context)
First, we added the import for the function we just created, using the dot to specify that apis.py is located in the same folder.
Then we call our get_giggle_result function with the query we received and catch the return value in a variable called response.
Finally, we pass the response into the context just like we did the query, so it gets passed to our template in the context dictionary.
Updating our template
Let's go to our giggle_search.html file, located in giggle > templates > giggle, and edit only the div element with the class of 'results' to become the following.
<div class="results">
    {% if query %}
        <h3>{{ query }}</h3>
        <p>{{ response }}</p>
    {% else %}
        <h3>Search Results</h3>
        <p>Your search results will appear here.</p>
    {% endif %}
</div>
Remember, the mustache syntax allows us to plug in a variable. We passed in these variables ourselves through the context dictionary; Django takes care of passing them into the template via the render function we called in views.py.
If there is a query, we display both the query and the response; otherwise, we just display the placeholder text as we did before. Finally, we add the endif because our template if/else statement needs to be closed.
Now make sure everything is saved and your server is running using the terminal window.
$ python manage.py runserver
Now load your page and try a search query, for instance 'bears'. And tada! We have a valid joke response. I feel the query should be capitalized on our HTML page though, so let's go back to our views.py file and change the line
context['query'] = query
as follows:
context['query'] = query.capitalize()
Notice that the Django server reloads as soon as we save our changes, serving the newest version automatically. Just make sure to hit CTRL+S and you'll be good! Now if we try again, our search query will be displayed with a capital letter even if we typed a lowercase query.
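Note that Python's str.capitalize uppercases only the first character and lowercases everything else, which is exactly what we want for display here:

```python
# str.capitalize uppercases the first letter and lowercases the rest.
print("jelly beans".capitalize())  # Jelly beans
print("BEARS".capitalize())        # Bears

# If we wanted every word capitalized instead, str.title would do that:
print("jelly beans".title())       # Jelly Beans
```

Both are built-in string methods, so no extra imports are needed.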
Things are so easy using Python!
Have some fun and play around, see if you can create some cool jokes, and I will see you in part 3 soon, where we will actually start saving these amazing jokes to our database from part 1.