This article originally appeared on the Finxter Academy for premium members (including the course lesson video). Check out the video course here.
All right, welcome back to part 2, where we’re going to be looking at JSON mode and seeds.
This will let us use just one part of function calling on its own. Namely, when the model generates the arguments it wants to call a function with, it returns them in valid JSON, or JavaScript Object Notation.
You saw in the previous part that we parsed these arguments and then passed them into our functions.
So what if we always want a JSON response from ChatGPT? We can now use the new JSON mode to do this.
Why would this be useful? Well, JSON is really easy to parse into an object that we can manipulate with code or feed into some kind of software or API, just like we did in the previous part. This makes it really helpful for extracting data from text.
If we ask GPT to generate something in textual form, it’s pretty hard to use the output in our Python code, for example.
Still, if we ask it to output the data in JSON in exactly the way we specify, it’s very easy to parse this into a dictionary and then save the data in a database or manipulate it in some other way.
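To see how little work that is, here's a quick generic illustration (not part of our project files) of turning a JSON string into a dictionary with Python's built-in json module:

```python
import json

raw = '{"title": "Duncan Campbell", "author": "James Hogg"}'
record = json.loads(raw)  # parse the JSON text into a Python dict

print(record["author"])  # James Hogg
```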
So, let’s get started with a simple example to see how this works. You’ll then be able to adapt this to your specific use case.
Preparing Some Data
Let’s get something simple to extract data from. Remember the data could also be generated or acquired in some other way, the point here is the output.
Make a file called chapters.py in a new folder named 2_JSON_mode_and_seeds, like this:
```
FINX_OPENAI_UPDATES (root project folder)
├── 1_Parallel_function_calling
├── 2_JSON_mode_and_seeds
│   └── chapters.py
└── .env
```
Now go and visit https://gutenberg.org/cache/epub/72064/pg72064.txt in your browser.
This will take you to the text version of the book “The book of Scottish story: historical, humorous, legendary, and imaginative”, which is in the public domain (copyright expired), so we can use it for our example.
Copy the entire list of contents (it's pretty long), all the way from 'The Henpecked Man' to 'Catching a Tartar', and paste it into the chapters.py file.
It should look like this:
table_of_contents = """ CONTENTS. The Henpecked Man, _John Mackay Wilson_ Duncan Campbell, _James Hogg_ ...loads more entries in between... The Fight for the Standard, _James Paterson_ Catching a Tartar, _D. M. Moir_ """
Notice it's a simple variable named table_of_contents, which is a very long multiline string, so we can easily import it later.

The formatting of the table of contents is wonky with underscores, and some entries have "quotes" around them while others don't, so this will make an excellent simple example.
JSON Mode
Go ahead and save this chapters.py file. Now, create a new file in the 2_JSON_mode_and_seeds folder called json_mode.py:
```
FINX_OPENAI_UPDATES (root project folder)
├── 1_Parallel_function_calling
├── 2_JSON_mode_and_seeds
│   ├── chapters.py
│   └── json_mode.py
└── .env
```
Inside, let’s get started with our imports:
```python
from decouple import config
from openai import OpenAI
from chapters import table_of_contents
import json
import pprint

client = OpenAI(api_key=config("OPENAI_API_KEY"))
```
We have all our basic imports here: config, OpenAI, the table_of_contents variable we just defined, json, and pprint.

We'll use pprint, or pretty print, to print the output in a nice way. It prints objects like dictionaries to the console in a much more readable manner, as you'll see later.

We then initialize our client as before.
Now, let's start our json_gpt function:
```python
def json_gpt(query, model="gpt-3.5-turbo-1106", system_message=None):
    if not system_message:
        system_message = "You are a JSON generator which outputs JSON objects according to user request"
```
We're going to be using the new version of GPT-3.5 Turbo for this one. Don't worry, we'll get to GPT-4 Turbo very soon! For now, it's simply not needed to get good results, and as GPT-3.5 Turbo is much cheaper, it's better to use it when GPT-4 is not needed.
More on pricing details later.
Again, make sure you have the 1106 version and not an older one, because only the newest GPT-3.5 Turbo and GPT-4 Turbo versions support JSON mode.
We define our function and set defaults for the model and system message, but allow the user to overwrite either. Then, inside the function, we define the messages list:
```python
    messages = [
        {"role": "system", "content": system_message},
        {
            "role": "user",
            "content": f"Please return Json for the following as instructed above:\n{query}",
        },
    ]
```
Note that the user query is preceded by a specific request for JSON output even in the user message.
Even though we will enable JSON mode, we still have to specifically mention the word JSON in the user message.
If we don't, the model may produce weird generations, which is actually why the API returns an error as a failsafe if we forget to include this word in our context.
```python
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        response_format={"type": "json_object"},
    )
```
Now we make a pretty normal request to ChatGPT using the new client syntax.
Note that we cannot just set the response_format parameter to json_object; we have to specifically pass in a dictionary with the key-value pair "type": "json_object".
```python
    content: str = response.choices[0].message.content
    content: dict = json.loads(content)
    print(f"\033[94m {type(content)} \033[0m")
    pprint.pprint(content)
    return content
```
The content is initially in string format even though it represents JSON.
We then convert it to a dictionary so we can work with the data like any other dictionary.
Note that whatever format you want, with whatever key names and values, is possible, as we'll demonstrate later.
We then print the type of content to show that ChatGPT’s output is, in fact, a valid dictionary object (after conversion from JSON) and pretty print it to the console.
Finally, we return the content so we can use it in our code.
The whole function is as follows:
```python
def json_gpt(query, model="gpt-3.5-turbo-1106", system_message=None):
    if not system_message:
        system_message = "You are a JSON generator which outputs JSON objects according to user request"
    messages = [
        {"role": "system", "content": system_message},
        {
            "role": "user",
            "content": f"Please return Json for the following as instructed above:\n{query}",
        },
    ]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        response_format={"type": "json_object"},
    )
    content: str = response.choices[0].message.content
    content: dict = json.loads(content)
    print(f"\033[94m {type(content)} \033[0m")
    pprint.pprint(content)
    return content
```
A Simple Test
Let's start with a very simple test by adding the following function call:
```python
json_gpt(
    "Give me a Json object with the height in cm and age in years of all people in the following text: John is 6 feet tall and 500 months old. Mary is 5 feet tall and 30 years old. Bob is 170cm in length and was born 25 years ago."
)
```
And we can see it does absolutely fine and converts all the ages and heights to the same units just like we requested, even using 3.5-Turbo.
```python
{'people': [{'age_years': 41.67, 'height_cm': 182.88, 'name': 'John'},
            {'age_years': 30, 'height_cm': 152.4, 'name': 'Mary'},
            {'age_years': 25, 'height_cm': 170, 'name': 'Bob'}]}
```
This is a valid dictionary that we can manipulate in our code straight away, or store in a database, without having to do any additional parsing, though we could round off the values if we wanted to.
So this can be used for data extraction, even if the values are given in different units or formats, interwoven in a piece of text. Also notice that the pprint function made the output nice and easy to read by lining up the values in the dictionary.
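As a quick illustration of that last point, here's a minimal sketch (the dictionary is hard-coded to the output shown above, and the helper names are our own) of manipulating the returned data like any other Python dict:

```python
people_data = {
    "people": [
        {"age_years": 41.67, "height_cm": 182.88, "name": "John"},
        {"age_years": 30, "height_cm": 152.4, "name": "Mary"},
        {"age_years": 25, "height_cm": 170, "name": "Bob"},
    ]
}

# Round the numbers and index the entries by name for easy lookups.
by_name = {
    person["name"]: {
        "age_years": round(person["age_years"]),
        "height_cm": round(person["height_cm"]),
    }
    for person in people_data["people"]
}

print(by_name["John"])  # {'age_years': 42, 'height_cm': 183}
```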
A More Complex Test
Make sure you comment out the function call above, and then let's use our table of contents file and give it a very specific output format, so we can basically use GPT as a data parser without having to write a real output parser.
```python
json_gpt(
    query=table_of_contents,
    system_message="""
You are a JSON generator which outputs JSON objects according to user request.
Please extract the author and title for all lines going all the way from start to end in the following text and return it as a JSON object following the example provided below.

Example input:
The Lily of Liddisdale, _Professor Wilson_
The Unlucky Present, _Robert Chambers_
The Sutor of Selkirk "_The Odd Volume_,"

Example output:
{'contents': [
    {'author': 'Professor Wilson', 'title': 'The Lily of Liddisdale'},
    {'author': 'Robert Chambers', 'title': 'The Unlucky Present'},
    {'author': 'The Odd Volume', 'title': 'The Sutor of Selkirk'},
]}
""",
)
```
Note that the only guarantee we get with JSON mode is JSON output, not the specific format!
We still have the responsibility to be very specific to get the output we desire. Providing specific examples like the above is your best friend, as GPT tends to perform much better this way.
Now go ahead and run the file and you should get the following:
```python
gpt3_5 = {
    "contents": [
        {"author": "John Mackay Wilson", "title": "The Henpecked Man"},
        {"author": "James Hogg", "title": "Duncan Campbell"},
        # ... many many more entries in between ...
        {"author": "James Paterson", "title": "The Fight for the Standard"},
        {"author": "D. M. Moir", "title": "Catching a Tartar"},
    ]
}
```
Notice that it followed our example perfectly. It also got rid of the pesky extra quotes and underscores that appeared on some entries. And this is just 3.5 Turbo; we haven't even tried GPT-4 Turbo yet!
If you do have something harder to parse, try GPT-4 Turbo, and it will do a better job. But in this case, 3.5 Turbo was more than enough to get the job done.
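And because the return value is already a plain dictionary, persisting the extracted data takes only a couple of lines. Here's a minimal sketch, assuming book_data holds the dictionary returned by our json_gpt call (shortened here, and the filename is just an example):

```python
import json

# Assume book_data is the dictionary json_gpt returned above (shortened).
book_data = {
    "contents": [
        {"author": "John Mackay Wilson", "title": "The Henpecked Man"},
        {"author": "D. M. Moir", "title": "Catching a Tartar"},
    ]
}

# Write the parsed data back out as a JSON file we can reuse anywhere.
with open("contents.json", "w", encoding="utf-8") as file:
    json.dump(book_data, file, indent=2)

print(len(book_data["contents"]), "entries saved")
```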
So yeah, that’s JSON mode, pretty darn cool and useful.
Have ChatGPT extract structured data for you from any text and return it in an object format that doesn't require any complex parsing. You can even use ChatGPT as a parser without having to write a real parser that accounts for all the edge cases.
It’s pretty clever at handling even unforeseen edge cases as long as you provide a solid example of the end output you want.
The Seed Parameter
Go ahead and save and close this file, and now let's look at the seed parameter. Create a new file called seed_param.py:
```
FINX_OPENAI_UPDATES (root project folder)
├── 1_Parallel_function_calling
├── 2_JSON_mode_and_seeds
│   ├── chapters.py
│   ├── json_mode.py
│   └── seed_param.py
└── .env
```
Now the idea behind a seed parameter is, of course, that it can make some type of random generator predictable, provided you pass in the same seed, like generating the same Minecraft world by copying the seed from a friend.
While ChatGPT can now take a seed parameter, the very nondeterministic nature of ChatGPT means that it's not quite a 100% guarantee. Still, the answers are definitely more similar and predictable than without a seed, so let's check it out.
Inside the seed_param.py file, go ahead and start with our imports and basic setup:
```python
from decouple import config
from openai import OpenAI

client = OpenAI(api_key=config("OPENAI_API_KEY"))
```
This should be fairly familiar by now.
Now let’s code up a very simple printing utility to help us clean our code by cutting out the repetitive stuff:
```python
def consistency_printer(response):
    response_content = response.choices[0].message.content
    system_fingerprint = response.system_fingerprint
    print(f"\033[94m {response_content} \033[0m")
    print(f"\033[92m {system_fingerprint} \033[0m")
```
What this function does is receive the response we get from ChatGPT, extract the message's content and the system fingerprint, and print them to the console in blue and green, respectively.
So what is the system fingerprint?
The system fingerprint, as the name implies, identifies the exact backend configuration that the model runs with. This fingerprint will change if you change the request parameters or if OpenAI updates the models in some way behind the scenes, which is likely to happen a couple of times per year.
If these fingerprints are the same between two requests, it therefore means that both your configuration and the remote configuration were identical for both.
When we make consecutive requests in a moment, you'll notice this fingerprint is basically always the same. But if you have a model running for months, it is likely the backend configuration on OpenAI's end will change at some point, which will affect determinism and therefore the output.
Simply put, as long as the fingerprint and the seed remain the same between calls, the output should be similar or even identical.
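If you want to verify that two calls are even comparable, you can check the fingerprints yourself. A minimal sketch, using the client we just set up (the prompt is an arbitrary example of ours):

```python
responses = [
    client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=[{"role": "user", "content": "Name one Scottish author."}],
        seed=42,
    )
    for _ in range(2)
]

# If the fingerprints differ, the backend configuration changed between the
# calls, and we shouldn't expect identical output even with the same seed.
same_backend = responses[0].system_fingerprint == responses[1].system_fingerprint
print("Outputs comparable for determinism:", same_backend)
```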
Bedtime Stories
So let’s code up a very simple function that outputs something very nondeterministic, like bedtime stories!
```python
def bedtime_stories(query, seed=None, model="gpt-3.5-turbo-1106"):
    messages = [
        {
            "role": "system",
            "content": "You make up fun children's stories according to the user request. The stories are only 100 characters long.",
        },
        {"role": "user", "content": query},
    ]
    response = client.chat.completions.create(
        model=model,
        messages=messages,
        seed=seed,
        temperature=0.7,
        stop=["\n"],
    )
    consistency_printer(response)
```
We set up a very simple system message and then pass in the user query in the second message entry.
We call the GPT-3.5 Turbo model, again making sure to use the new 1106 version as older models don't support the seed parameter, and we pass in the messages and the seed.
We also set the temperature to 0.7 and the stop parameter to a newline character so we don't get a huge wall of text. The stop parameter simply means that the model will stop generating text when it encounters a newline character, limiting the length of the output we need to compare.
Testing the Seed Parameter with Bedtime Stories
Now let's add a function call and run 3 requests without a seed:
```python
for i in range(3):
    bedtime_stories(
        "Tell me a story about a unicorn in space.",
    )
```
Go ahead and run it.
Note how the unicorn has a different name in every single story, and the stories are quite different:
```
Once upon a time, a unicorn named Luna soared through the galaxy, spreading stardust and kindness wherever she went.
fp_eeff13170a
Once upon a time, a brave unicorn named Stardust soared through the galaxy, spreading magic and joy to all the stars.
fp_eeff13170a
Once upon a time, a unicorn named Nova flew through space, sprinkling stardust and bringing light to dark corners.
fp_eeff13170a
```
Now change the call like this, and run it again:
```python
for i in range(3):
    bedtime_stories(
        "Tell me a story about a unicorn in space.",
        seed=2424,
    )
```
Note that the seed can be an arbitrary number; we chose 2424 at random. If we run this, we get:
```
Once upon a time, a magical unicorn flew through space, sprinkling stardust on planets and making new friends.
fp_eeff13170a
Once upon a time, a magical unicorn flew through space, sprinkling stardust on planets and making new friends.
fp_eeff13170a
Once upon a time, a magical unicorn soared through space, sprinkling stardust on planets and granting wishes to lonely stars.
fp_eeff13170a
```
We can see they are not quite the same: the first and second are identical, but the third is similar yet different. If you run this several times, you'll sometimes get 3 identical outputs, and sometimes they'll all be different.
This is because the seed parameter is not a 100% guarantee, but it does make the output more consistent and similar.
You might think that the temperature setting of 0.7 is the culprit, but this is not the problem. Setting it to 0 does not make much difference in this case.
If we swap out our function’s default 3.5 Turbo model for GPT-4 Turbo (more on GPT-4 Turbo in the next part):
```python
for i in range(3):
    bedtime_stories(
        "Tell me a story about a unicorn in space.",
        seed=2424,
        model="gpt-4-1106-preview",
    )
```
We see a similar story:
```
Star Unicorn zooms, finds a comet friend. Together, they race across the Milky Way!
fp_a24b4d720c
Star Unicorn zooms, finds a comet friend. Cosmic races begin!
fp_a24b4d720c
Star Unicorn zooms, finds a comet friend. Together, they race across the Milky Way!
fp_a24b4d720c
```
Very similar, and the unicorn has the same name, but the ending differs in the middle generation. Just know that the seed parameter provides no guarantees.
Recommended: DALL·E 3 Trick: Using Seeds to Recreate the Same Image
Fruitclopedia, More Deterministic Questions
So let’s try with something a little more stable, like fruits.
Fruits: Where children's stories can be about literally anything, so there is no fixed definition of what ChatGPT should output, fruits are quite predictable. Asking about a pineapple is a very concrete question, not open to artistic interpretation as to what the answer should be.
We have a very basic function, just copy this:
```python
def fruit_gpt(query, seed=None, temperature=0.2):
    messages = [
        {
            "role": "system",
            "content": "You are the fruitclopedia. Users name a fruit and you give information.",
        },
        {"role": "user", "content": query},
    ]
    response = client.chat.completions.create(
        model="gpt-3.5-turbo-1106",
        messages=messages,
        seed=seed,
        temperature=temperature,
        stop=["\n"],
    )
    consistency_printer(response)
```
It is basically the same, but the default temperature has been set to 0.2 for this one.
We still use the stop parameter to limit the output length to one paragraph: when the model inserts a newline to go to the next paragraph, it hits our stop condition and stops generating text.
Testing the Seed Parameter with Fruitclopedia
Running this without a seed:
```python
for i in range(3):
    fruit_gpt(
        "Grapefruit.",
        temperature=0,
    )
```
Interestingly, we can see that the answers start out the same but then diverge:
```
Grapefruit is a subtropical citrus fruit known for its sour to semi-sweet taste. It is a hybrid of the sweet orange and the pomelo. Grapefruits are rich in vitamins C and A, and they also contain fiber and antioxidants. They are often enjoyed fresh, juiced, or added to salads and desserts. There are different varieties of grapefruit, including white, pink, and red, each with its own unique flavor profile.
fp_eeff13170a
Grapefruit is a subtropical citrus fruit known for its sour to semi-sweet taste. It is a hybrid of the sweet orange and the pomelo, and it is typically larger than an orange with a thicker rind. Grapefruits are rich in vitamins C and A, as well as antioxidants. They are often enjoyed fresh, juiced, or added to salads and desserts. There are different varieties of grapefruit, including white, pink, and red, each with its own unique flavor profile.
fp_eeff13170a
Grapefruit is a subtropical citrus fruit known for its sour to semi-sweet taste. It is a hybrid of the pomelo and the sweet orange. Grapefruits are rich in vitamins C and A, as well as dietary fiber. They are often enjoyed fresh, juiced, or added to salads and desserts. There are different varieties of grapefruit, including white, pink, and red, each with its own unique flavor profile.
fp_eeff13170a
```
This is not so much because we set the temperature to 0, but more because our question is much more specific. "Tell me a children's story about a unicorn" could have a million answers, all of which are correct. The number of correct answers for basic info about pineapples is limited.
So let's try this with a seed, which is where the seed parameter really shines:
```python
for i in range(3):
    fruit_gpt(
        "Grapefruit.",
        seed=123,
        temperature=0,
    )
```
As you can see below, the answers are now 100% identical!
```
Grapefruit is a subtropical citrus fruit known for its slightly bitter and sour taste. It is a hybrid of the pomelo and the sweet orange. Grapefruits are rich in vitamins C and A, as well as dietary fiber. They are often enjoyed fresh, juiced, or added to fruit salads. There are different varieties of grapefruit, including white, pink, and red, each with its own unique flavor profile.
fp_eeff13170a
Grapefruit is a subtropical citrus fruit known for its slightly bitter and sour taste. It is a hybrid of the pomelo and the sweet orange. Grapefruits are rich in vitamins C and A, as well as dietary fiber. They are often enjoyed fresh, juiced, or added to fruit salads. There are different varieties of grapefruit, including white, pink, and red, each with its own unique flavor profile.
fp_eeff13170a
Grapefruit is a subtropical citrus fruit known for its slightly bitter and sour taste. It is a hybrid of the pomelo and the sweet orange. Grapefruits are rich in vitamins C and A, as well as dietary fiber. They are often enjoyed fresh, juiced, or added to fruit salads. There are different varieties of grapefruit, including white, pink, and red, each with its own unique flavor profile.
fp_eeff13170a
```
However, remember that this is not 100% guaranteed! You will see variation if you run this multiple times. If you use this to write tests for your application, you should make sure to include the fingerprint, because if OpenAI updates the system configuration on their end, the output will change. Also, make multiple calls and pass the test if one of them matches.
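Here's a minimal sketch of that testing strategy (the helper name and expected values are our own, taken from the run above, and will differ for your setup):

```python
EXPECTED_FINGERPRINT = "fp_eeff13170a"  # from a previous known-good run
EXPECTED_START = "Grapefruit is a subtropical citrus fruit"

def test_grapefruit_reproducibility(attempts=3):
    for _ in range(attempts):
        response = client.chat.completions.create(
            model="gpt-3.5-turbo-1106",
            messages=[
                {
                    "role": "system",
                    "content": "You are the fruitclopedia. Users name a fruit and you give information.",
                },
                {"role": "user", "content": "Grapefruit."},
            ],
            seed=123,
            temperature=0,
            stop=["\n"],
        )
        # Skip attempts where OpenAI's backend config no longer matches.
        if response.system_fingerprint != EXPECTED_FINGERPRINT:
            continue
        # Pass as soon as any single call reproduces the expected output.
        if response.choices[0].message.content.startswith(EXPECTED_START):
            return
    raise AssertionError("No call reproduced the expected output")
```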
So yeah, that's the seed parameter. It's pretty reliable, though not guaranteed, as long as you ask somewhat focused questions. If you ask something very open-ended, the output will still be more similar than without a seed, but the effect is less pronounced.
That's it for part 2. In the next part, we'll look at GPT-4 Turbo and its really exciting new abilities, like vision! See you there!
Take Me Back to the Full Course

Full Course: OpenAI API Mastery: Innovating with GPT-4 Turbo, Text-to-Speech (TTS), and DALL·E 3