With the recent release of the gpt-4-turbo model came one very helpful feature: the JSON mode option. Using JSON mode, we can reliably get responses back from OpenAI in structured JSON format.
This helps immensely when building APIs on top of Large Language Models (LLMs). Even though the model could previously be instructed in its system prompt to return JSON, there was no guarantee that it would actually return valid JSON. With the JSON mode option, we can specify the required format and the model will return data that conforms to it.
To learn more about JSON mode, have a look at the official OpenAI docs: https://platform.openai.com/docs/guides/text-generation/json-mode
Now let's look at some code to see how this works in action:
I am using the Azure OpenAI service to host the gpt-4-turbo model, together with version 1.0.0-beta.12 of the Azure OpenAI .NET SDK, found on NuGet here:
https://www.nuget.org/packages/Azure.AI.OpenAI/1.0.0-beta.12
In the code, the system message instructs the LLM to analyse the text provided by the user, extract the cities mentioned in it, and return them in the specified JSON format.
Next, we provide the actual text to parse in the user message, as shown in the sketch below.
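Here is a minimal sketch of what such a call can look like with the 1.0.0-beta.12 SDK. The endpoint, API key, deployment name, sample text, and the exact wording of the JSON schema below are placeholders for illustration, not values from a real deployment.

```csharp
using Azure;
using Azure.AI.OpenAI;

// Placeholder endpoint, key and deployment name for an Azure OpenAI resource.
var client = new OpenAIClient(
    new Uri("https://my-resource.openai.azure.com/"),
    new AzureKeyCredential("my-api-key"));

var options = new ChatCompletionsOptions
{
    DeploymentName = "gpt-4-turbo",
    // This is the JSON mode switch: the model is constrained to emit valid JSON.
    ResponseFormat = ChatCompletionsResponseFormat.JsonObject,
    Messages =
    {
        // System message: tell the model what to extract and the JSON shape to return.
        new ChatRequestSystemMessage(
            "Analyse the text provided by the user, extract the cities mentioned in it " +
            "and return them as JSON in the format: { \"cities\": [ \"city1\", \"city2\" ] }"),
        // User message: the actual text to parse.
        new ChatRequestUserMessage(
            "Last summer I flew from London to Tokyo, with a short layover in Dubai.")
    }
};

Response<ChatCompletions> response = await client.GetChatCompletionsAsync(options);

// Expected output along the lines of: { "cities": [ "London", "Tokyo", "Dubai" ] }
Console.WriteLine(response.Value.Choices[0].Message.Content);
```

The key part is setting ResponseFormat to ChatCompletionsResponseFormat.JsonObject, which constrains the completion to valid JSON; the system message still needs to spell out the exact shape we want back.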