Getting a valid JSON object back from any LLM API has always been a challenge.
No matter how explicitly you add "Only return the response in a JSON object and nothing else" to your prompt, if you call the endpoint 10 times, odds are a few of those calls will still fail to return valid JSON, and there goes the JSON.parse error.
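To make that failure mode concrete, here is a minimal sketch of the brittle pattern (callLLM is a hypothetical helper that returns the raw model output as a string):

try {
  const raw = await callLLM("Only return the response in a json object and nothing else");
  // Throws SyntaxError if the model wrapped the JSON in prose or markdown fences
  const data = JSON.parse(raw);
  console.log(data);
} catch (error) {
  console.error("Model did not return valid JSON:", error);
}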
So it was great news when OpenAI announced that you can now get back a guaranteed-valid JSON object from their API endpoints (at least on some models).
I have been testing this with GPT-4o and it works like a charm. Here is a very simple example of how to get a valid JSON object response from the GPT-4o model.
import OpenAI from "openai";

const openai = new OpenAI(); // reads OPENAI_API_KEY from the environment

try {
  const completion = await openai.chat.completions.create({
    model: "gpt-4o",
    messages: [
      {
        role: "user",
        content: "Say this is a test, return it in a json object",
      },
    ],
    // Guarantees the message content is a valid JSON string
    response_format: { type: "json_object" },
  });
  console.log(completion.choices[0]?.message?.content);
} catch (error) {
  console.error(error);
}
Response:
{
"message": "This is a test."
}
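Because response_format guarantees the content string is valid JSON, you can parse it without the usual defensive gymnastics. A minimal sketch, continuing inside the try block above:

const content = completion.choices[0]?.message?.content;
if (content) {
  // Safe to parse: response_format guarantees a valid JSON string
  const data = JSON.parse(content);
  console.log(data.message); // "This is a test."
}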
Make sure you are using the latest version of the OpenAI API client.
Make sure you are using the correct model name.
And most importantly, make sure you explicitly mention in your prompt that you want the response in a JSON object. Otherwise the call fails with the following error:
BadRequestError: 400 'messages' must contain the word 'json' in some form, to use 'response_format' of type 'json_object'.
So the following prompt doesn't work and throws the above error:
content: "Say this is a test",
The following prompt works and returns a valid JSON object:
content: "Say this is a test, return it in a json object",
Hope this helps!