Reasoning models are awesome for multi-step problems, but in real apps you also want some visibility into how the model got there—without exposing full chain-of-thought. In Azure OpenAI, the right pattern is to request a reasoning summary via the Responses API and log/print it next to the final answer.
At a high level, we'll:

- Deploy or reuse an Azure OpenAI reasoning model deployment
- Call Azure OpenAI using the v1 base URL (`/openai/v1/`)
- Request a reasoning summary with a chosen reasoning effort
- Print the reasoning summary + final answer in a minimal .NET console app
Prereqs
- Azure OpenAI resource + a deployed reasoning-capable model (e.g. GPT-5 reasoning variants)
- .NET 8+
- Latest OpenAI .NET SDK (the `OpenAI` NuGet package) that includes `ResponsesClient` and `CreateResponseOptions`
1) Create (Or Confirm) Your Reasoning Model Deployment
In Azure AI Foundry:
- Click path: Azure AI Foundry portal → OpenAI → Deployments → + Create deployment
- Pick a reasoning model and give it a deployment name (example: `gpt-5-mini`)
- Keep that deployment name handy (you'll pass it to the client); if you'd rather script this step, see the CLI sketch below
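If you prefer scripting the deployment over clicking through the portal, the Azure CLI can do it. The resource group, account name, model version, and SKU below are placeholders/assumptions you'd replace with your own values:

```bash
az cognitiveservices account deployment create \
  --resource-group <your-resource-group> \
  --name <your-azure-openai-resource> \
  --deployment-name gpt-5-mini \
  --model-name gpt-5-mini \
  --model-version "<model-version>" \
  --model-format OpenAI \
  --sku-name GlobalStandard \
  --sku-capacity 1
```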
2) Create a Console App + Install the SDK
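From a terminal, the setup is just a new console project plus the OpenAI package (the project name here is arbitrary):

```bash
dotnet new console -n ReasoningSummaryDemo
cd ReasoningSummaryDemo
dotnet add package OpenAI
```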
3) Call the Responses API and Print the Reasoning Summary
The sample in section 4 below is wired to Azure OpenAI's v1 endpoint and the Responses API.

Change these values:

- `AZURE_OPENAI_ENDPOINT` (your Azure OpenAI resource endpoint)
- `AZURE_OPENAI_API_KEY`
- `AZURE_OPENAI_DEPLOYMENT` (your deployment name, not the base model name)
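The sample reads these as environment variables. In PowerShell, for example (the endpoint host follows the usual Azure OpenAI format; adjust if yours differs):

```powershell
$env:AZURE_OPENAI_ENDPOINT   = "https://<your-resource>.openai.azure.com"
$env:AZURE_OPENAI_API_KEY    = "<your-api-key>"
$env:AZURE_OPENAI_DEPLOYMENT = "gpt-5-mini"
```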
Why “summary” (not full reasoning): Azure OpenAI’s model behavior is centered on reasoning summaries rather than returning raw `reasoning_content`.
4) Minimal working example
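Here is a minimal sketch. To stay agnostic of SDK version it calls the v1 Responses endpoint directly with HttpClient and System.Text.Json rather than `ResponsesClient`; the prompt, the `effort`/`summary` values, and the JSON handling are assumptions you can adapt, and it assumes your deployment is a reasoning-capable model.

```csharp
using System.Net.Http.Json;
using System.Text.Json;

// Configuration from environment variables (see section 3).
string endpoint   = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
                    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set");
string apiKey     = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")
                    ?? throw new InvalidOperationException("AZURE_OPENAI_API_KEY is not set");
string deployment = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT") ?? "gpt-5-mini";

// v1 base URL: {resource}/openai/v1/ (some resources may still need "?api-version=preview").
using var http = new HttpClient { BaseAddress = new Uri($"{endpoint.TrimEnd('/')}/openai/v1/") };
http.DefaultRequestHeaders.Add("api-key", apiKey);

// Responses API request. On Azure OpenAI, "model" is the *deployment* name.
// reasoning.effort sets the effort level; reasoning.summary asks for a reasoning summary.
var request = new
{
    model = deployment,
    input = "A bat and a ball cost $1.10 in total. The bat costs $1.00 more than the ball. How much does the ball cost?",
    reasoning = new { effort = "medium", summary = "auto" }
};

HttpResponseMessage httpResponse = await http.PostAsJsonAsync("responses", request);
httpResponse.EnsureSuccessStatusCode();

using JsonDocument doc = JsonDocument.Parse(await httpResponse.Content.ReadAsStringAsync());

// The output array mixes "reasoning" items (summary parts) and "message" items (final answer).
foreach (JsonElement item in doc.RootElement.GetProperty("output").EnumerateArray())
{
    string? type = item.GetProperty("type").GetString();

    if (type == "reasoning" && item.TryGetProperty("summary", out JsonElement summary))
    {
        foreach (JsonElement part in summary.EnumerateArray())
            Console.WriteLine($"[reasoning summary] {part.GetProperty("text").GetString()}");
    }
    else if (type == "message")
    {
        foreach (JsonElement content in item.GetProperty("content").EnumerateArray())
            if (content.GetProperty("type").GetString() == "output_text")
                Console.WriteLine($"[final answer] {content.GetProperty("text").GetString()}");
    }
}
```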
Expected output (example):
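The exact wording varies between runs; with the sketch above it looks roughly like this (illustrative):

```
[reasoning summary] Set up the equations bat = ball + 1.00 and bat + ball = 1.10, then solve for the ball.
[final answer] The ball costs $0.05.
```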
Troubleshooting
- 404 Not Found: your deployment name is wrong, or the deployment/region doesn’t support the Responses API. Start by verifying the deployment name in the portal.
- 400 Bad Request: most often you’re not using the v1 base URL (`.../openai/v1/`).
- No reasoning summary returned: your deployment might not be a reasoning model, or the model chose not to emit a summary. Confirm model capability and try `ReasoningSummaryVerbosity = Concise/Detailed` if available in your SDK version.
- Compile errors for Responses types: upgrade the OpenAI .NET SDK; class names have changed over time (e.g., `CreateResponseOptions`).
- 401 Unauthorized: API key doesn’t match the resource or is missing.
Notes
- Reasoning summaries are the “sweet spot”: better debugging/telemetry without leaking full internal chain-of-thought. Azure’s docs explicitly separate Azure OpenAI from providers that emit `reasoning_content`.
- If you’re building Copilot/agent experiences, this summary is exactly what you’d stash in app logs or a trace store for support cases (see the small logging sketch below). Keep the final answer user-facing.
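A tiny sketch of that split, assuming Microsoft.Extensions.Logging and the summary/answer strings produced by the sample above (the class and method names are hypothetical):

```csharp
using Microsoft.Extensions.Logging;

// Hypothetical helper: keep the reasoning summary in logs/traces for support cases,
// and hand only the final answer back to the user-facing layer.
public sealed class ReasoningTelemetry(ILogger<ReasoningTelemetry> logger)
{
    public string Record(string reasoningSummary, string finalAnswer)
    {
        logger.LogInformation("Reasoning summary: {ReasoningSummary}", reasoningSummary);
        return finalAnswer;
    }
}
```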
Wrapping up
If you want a clean, production-friendly way to understand what a reasoning model did without capturing the full chain-of-thought, use the Responses API and print/log the reasoning summary next to the final answer.
Hope this helps!