Sunday, 1 March 2026

Build a Custom UI for a Copilot Studio Agent using the Microsoft 365 Agents SDK

Copilot Studio gives you a great “out-of-the-box” chat experience in Teams and Microsoft 365 Copilot. But sometimes you need your own UI: your branding, your layout, your telemetry, and your app’s context. So in this post, let’s wire a Copilot Studio agent into a .NET console “custom UI” using the Microsoft 365 Agents SDK client library. This will help you get started when you want to surface Copilot Studio Agents in your own custom UIs.

Scope note: This is for calling a Copilot Studio agent from your own app UI (not the iframe embed).


At a high level, we’ll:

  • Publish a Copilot Studio agent and copy its Microsoft 365 Agents SDK connection string.

  • Create an Entra app registration with the CopilotStudio.Copilots.Invoke delegated permission.

  • Use MSAL to sign in a user and get a token for the Power Platform API audience.

  • Call the agent from a lightweight console “chat UI” using Microsoft.Agents.CopilotStudio.Client.


Prereqs

  • A published Copilot Studio agent (and access to its settings).

  • .NET SDK installed (any modern LTS is fine).

  • Entra ID permission to create an app registration.


1) Publish Your Agent And Copy The Agents SDK Connection String


In Copilot Studio:

  • Open your agent

  • Go to Channels → Web app (or Native app)

  • Under Microsoft 365 Agents SDK, copy the connection string.


Note: If your agent uses “Authenticate with Microsoft” or “Authenticate manually”, you’ll see the connection string option (and not the iframe embed code). 

2) Create An Entra App Registration For User Interactive Sign-In

In Azure portal:

  • Microsoft Entra ID → App registrations → New registration

  • Platform: Public client/native (mobile & desktop)

  • Redirect URI: http://localhost (HTTP, not HTTPS)

Then add the delegated permission:

  • API permissions → Add a permission

  • APIs my organization uses → search Power Platform API

  • Delegated permissions → CopilotStudio → CopilotStudio.Copilots.Invoke


3) Create The “Custom UI” (A Console Chat)

This is the bare minimum idea:

  • sign in the user (MSAL)

  • get a token scoped to the Power Platform API audience (the SDK computes this for you)

  • start a conversation

  • send messages and stream responses back as activities 

Change these values:

  • directConnectUrl = copied from Copilot Studio channel page (Microsoft 365 Agents SDK section).

  • tenantId, clientId = from your Entra app registration.

Minimal Working Example

Create and run:
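The full sample app lives in the Microsoft 365 Agents SDK samples repo; below is a trimmed sketch of the same flow. The package, type, and member names here (ConnectionSettings, CopilotClient.ScopeFromSettings, the token-provider constructor overload) follow that public sample and can drift between SDK versions, so treat this as a starting shape rather than a drop-in file.

```csharp
// Program.cs — minimal console "chat UI" for a Copilot Studio agent.
// NuGet: Microsoft.Agents.CopilotStudio.Client, Microsoft.Identity.Client,
//        Microsoft.Extensions.Configuration. Names follow the public Agents SDK
//        sample — verify against the package version you install.
using Microsoft.Agents.Core.Models;
using Microsoft.Agents.CopilotStudio.Client;
using Microsoft.Extensions.Configuration;
using Microsoft.Extensions.Logging.Abstractions;
using Microsoft.Identity.Client;

const string tenantId = "<tenant-id>";
const string clientId = "<app-registration-client-id>";
const string directConnectUrl = "<connection string copied from Copilot Studio>";

// The official sample builds ConnectionSettings from configuration; we mimic
// that with an in-memory section (setting name assumed — check your SDK).
var config = new ConfigurationBuilder()
    .AddInMemoryCollection(new Dictionary<string, string?>
    {
        ["CopilotStudio:DirectConnectUrl"] = directConnectUrl,
    })
    .Build();
var settings = new ConnectionSettings(config.GetSection("CopilotStudio"));

// 1) Interactive sign-in with MSAL. The SDK computes the Power Platform API
//    scope for us from the connection settings.
var app = PublicClientApplicationBuilder.Create(clientId)
    .WithAuthority(AzureCloudInstance.AzurePublic, tenantId)
    .WithRedirectUri("http://localhost") // must match the app registration
    .Build();
string[] scopes = [CopilotClient.ScopeFromSettings(settings)];
AuthenticationResult auth = await app.AcquireTokenInteractive(scopes).ExecuteAsync();

// 2) Create the client, handing it a token-provider callback so we don't
//    need a DI container or a custom HttpClient handler.
var copilot = new CopilotClient(
    settings,
    new SimpleHttpClientFactory(),
    (url) => Task.FromResult(auth.AccessToken),
    NullLogger.Instance,
    "copilot-studio");

// 3) Start the conversation — responses stream back as activities.
await foreach (IActivity activity in copilot.StartConversationAsync(
    emitStartConversationEvent: true, CancellationToken.None))
{
    if (activity.Type == "message") Console.WriteLine($"Agent> {activity.Text}");
}

// 4) Simple chat loop: send a message, print "message" activities as they arrive.
while (true)
{
    Console.Write("You> ");
    string? question = Console.ReadLine();
    if (string.IsNullOrWhiteSpace(question)) break;

    await foreach (IActivity activity in copilot.AskQuestionAsync(
        question, null, CancellationToken.None))
    {
        if (activity.Type == "message") Console.WriteLine($"Agent> {activity.Text}");
    }
}

// Bare-bones IHttpClientFactory so the console app needs no DI container.
sealed class SimpleHttpClientFactory : IHttpClientFactory
{
    public HttpClient CreateClient(string name) => new HttpClient();
}
```

Run it with dotnet run: a browser window opens for sign-in, then you chat in the console.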


Troubleshooting

  • 401/403: confirm delegated permission CopilotStudio.Copilots.Invoke is granted (and admin consent if your tenant requires it). 

  • Redirect URI mismatch: make sure the app registration has http://localhost for public client. (The sample uses localhost.) 

  • No response text: Copilot Studio responses arrive as a stream of activities—log the full activity payload if you suspect you’re filtering out the wrong activity types.

  • Power Platform API missing in permissions picker: follow the sample guidance and tenant configuration notes in the repo.

Notes

  • The console app is a clean “backend harness” you can keep as-is, then wrap with an HTTP API for your React front-end to call.

  • If you need quick embed-only experiences, the iframe approach is simpler, but it’s not the same as a true custom UI.

Wrapping up

This pattern keeps your agent authored in Copilot Studio, while your product team owns the end-user experience in a custom UI. The key pieces are: connection info from Copilot Studio, Entra delegated permission, MSAL sign-in, then stream activities through the SDK client.

Hope this helps!

Sunday, 18 January 2026

Get reasoning summaries from Azure OpenAI Reasoning Models using the Responses API (.NET)

Reasoning models are awesome for multi-step problems, but in real apps you also want some visibility into how the model got there—without exposing full chain-of-thought. In Azure OpenAI, the right pattern is to request a reasoning summary via the Responses API and log/print it next to the final answer.


At a high level, we’ll:

  • Deploy or reuse an Azure OpenAI reasoning model deployment

  • Call Azure OpenAI using the v1 base URL (/openai/v1/)

  • Request a reasoning summary with a chosen reasoning effort

  • Print reasoning summary + final answer in a minimal .NET console app

Prereqs

  • Azure OpenAI resource + a deployed reasoning-capable model (e.g. GPT-5 reasoning variants)

  • .NET 8+

  • Latest OpenAI .NET SDK (OpenAI) that includes ResponsesClient and CreateResponseOptions

1) Create (Or Confirm) Your Reasoning Model Deployment

In Azure AI Foundry:

  • Click path: Azure AI Foundry portal → OpenAI → Deployments → + Create deployment

  • Pick a reasoning model and give it a deployment name (example: gpt-5-mini)

  • Keep that deployment name handy (you’ll pass it to the client)


2) Create a Console App + Install the SDK
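Something like the following. The only package you need is the official OpenAI .NET SDK, which ships the Responses API types (the project name here is just an example):

```shell
# Create the project and add the official OpenAI .NET SDK.
dotnet new console -n ReasoningSummaryDemo
cd ReasoningSummaryDemo
dotnet add package OpenAI
```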


3) Call the Responses API and Print the Reasoning Summary

This sample is wired to Azure OpenAI’s v1 endpoint and Responses API.

Change these values:

  • AZURE_OPENAI_ENDPOINT (your Azure OpenAI resource endpoint)

  • AZURE_OPENAI_API_KEY

  • AZURE_OPENAI_DEPLOYMENT (your deployment name, not the base model name)

Why “summary” (not full reasoning): Azure OpenAI’s model behavior is centered on reasoning summaries rather than returning raw reasoning_content.

4) Minimal working example
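Here is a sketch of the call, using the type names mentioned above (CreateResponseOptions, ReasoningSummaryVerbosity). The Responses surface in the OpenAI .NET SDK is still marked experimental and names have shifted between releases, so adapt this to the version you install rather than copying it verbatim:

```csharp
// Program.cs — ask a reasoning model a question via the Responses API and
// print the reasoning summary next to the final answer. Type names follow
// this post (CreateResponseOptions was ResponseCreationOptions in earlier
// SDK releases) — verify against your installed version.
#pragma warning disable OPENAI001 // Responses API is experimental in the SDK
using System.ClientModel;
using OpenAI;
using OpenAI.Responses;

string endpoint   = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")!;
string apiKey     = Environment.GetEnvironmentVariable("AZURE_OPENAI_API_KEY")!;
string deployment = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT")!;

// Point the client at Azure OpenAI's v1 base URL:
// https://<resource>.openai.azure.com/openai/v1/
var client = new OpenAIClient(
    new ApiKeyCredential(apiKey),
    new OpenAIClientOptions { Endpoint = new Uri($"{endpoint.TrimEnd('/')}/openai/v1/") });

var responses = client.GetOpenAIResponseClient(deployment);

var options = new CreateResponseOptions
{
    ReasoningOptions = new ResponseReasoningOptions
    {
        ReasoningEffortLevel = ResponseReasoningEffortLevel.Medium,
        // Ask for a summary of the model's reasoning alongside the answer.
        ReasoningSummaryVerbosity = ResponseReasoningSummaryVerbosity.Detailed,
    },
};

OpenAIResponse response = await responses.CreateResponseAsync(
    "A train covers 120 km in 90 minutes. What is its average speed in km/h?",
    options);

// Output items arrive as a list: reasoning items carry the summary,
// message items carry the final answer.
foreach (ResponseItem item in response.OutputItems)
{
    if (item is ReasoningResponseItem reasoning)
    {
        string summary = string.Join(Environment.NewLine,
            reasoning.SummaryParts.OfType<ReasoningSummaryTextPart>().Select(p => p.Text));
        Console.WriteLine($"[reasoning summary]{Environment.NewLine}{summary}{Environment.NewLine}");
    }
    else if (item is MessageResponseItem message)
    {
        Console.WriteLine($"[answer] {message.Content.FirstOrDefault()?.Text}");
    }
}
```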

Expected output (example): the reasoning summary is printed first, followed by the model’s final answer.

Troubleshooting

  • 404 Not Found: your deployment name is wrong, or the deployment/region doesn’t support Responses API. Start by verifying deployment name in the portal.

  • 400 Bad Request: most often you’re not using the v1 base URL (.../openai/v1/).

  • No reasoning summary returned: your deployment might not be a reasoning model, or the model chose not to emit a summary. Confirm model capability and try ReasoningSummaryVerbosity = Concise/Detailed if available in your SDK/version.

  • Compile errors for Responses types: upgrade the OpenAI .NET SDK; class names have changed across versions (e.g., CreateResponseOptions).

  • 401 Unauthorized: API key doesn’t match the resource or is missing.

Notes

  • Reasoning summaries are the “sweet spot”: better debugging/telemetry without leaking full internal chain-of-thought. Azure’s docs explicitly separate Azure OpenAI from providers that emit reasoning_content.

  • If you’re building Copilot/agent experiences, this summary is exactly what you’d stash in app logs or a trace store for support cases. Keep the final answer user-facing.

Wrapping up

If you want a clean, production-friendly way to understand what a reasoning model did without capturing the full chain-of-thought, use the Responses API and print/log the reasoning summary next to the final answer. 

Hope this helps!