Introduction
In the previous parts of this series, you learned about Microsoft Extensions for AI (MEAI) and Microsoft.Extensions.VectorData. These tools provide a uniform interface for large language models and enable semantic search with RAG patterns. But what if you need an AI that doesn’t just answer questions but actively performs tasks—using tools, remembering context, and coordinating with other agents? That’s where the Microsoft Agent Framework (version 1.0, released April 2026) comes in. This guide walks you through building your first autonomous agent in .NET, from setup to execution. By the end, you’ll have a working agent that can take instructions and decide how to fulfill them.

What You Need
- .NET SDK (8.0 or later)
- An Azure OpenAI resource with a deployed model (e.g., gpt-5.4-mini)
- Azure CLI or DefaultAzureCredential configured (for authentication)
- A code editor (Visual Studio, VS Code, or similar)
- Basic familiarity with C# and the command line
Step-by-Step Guide
Step 1: Set Up Your Environment
Before writing any code, ensure your system has the prerequisites. Install the .NET SDK from dotnet.microsoft.com if you haven’t already. Open a terminal and run dotnet --version to confirm. Then set two environment variables for your Azure OpenAI endpoint and deployment name (the set syntax below is for the Windows Command Prompt):
set AZURE_OPENAI_ENDPOINT=https://your-resource.openai.azure.com/
set AZURE_OPENAI_DEPLOYMENT_NAME=gpt-5.4-mini
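On macOS, Linux, or any bash/zsh shell, use export instead of set. A sketch (the endpoint URL is a placeholder for your own resource):

```shell
# bash / zsh equivalents of the Windows `set` commands above
export AZURE_OPENAI_ENDPOINT="https://your-resource.openai.azure.com/"
export AZURE_OPENAI_DEPLOYMENT_NAME="gpt-5.4-mini"
```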
You do not need to install the Azure.Identity package yourself; it will be pulled in automatically as a dependency in Step 3.
Step 2: Create a New Console Application
In your terminal, create a new console app:
dotnet new console -n FirstAgent
cd FirstAgent
This generates a basic C# project with a Program.cs file. Open it in your editor.
Step 3: Install the Microsoft Agent Framework Package
The framework is available as the Microsoft.Agents.AI NuGet package. Install it with:
dotnet add package Microsoft.Agents.AI
This command also pulls in dependencies like Azure.AI.OpenAI and Azure.Identity.
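After installation, your FirstAgent.csproj should contain a package reference along these lines (the version number here is illustrative; yours will reflect the release you installed):

```xml
<ItemGroup>
  <PackageReference Include="Microsoft.Agents.AI" Version="1.0.0" />
</ItemGroup>
```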
Step 4: Write the Code for Your First Agent
Replace the default Program.cs content with the following. It creates an agent that tells jokes. Note that the framework builds directly on IChatClient, the same abstraction used by MEAI.
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;

// Read the endpoint and deployment name configured in Step 1.
var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");
var deploymentName = Environment.GetEnvironmentVariable("AZURE_OPENAI_DEPLOYMENT_NAME")
    ?? "gpt-5.4-mini";

// Build a chat client for the Azure OpenAI deployment and wrap it as an agent.
AIAgent agent = new AzureOpenAIClient(
        new Uri(endpoint),
        new DefaultAzureCredential())
    .GetChatClient(deploymentName)
    .AsAIAgent(
        instructions: "You are good at telling jokes.",
        name: "Joker");

Console.WriteLine(await agent.RunAsync("Tell me a joke about a pirate."));
The key method is .AsAIAgent(), an extension that converts a chat client into an autonomous agent. You provide instructions (system prompt) and a name. The agent then uses the model to decide how to respond, including calling any tools you’ve registered (none yet).

Step 5: Run the Agent
Back in the terminal, execute the application:
dotnet run
If everything is configured correctly, you’ll see the agent’s joke printed in the console. The agent has autonomously generated a humorous response—no explicit conditionals or step-by-step instructions needed.
Step 6: (Optional) Add Tools and Multi-Agent Orchestration
Once you have a simple agent working, you can expand it. The Microsoft Agent Framework supports graph-based orchestration for multi-agent scenarios. You can register tools (like a weather API or a database query) and let the agent choose when to invoke them. For more details, refer to the official documentation or later parts of this series.
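To make the tool-registration idea concrete, here is a hypothetical sketch. It assumes the .AsAIAgent() extension accepts a tools parameter and that MEAI's AIFunctionFactory (from Microsoft.Extensions.AI) is used to wrap an ordinary C# method; check the official documentation for the exact signature in your version. GetWeather is a made-up local function returning canned data, not a real weather API.

```csharp
using Azure.AI.OpenAI;
using Azure.Identity;
using Microsoft.Agents.AI;
using Microsoft.Extensions.AI;

var endpoint = Environment.GetEnvironmentVariable("AZURE_OPENAI_ENDPOINT")
    ?? throw new InvalidOperationException("AZURE_OPENAI_ENDPOINT is not set.");

// Wrap the chat client as an agent and register a tool it may choose to call.
AIAgent agent = new AzureOpenAIClient(new Uri(endpoint), new DefaultAzureCredential())
    .GetChatClient("gpt-5.4-mini")
    .AsAIAgent(
        instructions: "You answer weather questions using the available tools.",
        name: "WeatherAgent",
        tools: [AIFunctionFactory.Create(GetWeather)]); // assumed overload

Console.WriteLine(await agent.RunAsync("What's the weather in Lisbon?"));

// A plain C# method exposed as a tool. The description helps the model
// decide when the tool is relevant. (Hypothetical example data.)
[System.ComponentModel.Description("Gets the current weather for a city.")]
static string GetWeather(string city) => $"It is sunny in {city}.";
```

The model sees the method's name, parameters, and description, and decides on its own whether a given user request warrants invoking it.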
Tips & Next Steps
- Start simple: Use a single agent with clear instructions before moving to multi-agent graphs.
- Leverage existing MEAI knowledge: Since the Agent Framework builds on IChatClient, you can reuse any chat client setup from Part 1.
- Use environment variables for sensitive configuration like API endpoints to keep code secure.
- Experiment with instructions: The instructions parameter acts as a system prompt; try different personalities and tasks.
- Monitor tool usage: When you add tools, log their invocations to understand agent behavior.
- Check the 1.0 release notes for breaking changes if upgrading from earlier versions.
Now that you have your first agent, you can build on this foundation to create intelligent, autonomous systems that go far beyond simple chatbots.