Artificial Intelligence (AI) agents are revolutionizing the way we build software by introducing autonomy, goal-driven reasoning, and adaptable decision-making. Anthropic’s Model Context Protocol (MCP) is a new standard that simplifies how models interact with tools, APIs, memory, and user inputs. In this article, we’ll explore how to build AI agents using an MCP Server in C#, and how to run everything inside Visual Studio Code (VS Code), one of the most powerful, lightweight editors available.

This guide will walk you through:

  • What MCP is and why it matters

  • Setting up a C# environment in VS Code

  • Creating a simple AI agent using MCP Server

  • Connecting the agent with tools and memory

  • Running and testing the AI agent in VS Code

Understanding Model Context Protocol (MCP)

MCP is an open protocol that standardizes how AI models connect to tools, data sources, and memory, helping them maintain context across multiple interactions. Think of it like a conductor for AI agents: it handles how the model talks to its tools, APIs, memory storage, and external agents.

MCP Server lets you:

  • Host an agent loop that waits for model outputs

  • Handle tool invocations (e.g., calling a web service)

  • Manage ephemeral or persistent memory

  • Route and transform context via plugins

It’s particularly useful for building multi-turn agents or autonomous workflows, where continuity of state and context is essential.

Prerequisites

Before diving into code, ensure the following:

  • .NET 8 SDK installed

  • VS Code with the C# Dev Kit

  • Basic knowledge of async/await, JSON, and HTTP APIs

  • Git to clone MCP Server if needed

Let’s start by setting up your C# project in VS Code.

Setting Up a C# MCP Project in VS Code

  1. Create a new console app:

```bash
dotnet new console -n McpAgentExample
cd McpAgentExample
```

  2. Add the necessary NuGet packages:

You may need packages like System.Text.Json and Microsoft.Extensions.Hosting (HttpClient ships with the .NET base class library, so it needs no package):

```bash
dotnet add package Microsoft.Extensions.Hosting
dotnet add package System.Text.Json
```
  3. Set up your launch.json and tasks.json for debugging in VS Code:

.vscode/launch.json

```json
{
  "version": "0.2.0",
  "configurations": [
    {
      "name": ".NET Launch",
      "type": "coreclr",
      "request": "launch",
      "preLaunchTask": "build",
      "program": "${workspaceFolder}/bin/Debug/net8.0/McpAgentExample.dll",
      "args": [],
      "cwd": "${workspaceFolder}",
      "stopAtEntry": false,
      "console": "internalConsole"
    }
  ]
}
```

.vscode/tasks.json

```json
{
  "version": "2.0.0",
  "tasks": [
    {
      "label": "build",
      "command": "dotnet",
      "type": "process",
      "args": [
        "build",
        "${workspaceFolder}/McpAgentExample.csproj"
      ],
      "problemMatcher": "$msCompile"
    }
  ]
}
```

Creating a Basic AI Agent Host with MCP

Let’s write a minimal MCP agent loop in C# that listens for model output and executes a tool.

Program.cs

```csharp
using Microsoft.Extensions.Hosting;

class Program
{
    static async Task Main(string[] args)
    {
        var builder = Host.CreateApplicationBuilder(args);
        var app = builder.Build();

        Console.WriteLine("🧠 MCP Agent Starting...");

        var mcpServer = new McpServer();

        // Start the MCP listener in the background, then run the host;
        // StartAsync loops forever, so it must not block RunAsync.
        _ = mcpServer.StartAsync();
        await app.RunAsync();
    }
}
```

Create a new file McpServer.cs to handle MCP requests:

McpServer.cs

```csharp
using System.Net;
using System.Text;
using System.Text.Json;

public class McpServer
{
    private readonly HttpListener _listener = new HttpListener();

    public McpServer()
    {
        _listener.Prefixes.Add("http://localhost:8080/");
    }

    public async Task StartAsync()
    {
        _listener.Start();
        Console.WriteLine("🚀 Listening at http://localhost:8080/");

        while (true)
        {
            var context = await _listener.GetContextAsync();

            // Read the incoming MCP message body
            using var reader = new StreamReader(context.Request.InputStream);
            var requestBody = await reader.ReadToEndAsync();
            Console.WriteLine($"📥 Received:\n{requestBody}");

            var responseText = await HandleMcpMessage(requestBody);

            byte[] buffer = Encoding.UTF8.GetBytes(responseText);
            context.Response.ContentType = "application/json";
            context.Response.ContentLength64 = buffer.Length;
            await context.Response.OutputStream.WriteAsync(buffer);
            context.Response.Close();
        }
    }

    private Task<string> HandleMcpMessage(string json)
    {
        // For now, return a dummy tool result
        var response = new
        {
            tool_calls = new[]
            {
                new {
                    tool_name = "say_hello",
                    output = "Hello from C# agent!"
                }
            }
        };

        return Task.FromResult(JsonSerializer.Serialize(response));
    }
}
```

Adding a Custom Tool: say_hello

MCP allows tools to be called via structured messages. Let’s define a simple handler for say_hello where a user provides their name and the tool responds with a greeting.

Update HandleMcpMessage in McpServer.cs:

```csharp
private Task<string> HandleMcpMessage(string json)
{
    try
    {
        // JsonDocument is IDisposable, so dispose it when we're done
        using var doc = JsonDocument.Parse(json);
        var root = doc.RootElement;
        string name = root.GetProperty("inputs").GetProperty("name").GetString() ?? "unknown";

        var response = new
        {
            tool_calls = new[]
            {
                new {
                    tool_name = "say_hello",
                    output = $"Hello, {name}! Welcome to MCP in C#."
                }
            }
        };

        return Task.FromResult(JsonSerializer.Serialize(response));
    }
    catch (Exception)
    {
        var error = new { error = "Invalid MCP request format." };
        return Task.FromResult(JsonSerializer.Serialize(error));
    }
}
```

Now, send a POST request to the agent:

```bash
curl -X POST http://localhost:8080/ -H "Content-Type: application/json" \
  -d '{"inputs": { "name": "Mario" }}'
```

Expected Output:

```json
{
  "tool_calls": [
    {
      "tool_name": "say_hello",
      "output": "Hello, Mario! Welcome to MCP in C#."
    }
  ]
}
```

Using GPT or Other LLMs With MCP

You can link your agent with a language model like GPT-4 via the MCP loop. MCP supports multi-round interaction: the model outputs a tool call, the agent executes it, and the model uses the result to continue.

To do this:

  1. Create a wrapper for the model (e.g., call OpenAI API).

  2. Feed model output back into your HandleMcpMessage.

  3. Store history/memory context to maintain conversation state.

You can simulate this flow:

```csharp
var history = new List<string>();
history.Add("User: What is the weather in Zagreb?");
history.Add("Agent: Calling weather API...");
// Append model output, tool result, etc.
```
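Tying the three steps together, a loop like the following sketch could drive the interaction. This is illustrative only: `CallModelAsync` is a hypothetical wrapper around your LLM provider's chat API, and the `"tool_call"` check stands in for real structured-output parsing.

```csharp
// Hedged sketch of a model-driven agent loop.
// CallModelAsync is a hypothetical helper that sends the conversation
// to an LLM and returns its raw text output.
var history = new List<string>();
history.Add("User: What is the weather in Zagreb?");

while (true)
{
    // 1. Ask the model what to do next, given the conversation so far.
    string modelOutput = await CallModelAsync(string.Join("\n", history));
    history.Add($"Model: {modelOutput}");

    // 2. If the model emitted a tool call, execute it through the MCP handler
    //    and feed the result back on the next iteration.
    if (modelOutput.Contains("tool_call"))
    {
        string toolResult = await HandleMcpMessage(modelOutput);
        history.Add($"Tool: {toolResult}");
        continue;
    }

    // 3. Otherwise the model produced a final answer; stop looping.
    break;
}
```

In a real implementation you would parse the model's output as JSON rather than string-matching, but the shape of the loop — model, tool, result, model again — stays the same.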

Adding Memory and Multi-Turn Capabilities

To extend this to support memory, define a simple memory store:

```csharp
public class MemoryStore
{
    private readonly Dictionary<string, string> _memory = new();

    public void Set(string key, string value) => _memory[key] = value;

    public string? Get(string key) => _memory.TryGetValue(key, out var val) ? val : null;
}
```

Integrate it with McpServer to remember values across invocations.
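One way to wire this in — a sketch, where the field name and the greeting-count idea are illustrative choices, not part of the original example — is to give McpServer a MemoryStore and consult it inside HandleMcpMessage:

```csharp
// Inside McpServer: a store that survives across tool invocations.
private readonly MemoryStore _memory = new();

// In HandleMcpMessage, after extracting `name`:
// remember how many times this user has been greeted.
int count = int.TryParse(_memory.Get(name), out var c) ? c + 1 : 1;
_memory.Set(name, count.ToString());

var response = new
{
    tool_calls = new[]
    {
        new {
            tool_name = "say_hello",
            output = $"Hello, {name}! You have said hello {count} time(s)."
        }
    }
};
```

Because `_memory` lives on the server instance, the count persists across requests for as long as the process runs; swapping the dictionary for a database would make it persist across restarts too.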

Running the Agent in VS Code

You can now press F5 or run:

```bash
dotnet run
```

This starts the agent, listens on port 8080, and processes MCP tool calls. You can integrate this with OpenAI’s function-calling API or other MCP-compatible clients.

To test quickly, use Postman or curl to POST JSON messages.

Where To Go From Here

Here are ideas to extend your agent:

  • Add tool plugins: Search Google, call APIs, manipulate files

  • Integrate OpenAI API: Use GPT-4 to generate tool invocations

  • Add persistent memory: Use SQLite or Redis to remember facts

  • Secure your server: Add authentication and HTTPS

  • Use Docker: Package and deploy your C# agent anywhere
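As a starting point for the Docker idea, a minimal multi-stage Dockerfile might look like this sketch (the image tags and port follow the .NET 8 setup above; adjust as needed):

```dockerfile
# Build stage: publish the agent with the .NET 8 SDK image
FROM mcr.microsoft.com/dotnet/sdk:8.0 AS build
WORKDIR /src
COPY . .
RUN dotnet publish McpAgentExample.csproj -c Release -o /app

# Runtime stage: run on the smaller runtime-only image
FROM mcr.microsoft.com/dotnet/runtime:8.0
WORKDIR /app
COPY --from=build /app .
EXPOSE 8080
ENTRYPOINT ["dotnet", "McpAgentExample.dll"]
```

Note that an HttpListener prefix of http://localhost:8080/ only accepts connections from inside the container; to reach the agent from the host you would change the prefix to something like http://+:8080/.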

Conclusion

In this article, we’ve explored how to build a simple yet powerful AI agent in C# using the Model Context Protocol (MCP), all within the comfort and efficiency of Visual Studio Code. While the example focused on creating a lightweight server that receives structured input and executes tool calls, the underlying pattern lays the groundwork for building far more advanced autonomous systems.

At its core, MCP provides a clean separation of concerns: the language model generates structured outputs (like tool calls or memory queries), while the host application (your C# server) executes those tasks and returns the results. This decoupling is what allows MCP to serve as a universal runtime protocol for AI agents, irrespective of the backend model, programming language, or deployment platform.

By implementing MCP in C#, you’re unlocking several key advantages:

  1. High Performance and Type Safety: C# and .NET are mature ecosystems with rich tooling, asynchronous capabilities, and static typing. This makes your agent fast, reliable, and easier to maintain at scale.

  2. Seamless Integration With Enterprise Infrastructure: Many organizations already run backend systems in .NET, so your MCP agent can directly interact with databases, APIs, and internal services without the need for bridges or translators.

  3. Custom Tooling and Extensibility: You can easily define and register custom tools in C#, such as sending emails, querying CRMs, accessing files, or invoking ML models. The modular architecture allows for plug-and-play functionality, enabling you to incrementally enhance your agent’s capabilities over time.

  4. Memory and State Management: Using techniques like in-memory stores, file persistence, or external databases, your agent can evolve from stateless responders to context-aware, long-term memory systems. This is especially important for agents performing multi-turn conversations or multi-step workflows.

  5. Debugging and Monitoring: Visual Studio Code, paired with tools like Postman or Swagger, provides a robust debugging environment for step-by-step inspection of requests, responses, and agent behavior—critical for building trustworthy systems.

  6. Future-Proof Design: As LLM-based applications continue to grow, standards like MCP will become central to cross-model interoperability. By investing in MCP now, you’re positioning your systems to easily swap models (e.g., OpenAI GPT, Azure OpenAI, local models) without rewriting your orchestration layer.

More broadly, what we’re witnessing is a paradigm shift in software development—from imperative programming to declarative agent orchestration. Instead of defining what every function does step-by-step, you define what the agent should accomplish, and the underlying system (powered by an LLM and structured tooling) figures out how to get there. This dramatically reduces complexity and increases developer productivity.

Additionally, MCP fits naturally into DevOps and CI/CD pipelines. C#’s built-in support for containerization, background services, unit testing, and scalable cloud deployment (via Azure Functions, App Services, or Kubernetes) means your MCP agent can move from prototype to production with minimal friction.

If you’re working in domains such as customer service, automation, internal tooling, or even autonomous research agents, MCP offers a blueprint for building systems that reason, act, and improve over time. Combining this with C#’s power gives you a versatile and robust foundation to build anything from small productivity bots to enterprise-grade autonomous applications.

In conclusion, the future of AI agents is composable, interpretable, and language-agnostic—and with MCP, you’re tapping into that future today. By mastering MCP in C#, you’re not just building an agent; you’re building a new kind of software runtime—one where the model becomes the orchestrator, and the host becomes the executor. Whether you’re just experimenting or building mission-critical applications, the skills and architecture explored here will serve as a valuable foundation in the new era of AI-native software development.