Artificial intelligence agents have rapidly evolved from experimental prototypes to production-ready components that drive automation, decision-making, and high-value enterprise workloads. Whether it’s orchestrating tasks, integrating with APIs, or reasoning over business logic, modern AI agents deliver predictable, repeatable outcomes that scale.

Microsoft’s Semantic Kernel (SK) is one of the most powerful frameworks for building AI agents in .NET, Python, or Java. It provides a robust orchestration layer for Large Language Models (LLMs), enabling developers to unify memory, planning, connectors, functions, and plugins into a cohesive, production-ready agent architecture.

In this article, we will walk through how to build a simple but production-ready AI agent using Semantic Kernel, including:

  • What Semantic Kernel is and why it matters

  • Designing an agent workflow

  • Using Semantic Kernel to build functions and skills

  • Adding memory and state persistence

  • Implementing a planner

  • Practical coding examples (C# and Python)

  • Best practices for deploying and scaling an SK-based agent

Let’s dive in.

Understanding What Semantic Kernel Really Provides

Before building the agent, it’s essential to understand what makes SK different from simply calling an LLM API.

Semantic Kernel acts as:

A unifying orchestration layer

It connects LLMs with:

  • Traditional code functions

  • Plugins

  • External APIs

  • Embedding memories

  • Planning systems

A function-based architecture

Everything in SK is a “function” — whether written in code, prompt-based, or dynamically composed. This results in:

  • Modular

  • Testable

  • Reusable

  • Production-friendly
    building blocks.

A planning and reasoning engine

SK includes planners that can:

  • Interpret a natural-language instruction

  • Break it into executable steps

  • Choose appropriate functions

  • Execute them and return results

These capabilities make SK ideal for building production-grade agents.

Defining the Agent We Will Build

To keep the example tractable but useful, we’ll build an agent that can:

  1. Receive a natural-language request from a user

  2. Retrieve relevant memory

  3. Choose appropriate steps (via planner)

  4. Execute functions such as:

    • Summarizing text

    • Calling a mock external API

    • Storing or retrieving memory

  5. Return a formatted, reliable answer

This structure is similar to real production AI agents used in automation, customer support, and internal business workflows.

Setting Up Semantic Kernel

Semantic Kernel supports both C# and Python. Below are fully working examples for both.

Installing Semantic Kernel

C#

dotnet add package Microsoft.SemanticKernel

Python

pip install semantic-kernel

Building the Core Agent

Defining Kernel and AI Model Configuration

C# Example

using Microsoft.SemanticKernel;
using Microsoft.SemanticKernel.Connectors.OpenAI;
var builder = Kernel.CreateBuilder();builder.AddOpenAIChatCompletion(
modelId: “gpt-4o-mini”,
apiKey: Environment.GetEnvironmentVariable(“OPENAI_API_KEY”)
);var kernel = builder.Build();

Python Example

from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
kernel = Kernel()kernel.add_service(
OpenAIChatCompletion(
“gpt-4o-mini”,
api_key=os.getenv(“OPENAI_API_KEY”)
)
)

At this point, you have a kernel that can communicate with an LLM — but the real power comes from functions.

Creating Agent Functions (Skills)

Semantic Kernel uses skills (collections of functions) to modularize your agent’s capabilities.

Creating a Semantic Function (Prompt Function)

C# Prompt Function

var summarize = kernel.CreateFunctionFromPrompt(
"Summarize the following text in one concise paragraph:\n{{$input}}",
functionName: "SummarizeText"
);

Python Prompt Function

summarize = kernel.create_semantic_function(
"Summarize the following text in one concise paragraph:\n{{$input}}",
function_name="SummarizeText"
)

This creates a clean, modular function the agent can call.

Creating a Native Function (Code Function)

C# Example

public class UtilitySkill
{
[KernelFunction("GetCurrentTimestamp")]
public string GetCurrentTimestamp()
{
return DateTime.UtcNow.ToString("O");
}
}
kernel.ImportSkill(new UtilitySkill(), “utils”);

Python Example

class UtilitySkill:
def get_current_timestamp(self, *args):
return datetime.utcnow().isoformat()
kernel.import_skill(UtilitySkill(), skill_name=“utils”)

Native functions are extremely useful for API calls, database operations, or system integrations.

Adding Memory to the Agent

Production agents need memory so they can maintain context over time, remember facts, and retrieve relevant information later.

Semantic Kernel supports embeddings-based memory storage.

Setting Up Memory

C# Example

builder.WithMemoryStorage(new VolatileMemoryStore());
kernel = builder.Build();

Python Example

from semantic_kernel.memory import VolatileMemoryStore
kernel.add_memory_store(VolatileMemoryStore())

Storing Memory

C#

await kernel.Memory.SaveInformationAsync(
collection: "user-notes",
text: "User likes automation and workflow optimization",
id: "note1"
);

Python

await kernel.memory.save_information(
"user-notes",
"User likes automation and workflow optimization",
"note1"
)

Retrieving Memory

C#

var results = kernel.Memory.SearchAsync("automation", "user-notes");

Python

results = kernel.memory.search("automation", "user-notes")

Your agent can now recall important information automatically.

Implementing a Planner

The planner is where your agent becomes autonomous.

Planners read natural-language requests and choose the right sequence of functions.

C# Planner Example

using Microsoft.SemanticKernel.Planning.Stepwise;

var planner = new StepwisePlanner(kernel);
var plan = await planner.CreatePlanAsync(“Summarize the document then give me the timestamp.”);

var result = await plan.InvokeAsync(kernel);
Console.WriteLine(result.ToString());

Python Planner Example

from semantic_kernel.planning.stepwise_planner import StepwisePlanner

planner = StepwisePlanner(kernel)
plan = await planner.create_plan(“Summarize the document then give me the timestamp.”)

result = await plan.invoke(kernel)
print(result)

With this, your agent can autonomously chain functions.

Putting It All Together: A Complete Mini-Agent

Here is a working C# version combining summarization, memory, planning, and utilities.

Full C# Mini-Agent

var builder = Kernel.CreateBuilder();

builder.AddOpenAIChatCompletion(“gpt-4o-mini”, Environment.GetEnvironmentVariable(“OPENAI_API_KEY”));
builder.WithMemoryStorage(new VolatileMemoryStore());

var kernel = builder.Build();

// Skills
var summarize = kernel.CreateFunctionFromPrompt(
“Summarize this in one paragraph:\n{{$input}}”,
“SummarizeText”
);

public class UtilitySkill
{
[KernelFunction(“GetCurrentTimestamp”)]
public string GetCurrentTimestamp() => DateTime.UtcNow.ToString(“O”);
}

kernel.ImportSkill(new UtilitySkill(), “utils”);

// Planner
var planner = new StepwisePlanner(kernel);

string userRequest = @”
Summarize the following text and tell me the timestamp:
Semantic Kernel helps build AI agents using functions and skills.
;

var plan = await planner.CreatePlanAsync(userRequest);
var result = await plan.InvokeAsync(kernel);

Console.WriteLine(result);

Full Python Mini-Agent

import os
import asyncio
from datetime import datetime
from semantic_kernel import Kernel
from semantic_kernel.connectors.ai.open_ai import OpenAIChatCompletion
from semantic_kernel.memory import VolatileMemoryStore
from semantic_kernel.planning.stepwise_planner import StepwisePlanner
async def main():
kernel = Kernel()
kernel.add_service(OpenAIChatCompletion(“gpt-4o-mini”, api_key=os.getenv(“OPENAI_API_KEY”)))
kernel.add_memory_store(VolatileMemoryStore())summarize = kernel.create_semantic_function(
“Summarize this in one paragraph:\n{{$input}}”,
function_name=“SummarizeText”
)class UtilitySkill:
def get_current_timestamp(self):
return datetime.utcnow().isoformat()kernel.import_skill(UtilitySkill(), “utils”)planner = StepwisePlanner(kernel)user_request = “””
Summarize the following text and tell me the timestamp:
Semantic Kernel helps build AI agents using functions and skills.
“””
plan = await planner.create_plan(user_request)
result = await plan.invoke(kernel)print(result)

asyncio.run(main())

This is a fully functional agent capable of planning, summarizing, integrating code functions, and maintaining memory.

Production Considerations When Deploying Semantic Kernel Agents

Building an agent is only half the story — getting it production-ready requires additional practices.

Error Handling

You should implement retries, model fallbacks, and safe exception wrapping around:

  • API calls

  • Function execution

  • Planner failures

  • Memory access

Observability

Include logging for:

  • Function calls

  • Inputs/outputs

  • Planner steps

  • Errors

Tools like Application Insights or OpenTelemetry work well with SK.

Security

Key measures include:

  • Managed identities

  • Secure storage for keys

  • Output filtering

  • Content moderation

Scalability

Scale the agent by:

  • Using serverless architectures

  • Separating memory stores into Redis or Azure Cognitive Search

  • Running multiple kernel instances

Responsible AI Controls

Ensure:

  • Human oversight

  • Rate limiting

  • Guardrails in prompt templates

Conclusion

Building a production-ready AI agent may seem complex, but Semantic Kernel dramatically simplifies the architecture through its powerful function-centric design. By blending semantic (LLM-powered) functions, native code functions, memory, and planning, SK enables developers to build agents that are modular, testable, and enterprise-grade.

In this article, we explored how Semantic Kernel transforms a simple prompt into an orchestrated system capable of reasoning, planning, retrieving information, and executing code. You learned how to configure the kernel, build prompt and native functions, integrate memory, use planners, and compose these elements into a fully functional AI agent.

The example agent we built is intentionally simple, but it reflects the real patterns behind more advanced production agents used for workflow automation, customer support, document intelligence, DevOps operations, and business process orchestration. By adding additional skills, connecting APIs, or using persistent storage, you can scale the agent into a powerful automation engine.

Semantic Kernel continues to evolve rapidly, giving developers a modern, flexible, and extensible platform for intelligent applications. Whether you are prototyping or designing mission-critical solutions, SK provides the foundation for reliable, production-ready AI agents that combine reasoning with practical execution.