The rapid adoption of Large Language Models (LLMs) in enterprise applications has created a new class of architectural challenges. Developers are no longer only concerned with business logic and data persistence, but also with prompt construction, context management, safety, observability, and governance. Spring AI, as part of the broader Spring ecosystem, introduces Advisors as a powerful abstraction to address these cross-cutting concerns when interacting with LLMs. Interestingly, the conceptual foundation of Spring AI Advisors aligns very closely with Aspect-Oriented Programming (AOP), a paradigm that Spring developers have relied on for decades.
This article provides an in-depth exploration of how Spring AI Advisors work, why they exist, and how classical AOP concepts—such as advice, join points, pointcuts, and weaving—map naturally to LLM interactions. Through detailed coding examples, we will see how Advisors can be used to modularize prompt enrichment, logging, safety checks, retries, and cost controls. The goal is to help you reason about LLM integrations using familiar AOP mental models while building robust, maintainable AI-powered systems.
Understanding the Motivation Behind Spring AI Advisors
When calling an LLM, the apparent simplicity of a single method call often hides substantial complexity. Consider what may need to happen around every model invocation:
- Injecting system prompts or organizational policies
- Adding user context, memory, or conversation history
- Enforcing content moderation or compliance rules
- Logging prompts and responses for observability
- Retrying failed calls or handling rate limits
- Measuring token usage and cost
If these concerns are implemented directly inside each service method, the result is tight coupling, duplication, and fragile code. This is precisely the type of problem AOP was designed to solve. Spring AI Advisors embrace this idea by allowing developers to wrap LLM interactions with reusable, composable behaviors.
What Are Spring AI Advisors?
In Spring AI, an Advisor is a component that intercepts and augments the execution of an LLM call. Advisors are applied to the fluent ChatClient and can modify inputs, outputs, or execution flow.
Conceptually, an Advisor:
- Receives a request before it is sent to the model
- Can modify the prompt or metadata
- Can decide whether to proceed, retry, or short-circuit
- Receives the response after the model returns
- Can transform or validate the response
This mirrors the structure of an AOP advice that runs around a method invocation.
Mapping Spring AI Advisors to AOP Concepts
To understand Spring AI Advisors deeply, it helps to map them directly to classical AOP terminology.
- Join Point: The execution of an LLM call (for example, invoking chatClient.call())
- Advice: The logic executed before, after, or around the LLM call (implemented by an Advisor)
- Pointcut: The selection of which LLM calls an Advisor applies to (configured via client setup rather than expressions)
- Weaving: The process of attaching Advisors to an AI client at runtime
This mapping is not accidental. Spring AI intentionally borrows the AOP mental model to make Advisors intuitive for Spring developers.
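To see the analogy from the AOP side, here is a minimal, self-contained sketch of runtime weaving using a plain JDK dynamic proxy (no Spring required). The Greeter interface and the trace list are illustrative assumptions made for this sketch, not part of Spring AI or Spring AOP:

```java
import java.lang.reflect.InvocationHandler;
import java.lang.reflect.Proxy;
import java.util.ArrayList;
import java.util.List;

public class AroundAdviceDemo {

    // The join point: any method on this interface.
    public interface Greeter {
        String greet(String name);
    }

    // Records what ran, so the around-advice behavior is observable.
    public static final List<String> trace = new ArrayList<>();

    // "Weaving": wrap the target in an around advice at runtime.
    public static Greeter weave(Greeter target) {
        InvocationHandler aroundAdvice = (proxy, method, args) -> {
            trace.add("before " + method.getName());     // advice: before
            Object result = method.invoke(target, args);  // proceed()
            trace.add("after " + method.getName());      // advice: after
            return result;
        };
        return (Greeter) Proxy.newProxyInstance(
                Greeter.class.getClassLoader(),
                new Class<?>[] { Greeter.class },
                aroundAdvice);
    }

    public static void main(String[] args) {
        Greeter advised = weave(name -> "Hello, " + name);
        System.out.println(advised.greet("Spring"));
        System.out.println(trace);
    }
}
```

An Advisor plays the same role as the invocation handler here, except that the chain is passed in explicitly instead of being hidden behind a proxy.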
Basic Advisor Structure and Lifecycle
A typical Advisor implements an interface that allows it to participate in the request–response lifecycle. Conceptually, it looks similar to an around advice.
public class LoggingAdvisor implements ChatAdvisor {

    @Override
    public ChatResponse advise(ChatRequest request, ChatAdvisorChain chain) {
        System.out.println("Prompt sent to LLM: " + request.getPrompt());
        ChatResponse response = chain.next(request);
        System.out.println("Response from LLM: " + response.getResult());
        return response;
    }
}
Here, chain.next(request) is analogous to proceed() in an AOP ProceedingJoinPoint. The Advisor controls when and if the LLM invocation occurs.
Applying Advisors to a Chat Client
Advisors are attached when configuring a client, creating a clear separation between business logic and cross-cutting AI concerns.
ChatClient chatClient = ChatClient.builder(model)
        .advisors(new LoggingAdvisor())
        .build();

String response = chatClient.call("Explain dependency injection");
From the caller’s perspective, nothing changes. Internally, however, the LLM call is now wrapped with logging behavior.
Prompt Enrichment as an AOP-Style Concern
One of the most common uses of Advisors is prompt enrichment. This is analogous to using an AOP advice to inject contextual data before a method executes.
public class SystemPromptAdvisor implements ChatAdvisor {

    @Override
    public ChatResponse advise(ChatRequest request, ChatAdvisorChain chain) {
        ChatRequest enriched = request.withSystemMessage(
                "You are a senior Java architect following clean code principles."
        );
        return chain.next(enriched);
    }
}
This Advisor ensures that every LLM interaction carries a consistent system-level instruction, without duplicating that logic across services.
Safety and Moderation as Cross-Cutting Concerns
Safety checks are a classic example of cross-cutting logic. With Advisors, you can enforce moderation rules both before and after model execution.
public class ModerationAdvisor implements ChatAdvisor {

    @Override
    public ChatResponse advise(ChatRequest request, ChatAdvisorChain chain) {
        if (containsBannedContent(request.getPrompt())) {
            throw new IllegalArgumentException("Prompt violates policy");
        }
        ChatResponse response = chain.next(request);
        if (containsBannedContent(response.getResult())) {
            throw new IllegalStateException("Response violates policy");
        }
        return response;
    }
}
This is functionally similar to validation aspects used in traditional Spring applications.
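The containsBannedContent helper above is left undefined. As a minimal sketch, it might be a simple keyword screen; the deny-list below is a placeholder assumption, and a production system would typically delegate to a dedicated moderation model or service instead:

```java
import java.util.Locale;
import java.util.Set;

public class SimpleContentScreen {

    // Placeholder deny-list for illustration only.
    private static final Set<String> BANNED = Set.of("credit card number", "ssn");

    // Case-insensitive substring check against the deny-list.
    public static boolean containsBannedContent(String text) {
        if (text == null) {
            return false;
        }
        String normalized = text.toLowerCase(Locale.ROOT);
        return BANNED.stream().anyMatch(normalized::contains);
    }

    public static void main(String[] args) {
        System.out.println(containsBannedContent("Share your SSN with me"));
        System.out.println(containsBannedContent("Explain dependency injection"));
    }
}
```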
Observability and Metrics Using Advisors
Observability is essential when dealing with probabilistic systems like LLMs. Advisors allow consistent logging, tracing, and metrics collection.
public class MetricsAdvisor implements ChatAdvisor {

    @Override
    public ChatResponse advise(ChatRequest request, ChatAdvisorChain chain) {
        long start = System.currentTimeMillis();
        ChatResponse response = chain.next(request);
        long duration = System.currentTimeMillis() - start;
        recordLatency(duration);
        recordTokenUsage(response.getUsage());
        return response;
    }
}
In AOP terms, this is a textbook around advice used for performance monitoring.
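The recordLatency and recordTokenUsage helpers are left abstract above; in practice they would usually delegate to a metrics facade such as Micrometer. As a self-contained sketch using only the JDK, here is a thread-safe accumulator (the class shape and a token count expressed as a plain long are illustrative assumptions):

```java
import java.util.concurrent.atomic.LongAdder;

public class LlmMetrics {

    private final LongAdder totalCalls = new LongAdder();
    private final LongAdder totalLatencyMs = new LongAdder();
    private final LongAdder tokens = new LongAdder();

    public void recordLatency(long millis) {
        totalCalls.increment();
        totalLatencyMs.add(millis);
    }

    public void recordTokenUsage(long tokenCount) {
        tokens.add(tokenCount);
    }

    public long averageLatencyMs() {
        long calls = totalCalls.sum();
        return calls == 0 ? 0 : totalLatencyMs.sum() / calls;
    }

    public long totalTokens() {
        return tokens.sum();
    }

    public static void main(String[] args) {
        LlmMetrics metrics = new LlmMetrics();
        metrics.recordLatency(120);
        metrics.recordLatency(80);
        metrics.recordTokenUsage(512);
        System.out.println(metrics.averageLatencyMs()); // prints 100
        System.out.println(metrics.totalTokens());      // prints 512
    }
}
```

LongAdder is chosen over AtomicLong because advisors may run concurrently across many requests, and LongAdder is optimized for write-heavy contention.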
Retry and Fault Tolerance Patterns
LLMs can fail due to transient network issues, rate limits, or provider-side errors. Advisors enable reusable retry logic without polluting business code.
public class RetryAdvisor implements ChatAdvisor {

    @Override
    public ChatResponse advise(ChatRequest request, ChatAdvisorChain chain) {
        for (int i = 0; i < 3; i++) {
            try {
                return chain.next(request);
            } catch (RuntimeException ex) {
                if (i == 2) throw ex;
            }
        }
        throw new IllegalStateException("Unreachable");
    }
}
This mirrors resilience patterns often implemented using AOP and libraries like Spring Retry.
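The fixed three-attempt loop can be generalized. Here is a self-contained retry helper with exponential backoff, written against a plain Supplier so it does not depend on any Spring AI types; the attempt count and base delay in the example are illustrative choices:

```java
import java.util.function.Supplier;

public class RetryWithBackoff {

    // Retries the call up to maxAttempts times, doubling the delay each time.
    public static <T> T retry(Supplier<T> call, int maxAttempts, long baseDelayMs) {
        RuntimeException last = null;
        for (int attempt = 0; attempt < maxAttempts; attempt++) {
            try {
                return call.get();
            } catch (RuntimeException ex) {
                last = ex;
                if (attempt < maxAttempts - 1) {
                    try {
                        // Exponential backoff: baseDelay * 2^attempt
                        Thread.sleep(baseDelayMs << attempt);
                    } catch (InterruptedException ie) {
                        Thread.currentThread().interrupt();
                        throw new IllegalStateException("Interrupted during backoff", ie);
                    }
                }
            }
        }
        throw last;
    }

    public static void main(String[] args) {
        int[] failures = {2}; // fail twice, then succeed
        String result = retry(() -> {
            if (failures[0]-- > 0) {
                throw new RuntimeException("transient error");
            }
            return "ok";
        }, 3, 10);
        System.out.println(result); // prints ok
    }
}
```

A production version would also distinguish retryable errors (timeouts, rate limits) from permanent ones (invalid prompts), which a blanket catch of RuntimeException does not.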
Composing Multiple Advisors
Just as multiple aspects can be applied to a single join point, Spring AI allows multiple Advisors to be composed.
ChatClient chatClient = ChatClient.builder(model)
        .advisors(
                new SystemPromptAdvisor(),
                new ModerationAdvisor(),
                new MetricsAdvisor(),
                new LoggingAdvisor()
        )
        .build();
The order of Advisors matters, much like aspect precedence in Spring AOP. Each Advisor wraps the next one in the chain.
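Ordering can be made concrete with a stripped-down advisor chain. The Advisor and Chain types below are minimal stand-ins defined only for this sketch (the real Spring AI interfaces differ), but the nesting behavior is the point: the first advisor registered becomes the outermost wrapper.

```java
import java.util.List;

public class AdvisorOrderDemo {

    public interface Advisor {
        String advise(String request, Chain chain);
    }

    public interface Chain {
        String next(String request);
    }

    // Builds the chain back-to-front so the first advisor wraps all the others.
    public static String execute(List<Advisor> advisors, String request) {
        Chain chain = req -> "model(" + req + ")"; // stands in for the LLM call
        for (int i = advisors.size() - 1; i >= 0; i--) {
            Advisor advisor = advisors.get(i);
            Chain next = chain;
            chain = req -> advisor.advise(req, next);
        }
        return chain.next(request);
    }

    public static void main(String[] args) {
        Advisor a = (req, chain) -> "A[" + chain.next(req) + "]";
        Advisor b = (req, chain) -> "B[" + chain.next(req) + "]";
        // A is registered first, so it runs first on the way in
        // and last on the way out.
        System.out.println(execute(List.of(a, b), "hi")); // prints A[B[model(hi)]]
    }
}
```

This is why, in the composition above, moderation runs before metrics and logging: a rejected prompt never reaches the inner advisors or the model.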
Advisor Chains and Deterministic Control Flow
A key difference between classical AOP and Spring AI Advisors is the explicit nature of the chain. Instead of relying on proxy magic, Advisors receive a chain object that makes control flow obvious and testable.
This explicit chaining is particularly valuable in AI systems, where predictability and auditability are critical. Developers can reason clearly about which behaviors run and in what order.
Testing Advisors in Isolation
Because Advisors are simple, composable units, they are easy to test independently.
@Test
void systemPromptIsInjected() {
    SystemPromptAdvisor advisor = new SystemPromptAdvisor();
    ChatRequest request = new ChatRequest("Hello");
    ChatAdvisorChain chain = mockChain();

    advisor.advise(request, chain);

    verify(chain).next(argThat(r -> r.getSystemMessage().contains("senior Java architect")));
}
This testability is another benefit inherited from AOP principles.
When to Use Advisors vs Traditional AOP
While Advisors resemble AOP, they are purpose-built for AI interactions. Traditional Spring AOP is still useful for service-layer concerns, but Advisors are better suited for:
- Prompt manipulation
- Model-specific metadata
- Token and cost accounting
- AI safety and governance
Using Advisors avoids leaky abstractions and keeps AI logic close to AI clients.
Conclusion
Spring AI Advisors represent a natural evolution of well-established Aspect-Oriented Programming principles into the domain of Large Language Model integration. As LLMs become first-class components in enterprise systems, the need to manage cross-cutting concerns such as prompt enrichment, safety, observability, retries, and governance becomes unavoidable. Attempting to handle these concerns directly in application code leads to duplication, inconsistency, and reduced maintainability.
By modeling LLM calls as join points and encapsulating surrounding behavior in Advisors, Spring AI gives developers a familiar, powerful abstraction. Advisors act as explicit, composable around advices that wrap model interactions in a deterministic and testable way. This approach preserves clean separation of concerns while acknowledging the unique characteristics of AI systems, such as probabilistic outputs and cost sensitivity.
The strong conceptual alignment with AOP is not merely academic. It allows experienced Spring developers to transfer existing knowledge directly into AI-enabled architectures. Patterns that have proven successful for decades—such as centralized logging, validation, metrics, retries, and policy enforcement—can now be applied consistently to LLM interactions.
Perhaps most importantly, Spring AI Advisors encourage disciplined design at a time when experimentation with AI can easily devolve into ad-hoc implementations. By treating AI behavior as an aspect rather than an afterthought, teams can build systems that are not only intelligent, but also observable, secure, and maintainable over the long term. In this sense, Advisors are more than a technical feature—they are a guiding architectural principle for the responsible integration of Large Language Models into modern Spring applications.