In the rapidly evolving world of autonomous agents, LangChain and LangGraph provide powerful abstractions for orchestrating multi-step intelligent behavior with language models. As LLM-driven workflows mature, the ability to build agents that search the web, retrieve results via DuckDuckGo, and summarize findings autonomously is becoming critical in research, enterprise, and productivity applications.

This guide walks through the process of building a fully autonomous AI agent using LangChain and LangGraph. The agent will execute a multi-step plan: receive a user query, search the web, summarize results, and return a digestible answer.

Prerequisites

To follow along, make sure you have the following installed:

```bash
pip install langchain langgraph duckduckgo-search openai
```

You’ll also need:

  • An OpenAI API key (or similar LLM provider)

  • Python 3.9+

Understanding LangChain and LangGraph

LangChain simplifies LLM applications by providing abstractions for memory, tools, agents, and chains. However, LangChain alone lacks an easy way to model finite state logic or graph-based workflows.

This is where LangGraph comes in—a graph-based orchestration framework built on top of LangChain. It allows you to:

  • Define a multi-step process using directed graphs

  • Include conditional routing

  • Support concurrent branches

  • Reuse LangChain tools and chains

Together, LangChain and LangGraph form the foundation for powerful autonomous agent workflows.
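To see the core idea without any dependencies, here is a toy sketch (plain Python, no LangGraph) of a state dictionary flowing through a chain of node functions; the stub nodes stand in for the real search and summarization steps built later:

```python
def run_pipeline(state, nodes):
    """Toy orchestrator: each node reads the state dict and returns updates."""
    for node in nodes:
        state = {**state, **node(state)}
    return state

def search_stub(state):
    # Pretend web search; the real agent would call DuckDuckGo here.
    return {"search_results": f"results for {state['question']}"}

def summarize_stub(state):
    # Pretend summarization; the real agent would call an LLM here.
    return {"summary": f"Summary of {state['search_results']}"}

final = run_pipeline({"question": "What is LangGraph?"}, [search_stub, summarize_stub])
print(final["summary"])  # → Summary of results for What is LangGraph?
```

LangGraph formalizes exactly this pattern, adding typed state, conditional routing, and compilation into a runnable app.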

Define Your Agent’s Purpose and Tools

We want our agent to:

  1. Accept a user query.

  2. Search the web using DuckDuckGo.

  3. Summarize the top results.

  4. Return the final answer.
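Conceptually, the four steps above read from and extend a single state dictionary, which is what LangGraph will manage for us (the values here are illustrative):

```python
# Illustrative walk-through of the state dict across the four steps:
state = {"question": "What is LangGraph?"}                     # 1. accept the query
state["search_results"] = "LangGraph: graph orchestration..."  # 2. search the web
state["summary"] = "LangGraph orchestrates LLM workflows."     # 3. summarize results
answer = state["summary"]                                      # 4. return the answer
```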

Let’s start by defining the tools.

DuckDuckGo Search Tool

```python
from duckduckgo_search import DDGS

def duckduckgo_search_tool(input_text):
    with DDGS() as ddgs:
        results = ddgs.text(input_text, max_results=5)
    return "\n".join(f"{r['title']}: {r['body']}" for r in results)
```

Summarization Tool

We’ll use OpenAI’s gpt-4 or gpt-3.5-turbo to summarize the search results.

```python
from langchain.chat_models import ChatOpenAI
from langchain.prompts import PromptTemplate
from langchain.chains import LLMChain

llm = ChatOpenAI(temperature=0.3, model="gpt-4")

summary_prompt = PromptTemplate.from_template("""
Summarize the following web search results for the question: "{question}".

Results:
{results}

Summary:
""")

summary_chain = LLMChain(llm=llm, prompt=summary_prompt)
```

Define LangGraph Nodes

Each node in LangGraph is a function that takes input and produces output.

Here’s a simplified structure:

```python
# Node: receive the initial input
def receive_input_node(state):
    return {"question": state["question"]}

# Node: web search
def search_node(state):
    question = state["question"]
    results = duckduckgo_search_tool(question)
    return {"question": question, "search_results": results}

# Node: summarization
def summarize_node(state):
    summary = summary_chain.run(question=state["question"], results=state["search_results"])
    return {"question": state["question"], "summary": summary}
```

Build The LangGraph Workflow

We’ll now wire the nodes into a LangGraph workflow. For this basic agent, the graph is a simple linear DAG (directed acyclic graph).

```python
from typing import TypedDict

from langgraph.graph import StateGraph, END

# Define the shared state schema
class AgentState(TypedDict):
    question: str
    search_results: str
    summary: str

# Initialize the graph
workflow = StateGraph(AgentState)

# Add nodes
workflow.add_node("receive_input", receive_input_node)
workflow.add_node("web_search", search_node)
workflow.add_node("summarize", summarize_node)

# Define edges
workflow.set_entry_point("receive_input")
workflow.add_edge("receive_input", "web_search")
workflow.add_edge("web_search", "summarize")
workflow.add_edge("summarize", END)

# Compile
app = workflow.compile()
```

Run The Autonomous Agent

Let’s try it with a real-world question.

```python
input_state = {"question": "What are the latest advancements in quantum computing in 2025?"}
final_state = app.invoke(input_state)
print("🧠 Final Summary:\n", final_state["summary"])
```

You now have an autonomous agent that can:

  • Take any user query

  • Search the web in real time

  • Summarize the results using LLMs

  • Return intelligent insights

Adding Conditional Logic (Optional)

You can improve your agent by adding conditional nodes—like skipping search if the input already contains known facts.

```python
def should_search(state):
    # Search only if the question is substantial; otherwise end early
    return "web_search" if len(state.get("question", "")) > 10 else END

workflow.add_conditional_edges("receive_input", should_search)
```

This turns the static flow into a dynamic decision tree, enabling smarter routing based on context.
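Because a router is just a plain function of the state, you can sketch and unit-test richer routing logic before wiring it into the graph. In this self-contained sketch, `KNOWN_TOPICS` is a hypothetical set of subjects the agent can answer without searching, and the string `"__end__"` stands in for `langgraph.graph.END`:

```python
END = "__end__"  # stand-in for langgraph.graph.END in this sketch

KNOWN_TOPICS = {"python syntax", "http status codes"}  # hypothetical built-in knowledge

def route_question(state):
    question = state.get("question", "").lower()
    if not question:
        return END  # nothing to do
    if any(topic in question for topic in KNOWN_TOPICS):
        return "summarize"  # skip the web search for topics we already know
    return "web_search"
```

You would register it with `workflow.add_conditional_edges("receive_input", route_question)`, just like `should_search` above.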

Making the Agent Re-entrant (Optional for Iterative Planning)

Want to build a ReAct-style agent that rethinks steps?

LangGraph supports loops via “reentrant” nodes. Here’s an example for adding an iteration loop:

```python
workflow.add_edge("summarize", "web_search")  # Loop back for clarification
```

You can conditionally reroute based on summarization confidence or LLM feedback.
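Note that an unconditional edge back to `web_search` would loop forever, so in practice you would gate the loop with a conditional router. Here is a minimal self-contained sketch; the length threshold and iteration cap are assumptions, and the string `"__end__"` stands in for `langgraph.graph.END`:

```python
END = "__end__"   # stand-in for langgraph.graph.END in this sketch
MAX_LOOPS = 3     # assumed cap on refinement passes

def continue_or_stop(state):
    # Stop once the summary looks substantial or the loop cap is reached;
    # otherwise go around again for another search pass.
    if len(state.get("summary", "")) > 200:
        return END
    if state.get("iterations", 0) >= MAX_LOOPS:
        return END
    return "web_search"
```

You would register it with `workflow.add_conditional_edges("summarize", continue_or_stop)` and have `search_node` increment an `iterations` counter in the state.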

Bonus: Packaging into a Class

You can wrap your agent into a class for easy reuse:

```python
class WebResearchAgent:
    def __init__(self):
        self.app = app

    def run(self, query):
        return self.app.invoke({"question": query})
```

Then use:

```python
agent = WebResearchAgent()
summary = agent.run("Explain the economic implications of AI in Africa")
print(summary["summary"])
```

Considerations for Production

If you’re deploying this agent in production, consider:

  • Rate limiting and retries for DuckDuckGo requests

  • Caching previously seen queries to avoid redundant computation

  • Output formatting using Markdown or HTML

  • Logging and observability using tools like OpenTelemetry

You can even plug it into FastAPI or LangServe for a REST interface.
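As a sketch of the retry and caching points above (the backoff parameters are arbitrary choices, and the search function is passed in so the snippet stays runnable without network access):

```python
import time
from functools import lru_cache

def with_retries(search_fn, attempts=3, base_delay=1.0):
    """Wrap a flaky search function with exponential-backoff retries."""
    def wrapper(query):
        for attempt in range(attempts):
            try:
                return search_fn(query)
            except Exception:
                if attempt == attempts - 1:
                    raise  # out of retries; surface the error
                time.sleep(base_delay * (2 ** attempt))
    return wrapper

@lru_cache(maxsize=256)
def cached_search(query):
    # In the real agent this would call the DuckDuckGo tool;
    # a stand-in keeps the sketch self-contained.
    return f"results for: {query}"
```

Because `cached_search` is memoized, repeated queries return instantly without hitting the search backend again.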

Extending Your Agent: From Summary To Insights

Here are some ways to make your autonomous agent even more powerful:

| Extension | Tool | Purpose |
| --- | --- | --- |
| Named Entity Recognition | spaCy | Extract people, places, companies |
| Source Attribution | LangChain RAG | Attach sources to summaries |
| Sentiment Analysis | OpenAI or Hugging Face | Detect tone or bias |
| Image Search | DuckDuckGo Images | Visual result summarization |

Autonomous AI agents are no longer science fiction—they’re here and available thanks to LangChain and LangGraph. In this guide, we demonstrated how to:

  • Build a purpose-driven agent with web search and summarization skills

  • Use DuckDuckGo for fast and free web results

  • Summarize information with OpenAI’s GPT models

  • Orchestrate everything using LangGraph’s powerful state and DAG-based logic

The architecture is modular, adaptable, and a solid foundation for production use. By integrating LLMs with external tools and crafting structured workflows, developers can build agents that mimic research analysts, productivity assistants, or even investigative reporters.

In a world overflowing with data, the true power lies in agents that can autonomously seek, understand, and synthesize information—exactly what you now know how to build.