Step 1 of 11: Introduction to LangGraph
LangGraph is a framework for building stateful, multi-actor applications powered by large language models. Developed by LangChain Inc., it lets you model agent workflows as graphs rather than linear chains, enabling cycles, branching, and complex orchestration patterns that traditional sequential pipelines simply cannot express.
Why LangGraph Exists
As LLM-powered applications grew more complex, developers hit fundamental limitations with chain-based architectures. Consider a research assistant that needs to:
- Search the web for information
- Evaluate whether the results are sufficient
- Decide to search again or generate a final answer
- Optionally ask a human for clarification
This workflow has loops, conditional branches, and state that persists across iterations. A simple chain cannot express this naturally. LangGraph solves this by modeling the workflow as a directed graph.
LangGraph vs. LangChain
LangGraph is not a replacement for LangChain. They serve different purposes and work together.
| Feature | LangChain | LangGraph |
|---|---|---|
| Architecture | Linear chains & pipelines | Directed graphs with cycles |
| State | Passed through chain | Centralized, persistent state |
| Loops | Not natively supported | First-class support |
| Best for | Simple pipelines, RAG | Agents, multi-step workflows |
| Human-in-the-loop | Manual implementation | Built-in support |
| Persistence | Not built-in | Checkpointing built-in |
Core Philosophy: Agents as Graphs
In LangGraph, every agent workflow is modeled using three primitives:
- State — A shared data structure that flows through the entire graph and holds all relevant information
- Nodes — Python functions that read state, perform some action (call an LLM, run a tool, transform data), and return updated state
- Edges — Connections between nodes that control execution flow, including conditional edges that create decision points and cycles
Figure 1: A typical LangGraph agent workflow with a cycle between the Agent and Tools nodes.
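To make these three primitives concrete before any LangGraph code, here is a plain-Python sketch of the execution model: state flows through node functions while an edge table (including one cycle) picks the next node. All names are illustrative; this is a mental model, not LangGraph's implementation.

```python
# Conceptual sketch (plain Python, no LangGraph): state, nodes, edges.
state = {"count": 0, "done": False}            # State: shared data structure

def increment(s):                              # Node: reads state, returns update
    return {"count": s["count"] + 1}

def check(s):                                  # Node: decides when we are done
    return {"done": s["count"] >= 3}

edges = {                                      # Edges: control flow, with a cycle
    "increment": "check",
    "check": lambda s: "end" if s["done"] else "increment",
}

node_fns = {"increment": increment, "check": check}
current = "increment"
while current != "end":
    state.update(node_fns[current](state))     # merge the partial update
    nxt = edges[current]
    current = nxt(state) if callable(nxt) else nxt

print(state)  # {'count': 3, 'done': True}
```

The conditional edge on "check" is what makes the loop possible; a linear chain has no way to express "go back".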
When to Use LangGraph
- Building ReAct-style agents that reason and call tools in a loop
- Multi-agent systems where multiple LLM agents collaborate
- Workflows requiring human-in-the-loop approval steps
- Applications that need state persistence across conversations
- Complex pipelines with conditional branching and error handling
Step 2 of 11: Installation & Setup
Before we start building graphs, let's set up our development environment with all the necessary packages and configuration.
Installing Required Packages
LangGraph works alongside the LangChain ecosystem. Install the core packages:
# Install core LangGraph and LangChain packages
pip install langgraph langchain-openai langchain-core
# Optional: additional tools and utilities
pip install langchain-community tavily-python
# Verify installation
python -c "import importlib.metadata; print('LangGraph version:', importlib.metadata.version('langgraph'))"
Setting Up API Keys
LangGraph agents typically use LLMs that require API keys. We will use OpenAI in this tutorial, but you can swap in any LangChain-compatible model.
import os
# Option 1: Set environment variables directly
os.environ["OPENAI_API_KEY"] = "sk-your-key-here"
# Option 2 (recommended): Use a .env file with python-dotenv
# pip install python-dotenv
from dotenv import load_dotenv
load_dotenv() # reads from .env file in project root
# Option 3: Tavily for web search (used in later steps)
os.environ["TAVILY_API_KEY"] = "tvly-your-key-here"
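Whichever option you choose, it is worth failing fast when a key is missing rather than getting a cryptic error mid-run. Here is a small helper for that (the name require_env is my own, not part of any library):

```python
import os

def require_env(name: str) -> str:
    """Return an environment variable's value, failing loudly if unset."""
    value = os.getenv(name)
    if not value:
        raise RuntimeError(f"{name} is not set -- add it to your .env file")
    return value

# Demo only: set a placeholder so this snippet runs standalone
os.environ.setdefault("OPENAI_API_KEY", "sk-placeholder")
print(require_env("OPENAI_API_KEY"))
```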
Never hardcode API keys in source files you commit. Keep them in a .env file that is listed in your .gitignore.
Project Structure
Here is the recommended project structure for a LangGraph application:
langgraph-tutorial/
├── .env # API keys (add to .gitignore!)
├── requirements.txt # Package dependencies
├── simple_chatbot.py # Step 6: First graph
├── tool_agent.py # Step 7: Tool integration
├── react_agent.py # Step 8: ReAct pattern
├── checkpoint_agent.py # Step 9: Memory & human-in-the-loop
└── multi_agent.py # Step 10: Multi-agent system
Basic Imports
These are the imports you will use throughout this tutorial. You don't need to memorize them all now; we'll introduce each one when it becomes relevant.
# --- Graph construction ---
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
# --- Prebuilt components ---
from langgraph.prebuilt import ToolNode, tools_condition
# --- Checkpointing ---
from langgraph.checkpoint.memory import MemorySaver
# --- LLM provider ---
from langchain_openai import ChatOpenAI
# --- Message types ---
from langchain_core.messages import (
HumanMessage,
AIMessage,
SystemMessage,
)
# --- Tool decorator ---
from langchain_core.tools import tool
# --- Python standard library ---
from typing import TypedDict, Annotated, Literal
import operator
Verify Everything Works
Run this quick sanity check to make sure your environment is properly configured:
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
# Test that the LLM is accessible
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
response = llm.invoke([HumanMessage(content="Say 'LangGraph is ready!' in one sentence.")])
print(response.content)
# Expected output: LangGraph is ready!
Step 3 of 11: Core Concepts – State
State is the central nervous system of every LangGraph application. It is a shared data structure that flows through every node in the graph, carrying all the information the agent needs to make decisions.
How State Works
In LangGraph, state is defined as a Python TypedDict. Every node in the graph receives the full state as input and returns a partial update (only the keys that changed). LangGraph automatically merges these partial updates into the overall state.
Defining State with TypedDict
The simplest way to define state is with Python's TypedDict:
from typing import TypedDict
class AgentState(TypedDict):
"""State that flows through the entire graph."""
query: str # The user's original question
context: str # Retrieved information
answer: str # The generated answer
iteration: int # How many times we've looped
When a node returns {"answer": "Paris"}, LangGraph replaces only the answer key in the state. All other keys remain unchanged.
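The merge rule for keys without reducers can be pictured in a few lines of plain Python (merge_update is an illustrative stand-in, not LangGraph's actual implementation):

```python
# Simplified model of LangGraph's default merge: a partial node update
# overwrites only the keys it contains; everything else is untouched.
def merge_update(state: dict, update: dict) -> dict:
    new_state = dict(state)   # copy the full state
    new_state.update(update)  # replace only the returned keys
    return new_state

state = {"query": "capital of France?", "context": "", "answer": "", "iteration": 0}
state = merge_update(state, {"answer": "Paris"})
print(state["answer"])  # Paris
print(state["query"])   # capital of France? (unchanged)
```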
MessagesState (Built-in)
For chatbot-style applications, LangGraph provides a convenient built-in state type called MessagesState. It comes with a single key, messages, that automatically appends new messages rather than replacing them.
from langgraph.graph.message import MessagesState
# MessagesState is equivalent to:
# class MessagesState(TypedDict):
# messages: Annotated[list[AnyMessage], add_messages]
#
# The add_messages reducer automatically appends new messages
# to the existing list instead of replacing it.
# Usage: your nodes just return {"messages": [new_message]}
# and it gets appended to the conversation history.
MessagesState handles this automatically using a reducer function.
Custom State with Annotated Types and Reducers
For more complex scenarios, you can use Annotated types to define reducer functions that control how state updates are merged. A reducer takes the current value and the update, and returns the new value.
from typing import TypedDict, Annotated
from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages
import operator
class ResearchState(TypedDict):
"""Custom state with multiple reducer strategies."""
# Messages use add_messages reducer: new messages are APPENDED
messages: Annotated[list[AnyMessage], add_messages]
# Sources use operator.add: new lists are CONCATENATED
sources: Annotated[list[str], operator.add]
# Simple fields (no reducer): values are REPLACED
query: str
final_answer: str
# Counter with a custom reducer: values are ADDED
search_count: Annotated[int, operator.add]
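How reducers change the merge can be sketched like this (a simplified model of what LangGraph does internally; in the real library the reducers come from the Annotated type hints, not a hand-built dict):

```python
import operator

# Per-key reducers: how to COMBINE old and new values for these keys
reducers = {"sources": operator.add, "search_count": operator.add}

def merge_with_reducers(state: dict, update: dict) -> dict:
    out = dict(state)
    for key, new_val in update.items():
        if key in reducers:
            out[key] = reducers[key](state[key], new_val)  # combine
        else:
            out[key] = new_val                             # replace
    return out

state = {"sources": ["a.com"], "search_count": 1, "query": "old"}
state = merge_with_reducers(state, {"sources": ["b.com"], "search_count": 1, "query": "new"})
print(state)  # {'sources': ['a.com', 'b.com'], 'search_count': 2, 'query': 'new'}
```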
Reducer Behavior Summary
| Reducer | Behavior | Example |
|---|---|---|
| None (default) | Replace old value | "hello" → "world" = "world" |
| add_messages | Append messages intelligently | [msg1] + [msg2] = [msg1, msg2] |
| operator.add | Concatenate lists or add numbers | [a, b] + [c] = [a, b, c] |
Complete State Example
Here is a practical example showing how state flows through a simple two-node graph:
from typing import TypedDict, Annotated
import operator
class CounterState(TypedDict):
value: int # replaced each time
history: Annotated[list[str], operator.add] # appended each time
def double_it(state: CounterState) -> dict:
"""Node that doubles the value."""
new_val = state["value"] * 2
return {
"value": new_val,
"history": [f"doubled to {new_val}"]
}
def add_ten(state: CounterState) -> dict:
"""Node that adds ten."""
new_val = state["value"] + 10
return {
"value": new_val,
"history": [f"added 10 to get {new_val}"]
}
# Initial state: {"value": 5, "history": []}
# After double_it: {"value": 10, "history": ["doubled to 10"]}
# After add_ten: {"value": 20, "history": ["doubled to 10", "added 10 to get 20"]}
Step 4 of 11: Core Concepts – Nodes
Nodes are the workhorses of a LangGraph graph. Each node is simply a Python function that takes the current state as input, performs some operation, and returns a partial state update.
Node Signature
Every node function follows this pattern:
def my_node(state: MyState) -> dict:
"""
Args:
state: The FULL current state of the graph.
Returns:
A dict with ONLY the keys you want to update.
Keys not included remain unchanged.
"""
# Read from state
current_value = state["some_key"]
# Do something (call LLM, run tool, transform data)
new_value = process(current_value)
# Return partial state update
return {"some_key": new_value}
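For a concrete, runnable instance of that signature (the names here are illustrative), consider a node that normalizes a string field:

```python
from typing import TypedDict

class CleanState(TypedDict):
    some_key: str

def normalize_node(state: CleanState) -> dict:
    """Node that trims whitespace and lowercases a string field."""
    return {"some_key": state["some_key"].strip().lower()}

print(normalize_node({"some_key": "  HELLO World  "}))  # {'some_key': 'hello world'}
```

Because nodes are ordinary functions, you can unit-test them like this without building a graph at all.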
Chatbot Node
The most common node type calls an LLM and returns its response as a message:
from langgraph.graph.message import MessagesState
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
def chatbot_node(state: MessagesState) -> dict:
"""Call the LLM with the full conversation history."""
# state["messages"] contains all messages so far
response = llm.invoke(state["messages"])
# Return the AI response; add_messages reducer appends it
return {"messages": [response]}
Tool Execution Node
A node that executes tools based on the LLM's tool call decisions:
from langchain_core.messages import ToolMessage
# Assumes a registry mapping tool names to tool objects, e.g.:
# tool_registry = {t.name: t for t in [web_search, calculator]}
def tool_executor_node(state: MessagesState) -> dict:
"""Execute tools that the LLM requested."""
last_message = state["messages"][-1]
results = []
for tool_call in last_message.tool_calls:
# Look up the tool by name and invoke it
tool_fn = tool_registry[tool_call["name"]]
result = tool_fn.invoke(tool_call["args"])
# Wrap the result in a ToolMessage
results.append(
ToolMessage(
content=str(result),
tool_call_id=tool_call["id"],
)
)
return {"messages": results}
You rarely need to write this by hand: LangGraph ships a prebuilt ToolNode that handles all of this automatically (covered in Step 7).
Data Processing Node
Nodes are not limited to LLM calls. They can perform any computation:
from typing import TypedDict, Annotated
import operator
class AnalysisState(TypedDict):
raw_text: str
word_count: int
keywords: Annotated[list[str], operator.add]
summary: str
def analyze_text(state: AnalysisState) -> dict:
"""Node that analyzes text without calling an LLM."""
text = state["raw_text"]
# Pure Python processing
words = text.split()
common_words = {"the", "a", "an", "is", "are", "was", "were"}
keywords = [w.lower() for w in words if w.lower() not in common_words and len(w) > 3]
return {
"word_count": len(words),
"keywords": keywords[:10], # top 10 keywords
}
def summarize_text(state: AnalysisState) -> dict:
"""Node that uses an LLM to summarize."""
llm = ChatOpenAI(model="gpt-4o-mini")
response = llm.invoke(
f"Summarize this in one sentence: {state['raw_text']}"
)
return {"summary": response.content}
Adding Nodes to a Graph
Once you define your node functions, you register them with the StateGraph:
from langgraph.graph import StateGraph
# Create the graph with your state type
graph = StateGraph(AnalysisState)
# Add nodes - the string is the node's name in the graph
graph.add_node("analyze", analyze_text)
graph.add_node("summarize", summarize_text)
# Node names are used when defining edges (next step!)
Step 5 of 11: Core Concepts – Edges
Edges define the flow of execution between nodes. They determine which node runs next after the current node finishes. LangGraph supports three types of edges: normal edges, conditional edges, and entry/finish points.
Normal Edges
A normal edge creates a direct, unconditional connection between two nodes. When node A finishes, node B always runs next.
from langgraph.graph import StateGraph, START, END
graph = StateGraph(MyState)
graph.add_node("fetch_data", fetch_data)
graph.add_node("process_data", process_data)
graph.add_node("generate_report", generate_report)
# Normal edges: linear flow
graph.add_edge(START, "fetch_data") # Entry point
graph.add_edge("fetch_data", "process_data") # fetch -> process
graph.add_edge("process_data", "generate_report") # process -> report
graph.add_edge("generate_report", END) # Exit point
START and END Special Nodes
LangGraph provides two special sentinel nodes:
- START — The entry point of the graph. You must define an edge from START to your first node.
- END — The terminal node. When execution reaches END, the graph stops and returns the final state.
START and END are not real nodes you define. They are constants imported from langgraph.graph that serve as markers for the graph's entry and exit points.
Conditional Edges
Conditional edges are the most powerful feature of LangGraph. They let you route execution based on the current state, enabling branching and loops.
from typing import Literal
def route_after_agent(state: MyState) -> Literal["tools", "end"]:
"""Decide what happens after the agent node runs.
This function inspects the state and returns a STRING
that maps to the next node name.
"""
last_message = state["messages"][-1]
# If the LLM made tool calls, go to the tools node
if last_message.tool_calls:
return "tools"
# Otherwise, we're done
return "end"
# Register the conditional edge
graph.add_conditional_edges(
source="agent", # After this node finishes...
path=route_after_agent, # ...run this function to decide next node
path_map={ # Map return values to actual node names
"tools": "tool_node",
"end": END,
},
)
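Because a routing function is plain Python over the state, you can unit-test it with stub messages before wiring it into a graph. Here StubMessage is a hypothetical test double standing in for an AIMessage:

```python
from dataclasses import dataclass, field
from typing import Literal

@dataclass
class StubMessage:
    """Minimal stand-in for an AI message with a tool_calls attribute."""
    tool_calls: list = field(default_factory=list)

def route_after_agent(state) -> Literal["tools", "end"]:
    """Route to tools if the last message requested any tool calls."""
    last_message = state["messages"][-1]
    return "tools" if last_message.tool_calls else "end"

print(route_after_agent({"messages": [StubMessage(tool_calls=[{"name": "search"}])]}))  # tools
print(route_after_agent({"messages": [StubMessage()]}))  # end
```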
Creating Cycles with Conditional Edges
By routing back to a previous node, you create a cycle. This is how LangGraph implements agent loops:
# This creates the classic ReAct agent loop:
# START -> agent -> (tools -> agent -> tools -> ...) -> END
graph.add_edge(START, "agent")
# Conditional: agent decides if we need tools or are done
graph.add_conditional_edges("agent", route_after_agent, {
"tools": "tool_node",
"end": END,
})
# After tools run, ALWAYS go back to the agent
graph.add_edge("tool_node", "agent") # This creates the cycle!
Figure 2: Conditional edges create cycles -- the Agent-Tools loop runs until the agent decides it is done.
Edge Types Summary
| Edge Type | Method | Use Case |
|---|---|---|
| Normal | add_edge(A, B) | Fixed sequential flow |
| Conditional | add_conditional_edges(A, fn, map) | Dynamic routing, loops |
| Entry | add_edge(START, A) | Graph starting point |
| Finish | add_edge(A, END) | Graph termination |
Step 6 of 11: Building Your First Graph
Now that you understand State, Nodes, and Edges, let's put them all together and build a complete, runnable chatbot using LangGraph. This is the "Hello World" of graph-based agents.
What We Are Building
A simple chatbot that:
- Takes a user message
- Sends it to an LLM (GPT-4o-mini)
- Returns the response
This is intentionally simple so you can see the full LangGraph lifecycle: define state, create nodes, add edges, compile, and run.
Complete Working Code
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
# ── Step 1: Initialize the LLM ─────────────────────────────
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# ── Step 2: Define the node function ───────────────────────
def chatbot(state: MessagesState) -> dict:
"""The only node in our graph: call the LLM."""
response = llm.invoke(state["messages"])
return {"messages": [response]}
# ── Step 3: Build the graph ────────────────────────────────
# Create a StateGraph using the built-in MessagesState
graph_builder = StateGraph(MessagesState)
# Add our chatbot node
graph_builder.add_node("chatbot", chatbot)
# Define the flow: START -> chatbot -> END
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
# ── Step 4: Compile the graph ──────────────────────────────
# Compiling validates the graph and returns a runnable object
graph = graph_builder.compile()
# ── Step 5: Run the graph ──────────────────────────────────
result = graph.invoke({
"messages": [HumanMessage(content="What is LangGraph?")]
})
# The result is the final state
print(result["messages"][-1].content)
Understanding Each Step
1. StateGraph Construction
StateGraph(MessagesState) creates a new graph builder. The argument tells LangGraph the shape of the state. All nodes in this graph must accept and return data matching this state schema.
2. Adding Nodes
add_node("chatbot", chatbot) registers a Python function as a named node. The string "chatbot" is used to reference this node when defining edges.
3. Adding Edges
We add two edges: START -> chatbot (entry point) and chatbot -> END (exit). This creates a simple linear flow.
4. Compiling
.compile() validates the graph structure (checks for disconnected nodes, missing edges, etc.) and returns a compiled graph that can be invoked.
5. Invoking
.invoke() runs the graph with the given initial state. It returns the final state after all nodes have executed.
Interactive Chat Loop
Let's extend this into an interactive chat session:
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
def chatbot(state: MessagesState) -> dict:
response = llm.invoke(state["messages"])
return {"messages": [response]}
graph_builder = StateGraph(MessagesState)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
graph = graph_builder.compile()
# ── Interactive loop ───────────────────────────────────────
print("LangGraph Chatbot (type 'quit' to exit)")
print("-" * 45)
while True:
user_input = input("\nYou: ")
if user_input.lower() in ("quit", "exit", "q"):
print("Goodbye!")
break
result = graph.invoke({
"messages": [HumanMessage(content=user_input)]
})
ai_message = result["messages"][-1]
print(f"\nBot: {ai_message.content}")
Note that this chatbot has no memory yet: each invoke() call starts fresh. We will add memory with checkpointing in Step 9.
Streaming Output
For a better user experience, you can stream the graph's output token by token:
# Stream events from the graph
for event in graph.stream(
{"messages": [HumanMessage(content="Explain LangGraph in 3 sentences.")]},
stream_mode="values",
):
# Each event contains the state after a node executes
last_msg = event["messages"][-1]
last_msg.pretty_print()
Step 7 of 11: Tool Integration
Tools give your agent the ability to take actions in the real world: search the web, query databases, call APIs, perform calculations, and more. LangGraph makes tool integration seamless with the @tool decorator and prebuilt components.
Defining Tools with @tool
A tool is just a Python function decorated with @tool. The docstring becomes the tool's description that the LLM uses to decide when to call it.
from langchain_core.tools import tool
@tool
def web_search(query: str) -> str:
"""Search the web for current information about a topic.
Use this when you need up-to-date information that may not
be in your training data.
Args:
query: The search query string.
"""
# In production, use Tavily, SerpAPI, or similar
# For this tutorial, we'll simulate a search
return f"Search results for '{query}': LangGraph is a framework for building stateful AI agents using graph-based orchestration. Latest version is 0.2+."
@tool
def calculator(expression: str) -> str:
"""Evaluate a mathematical expression.
Use this for any calculations the user asks about.
Args:
expression: A valid Python math expression (e.g., '2 + 2', '100 * 0.15').
"""
try:
# WARNING: eval runs arbitrary code; acceptable only in this local demo
result = eval(expression)
return f"Result: {result}"
except Exception as e:
return f"Error evaluating expression: {e}"
# Collect tools into a list
tools = [web_search, calculator]
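Since eval executes arbitrary Python, a production calculator tool should parse the expression instead of evaluating it directly. One possible sketch uses the standard-library ast module (safe_eval is my own helper and covers basic arithmetic only):

```python
import ast
import operator as op

# Supported operators -- anything outside this whitelist is rejected
OPS = {ast.Add: op.add, ast.Sub: op.sub, ast.Mult: op.mul,
       ast.Div: op.truediv, ast.Pow: op.pow, ast.USub: op.neg}

def safe_eval(expression: str) -> float:
    """Evaluate a basic arithmetic expression without using eval."""
    def ev(node):
        if isinstance(node, ast.Constant) and isinstance(node.value, (int, float)):
            return node.value
        if isinstance(node, ast.BinOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.left), ev(node.right))
        if isinstance(node, ast.UnaryOp) and type(node.op) in OPS:
            return OPS[type(node.op)](ev(node.operand))
        raise ValueError(f"unsupported expression: {expression!r}")
    return ev(ast.parse(expression, mode="eval").body)

print(safe_eval("2 + 3 * 4"))  # 14
```

Anything that is not a number or whitelisted arithmetic operator (names, calls, attribute access) raises ValueError instead of executing.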
Binding Tools to the LLM
You must tell the LLM about available tools using .bind_tools(). This attaches the tool schemas to every request sent to the model, so it knows what tools it can call and what parameters they take.
from langchain_openai import ChatOpenAI
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
# Bind tools to the LLM - now it knows about web_search and calculator
llm_with_tools = llm.bind_tools(tools)
ToolNode: Automatic Tool Execution
LangGraph's prebuilt ToolNode automatically executes whatever tools the LLM decides to call. It reads tool call requests from the last AI message and returns the results as ToolMessage objects.
from langgraph.prebuilt import ToolNode
# Create a ToolNode from our list of tools
tool_node = ToolNode(tools)
# That's it! This node will:
# 1. Read the last AI message from state["messages"]
# 2. Extract any tool_calls from the message
# 3. Execute each tool with the given arguments
# 4. Return ToolMessage results appended to messages
tools_condition: Conditional Routing
The prebuilt tools_condition function checks whether the LLM's last message contains tool calls. If it does, execution routes to the tools node; otherwise, it routes to END.
from langgraph.prebuilt import tools_condition
# tools_condition returns:
# "tools" if the last AI message has tool_calls
# "__end__" if the last AI message has no tool_calls (just text)
# Use it as a conditional edge:
graph.add_conditional_edges("chatbot", tools_condition)
Complete Example: Chatbot with Tools
Here is the full working code for a chatbot that can search the web and do calculations:
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from langgraph.prebuilt import ToolNode, tools_condition
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
# ── Define tools ───────────────────────────────────────────
@tool
def web_search(query: str) -> str:
"""Search the web for current information about a topic."""
return f"Search results for '{query}': LangGraph v0.2 supports cycles, persistence, and human-in-the-loop patterns."
@tool
def calculator(expression: str) -> str:
"""Evaluate a mathematical expression."""
try:
return f"Result: {eval(expression)}"  # demo only: eval is unsafe on untrusted input
except Exception as e:
return f"Error: {e}"
tools = [web_search, calculator]
# ── Set up the LLM with tools ─────────────────────────────
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
llm_with_tools = llm.bind_tools(tools)
# ── Define the chatbot node ───────────────────────────────
def chatbot(state: MessagesState) -> dict:
"""Call the LLM (which may decide to use tools)."""
response = llm_with_tools.invoke(state["messages"])
return {"messages": [response]}
# ── Build the graph ───────────────────────────────────────
graph_builder = StateGraph(MessagesState)
# Add nodes
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_node("tools", ToolNode(tools))
# Add edges
graph_builder.add_edge(START, "chatbot")
graph_builder.add_conditional_edges("chatbot", tools_condition)
graph_builder.add_edge("tools", "chatbot") # Loop back after tools
# Compile and run
graph = graph_builder.compile()
# ── Test it ───────────────────────────────────────────────
result = graph.invoke({
"messages": [HumanMessage(content="Search for the latest LangGraph features, then calculate 42 * 17")]
})
for msg in result["messages"]:
print(f"{msg.type}: {msg.content[:200]}")
if hasattr(msg, "tool_calls") and msg.tool_calls:
print(f" Tool calls: {[tc['name'] for tc in msg.tool_calls]}")
When you run this, the LLM emits tool calls for web_search and calculator, the ToolNode executes them, and the results are sent back to the LLM, which then generates a final response combining all the information.
Step 8 of 11: Conditional Routing & Cycles
The real power of LangGraph lies in its ability to express cycles -- loops where the agent repeatedly reasons and acts until a task is complete. This is the foundation of the ReAct (Reason + Act) pattern.
The ReAct Pattern
ReAct is an agent design pattern where the LLM:
- Reasons about what to do next based on the current state
- Acts by calling a tool
- Observes the tool result
- Repeats until the task is complete
In LangGraph, this is implemented as a cycle between the agent node and the tools node.
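The four steps above can be sketched as a plain control loop with a stubbed "LLM". Everything here is illustrative; the real version is the agent/tools cycle built later in this step:

```python
def stub_llm(messages: list) -> dict:
    """Fake LLM: requests a tool once, then answers. Illustrative only."""
    if not any(m.startswith("OBSERVATION:") for m in messages):
        return {"tool_call": "get_date"}               # Reason -> decide to Act
    return {"answer": f"Today is {messages[-1].split(':', 1)[1]}"}

def run_react(question: str, tools: dict) -> str:
    messages = [question]
    while True:                                        # Repeat until done
        step = stub_llm(messages)
        if "tool_call" in step:
            result = tools[step["tool_call"]]()        # Act
            messages.append(f"OBSERVATION:{result}")   # Observe
        else:
            return step["answer"]                      # Task complete

print(run_react("What is today's date?", {"get_date": lambda: "2024-01-01"}))
# Today is 2024-01-01
```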
The should_continue Pattern
A common approach is to write a should_continue function that decides whether the agent should keep looping or stop:
from typing import Literal
def should_continue(state: MessagesState) -> Literal["tools", "__end__"]:
"""Decide whether to continue the agent loop or stop.
This function is called after the agent node runs.
It checks if the LLM wants to call any tools.
"""
last_message = state["messages"][-1]
# If the LLM made tool calls, continue to the tools node
if last_message.tool_calls:
return "tools"
# Otherwise, the LLM is done -- go to END
return "__end__"
"__end__" is the internal representation of the END node. You can also use the END constant directly in your path_map. The prebuilt tools_condition does exactly what should_continue does above.
Building a ReAct Agent with Cycles
Here is a complete ReAct agent that loops between reasoning and tool use:
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from langgraph.prebuilt import ToolNode
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from typing import Literal
# ── Tools ──────────────────────────────────────────────────
@tool
def search_knowledge_base(query: str) -> str:
"""Search the internal knowledge base for information."""
knowledge = {
"langgraph": "LangGraph is a library for building stateful agents with LLMs using graph-based orchestration.",
"react": "ReAct is a pattern where agents alternate between reasoning and acting.",
"state": "State in LangGraph is a TypedDict that flows through every node.",
}
for key, value in knowledge.items():
if key in query.lower():
return value
return "No relevant information found."
@tool
def get_current_date() -> str:
"""Get the current date and time."""
from datetime import datetime
return datetime.now().strftime("%Y-%m-%d %H:%M:%S")
tools = [search_knowledge_base, get_current_date]
# ── LLM with tools ────────────────────────────────────────
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0).bind_tools(tools)
# ── Nodes ──────────────────────────────────────────────────
def agent(state: MessagesState) -> dict:
"""The reasoning node: calls the LLM to decide what to do."""
system = SystemMessage(content=(
"You are a helpful research assistant. "
"Use tools to look up information when needed. "
"When you have enough information, provide a final answer."
))
response = llm.invoke([system] + state["messages"])
return {"messages": [response]}
# ── Routing function ──────────────────────────────────────
def should_continue(state: MessagesState) -> Literal["tools", "__end__"]:
"""Check if the agent wants to use tools or is done."""
last_message = state["messages"][-1]
if last_message.tool_calls:
return "tools"
return "__end__"
# ── Build the graph ───────────────────────────────────────
graph_builder = StateGraph(MessagesState)
# Add nodes
graph_builder.add_node("agent", agent)
graph_builder.add_node("tools", ToolNode(tools))
# Define the flow with a cycle
graph_builder.add_edge(START, "agent")
graph_builder.add_conditional_edges("agent", should_continue)
graph_builder.add_edge("tools", "agent") # CYCLE: tools -> agent
# Compile
react_agent = graph_builder.compile()
# ── Run ───────────────────────────────────────────────────
result = react_agent.invoke({
"messages": [HumanMessage(
content="What is LangGraph? Also, what is today's date?"
)]
})
# Print the full conversation
for msg in result["messages"]:
role = msg.type.upper()
if hasattr(msg, "tool_calls") and msg.tool_calls:
print(f"\n{role}: [Calling tools: {[tc['name'] for tc in msg.tool_calls]}]")
elif msg.type == "tool":
print(f"TOOL ({msg.name}): {msg.content}")
else:
print(f"\n{role}: {msg.content}")
Execution Flow Walkthrough
Here is what happens when we run the agent above:
- The agent node calls the LLM, which responds with tool calls for search_knowledge_base and get_current_date
- should_continue sees the tool calls and routes to the tools node
- ToolNode executes both tools and appends their results as ToolMessages
- Execution loops back to agent; the LLM now has what it needs and replies without tool calls
- should_continue returns "__end__" and the graph terminates with the final state
Adding Iteration Limits
To prevent infinite loops, you can add a maximum iteration count to your routing logic:
MAX_ITERATIONS = 5
def should_continue_with_limit(state: MessagesState) -> Literal["tools", "__end__"]:
"""Continue the loop, but enforce a maximum iteration count."""
last_message = state["messages"][-1]
# Count how many tool call rounds we've done
tool_call_count = sum(
1 for msg in state["messages"]
if hasattr(msg, "tool_calls") and msg.tool_calls
)
if tool_call_count >= MAX_ITERATIONS:
return "__end__" # Force stop after MAX_ITERATIONS
if last_message.tool_calls:
return "tools"
return "__end__"
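This limit logic is again plain Python over the state, so it can be checked with stub messages before wiring it in (StubAI is a hypothetical test double mirroring an AI message with a tool_calls attribute):

```python
from dataclasses import dataclass, field

MAX_ITERATIONS = 5

@dataclass
class StubAI:
    """Minimal stand-in for an AI message in tests."""
    tool_calls: list = field(default_factory=list)

def should_continue_with_limit(state) -> str:
    """Continue the loop, but enforce a maximum number of tool rounds."""
    last_message = state["messages"][-1]
    tool_call_count = sum(
        1 for msg in state["messages"]
        if hasattr(msg, "tool_calls") and msg.tool_calls
    )
    if tool_call_count >= MAX_ITERATIONS:
        return "__end__"          # force stop
    if last_message.tool_calls:
        return "tools"
    return "__end__"

# Under the limit: keep looping
print(should_continue_with_limit({"messages": [StubAI(tool_calls=[{"name": "t"}])]}))  # tools
# At the limit: forced stop even though the LLM wants more tools
many = [StubAI(tool_calls=[{"name": "t"}]) for _ in range(5)]
print(should_continue_with_limit({"messages": many}))  # __end__
```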
LangGraph also has a built-in safety net: you can set the recursion_limit parameter in the config when invoking: graph.invoke(input, config={"recursion_limit": 25}).
Step 9 of 11: Checkpointing & Human-in-the-Loop
Real-world agents need memory and human oversight. LangGraph provides built-in support for both through its checkpointing system.
MemorySaver: State Persistence
The MemorySaver checkpointer saves the graph state after every node execution. This enables:
- Conversation memory across multiple invocations
- Resume from interruption if the process crashes
- Time travel to replay or inspect previous states
from langgraph.checkpoint.memory import MemorySaver
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
def chatbot(state: MessagesState) -> dict:
response = llm.invoke(state["messages"])
return {"messages": [response]}
# Build graph
graph_builder = StateGraph(MessagesState)
graph_builder.add_node("chatbot", chatbot)
graph_builder.add_edge(START, "chatbot")
graph_builder.add_edge("chatbot", END)
# ── Compile WITH checkpointer ─────────────────────────────
memory = MemorySaver()
graph = graph_builder.compile(checkpointer=memory)
Thread-Based Conversations
To use memory, you pass a thread_id in the config. Each thread maintains its own independent conversation history.
# ── Conversation with memory ──────────────────────────────
config = {"configurable": {"thread_id": "user-123"}}
# First message
result1 = graph.invoke(
{"messages": [HumanMessage(content="My name is Alice.")]},
config=config,
)
print(result1["messages"][-1].content)
# Output: "Nice to meet you, Alice! How can I help you today?"
# Second message -- same thread, so the agent remembers!
result2 = graph.invoke(
{"messages": [HumanMessage(content="What is my name?")]},
config=config,
)
print(result2["messages"][-1].content)
# Output: "Your name is Alice!"
# Different thread -- no memory of Alice
config_new = {"configurable": {"thread_id": "user-456"}}
result3 = graph.invoke(
{"messages": [HumanMessage(content="What is my name?")]},
config=config_new,
)
print(result3["messages"][-1].content)
# Output: "I don't know your name. Could you tell me?"
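The mechanics of what just happened can be modeled in a few lines of plain Python. InMemoryStore below is an illustrative toy, not LangGraph's MemorySaver API: each thread_id keys its own saved state, and invoking with the same id resumes that history.

```python
class InMemoryStore:
    """Toy model of thread-scoped checkpointing."""
    def __init__(self):
        self._threads = {}

    def load(self, thread_id: str) -> dict:
        # Unknown threads start with empty history
        return self._threads.get(thread_id, {"messages": []})

    def save(self, thread_id: str, state: dict) -> None:
        self._threads[thread_id] = state

def invoke(store: InMemoryStore, thread_id: str, user_msg: str) -> dict:
    state = store.load(thread_id)                       # resume saved history
    state["messages"] = state["messages"] + [user_msg]  # append this turn
    store.save(thread_id, state)                        # checkpoint after the turn
    return state

store = InMemoryStore()
invoke(store, "user-123", "My name is Alice.")
state = invoke(store, "user-123", "What is my name?")
print(len(state["messages"]))   # 2 -- same thread accumulates history
other = invoke(store, "user-456", "What is my name?")
print(len(other["messages"]))   # 1 -- different thread starts fresh
```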
The thread_id acts like a session ID. All invocations with the same thread_id share the same conversation history. Different thread_id values are completely isolated.
Human-in-the-Loop: Interrupt Before/After
LangGraph can pause execution before or after specific nodes, allowing a human to review, approve, or modify the state before the graph continues.
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage
from langchain_core.tools import tool
@tool
def send_email(to: str, subject: str, body: str) -> str:
"""Send an email to the specified recipient."""
return f"Email sent to {to} with subject '{subject}'."
tools = [send_email]
llm = ChatOpenAI(model="gpt-4o-mini").bind_tools(tools)
def agent(state: MessagesState) -> dict:
response = llm.invoke(state["messages"])
return {"messages": [response]}
# Build graph
graph_builder = StateGraph(MessagesState)
graph_builder.add_node("agent", agent)
graph_builder.add_node("tools", ToolNode(tools))
graph_builder.add_edge(START, "agent")
graph_builder.add_conditional_edges("agent", tools_condition)
graph_builder.add_edge("tools", "agent")
# ── Compile with interrupt BEFORE the tools node ──────────
memory = MemorySaver()
graph = graph_builder.compile(
checkpointer=memory,
interrupt_before=["tools"], # Pause before executing tools
)
# ── Run -- the graph will pause before tools ──────────────
config = {"configurable": {"thread_id": "approval-thread"}}
result = graph.invoke(
{"messages": [HumanMessage(
content="Send an email to bob@example.com about the meeting tomorrow"
)]},
config=config,
)
# The graph is now PAUSED before the tools node
# Check what tool the agent wants to call:
last_msg = result["messages"][-1]
print("Agent wants to call:")
for tc in last_msg.tool_calls:
print(f" {tc['name']}({tc['args']})")
# ── Human reviews and approves ────────────────────────────
# Option A: Approve -- just resume with None input
approved = input("Approve? (y/n): ")
if approved.lower() == "y":
# Resume execution from where it paused
final_result = graph.invoke(None, config=config)
print(final_result["messages"][-1].content)
else:
print("Action cancelled by human.")
How Interruption Works
| Parameter | When It Pauses | Use Case |
|---|---|---|
| interrupt_before=["tools"] | Before the tools node runs | Approve tool calls before execution |
| interrupt_after=["agent"] | After the agent node runs | Review the agent's reasoning |
MemorySaver stores state in memory (lost on restart). For persistence, use SqliteSaver or PostgresSaver from langgraph-checkpoint-sqlite or langgraph-checkpoint-postgres.
Step 10 of 11: Complete Project – Multi-Agent Task System
In this final step, we bring everything together to build a multi-agent research assistant using the supervisor pattern. A supervisor agent decides which specialized worker agent should handle each part of a task.
Architecture: Supervisor Pattern
The supervisor pattern uses a central orchestrator (the supervisor) that routes tasks to specialized agents:
Figure 3: The Supervisor pattern -- a central orchestrator routes tasks to specialized worker agents.
Full Working Code
"""
Multi-Agent Research Assistant with Supervisor Pattern
=====================================================
A complete LangGraph application with:
- Supervisor agent (orchestrator)
- Researcher agent (web search)
- Analyst agent (data analysis)
- Writer agent (report generation)
- Memory persistence
- Human-in-the-loop approval
"""
from typing import TypedDict, Annotated, Literal
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import add_messages
from langgraph.checkpoint.memory import MemorySaver
from langchain_openai import ChatOpenAI
from langchain_core.messages import (
HumanMessage,
AIMessage,
SystemMessage,
)
from langchain_core.tools import tool
import operator
# ══════════════════════════════════════════════════════════
# 1. DEFINE THE STATE
# ══════════════════════════════════════════════════════════
class MultiAgentState(TypedDict):
"""Shared state for the multi-agent system."""
messages: Annotated[list, add_messages] # Conversation history
task: str # Current task description
research_data: str # Data from researcher
analysis: str # Analysis from analyst
report: str # Final report from writer
next_agent: str # Who should run next
iteration_count: Annotated[int, operator.add] # Safety counter
# ══════════════════════════════════════════════════════════
# 2. DEFINE TOOLS
# ══════════════════════════════════════════════════════════
@tool
def web_search(query: str) -> str:
"""Search the web for information on a topic."""
# Simulated search results for the tutorial
results = {
"AI agents": "AI agents are autonomous systems that perceive their environment and take actions to achieve goals. Key frameworks include LangGraph, CrewAI, and AutoGen.",
"LangGraph": "LangGraph v0.2+ supports stateful agents, cycles, human-in-the-loop, and multi-agent architectures.",
"market trends": "The AI agent market is projected to reach $65B by 2030, with enterprise adoption growing 40% annually.",
}
for key, value in results.items():
if key.lower() in query.lower():
return value
return f"Found general information about: {query}"
@tool
def analyze_data(data: str) -> str:
"""Analyze research data and extract key insights."""
word_count = len(data.split())
return (
f"Analysis of {word_count}-word dataset:\n"
f"- Key themes identified: AI agents, market growth, framework adoption\n"
f"- Sentiment: Positive (high growth trajectory)\n"
f"- Confidence: High (multiple corroborating sources)"
)
# ══════════════════════════════════════════════════════════
# 3. DEFINE AGENT NODES
# ══════════════════════════════════════════════════════════
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0)
def supervisor_node(state: MultiAgentState) -> dict:
"""The supervisor decides which agent should work next."""
system = SystemMessage(content="""You are a project supervisor managing a research team.
Based on the current state of the project, decide who should work next:
- "researcher": if you need more information gathered
- "analyst": if you have research data that needs analysis
- "writer": if analysis is done and you need a final report
- "FINISH": if the report is complete
Respond with ONLY one of: researcher, analyst, writer, FINISH""")
context_msg = HumanMessage(content=f"""
Current project state:
- Task: {state.get('task', 'Not set')}
- Research data: {'Available' if state.get('research_data') else 'Not yet gathered'}
- Analysis: {'Available' if state.get('analysis') else 'Not yet done'}
- Report: {'Available' if state.get('report') else 'Not yet written'}
- Iterations so far: {state.get('iteration_count', 0)}
Who should work next?""")
response = llm.invoke([system, context_msg])
next_agent = response.content.strip().lower()
# Validate the response
valid_agents = {"researcher", "analyst", "writer", "finish"}
if next_agent not in valid_agents:
next_agent = "finish" # Default to finish if unclear
return {
"next_agent": next_agent,
"messages": [AIMessage(content=f"Supervisor: Routing to {next_agent}")],
"iteration_count": 1,
}
def researcher_node(state: MultiAgentState) -> dict:
"""The researcher gathers information using web search."""
system = SystemMessage(content=(
"You are a research specialist. Use the web_search tool to "
"gather comprehensive information about the given task. "
"Compile all findings into a clear summary."
))
task_msg = HumanMessage(content=f"Research this topic: {state['task']}")
llm_with_search = llm.bind_tools([web_search])
response = llm_with_search.invoke([system, task_msg])
# Execute tool calls if any
research_results = []
if response.tool_calls:
for tc in response.tool_calls:
result = web_search.invoke(tc["args"])
research_results.append(result)
research_data = "\n".join(research_results) if research_results else response.content
return {
"research_data": research_data,
"messages": [AIMessage(content=f"Researcher: {research_data[:200]}...")],
}
def analyst_node(state: MultiAgentState) -> dict:
"""The analyst processes research data and extracts insights."""
system = SystemMessage(content=(
"You are a data analyst. Analyze the provided research data "
"and produce structured insights with key findings."
))
data_msg = HumanMessage(
content=f"Analyze this research data:\n{state['research_data']}"
)
response = llm.invoke([system, data_msg])
return {
"analysis": response.content,
"messages": [AIMessage(content=f"Analyst: {response.content[:200]}...")],
}
def writer_node(state: MultiAgentState) -> dict:
"""The writer produces the final report."""
system = SystemMessage(content=(
"You are a technical writer. Using the research data and analysis "
"provided, write a concise, professional report."
))
report_msg = HumanMessage(content=f"""
Write a report based on:
RESEARCH DATA:
{state['research_data']}
ANALYSIS:
{state['analysis']}
Original task: {state['task']}
""")
response = llm.invoke([system, report_msg])
return {
"report": response.content,
"messages": [AIMessage(content=f"Writer: Report complete.")],
}
# ══════════════════════════════════════════════════════════
# 4. ROUTING FUNCTION
# ══════════════════════════════════════════════════════════
def route_supervisor(
state: MultiAgentState,
) -> Literal["researcher", "analyst", "writer", "__end__"]:
"""Route to the next agent based on supervisor's decision."""
next_agent = state.get("next_agent", "finish")
# Safety: stop after too many iterations
if state.get("iteration_count", 0) > 10:
return "__end__"
if next_agent == "researcher":
return "researcher"
elif next_agent == "analyst":
return "analyst"
elif next_agent == "writer":
return "writer"
else:
return "__end__"
# ══════════════════════════════════════════════════════════
# 5. BUILD THE GRAPH
# ══════════════════════════════════════════════════════════
graph_builder = StateGraph(MultiAgentState)
# Add all nodes
graph_builder.add_node("supervisor", supervisor_node)
graph_builder.add_node("researcher", researcher_node)
graph_builder.add_node("analyst", analyst_node)
graph_builder.add_node("writer", writer_node)
# Entry point
graph_builder.add_edge(START, "supervisor")
# Supervisor routes to workers (or END)
graph_builder.add_conditional_edges("supervisor", route_supervisor)
# All workers report back to the supervisor
graph_builder.add_edge("researcher", "supervisor")
graph_builder.add_edge("analyst", "supervisor")
graph_builder.add_edge("writer", "supervisor")
# Compile with memory
memory = MemorySaver()
multi_agent = graph_builder.compile(checkpointer=memory)
# ══════════════════════════════════════════════════════════
# 6. RUN THE SYSTEM
# ══════════════════════════════════════════════════════════
config = {"configurable": {"thread_id": "research-project-1"}}
result = multi_agent.invoke(
{
"messages": [HumanMessage(content="Start the research project")],
"task": "Analyze the current state of AI agent frameworks and market trends",
"research_data": "",
"analysis": "",
"report": "",
"next_agent": "",
"iteration_count": 0,
},
config=config,
)
# Print the final report
print("=" * 60)
print("FINAL REPORT")
print("=" * 60)
print(result.get("report", "No report generated"))
print("\n" + "=" * 60)
print("CONVERSATION LOG")
print("=" * 60)
for msg in result["messages"]:
print(f" {msg.content[:100]}")
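The supervisor's routing decision is, at its core, a function of which state fields have been filled in. A deterministic sketch of that progression (a hypothetical helper that mirrors the decision rule the LLM-based supervisor is prompted to follow, without calling a model):

```python
def next_agent(state: dict) -> str:
    """Deterministic version of the supervisor's decision rule."""
    if not state.get("research_data"):
        return "researcher"
    if not state.get("analysis"):
        return "analyst"
    if not state.get("report"):
        return "writer"
    return "FINISH"

# Walk an empty project state through a full lifecycle.
state: dict = {}
order = []
for _ in range(4):
    agent = next_agent(state)
    order.append(agent)
    if agent == "researcher":
        state["research_data"] = "findings"
    elif agent == "analyst":
        state["analysis"] = "insights"
    elif agent == "writer":
        state["report"] = "final report"

print(order)  # ['researcher', 'analyst', 'writer', 'FINISH']
```

Using an LLM for this step, as the full system does, buys flexibility (the supervisor can loop back to the researcher if the analysis looks thin) at the cost of needing the validation and iteration cap shown above.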
How to Extend This System
- Add more agents: Create new node functions and register them with the supervisor's routing logic
- Add real tools: Replace simulated tools with Tavily for web search, database queries, or API calls
- Add human approval: Use interrupt_before=["writer"] to review the report before it is finalized
- Add persistent storage: Replace MemorySaver with PostgresSaver for production deployments
- Add streaming: Use graph.stream() to stream intermediate results to the user
Next Steps & Resources
| Resource | URL |
|---|---|
| LangGraph Documentation | langchain-ai.github.io/langgraph |
| LangGraph GitHub | github.com/langchain-ai/langgraph |
| LangGraph Tutorials | Official Tutorial Collection |
| LangSmith (Tracing) | smith.langchain.com |
Step 11 of 11: Django Integration – Full Stack LangGraph App
In this step, we build a fully functional Django web application that integrates LangGraph as the AI backend. Users can chat with an AI agent through a web interface, and the LangGraph pipeline handles tool calls, state management, and conversation memory.
Project Structure
langgraph_django/
├── manage.py
├── requirements.txt
├── .env
├── config/
│ ├── __init__.py
│ ├── settings.py
│ ├── urls.py
│ └── wsgi.py
├── chat/
│ ├── __init__.py
│ ├── urls.py
│ ├── views.py
│ ├── agent.py # LangGraph agent logic
│ └── templates/
│ └── chat/
│ └── index.html # Chat UI
└── static/
└── chat/
└── style.css
Step 1: Create Django Project & Install Dependencies
# Create project directory
mkdir langgraph_django && cd langgraph_django
# Create virtual environment
python -m venv venv
source venv/bin/activate # On Windows: venv\Scripts\activate
# Install dependencies
pip install django langgraph langchain-openai langchain-core langchain-community python-dotenv
# Create Django project and app
django-admin startproject config .
python manage.py startapp chat
Step 2: requirements.txt
django>=5.0
langgraph>=0.2.0
langchain-openai>=0.2.0
langchain-core>=0.3.0
langchain-community>=0.3.0
python-dotenv>=1.0.0
Step 3: Environment Variables (.env)
OPENAI_API_KEY=sk-your-openai-api-key-here
DJANGO_SECRET_KEY=your-django-secret-key
DEBUG=True
Never commit your .env file to version control. Add it to .gitignore.
Step 4: Django Settings (config/settings.py)
import os
from pathlib import Path
from dotenv import load_dotenv
load_dotenv()
BASE_DIR = Path(__file__).resolve().parent.parent
SECRET_KEY = os.getenv("DJANGO_SECRET_KEY", "change-me-in-production")
DEBUG = os.getenv("DEBUG", "False") == "True"
ALLOWED_HOSTS = ["*"]  # Dev only -- restrict to your domain in production
INSTALLED_APPS = [
"django.contrib.admin",
"django.contrib.auth",
"django.contrib.contenttypes",
"django.contrib.sessions",
"django.contrib.messages",
"django.contrib.staticfiles",
"chat", # Our chat app
]
MIDDLEWARE = [
"django.middleware.security.SecurityMiddleware",
"django.contrib.sessions.middleware.SessionMiddleware",
"django.middleware.common.CommonMiddleware",
"django.middleware.csrf.CsrfViewMiddleware",
"django.contrib.auth.middleware.AuthenticationMiddleware",
"django.contrib.messages.middleware.MessageMiddleware",
"django.middleware.clickjacking.XFrameOptionsMiddleware",
]
ROOT_URLCONF = "config.urls"
TEMPLATES = [
{
"BACKEND": "django.template.backends.django.DjangoTemplates",
"DIRS": [],
"APP_DIRS": True,
"OPTIONS": {
"context_processors": [
"django.template.context_processors.debug",
"django.template.context_processors.request",
"django.contrib.auth.context_processors.auth",
"django.contrib.messages.context_processors.messages",
],
},
},
]
WSGI_APPLICATION = "config.wsgi.application"
DATABASES = {
"default": {
"ENGINE": "django.db.backends.sqlite3",
"NAME": BASE_DIR / "db.sqlite3",
}
}
STATIC_URL = "static/"
STATICFILES_DIRS = [BASE_DIR / "static"]
DEFAULT_AUTO_FIELD = "django.db.models.BigAutoField"
Step 5: LangGraph Agent (chat/agent.py)
This is the core AI logic. We build a LangGraph agent with tools (calculator and search), checkpointing for conversation memory, and thread-based sessions.
"""
LangGraph Agent for Django Integration
---------------------------------------
A conversational AI agent with tool-calling capabilities,
built with LangGraph and integrated into Django.
"""
from dotenv import load_dotenv
from langchain_openai import ChatOpenAI
from langchain_core.messages import HumanMessage, SystemMessage
from langchain_core.tools import tool
from langgraph.graph import StateGraph, START, END
from langgraph.graph.message import MessagesState
from langgraph.prebuilt import ToolNode, tools_condition
from langgraph.checkpoint.memory import MemorySaver
load_dotenv()
# ── Define Tools ──────────────────────────────────────────
@tool
def calculator(expression: str) -> str:
"""Evaluate a mathematical expression. Use this for any math calculations.
Examples: '2 + 2', '15 * 3.5', '100 / 7', 'pow(2, 10)'
"""
try:
# Safe evaluation of math expressions
allowed_names = {"pow": pow, "abs": abs, "round": round, "min": min, "max": max}
result = eval(expression, {"__builtins__": {}}, allowed_names)
return f"Result: {result}"
except Exception as e:
return f"Error calculating '{expression}': {str(e)}"
@tool
def get_current_time() -> str:
"""Get the current date and time."""
from datetime import datetime
now = datetime.now()
return f"Current date and time: {now.strftime('%Y-%m-%d %H:%M:%S')}"
@tool
def search_knowledge(query: str) -> str:
"""Search for information about a topic. Use this when the user asks
about facts, concepts, or anything you need to look up.
"""
# In production, connect to a real search API (Tavily, Google, etc.)
# For this demo, return a helpful response
return (
f"Search results for '{query}': This is a demo search tool. "
f"In production, connect this to Tavily, Google Search API, "
f"or your own knowledge base for real results."
)
# ── Build the LangGraph Agent ────────────────────────────
# Initialize LLM with tools
tools = [calculator, get_current_time, search_knowledge]
llm = ChatOpenAI(model="gpt-4o-mini", temperature=0.1)
llm_with_tools = llm.bind_tools(tools)
def chatbot_node(state: MessagesState) -> dict:
"""The main chatbot node that processes messages with the LLM."""
system_msg = SystemMessage(content=(
"You are a helpful AI assistant integrated into a Django web application. "
"You can perform calculations, tell the current time, and search for information. "
"Be concise and helpful. Use the available tools when appropriate."
))
response = llm_with_tools.invoke([system_msg] + state["messages"])
return {"messages": [response]}
# Build the graph
def create_agent_graph():
"""Create and compile the LangGraph agent."""
graph = StateGraph(MessagesState)
# Add nodes
graph.add_node("chatbot", chatbot_node)
graph.add_node("tools", ToolNode(tools=tools))
# Add edges
graph.add_edge(START, "chatbot")
graph.add_conditional_edges("chatbot", tools_condition)
graph.add_edge("tools", "chatbot")
# Compile with memory checkpointing
memory = MemorySaver()
return graph.compile(checkpointer=memory)
# ── Singleton Agent Instance ──────────────────────────────
# Create one agent instance shared across requests
agent = create_agent_graph()
def chat_with_agent(user_message: str, thread_id: str = "default") -> str:
"""
Send a message to the LangGraph agent and get a response.
Args:
user_message: The user's input message
thread_id: Unique thread ID for conversation memory
Returns:
The agent's response as a string
"""
config = {"configurable": {"thread_id": thread_id}}
input_msg = {"messages": [HumanMessage(content=user_message)]}
result = agent.invoke(input_msg, config=config)
# Extract the last AI message
ai_message = result["messages"][-1]
return ai_message.content
- The agent is created once as a module-level singleton — avoids rebuilding the graph on every request
- MemorySaver enables conversation memory via thread IDs — each user session remembers context
- tools_condition automatically routes to tools when the LLM requests them, then loops back to the chatbot
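The calculator tool above relies on eval with an empty __builtins__ dict and a whitelist of allowed names. A quick stdlib check of how that restriction behaves, using the same pattern as the tool:

```python
# Same restricted-eval pattern as the calculator tool.
allowed_names = {"pow": pow, "abs": abs, "round": round, "min": min, "max": max}

def safe_calc(expression: str) -> str:
    try:
        # Empty __builtins__ blocks access to __import__, open, exec, etc.;
        # only the whitelisted names resolve.
        result = eval(expression, {"__builtins__": {}}, allowed_names)
        return f"Result: {result}"
    except Exception as e:
        return f"Error calculating '{expression}': {e}"

print(safe_calc("pow(2, 10)"))        # Result: 1024
print(safe_calc("__import__('os')"))  # Error: name lookup fails without builtins
```

Note that restricted eval is a convenience for a demo, not a hardened sandbox; for production, a dedicated expression parser (or a library such as a math-expression evaluator) is the safer choice.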
Step 6: Django URLs (config/urls.py, then chat/urls.py)
from django.contrib import admin
from django.urls import path, include
urlpatterns = [
path("admin/", admin.site.urls),
path("", include("chat.urls")),
]
from django.urls import path
from . import views
urlpatterns = [
path("", views.chat_page, name="chat_page"),
path("api/chat/", views.chat_api, name="chat_api"),
path("api/reset/", views.reset_chat, name="reset_chat"),
]
Step 7: Django Views (chat/views.py)
import json
import uuid
from django.shortcuts import render
from django.http import JsonResponse
from django.views.decorators.csrf import csrf_exempt
from django.views.decorators.http import require_http_methods
from .agent import chat_with_agent
def chat_page(request):
"""Render the chat UI page."""
# Assign a unique thread_id per session for conversation memory
if "thread_id" not in request.session:
request.session["thread_id"] = str(uuid.uuid4())
return render(request, "chat/index.html")
@csrf_exempt  # Demo convenience -- send the CSRF token from the frontend in production
@require_http_methods(["POST"])
def chat_api(request):
"""API endpoint: receive user message, return agent response."""
try:
data = json.loads(request.body)
user_message = data.get("message", "").strip()
if not user_message:
return JsonResponse({"error": "Message cannot be empty"}, status=400)
# Get or create thread_id from session
if "thread_id" not in request.session:
request.session["thread_id"] = str(uuid.uuid4())
thread_id = request.session["thread_id"]
# Call the LangGraph agent
response = chat_with_agent(user_message, thread_id=thread_id)
return JsonResponse({
"response": response,
"thread_id": thread_id,
})
except json.JSONDecodeError:
return JsonResponse({"error": "Invalid JSON"}, status=400)
except Exception as e:
return JsonResponse({"error": f"Agent error: {str(e)}"}, status=500)
@csrf_exempt
@require_http_methods(["POST"])
def reset_chat(request):
"""Reset the conversation by assigning a new thread_id."""
request.session["thread_id"] = str(uuid.uuid4())
return JsonResponse({"status": "ok", "message": "Conversation reset."})
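The request handling in chat_api reduces to a small amount of pure logic (parse JSON, reject empty messages) that can be exercised without a running server. A sketch of that validation path as a standalone function (a hypothetical helper for illustration, not part of the project files):

```python
import json

def parse_chat_request(body: bytes) -> tuple[dict, int]:
    """Mirror chat_api's validation: return (payload, HTTP status)."""
    try:
        data = json.loads(body)
    except json.JSONDecodeError:
        return {"error": "Invalid JSON"}, 400
    message = data.get("message", "").strip()
    if not message:
        return {"error": "Message cannot be empty"}, 400
    return {"message": message}, 200

print(parse_chat_request(b'{"message": "  hi  "}'))  # ({'message': 'hi'}, 200)
print(parse_chat_request(b'not json'))               # ({'error': 'Invalid JSON'}, 400)
print(parse_chat_request(b'{"message": "   "}'))     # ({'error': 'Message cannot be empty'}, 400)
```

Keeping validation separable like this also makes the error paths easy to unit test before involving the agent at all.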
Step 8: Chat UI Template (chat/templates/chat/index.html)
A clean, modern chat interface that communicates with our Django API via fetch.
<!DOCTYPE html>
<html lang="en">
<head>
<meta charset="UTF-8">
<meta name="viewport" content="width=device-width, initial-scale=1.0">
<title>LangGraph Chat</title>
<style>
* { margin:0; padding:0; box-sizing:border-box; }
body {
font-family: -apple-system, BlinkMacSystemFont, 'Segoe UI', sans-serif;
background: #0b0f19; color: #f1f5f9;
height: 100vh; display: flex; flex-direction: column;
}
.header {
background: #111827; border-bottom: 1px solid rgba(255,255,255,0.08);
padding: 16px 24px; display: flex; justify-content: space-between; align-items: center;
}
.header h1 { font-size: 1.2rem; font-weight: 700; }
.header h1 span { color: #3b82f6; }
.reset-btn {
background: #1e293b; border: 1px solid rgba(255,255,255,0.08);
color: #94a3b8; padding: 8px 16px; border-radius: 8px;
cursor: pointer; font-size: 0.85rem; transition: all 0.2s;
}
.reset-btn:hover { background: #ef4444; color: #fff; border-color: #ef4444; }
.chat-container {
flex: 1; overflow-y: auto; padding: 24px;
display: flex; flex-direction: column; gap: 16px;
}
.message { max-width: 75%; padding: 14px 18px; border-radius: 16px; line-height: 1.6; font-size: 0.95rem; }
.message.user {
align-self: flex-end; background: #3b82f6; color: #fff;
border-bottom-right-radius: 4px;
}
.message.assistant {
align-self: flex-start; background: #1e293b;
border: 1px solid rgba(255,255,255,0.06);
border-bottom-left-radius: 4px;
}
.message.assistant pre {
background: #0d1117; padding: 12px; border-radius: 8px;
margin: 8px 0; overflow-x: auto; font-size: 0.85rem;
}
.message.assistant code { font-family: 'JetBrains Mono', monospace; }
.typing { opacity: 0.5; font-style: italic; }
.input-area {
background: #111827; border-top: 1px solid rgba(255,255,255,0.08);
padding: 16px 24px; display: flex; gap: 12px;
}
.input-area input {
flex: 1; background: #1e293b; border: 1px solid rgba(255,255,255,0.08);
color: #f1f5f9; padding: 14px 18px; border-radius: 12px;
font-size: 1rem; outline: none; transition: border-color 0.2s;
}
.input-area input:focus { border-color: #3b82f6; }
.input-area input::placeholder { color: #475569; }
.send-btn {
background: #3b82f6; color: #fff; border: none;
padding: 14px 24px; border-radius: 12px; cursor: pointer;
font-size: 1rem; font-weight: 600; transition: background 0.2s;
}
.send-btn:hover { background: #2563eb; }
.send-btn:disabled { background: #1e293b; color: #475569; cursor: not-allowed; }
.welcome { text-align: center; margin: auto; color: #94a3b8; }
.welcome h2 { font-size: 1.5rem; color: #f1f5f9; margin-bottom: 8px; }
.welcome p { font-size: 0.95rem; }
.tools-list { display: flex; gap: 8px; justify-content: center; margin-top: 16px; flex-wrap: wrap; }
.tools-list span {
background: #1e293b; border: 1px solid rgba(255,255,255,0.08);
padding: 6px 14px; border-radius: 20px; font-size: 0.8rem; color: #3b82f6;
}
</style>
</head>
<body>
<div class="header">
<h1>🤖 <span>LangGraph</span> Chat</h1>
<button class="reset-btn" onclick="resetChat()">🗑 New Chat</button>
</div>
<div class="chat-container" id="chatContainer">
<div class="welcome">
<h2>Welcome to LangGraph Chat</h2>
<p>Ask me anything! I can calculate, tell the time, and search for info.</p>
<div class="tools-list">
<span>📊 Calculator</span>
<span>🕒 Current Time</span>
<span>🔍 Search</span>
</div>
</div>
</div>
<div class="input-area">
<input type="text" id="messageInput" placeholder="Type your message..."
onkeydown="if(event.key==='Enter') sendMessage()">
<button class="send-btn" id="sendBtn" onclick="sendMessage()">Send</button>
</div>
<script>
const chatContainer = document.getElementById('chatContainer');
const messageInput = document.getElementById('messageInput');
const sendBtn = document.getElementById('sendBtn');
let firstMessage = true;
function addMessage(content, role) {
if (firstMessage) {
const welcome = chatContainer.querySelector('.welcome');
if (welcome) welcome.remove();
firstMessage = false;
}
const div = document.createElement('div');
div.className = `message ${role}`;
div.textContent = content;
chatContainer.appendChild(div);
chatContainer.scrollTop = chatContainer.scrollHeight;
return div;
}
async function sendMessage() {
const message = messageInput.value.trim();
if (!message) return;
addMessage(message, 'user');
messageInput.value = '';
sendBtn.disabled = true;
const typing = addMessage('Thinking...', 'assistant typing');
try {
const res = await fetch('/api/chat/', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ message })
});
const data = await res.json();
typing.remove();
if (data.error) {
addMessage('Error: ' + data.error, 'assistant');
} else {
addMessage(data.response, 'assistant');
}
} catch (err) {
typing.remove();
addMessage('Connection error. Please try again.', 'assistant');
}
sendBtn.disabled = false;
messageInput.focus();
}
async function resetChat() {
await fetch('/api/reset/', { method: 'POST' });
chatContainer.innerHTML = `
<div class="welcome">
<h2>Welcome to LangGraph Chat</h2>
<p>Ask me anything! I can calculate, tell the time, and search for info.</p>
<div class="tools-list">
<span>📊 Calculator</span>
<span>🕒 Current Time</span>
<span>🔍 Search</span>
</div>
</div>`;
firstMessage = true;
}
</script>
</body>
</html>
Step 9: Run the Application
# Apply migrations
python manage.py migrate
# Run the development server
python manage.py runserver
# Open in browser: http://127.0.0.1:8000/
Open http://127.0.0.1:8000 in your browser. You'll see the chat UI. Try these messages:
- What is 1024 * 768? — triggers the calculator tool
- What time is it? — triggers the time tool
- Search for LangGraph architecture — triggers the search tool
- Explain what LangGraph is — direct LLM response, no tools
How It All Connects
Request flow: Browser → Django view → LangGraph agent → LLM + Tools → Response
Extending the Project
- Add streaming responses using Django's StreamingHttpResponse with agent.stream()
- Connect a real search API (Tavily, SerpAPI) to the search tool
- Add user authentication and per-user thread management
- Store conversations in a database instead of MemorySaver for persistence across restarts
- Add file upload for RAG (Retrieval-Augmented Generation) with user documents
- Deploy with Gunicorn + Nginx for production