Design Smarter AI Workflows with LangGraph: State, Messages, Context, and Control Flow Demystified
LangGraph isn’t just another workflow engine. It’s a flexible, stateful, and intelligent system for managing complex AI flows. Whether you’re building a chatbot, a multi-agent orchestration system, or a retrieval-augmented generation (RAG) pipeline, LangGraph gives you the tools to:
- Maintain evolving state
- Control the flow of execution
- Track full message history
- Inject runtime dependencies
- Speed up performance with caching
In this blog, we explore `State`, `add_messages`, `Send`, `Command`, and Runtime Context through clear examples and visual diagrams.
🔢 Understanding State: The Core of LangGraph
In LangGraph, `State` is a shared, typed dictionary that flows through your graph. Nodes read from and write to this state, which evolves over time.
✅ Use Case: RAG (Retrieval-Augmented Generation) Workflow
📄 State Definition
```python
from typing import Annotated

from typing_extensions import TypedDict
from langchain_core.messages import AnyMessage
from langgraph.graph.message import add_messages

class RAGState(TypedDict):
    query: str
    retrieved_docs: list[str]
    messages: Annotated[list[AnyMessage], add_messages]
    response: str
```
This code defines a state schema named `RAGState` for a RAG pipeline. It tracks the user query, the retrieved documents, the full message history (merged automatically), and the final response.
🌐 Workflow Overview
```mermaid
flowchart TD
    A[User Input] --> B[Vector Search]
    B --> C[Generate Response]
    C --> D[Return Response]
```
Each node adds fields to the state or appends to `messages`.
📊 Node: User Input
```python
from langchain_core.messages import HumanMessage

def get_query(state: RAGState):
    return {
        "query": "What is LangGraph?",
        "messages": [HumanMessage(content="What is LangGraph?")],
    }
```
This node captures the user’s query and appends it to the `messages` list.
📊 Node: Vector Search
```python
def retrieve_docs(state: RAGState):
    # vector_search is a placeholder for your retriever / vector-store lookup
    docs = vector_search(state["query"])
    return {"retrieved_docs": docs}
```
This node performs a vector search based on the query and stores the results in `retrieved_docs`.
📊 Node: Generate Response
```python
from langchain_core.messages import AIMessage

def generate_response(state: RAGState):
    # call_llm is a placeholder for your model call
    context = "\n".join(state["retrieved_docs"])
    answer = call_llm(f"{context}\n\nAnswer: {state['query']}")
    return {
        "response": answer,
        "messages": [AIMessage(content=answer)],
    }
```
This node sends the retrieved documents to an LLM for response generation, saves the output to `response`, and appends the AI’s reply to `messages`.
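To see the pipeline run end to end, here is a minimal wiring sketch. It assumes the three node functions above plus working `vector_search` and `call_llm` helpers; the node names are my own choice.
```python
from langgraph.graph import StateGraph, START, END

builder = StateGraph(RAGState)
builder.add_node("get_query", get_query)
builder.add_node("retrieve_docs", retrieve_docs)
builder.add_node("generate_response", generate_response)

builder.add_edge(START, "get_query")
builder.add_edge("get_query", "retrieve_docs")
builder.add_edge("retrieve_docs", "generate_response")
builder.add_edge("generate_response", END)

rag_graph = builder.compile()

# Partial input is fine: get_query fills in the rest of the state.
final_state = rag_graph.invoke({"messages": []})
print(final_state["response"])
```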
📈 Working with `add_messages`: Track Chat History Seamlessly
By annotating a field with `add_messages`, LangGraph automatically merges messages returned by different nodes. No manual merging required.
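You can also call the reducer directly to see what it does. A tiny sketch (the messages are just examples):
```python
from langchain_core.messages import AIMessage, HumanMessage
from langgraph.graph.message import add_messages

history = [HumanMessage(content="Tell me a joke about cats")]
update = [AIMessage(content="Why was the cat on the computer? It wanted to keep an eye on the mouse!")]

merged = add_messages(history, update)
print([m.type for m in merged])  # ['human', 'ai'] -- appended, not overwritten
```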
✅ ChatState Example
```python
class ChatState(TypedDict):
    messages: Annotated[list[AnyMessage], add_messages]
```
This defines a minimal state whose only tracked field is `messages`. Thanks to `add_messages`, messages from different nodes are all appended automatically.
✅ Message-Adding Nodes
```python
def user_input(state: ChatState):
    return {"messages": [HumanMessage(content="Tell me a joke about cats")]}

def agent_response(state: ChatState):
    return {"messages": [AIMessage(content="Why was the cat on the computer? It wanted to keep an eye on the mouse!")]}
```
These nodes simulate a simple interaction between a user and an AI agent, each appending a message to the shared chat history.
🌐 Message Flow
```mermaid
flowchart TD
    A[User Input] --> B[Agent Response]
```
📖 Final `messages`
```json
[
  {"type": "human", "content": "Tell me a joke about cats"},
  {"type": "ai", "content": "Why was the cat on the computer? It wanted to keep an eye on the mouse!"}
]
```
LangGraph auto-merges messages without needing manual list handling.
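Wired together, the two nodes produce exactly that history. A minimal sketch, assuming the `ChatState` and node functions above:
```python
from langgraph.graph import StateGraph, START, END

builder = StateGraph(ChatState)
builder.add_node("user_input", user_input)
builder.add_node("agent_response", agent_response)

builder.add_edge(START, "user_input")
builder.add_edge("user_input", "agent_response")
builder.add_edge("agent_response", END)

chat_graph = builder.compile()
for m in chat_graph.invoke({"messages": []})["messages"]:
    print(f"{m.type}: {m.content}")
```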
🌐 Dynamic Fan-out with Send
When you want to dynamically split your state into multiple downstream paths (e.g., MapReduce), use `Send`.
📋 Use Case: Generate a joke for each subject
```python
from langgraph.types import Send

def continue_to_jokes(state):
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]
```
This function generates one `Send` for each subject in `state["subjects"]`, effectively creating multiple parallel paths into `generate_joke`.
📊 Conditional Edges
```python
graph.add_conditional_edges("get_subjects", continue_to_jokes)
```
This adds conditional routing so that the output of `get_subjects` fans out to multiple invocations of `generate_joke`.
▶️ Visual
```mermaid
flowchart TD
    A[Split Subjects] -->|dogs| B[generate_joke]
    A -->|cats| B
```
Each subject gets its own execution path.
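Putting it all together, here is a runnable sketch of the fan-out pattern. The state schemas, subject list, and joke text are placeholders of my own; note the `operator.add` reducer on `jokes`, which fans the parallel results back in:
```python
import operator
from typing import Annotated

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.types import Send

class OverallState(TypedDict):
    subjects: list[str]
    jokes: Annotated[list[str], operator.add]  # fan-in: concatenate results

class JokeState(TypedDict):
    subject: str

def get_subjects(state: OverallState):
    return {"subjects": ["cats", "dogs"]}

def continue_to_jokes(state: OverallState):
    # One Send per subject; each carries its own private input state
    return [Send("generate_joke", {"subject": s}) for s in state["subjects"]]

def generate_joke(state: JokeState):
    return {"jokes": [f"A joke about {state['subject']}"]}

builder = StateGraph(OverallState)
builder.add_node("get_subjects", get_subjects)
builder.add_node("generate_joke", generate_joke)
builder.add_edge(START, "get_subjects")
builder.add_conditional_edges("get_subjects", continue_to_jokes, ["generate_joke"])
builder.add_edge("generate_joke", END)

graph = builder.compile()
print(graph.invoke({"subjects": []})["jokes"])
# ['A joke about cats', 'A joke about dogs']
```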
🔄 Combining State and Routing with Command
Instead of using conditional edges and state updates separately, you can return a `Command` object that does both at once.
✅ Example
```python
from langgraph.types import Command

def judge(state):
    if state["x"] > 0:
        return Command(update={"result": "positive"}, goto="handle_positive")
    else:
        return Command(update={"result": "negative"}, goto="handle_negative")
```
This node updates the state and chooses the next node (`handle_positive` or `handle_negative`) based on the value of `x`.
🌐 Visual
```mermaid
flowchart TD
    A[Judge] -->|x > 0| B[handle_positive]
    A -->|else| C[handle_negative]
```
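Here is a runnable sketch of the full routing graph, with placeholder handler nodes of my own. The `Command[Literal[...]]` return annotation tells LangGraph which destinations are possible, so `judge` needs no explicit outgoing edges:
```python
from typing import Literal

from typing_extensions import TypedDict
from langgraph.graph import StateGraph, START
from langgraph.types import Command

class JudgeState(TypedDict):
    x: int
    result: str

def judge(state: JudgeState) -> Command[Literal["handle_positive", "handle_negative"]]:
    if state["x"] > 0:
        return Command(update={"result": "positive"}, goto="handle_positive")
    return Command(update={"result": "negative"}, goto="handle_negative")

def handle_positive(state: JudgeState):
    return {}  # placeholder handler

def handle_negative(state: JudgeState):
    return {}  # placeholder handler

builder = StateGraph(JudgeState)
builder.add_node("judge", judge)
builder.add_node("handle_positive", handle_positive)
builder.add_node("handle_negative", handle_negative)
builder.add_edge(START, "judge")

graph = builder.compile()
print(graph.invoke({"x": 3}))  # {'x': 3, 'result': 'positive'}
```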
🏠 Injecting Runtime Dependencies with ContextSchema
Use `context_schema` to inject non-state configuration (like an LLM provider or a DB connection) at runtime.
✅ Step 1: Define context
```python
from dataclasses import dataclass

@dataclass
class ContextSchema:
    llm_provider: str
```
This defines a runtime-only configuration schema with an LLM provider string.
✅ Step 2: Use in graph
```python
from langgraph.graph import StateGraph

# State is your graph's state schema (e.g., a TypedDict like RAGState above)
graph = StateGraph(State, context_schema=ContextSchema)
```
Register the context schema with the graph.
✅ Step 3: Access inside a node
```python
from langgraph.runtime import Runtime

def node_a(state, runtime: Runtime[ContextSchema]):
    # get_llm is a placeholder that returns a client for the configured provider
    llm = get_llm(runtime.context.llm_provider)
    return {"response": llm.generate(...)}
```
This node accesses the injected LLM provider from the runtime context.
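At invocation time, the context is passed alongside the input state rather than inside it. A minimal sketch, assuming LangGraph's Runtime API (v0.6+) and the placeholder `State` schema above:
```python
from langgraph.graph import StateGraph, START

builder = StateGraph(State, context_schema=ContextSchema)
builder.add_node("node_a", node_a)
builder.add_edge(START, "node_a")
app = builder.compile()

result = app.invoke(
    {"query": "What is LangGraph?"},               # regular state input
    context=ContextSchema(llm_provider="openai"),  # runtime-only config
)
```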
⚡️ Speeding Things Up with Node Caching
Use `CachePolicy` to skip re-executing expensive nodes when their input hasn’t changed.
✅ Example
```python
import time

from langgraph.cache.memory import InMemoryCache
from langgraph.types import CachePolicy

def expensive_node(state):
    time.sleep(2)  # simulate expensive work
    return {"result": state["x"] * 2}

# builder is a StateGraph under construction, as in the earlier examples
builder.add_node("expensive_node", expensive_node, cache_policy=CachePolicy(ttl=5))
graph = builder.compile(cache=InMemoryCache())
```
This example defines a slow node (simulated with `time.sleep`) and attaches a caching policy with a TTL of 5 seconds, backed by an in-memory cache.
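Calling the compiled graph twice with the same input shows the effect. A small timing sketch under the setup above:
```python
import time

start = time.time()
graph.invoke({"x": 4})  # cold call: the node actually runs (~2s)
print(f"cold: {time.time() - start:.2f}s")

start = time.time()
graph.invoke({"x": 4})  # same input within the TTL: served from cache (near-instant)
print(f"warm: {time.time() - start:.2f}s")
```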
🥇 Conclusion: Build Smarter Agent Systems
LangGraph is more than just a DAG engine—it’s a dynamic, state-aware, context-injecting, caching-enabled framework for building AI-first systems.
By mastering:
- 🌐 `State` and `add_messages`
- 🌐 Dynamic routing via `Send`
- 🌐 In-node routing via `Command`
- 🌐 Config injection via `ContextSchema`
- ⚡️ Node-level performance via `CachePolicy`
you’re ready to build scalable, intelligent workflows that think, remember, and adapt.
Want more?
- Visit the [LangGraph Docs](https://langchain-ai.github.io/langgraph/)
- Try it in your chatbot, planning agent, or data pipeline today!