LangGraph vs LangChain
State graphs, nodes, edges, checkpointing, persistence, and threading—demystified.
How They Relate
LangChain is a framework for building LLM apps (prompts, chains, tools, memory, agents). LangGraph is a library built on top of LangChain that adds graph-based control flow so you can build complex, stateful, loop‑capable workflows.
| Concept | LangChain | LangGraph |
|---|---|---|
| Type | Framework | Library on LangChain |
| Flow | Mostly linear (chains) | Graph-based (branching, loops, async) |
| State | Basic memory | Shared, persistent state across nodes |
| Use cases | Prompt pipelines | Multi‑agent systems, conversation loops, complex workflows |
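To make the relationship concrete, here is a minimal sketch (the model choice and field names are illustrative) of a LangChain runnable doing the work inside a LangGraph-style node function:

# sketch: a LangChain chain used as the body of a graph node
from langchain_core.prompts import ChatPromptTemplate
from langchain_openai import ChatOpenAI  # any chat model integration works here

summarize_chain = ChatPromptTemplate.from_template("Summarize: {text}") | ChatOpenAI()

def summarize_node(state: dict) -> dict:
    # LangChain supplies the building block; LangGraph routes the state to it
    return {"summary": summarize_chain.invoke({"text": state["text"]}).content}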
State Graph: The Mental Model
A state graph is like a state machine specialized for AI systems. A structured state object flows through the graph while nodes do work and edges decide what runs next. Nodes do not live in the state; they read from it and write to it.
[UserInput] ──▶ [Analyze] ──▶ [Decide]
                                  ├─ if search_needed ──▶ [WebSearch] ──▶ [Synthesize]
                                  └─ otherwise ─────────▶ [AnswerDirectly]
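As a rough sketch, the diagram above might be written with LangGraph's StateGraph like this (node bodies are stubs, names are illustrative, and the [Decide] step becomes a routing function on a conditional edge):

# sketch only: stub nodes stand in for real analysis, search, and generation
from typing import TypedDict
from langgraph.graph import StateGraph, START, END

class State(TypedDict, total=False):
    user_input: str
    analysis: dict
    search_results: list
    answer: str

def analyze(state: State) -> dict:
    needs_search = "latest" in state["user_input"].lower()  # stub heuristic
    return {"analysis": {"query": state["user_input"], "search_needed": needs_search}}

def web_search_node(state: State) -> dict:
    return {"search_results": ["stub result"]}  # call a real search tool here

def synthesize(state: State) -> dict:
    return {"answer": f"Synthesized from {len(state['search_results'])} results"}

def answer_directly(state: State) -> dict:
    return {"answer": "Answered from the model alone"}

builder = StateGraph(State)
builder.add_node("analyze", analyze)
builder.add_node("web_search", web_search_node)
builder.add_node("synthesize", synthesize)
builder.add_node("answer_directly", answer_directly)

builder.add_edge(START, "analyze")
builder.add_conditional_edges(  # the [Decide] step: route based on the state
    "analyze",
    lambda s: "web_search" if s["analysis"]["search_needed"] else "answer_directly",
)
builder.add_edge("web_search", "synthesize")
builder.add_edge("synthesize", END)
builder.add_edge("answer_directly", END)

graph = builder.compile()
graph.invoke({"user_input": "What changed in the latest release?"})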
Nodes
A node is a unit of computation: a LangChain chain/agent, a plain function, or even a nested subgraph. Each node takes the current state, performs work, and returns an updated state.
# pseudo-Python
def search_node(state):
    query = state["analysis"]["query"]   # read what earlier nodes produced
    results = web_search(query)          # any search tool or helper
    state["search_results"] = results    # write results back into the shared state
    return state
Edges
Edges implement control flow: if/then/else, branching, and loops. After a node updates state, an edge inspects it to choose the next node.
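In LangGraph this is typically written with add_conditional_edges: a routing function reads the updated state and returns a label, and an optional mapping ties each label to the next node. The sketch below is an equivalent way to express the branch from the diagram above, this time with an explicit label-to-node map (names carried over from that sketch):

# sketch: route out of the analyze step by returning a label
def route_after_analyze(state: dict) -> str:
    return "search" if state["analysis"]["search_needed"] else "direct"

builder.add_conditional_edges(
    "analyze",               # source node whose output state is inspected
    route_after_analyze,     # returns "search" or "direct"
    {"search": "web_search", "direct": "answer_directly"},  # label -> next node
)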
Checkpointing and Persistence
Checkpointing saves a snapshot of progress and state right after a node finishes. That enables pause/resume, replay, and fault recovery. Persistence is the backend that stores those snapshots (memory, SQLite, Postgres, Redis, etc.).
# conceptual API (build_graph stands in for your own graph construction)
import sqlite3
from langgraph.checkpoint.memory import MemorySaver  # in-memory backend
# or a durable backend:
from langgraph.checkpoint.sqlite import SqliteSaver

checkpointer = SqliteSaver(sqlite3.connect("graph_state.sqlite", check_same_thread=False))
graph = build_graph(nodes, edges, checkpointer=checkpointer)

# thread_id scopes the checkpoint history to one conversation
config = {"configurable": {"thread_id": "customer-123"}}
graph.invoke(input_state, config=config)  # checkpoints after each node
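Once a checkpointer is attached, a thread's history can be inspected and a run can be resumed. A minimal sketch, assuming a compiled LangGraph graph and the thread above:

# sketch: pick up where a thread left off
config = {"configurable": {"thread_id": "customer-123"}}
snapshot = graph.get_state(config)  # latest checkpoint for this thread
print(snapshot.next)                # node(s) that would run next
graph.invoke(None, config=config)   # passing None resumes from the last checkpoint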
Threading
Thread = one storyline of state. Each conversation or task gets its own thread with its own checkpoint history. The same graph definition can run many threads concurrently—exactly what you want for production agents.
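For example, two customers can move through the same compiled graph at the same time without sharing any state (thread IDs here are illustrative):

# sketch: one graph definition, an independent checkpoint history per thread
for thread_id in ("customer-123", "customer-456"):
    config = {"configurable": {"thread_id": thread_id}}
    graph.invoke({"user_input": "Hi, I need help with my order."}, config=config)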
Example: Support Agent Graph
[ClassifyIntent] ── simple ──▶ [SearchKnowledgeBase] ──▶ [GenerateAnswer]
        └───────── complex ──▶ [EscalateToHuman]
Edges route “simple” intents to search/answer, and “complex” intents to escalation. After every node, the graph checkpoints; if the process restarts, it resumes from the last completed node, not the beginning.
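A compact, hedged sketch of that wiring (stub node bodies, illustrative names); the point is the intent-based routing and the checkpointer attached at compile time:

# sketch: support agent graph with intent routing and a checkpointer
from typing import TypedDict
from langgraph.graph import StateGraph, START, END
from langgraph.checkpoint.memory import MemorySaver

class SupportState(TypedDict, total=False):
    message: str
    intent: str
    kb_hit: str
    answer: str

def classify_intent(state):
    return {"intent": "simple" if "password" in state["message"].lower() else "complex"}

def search_knowledge_base(state):
    return {"kb_hit": "KB article: resetting your password"}  # stub lookup

def generate_answer(state):
    return {"answer": f"Here is what I found: {state['kb_hit']}"}

def escalate_to_human(state):
    return {"answer": "A human agent will follow up shortly."}

builder = StateGraph(SupportState)
builder.add_node("classify_intent", classify_intent)
builder.add_node("search_knowledge_base", search_knowledge_base)
builder.add_node("generate_answer", generate_answer)
builder.add_node("escalate_to_human", escalate_to_human)

builder.add_edge(START, "classify_intent")
builder.add_conditional_edges(
    "classify_intent",
    lambda s: s["intent"],  # "simple" or "complex"
    {"simple": "search_knowledge_base", "complex": "escalate_to_human"},
)
builder.add_edge("search_knowledge_base", "generate_answer")
builder.add_edge("generate_answer", END)
builder.add_edge("escalate_to_human", END)

# swap MemorySaver for a SQLite/Postgres checkpointer in production
graph = builder.compile(checkpointer=MemorySaver())
graph.invoke({"message": "I forgot my password"},
             config={"configurable": {"thread_id": "ticket-42"}})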
Takeaway: LangChain provides the building blocks. LangGraph arranges those blocks into a resilient, branching, resumable system using shared state, edges for decisions, and persistence‑backed checkpoints.