
Building Multi-Agent AI Systems with Next.js and LangGraph

🔥 Advanced: solid JavaScript foundation required
In this tutorial, you'll learn
Advanced tutorial on creating collaborative AI agents using Next.js and LangGraph.
  • LangGraph models agent workflows as directed graphs: nodes execute functions, edges route conditionally, state flows between them. If you cannot draw it as a flowchart, you cannot build it as a graph.
  • Decompose monolith agents into specialists, each with 2-3 tools and a focused 200-400 token prompt. If an agent's system prompt exceeds 500 tokens, it is doing too much.
  • Always set maxIterations on graph compilation: unbounded cycles burn tokens and produce no output. Add revision_count and token_budget to the state for additional guards.
⚡ Quick Answer
  • LangGraph models agent workflows as directed graphs: nodes are functions, edges are conditional routing, state flows between them
  • Multi-agent systems split a complex task across specialized agents (researcher, writer, reviewer), each with its own tools and prompts
  • Supabase persists agent state across requests: conversation history, tool outputs, and intermediate reasoning survive restarts
  • Human-in-the-loop nodes pause the graph and wait for approval before executing high-risk actions (file writes, API calls, deletions)
  • Production failure: unbounded graph cycles burn 47,000 tokens when two agents debate endlessly; maxIterations is mandatory
  • Biggest mistake: building one mega-agent that does everything. Specialized agents with clear boundaries are more reliable and debuggable
🚨 START HERE
LangGraph Debug Cheat Sheet
Fast diagnostics for graph loops, state loss, and agent failures in LangGraph multi-agent systems
🟡 Infinite loop between agents
Immediate Action: Add maxIterations to graph.compile() and check LangSmith traces
Commands
npx langsmith traces --project your-project --limit 5 to list recent traces
Check the trace timeline for repeated node executions and count the loop iterations
Fix Now: Set maxIterations: 5 and add a revision_count to the state that auto-approves after 3 loops
🟡 State lost between executions
Immediate Action: Verify the Supabase checkpointer is saving state after each node
Commands
SELECT * FROM langgraph_checkpoints ORDER BY created_at DESC LIMIT 5 to check recent state saves
Verify the state schema matches what the checkpointer serializes; check for custom types
Fix Now: Add logging in the checkpointer's put() method to confirm state is being persisted
🟡 Token budget exceeded mid-execution
Immediate Action: Add cumulative token tracking to the graph state
Commands
grep -rn 'maxTokens\|token' app/api/agent/ to find token configuration
Check the LangSmith trace for total token count per execution
Fix Now: Add tokenBudget to graph state, decrement in each node, force-approve when the budget hits zero
🟡 Human-in-the-loop node never resumes after approval
Immediate Action: Check that the resume signal targets the correct interrupt node
Commands
SELECT * FROM langgraph_checkpoints WHERE thread_id = '<thread_id>' ORDER BY created_at DESC LIMIT 1 for pending interrupts
Verify the resume signal: graph.updateState(config, { approved: true }, 'review_node')
Fix Now: Ensure the resume call targets the correct node name and includes all required state updates
🟡 Client disconnects but graph keeps running
Immediate Action: Add AbortSignal handling to the stream consumer
Commands
Check server logs for 'client disconnected' events and orphaned graph executions
Implement cancellation: if (abortSignal.aborted) { await graph.cancel(config) } — note that aborted is a synchronous boolean property, not a promise
Fix Now: Pass an AbortSignal to graph.stream() and call graph.cancel() when the client disconnects
Production Incident: Two agents enter an infinite debate loop, 47,000 tokens burned in 8 minutes
A multi-agent research system had a 'researcher' agent and a 'critic' agent. The critic reviewed the researcher's output and either approved it or sent it back with feedback. A poorly worded system prompt caused the critic to reject every output, and the researcher to rewrite the same content with minor wording changes. The cycle ran 23 times before the token budget was exhausted.
Symptom: OpenAI dashboard showed 47,000 tokens consumed in 8 minutes for a single user request. The user saw no output; the graph was still cycling. LangSmith traces showed 23 iterations of the same researcher->critic->researcher loop with near-identical outputs. Each iteration cost approximately 2,000 tokens (1,000 prompt + 1,000 completion).
Assumption: They assumed the critic agent would approve output after 1-2 rounds of feedback. They did not set a maximum iteration limit on the graph, and the conditional edge that routed from critic back to researcher had no termination condition beyond 'the critic is satisfied.' The critic's system prompt included 'be thorough and critical', which it interpreted as 'always find something wrong.'
Root cause: Three compounding factors: (1) no maxIterations on the graph; LangGraph does not cap cycles by default. (2) The critic's system prompt lacked an approval threshold: it had no instruction to approve 'good enough' output. (3) The conditional edge used a boolean 'approved' flag that the critic never set to true because its prompt always found issues. The graph was technically correct (each iteration was valid), but the emergent behavior was an infinite loop.
Fix: Added three guards: (1) set maxIterations: 5 on the graph compilation, a hard cap on total node executions; (2) rewrote the critic prompt with an explicit approval condition: 'If the output meets 80% of the requirements, approve it. Do not reject for minor wording issues.'; (3) added a token budget check in the graph state: if cumulative tokens exceed 10,000, force-approve and return the best output so far. Added a 'revision_count' to the state that increments on each loop and triggers auto-approval at 3 revisions.
Key Lesson
  • Always set maxIterations on LangGraph compilations: unbounded cycles burn tokens and produce no output
  • Critic/evaluator agents need explicit approval thresholds: 'be critical' without a threshold means 'always reject'
  • Track cumulative tokens in the graph state: force-approve when the budget is exhausted instead of silently failing
  • Add a revision_count to loop states: auto-approve after N iterations to prevent infinite debate between agents
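The loop guards from this incident can be sketched as a single routing function. This is plain TypeScript, not a LangGraph API; `LoopState`, `routeAfterCritic`, and the thresholds are illustrative names for this sketch.

```typescript
// Illustrative loop guard: three conditions stop the critic/researcher debate.
type LoopState = {
  approved: boolean;     // set by the critic agent
  revisionCount: number; // incremented on every critic -> researcher loop
  tokenBudget: number;   // decremented by each node's token usage
};

const MAX_REVISIONS = 3;

function routeAfterCritic(state: LoopState): 'revise' | 'end' {
  if (state.tokenBudget <= 0) return 'end';               // budget exhausted: force-approve
  if (state.revisionCount >= MAX_REVISIONS) return 'end'; // auto-approve after N loops
  return state.approved ? 'end' : 'revise';               // otherwise defer to the critic
}
```

The key property: every path that blocks approval also has a hard exit, so a misbehaving critic can delay termination but never prevent it.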
Production Debug Guide
Graph execution hangs with no output and no error→Check LangSmith traces — look for an infinite loop between two nodes. Add maxIterations to graph.compile() and log the iteration execution count in each node.
Agent calls the wrong tool or uses tools in the wrong order→Review the agent's system prompt — tool selection is prompt-driven. Add explicit tool descriptions and 'when to use' instructions. Consider splitting the agent into two: one for planning, one for execution.
State is lost between graph executions (conversation resets)→Verify that Supabase is persisting the graph state after each node execution. Check that the checkpointer is configured with the correct table and that state serialization handles all custom types.
Human-in-the-loop node never resumes after approval→Check that the interrupt is using the correct node name and that the resume signal includes the required state update. Verify that the graph's checkpointer has the interrupted state saved.
Parallel agent execution produces race conditions on shared state→LangGraph executes nodes sequentially by default — parallel execution requires explicit Send() map-reduce patterns. If you see race conditions, you likely have shared mutable state without proper locking.
Graph compiles but produces empty or malformed output→Check the final node's return value — it must match the graph's state schema. Add logging to every node's output to trace where the state becomes empty.
Client disconnects mid-graph execution but graph continues running→Implement AbortSignal handling in the stream consumer. Check controller.shouldClose() and cancel the graph execution if the client disconnects. Use a background job (Inngest/Qstash) for long-running graphs that exceed serverless timeouts.
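The disconnect-handling pattern above can be sketched in plain TypeScript. `cancelGraph` here stands in for whatever cancellation hook your orchestrator exposes; the wiring of AbortSignal to that hook is the point.

```typescript
// Illustrative cancellation wiring: propagate a client disconnect to a
// long-running graph execution via AbortSignal.
function wireCancellation(signal: AbortSignal, cancelGraph: () => void): void {
  if (signal.aborted) {
    cancelGraph(); // client already gone before the graph started
    return;
  }
  signal.addEventListener('abort', () => cancelGraph(), { once: true });
}
```

In a Next.js route handler, the `Request` object exposes `request.signal`, which aborts when the client disconnects; pass it to this helper before starting the stream.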

Single-agent AI systems hit a ceiling fast. One agent with access to 15 tools and a 4,000-token system prompt produces inconsistent results: it confuses tool selection, loses context on long tasks, and cannot self-correct when it makes a mistake. The fix is not more tools or longer prompts. The fix is decomposition: split the task across specialized agents, each with a narrow responsibility and a clear handoff protocol.

LangGraph provides the orchestration layer. It models agent workflows as directed graphs: nodes execute functions (agent calls, tool execution, human review), edges route based on conditional logic (was the output good enough? did the agent request a tool?), and state flows through the graph carrying conversation history, tool outputs, and intermediate results. This graph model enables patterns that single-agent systems cannot achieve: retry loops, parallel execution, human approval gates, and graceful degradation when one agent fails.

The production stack for this article: Next.js 16 as the application framework, LangGraph for agent orchestration, Supabase for state persistence, and the Vercel AI SDK for streaming the graph execution to the client. The patterns apply to any LLM provider: OpenAI, Anthropic, or self-hosted models.

LangGraph Fundamentals: Nodes, Edges, and State

LangGraph models agent workflows as a directed graph. Three primitives compose every graph: nodes, edges, and state. Nodes are functions that execute a step: call an LLM, run a tool, wait for human input, or transform data. Edges define the routing logic: conditional branches based on the output of the previous node. State is the data that flows through the graph: conversation history, tool outputs, intermediate reasoning, and metadata.

The graph is compiled into a runnable that accepts input and produces output. The compilation step validates the graph structure: all nodes are reachable, all edges have valid targets, and the state schema is consistent. Compilation also accepts a checkpointer that persists state after each node execution, enabling pause/resume, human-in-the-loop, and crash recovery.

The key insight: LangGraph is not an agent framework; it is a workflow engine. It does not define how an agent thinks. It defines the order, conditions, and data flow between steps. You bring the agents (functions that call LLMs), the tools (functions that do work), and the routing logic (conditional edges). LangGraph orchestrates them.

State management is the hardest part. The state object must be serializable (for checkpointing), typed (for correctness), and minimal (for token efficiency). Store only what the graph needs to make routing decisions and what agents need as context. Do not dump the entire conversation history into every node; pass only the relevant slice.
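Slicing can be as simple as a helper that keeps the system prompt plus the most recent turns. This is a plain TypeScript sketch; the `Message` shape and `relevantSlice` name are assumptions, not LangGraph APIs.

```typescript
// Illustrative context slicing: pass each node the system message plus the
// last N turns instead of the full conversation history.
type Message = { role: 'system' | 'user' | 'assistant'; content: string };

function relevantSlice(history: Message[], maxTurns = 6): Message[] {
  const system = history.filter((m) => m.role === 'system');
  const turns = history.filter((m) => m.role !== 'system');
  return [...system, ...turns.slice(-maxTurns)]; // keep system + most recent turns
}
```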

io/thecodeforge/multi-agent/lib/graphs/research-graph.ts · TYPESCRIPT
import { StateGraph, Annotation, START, END } from '@langchain/langgraph';
import { ChatOpenAI } from '@langchain/openai';
import { HumanMessage, SystemMessage } from '@langchain/core/messages';
import { createClient } from '@supabase/supabase-js';
import { SupabaseSaver } from './checkpointers/supabase';

// Define the graph state: only what the graph needs
// Minimal state = fewer tokens = lower cost = faster execution
const GraphState = Annotation.Root({
  // Input
  query: Annotation<string>,

  // Agent outputs
  research: Annotation<string>,
  analysis: Annotation<string>,
  finalReport: Annotation<string>,

  // Control flow
  approved: Annotation<boolean>,
  revisionCount: Annotation<number>,
  tokenBudget: Annotation<number>,
  currentStep: Annotation<string>,

  // Metadata
  errors: Annotation<string[]>,
  startTime: Annotation<number>,
});

// Node: Research agent - gathers information
async function researchNode(state: typeof GraphState.State) {
  const llm = new ChatOpenAI({ model: 'gpt-4o', temperature: 0 });

  const response = await llm.invoke([
    new SystemMessage('You are a research agent. Gather relevant information for the query. Be concise and factual.'),
    new HumanMessage(`Research the following topic: ${state.query}`),
  ]);

  return {
    research: response.content as string,
    currentStep: 'research_complete',
    tokenBudget: state.tokenBudget - (response.usage_metadata?.total_tokens ?? 0),
  };
}

// Node: Analysis agent - processes research into insights
async function analysisNode(state: typeof GraphState.State) {
  const llm = new ChatOpenAI({ model: 'gpt-4o', temperature: 0 });

  const response = await llm.invoke([
    new SystemMessage('You are an analysis agent. Extract key insights from the research. Identify patterns, contradictions, and gaps.'),
    new HumanMessage(`Analyze this research:\n\n${state.research}`),
  ]);

  return {
    analysis: response.content as string,
    currentStep: 'analysis_complete',
    tokenBudget: state.tokenBudget - (response.usage_metadata?.total_tokens ?? 0),
  };
}

// Node: Report writer - synthesizes analysis into a report
async function writerNode(state: typeof GraphState.State) {
  const llm = new ChatOpenAI({ model: 'gpt-4o', temperature: 0.3 });

  const response = await llm.invoke([
    new SystemMessage('You are a report writer. Synthesize the analysis into a clear, actionable report.'),
    new HumanMessage(`Write a report based on this analysis:\n\n${state.analysis}`),
  ]);

  return {
    finalReport: response.content as string,
    currentStep: 'report_complete',
    revisionCount: state.revisionCount + 1,
    tokenBudget: state.tokenBudget - (response.usage_metadata?.total_tokens ?? 0),
  };
}

// Conditional edge: loop back for revision or end the graph.
// Note: in the routing map below, returning 'writer' loops back to the
// researcher for another revision pass; 'end' terminates the graph.
function shouldContinue(state: typeof GraphState.State): string {
  // Force-approve if budget exhausted - stop looping, return what we have
  if (state.tokenBudget <= 0) {
    return 'end';
  }

  // Force-approve after 3 revisions - prevents infinite debate
  if (state.revisionCount >= 3) {
    return 'end';
  }

  // Not approved yet: route back for another revision pass
  if (!state.approved) {
    return 'writer';
  }

  return 'end';
}

// Build the graph
const graph = new StateGraph(GraphState)
  .addNode('researcher', researchNode)
  .addNode('analyzer', analysisNode)
  .addNode('writer', writerNode)
  .addEdge(START, 'researcher')
  .addEdge('researcher', 'analyzer')
  .addEdge('analyzer', 'writer')
  .addConditionalEdges('writer', shouldContinue, {
    writer: 'researcher',
    end: END,
  })
  .compile({
    checkpointer: new SupabaseSaver({
      client: createClient(
        process.env.SUPABASE_URL!,
        process.env.SUPABASE_SERVICE_KEY!
      ),
      tableName: 'langgraph_checkpoints',
    }),
    // Hard cap on total node executions - prevents infinite loops.
    // In LangGraph JS this cap is typically passed at invocation time via
    // config.recursionLimit, e.g. graph.invoke(input, { recursionLimit: 10 })
  });

export { graph, GraphState };
▶ Output
Research graph with three agents (researcher, analyzer, writer), conditional routing, Supabase checkpointing, and token budget enforcement
Mental Model
LangGraph Mental Model
Think of LangGraph as a flowchart executor. Each box in the flowchart is a node (a function). Each arrow is an edge (routing logic). The clipboard that gets passed between boxes is the state. The checkpointer photographs the clipboard after each box; if the process crashes, you resume from the last photograph.
  • Nodes are functions: call an LLM, run a tool, wait for human input, or transform data
  • Edges are routing logic: conditional branches based on the output of the previous node
  • State is the data that flows through the graph; keep it minimal for token efficiency
  • Checkpointer persists state after each node, enabling pause/resume, human-in-the-loop, and crash recovery
  • Compilation validates the graph structure: all nodes reachable, all edges valid, state schema consistent
📊 Production Insight
LangGraph is a workflow engine, not an agent framework: it orchestrates steps, not thinking.
State management is the hardest part: keep it minimal, typed, and serializable.
Rule: store only what the graph needs for routing and what agents need as context; never dump full conversation history into every node.
🎯 Key Takeaway
LangGraph models workflows as directed graphs: nodes execute, edges route, state flows.
Keep state minimal: only routing decisions and agent context, never full conversation history.
Punchline: LangGraph is a flowchart executor; if you cannot draw your workflow as a flowchart, you cannot build it as a graph.
LangGraph Architecture Decisions
If: Simple sequential task (research then write)
→
Use: Linear graph: START -> researcher -> writer -> END, no conditional edges needed
If: Task requires review and revision cycles
→
Use: Loop graph: writer -> reviewer -> conditional edge -> writer or END, with a maxIterations cap
If: Multiple independent subtasks that can run in parallel
→
Use: Map-reduce graph: fan-out with Send() to parallel nodes, fan-in with a reducer node
If: High-risk action needs human approval before execution
→
Use: Interrupt node: graph pauses, waits for a resume signal with an approval state update
If: State must survive across multiple user sessions
→
Use: Supabase checkpointer: persists state to a database table keyed by thread_id

Multi-Agent Architecture: Decomposition Over Monoliths

The monolith agent pattern (one agent with 15 tools and a 4,000-token system prompt) fails in production for three reasons. First, tool selection degrades as the tool count increases: the agent confuses similar tools and selects the wrong one. Second, context window pressure: the system prompt, conversation history, and tool descriptions compete for limited context. Third, debugging is opaque: when the monolith produces a bad output, you cannot identify which reasoning step failed.

Multi-agent architecture solves all three problems through decomposition. Each agent has a narrow responsibility: researcher (gathers data), analyzer (extracts insights), writer (produces output), reviewer (validates quality). Each agent has 2-3 tools maximum. Each agent's system prompt is 200-400 tokens focused on one task. The graph orchestrates the handoffs.

The production pattern: define agent boundaries by capability, not by data domain. A 'researcher' agent searches the web, reads documents, and extracts facts, regardless of whether the topic is finance, medicine, or engineering. An 'analyzer' agent identifies patterns, contradictions, and gaps, regardless of the data source. This separation means you can swap the researcher's tools without affecting the analyzer's logic.

The supervisor pattern is the most common multi-agent topology. A supervisor agent receives the user's request, decomposes it into subtasks, routes each subtask to the appropriate specialist agent, and synthesizes the results. The supervisor does not do the work; it orchestrates. This is analogous to a project manager who assigns tasks to engineers, not an engineer who does everything.

io/thecodeforge/multi-agent/lib/agents/supervisor.ts · TYPESCRIPT
import { ChatOpenAI } from '@langchain/openai';
import { SystemMessage, HumanMessage } from '@langchain/core/messages';
import { z } from 'zod';
import { tool } from '@langchain/core/tools';

// Supervisor agent: decomposes tasks and routes to specialists
// Does NOT do the work - orchestrates specialists

const TaskSchema = z.object({
  agent: z.enum(['researcher', 'analyzer', 'writer', 'reviewer']).describe('Which specialist agent should handle this task'),
  task: z.string().describe('Specific instruction for the specialist agent'),
  priority: z.enum(['high', 'medium', 'low']).describe('Task priority - high tasks run first'),
});

const DecompositionSchema = z.object({
  tasks: z.array(TaskSchema).describe('List of tasks to distribute to specialist agents'),
  reasoning: z.string().describe('Why this decomposition was chosen'),
});

export async function supervisorDecompose(query: string) {
  const llm = new ChatOpenAI({ model: 'gpt-4o', temperature: 0 });

  // Bind the structured output schema - forces the LLM to return valid JSON
  const structuredLlm = llm.withStructuredOutput(DecompositionSchema);

  const result = await structuredLlm.invoke([
    new SystemMessage(`You are a supervisor agent. Your job is to decompose complex tasks into subtasks and assign each to the appropriate specialist.\n\nSpecialists available:\n- researcher: Gathers information from web search, documents, and databases. Best for: factual questions, data collection, source verification.\n- analyzer: Extracts patterns, contradictions, and gaps from provided data. Best for: critical analysis, comparison, summarization.\n- writer: Produces polished output (reports, emails, code). Best for: synthesis, formatting, communication.\n- reviewer: Validates output quality, checks facts, identifies errors. Best for: quality assurance, fact-checking, compliance.\n\nRules:\n- Decompose the query into 2-5 subtasks maximum\n- Each subtask targets exactly one specialist\n- High-priority tasks run first\n- If the query is simple (one-step), assign to a single specialist`),
    new HumanMessage(`Decompose this task: ${query}`),
  ]);

  return result;
}

// Specialist agent factory - each agent has a narrow tool set and focused prompt
export function createSpecialistAgent(role: 'researcher' | 'analyzer' | 'writer' | 'reviewer') {
  const configs = {
    researcher: {
      systemPrompt: 'You are a research agent. You gather information from available sources. You do not analyze or write reports - you collect facts and return them in a structured format.',
      tools: ['web_search', 'document_reader'],
      model: 'gpt-4o',
      temperature: 0,
    },
    analyzer: {
      systemPrompt: 'You are an analysis agent. You examine provided data and extract key insights, patterns, contradictions, and gaps. You do not gather new data or write final reports.',
      tools: ['calculator', 'comparison_tool'],
      model: 'gpt-4o',
      temperature: 0,
    },
    writer: {
      systemPrompt: 'You are a writing agent. You synthesize provided analysis into clear, well-structured output. You do not gather data or perform analysis - you write based on what is provided.',
      tools: ['markdown_formatter'],
      model: 'gpt-4o',
      temperature: 0.3,
    },
    reviewer: {
      systemPrompt: 'You are a review agent. You validate output quality by checking facts, identifying errors, and assessing completeness. You approve or reject with specific feedback. If the output meets 80% of requirements, approve it.',
      tools: ['fact_checker'],
      model: 'gpt-4o',
      temperature: 0,
    },
  };

  const config = configs[role];
  const llm = new ChatOpenAI({
    model: config.model,
    temperature: config.temperature,
  });

  return {
    role,
    invoke: async (task: string, context: string) => {
      return llm.invoke([
        new SystemMessage(config.systemPrompt),
        new HumanMessage(`Task: ${task}\n\nContext:\n${context}`),
      ]);
    },
  };
}
▶ Output
Supervisor agent that decomposes tasks and routes to specialists; each specialist has 2-3 tools and a focused prompt
⚠ Never Give One Agent More Than 5 Tools
Tool selection accuracy degrades sharply above 5 tools. The agent confuses similar tools (web_search vs document_search) and selects the wrong one. If you need more than 5 tools, split the agent into two: a planner (selects which tool category) and an executor (runs the specific tool within that category). Each agent keeps 2-3 tools maximum.
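The planner/executor split can be sketched as a two-level lookup. This is plain TypeScript; the category and tool names are illustrative. The planner chooses among categories, the executor chooses within one category, so no single decision ever faces more than a handful of options.

```typescript
// Illustrative planner/executor split: the planner selects a tool category,
// the executor selects a specific tool within that category.
const TOOL_CATEGORIES = {
  search: ['web_search', 'document_search'],
  compute: ['calculator', 'comparison_tool'],
  output: ['markdown_formatter'],
} as const;

type ToolCategory = keyof typeof TOOL_CATEGORIES;

function executorChoices(category: ToolCategory): readonly string[] {
  return TOOL_CATEGORIES[category]; // 2-3 tools max per decision
}
```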
📊 Production Insight
Monolith agents with 15+ tools fail in production: tool selection accuracy degrades above 5 tools.
Decompose by capability, not data domain: researcher, analyzer, writer, reviewer each have 2-3 tools max.
Rule: if an agent's system prompt exceeds 500 tokens, it is doing too much; split it into two agents.
🎯 Key Takeaway
Decompose monolith agents into specialists, each with 2-3 tools and a focused 200-400 token prompt.
The supervisor pattern orchestrates without doing the work: it routes; it does not execute.
Punchline: if an agent's system prompt exceeds 500 tokens, it is doing too much; split it into two agents with clear boundaries.
Multi-Agent Topology Decisions
If: Simple task that one agent can handle
→
Use: Single agent: no graph overhead, no orchestration complexity
If: Task requires research then writing then review
→
Use: Sequential graph: researcher -> writer -> reviewer, a linear pipeline
If: Task requires multiple independent subtasks
→
Use: Supervisor pattern: supervisor decomposes, routes to specialists, synthesizes results
If: Task requires iterative refinement (write, review, revise)
→
Use: Loop graph: writer -> reviewer -> conditional edge -> writer or END, with a maxIterations cap
If: Task has independent subtasks that can run in parallel
→
Use: Map-reduce graph: fan-out to parallel agents, fan-in with a synthesis node

State Persistence: Supabase as the Graph Memory Layer

LangGraph's checkpointer interface persists graph state after each node execution. Without a checkpointer, state lives in memory: lost on restart, unavailable for multi-turn conversations, and impossible to debug after the fact. Supabase provides a Postgres-backed checkpointer that survives restarts, supports concurrent access, and enables SQL queries against historical state.

The checkpointer stores three things: the current state snapshot (serialized graph state), the write-ahead log (sequence of state updates), and the metadata (thread_id, node_name, timestamp). The thread_id is the primary key; it groups all state snapshots for a single conversation or workflow execution.

The production pattern: use a dedicated Supabase table for checkpoints with a composite index on (thread_id, created_at). After each node execution, the checkpointer writes the full state snapshot. On resume (human-in-the-loop, crash recovery, or multi-turn conversation), the checkpointer loads the latest snapshot for the thread_id and the graph resumes from that point.
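A checkpoint table matching this pattern might look like the following. The column names mirror the checkpointer code in this article; they are an assumption for this sketch, not a schema mandated by LangGraph.

```sql
-- Hypothetical schema for the checkpoints table described above
create table langgraph_checkpoints (
  thread_id            text not null,
  checkpoint_id        text not null,
  parent_checkpoint_id text,
  state                jsonb not null,  -- serialized graph state snapshot
  metadata             jsonb not null,  -- node name, step, timestamps
  created_at           timestamptz not null default now(),
  primary key (thread_id, checkpoint_id)
);

-- Composite index: latest-snapshot lookups stay fast as the table grows
create index langgraph_checkpoints_thread_created_idx
  on langgraph_checkpoints (thread_id, created_at desc);
```

The composite primary key also gives upserts a conflict target, which the checkpointer's put() relies on.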

State serialization is the hidden complexity. The graph state may contain complex types: Message objects, tool call results, custom classes. The checkpointer must serialize these to JSON for storage and deserialize them on load. LangChain's message serialization handles Message objects, but custom types need explicit serialization hooks. If serialization fails silently, the restored state is incomplete: agents receive partial context and produce incorrect outputs.
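One way to make serialization failures loud is a pre-checkpoint guard that walks the state and flags values a JSON round-trip would corrupt. This is a plain TypeScript sketch; `findNonSerializable` is an illustrative helper, not part of LangGraph.

```typescript
// Illustrative guard: flag state values that do not survive JSON.stringify/parse
// (Dates become strings, Maps/Sets become {}, class instances lose prototypes).
function findNonSerializable(value: unknown, path = 'state'): string[] {
  if (value === null) return [];
  const t = typeof value;
  if (t === 'string' || t === 'number' || t === 'boolean') return [];
  if (t === 'function' || t === 'undefined' || t === 'bigint' || t === 'symbol') return [path];
  if (Array.isArray(value)) {
    return value.flatMap((v, i) => findNonSerializable(v, `${path}[${i}]`));
  }
  // Anything that is not a plain object (Date, Map, Set, class instance) is suspect
  if (Object.getPrototypeOf(value) !== Object.prototype) return [path];
  return Object.entries(value as Record<string, unknown>).flatMap(([k, v]) =>
    findNonSerializable(v, `${path}.${k}`),
  );
}
```

Calling this before put() and throwing on a non-empty result turns silent state corruption into an immediate, debuggable error.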

io/thecodeforge/multi-agent/lib/checkpointers/supabase.ts · TYPESCRIPT
import { BaseCheckpointSaver, Checkpoint, CheckpointMetadata } from '@langchain/langgraph';
import { SupabaseClient } from '@supabase/supabase-js';

interface SupabaseSaverConfig {
  client: SupabaseClient;
  tableName?: string;
}

// Supabase-backed checkpointer for LangGraph
// Persists graph state after each node execution - survives restarts
export class SupabaseSaver extends BaseCheckpointSaver {
  private client: SupabaseClient;
  private tableName: string;

  constructor(config: SupabaseSaverConfig) {
    super();
    this.client = config.client;
    this.tableName = config.tableName ?? 'langgraph_checkpoints';
  }

  // Get the latest checkpoint for a thread
  async getTuple(config: { configurable: { thread_id: string } }) {
    const { data, error } = await this.client
      .from(this.tableName)
      .select('*')
      .eq('thread_id', config.configurable.thread_id)
      .order('created_at', { ascending: false })
      .limit(1)
      .single();

    if (error || !data) {
      return undefined;
    }

    return {
      config: { configurable: { thread_id: data.thread_id, checkpoint_id: data.checkpoint_id } },
      checkpoint: JSON.parse(data.state) as Checkpoint,
      metadata: JSON.parse(data.metadata) as CheckpointMetadata,
      parentConfig: data.parent_checkpoint_id
        ? { configurable: { thread_id: data.thread_id, checkpoint_id: data.parent_checkpoint_id } }
        : undefined,
    };
  }

  // List all checkpoints for a thread - for debugging and audit
  async *list(config: { configurable: { thread_id: string } }) {
    const { data, error } = await this.client
      .from(this.tableName)
      .select('*')
      .eq('thread_id', config.configurable.thread_id)
      .order('created_at', { ascending: false });

    if (error || !data) {
      return;
    }

    for (const row of data) {
      yield {
        config: { configurable: { thread_id: row.thread_id, checkpoint_id: row.checkpoint_id } },
        checkpoint: JSON.parse(row.state) as Checkpoint,
        metadata: JSON.parse(row.metadata) as CheckpointMetadata,
        parentConfig: row.parent_checkpoint_id
          ? { configurable: { thread_id: row.thread_id, checkpoint_id: row.parent_checkpoint_id } }
          : undefined,
      };
    }
  }

  // Save a checkpoint - called after each node execution
  async put(
    config: { configurable: { thread_id: string } },
    checkpoint: Checkpoint,
    metadata: CheckpointMetadata,
  ) {
    const checkpointId = checkpoint.id ?? crypto.randomUUID();

    const { error } = await this.client
      .from(this.tableName)
      .upsert({
        thread_id: config.configurable.thread_id,
        checkpoint_id: checkpointId,
        parent_checkpoint_id: config.configurable.checkpoint_id ?? null,
        state: JSON.stringify(checkpoint),
        metadata: JSON.stringify(metadata),
        created_at: new Date().toISOString(),
      });

    if (error) {
      console.error('Failed to save checkpoint:', error);
      throw new Error(`Checkpoint save failed: ${error.message}`);
    }

    return { configurable: { thread_id: config.configurable.thread_id, checkpoint_id: checkpointId } };
  }
}
▶ Output
Supabase checkpointer: persists graph state after each node, supports list/get/put operations, survives restarts
💡 Pro Tip: Create an Index on (thread_id, created_at)
📊 Production Insight
Without a checkpointer, graph state lives in memory: lost on restart, unavailable for multi-turn conversations.
Supabase provides Postgres-backed persistence: survives restarts, supports concurrent access, enables SQL queries.
Rule: create a composite index on (thread_id, created_at); without it, state lookups are O(n) and degrade as the table grows.
🎯 Key Takeaway
Supabase checkpointer persists graph state after each node: survives restarts, enables multi-turn conversations.
Create a composite index on (thread_id, created_at); without it, state lookups degrade to O(n).
Punchline: if your graph state lives in memory, your multi-agent system is a single-use disposable. Persist it or lose it.
State Persistence Decisions
If: Single-turn conversation, no crash recovery needed
β†’ Use: In-memory checkpointer β€” simplest, no database dependency
If: Multi-turn conversation that survives page refresh
β†’ Use: Supabase checkpointer β€” persists state keyed by thread_id
If: Human-in-the-loop that pauses and resumes across sessions
β†’ Use: Supabase checkpointer with interrupt support β€” state saved at pause point, resumed on approval
If: Need to debug or audit past graph executions
β†’ Use: Supabase checkpointer with list() β€” query all checkpoints for a thread, reconstruct execution history
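The audit case β€” reconstructing execution history from list() β€” reduces to walking the parent_checkpoint_id chain. A minimal sketch, assuming rows shaped like the columns this checkpointer upserts (checkpoint_id, parent_checkpoint_id) and a single linear chain per thread:

```typescript
// Hypothetical row shape, mirroring the columns the checkpointer writes
interface CheckpointRow {
  checkpoint_id: string;
  parent_checkpoint_id: string | null;
}

// Order checkpoints oldest-to-newest by following the parent chain.
// Assumes one linear chain per thread (no branching forks).
function reconstructHistory(rows: CheckpointRow[]): string[] {
  const byParent = new Map<string | null, CheckpointRow>();
  for (const row of rows) byParent.set(row.parent_checkpoint_id, row);

  const ordered: string[] = [];
  let cursor: string | null = null; // the root checkpoint has no parent
  while (byParent.has(cursor)) {
    const row = byParent.get(cursor)!;
    ordered.push(row.checkpoint_id);
    cursor = row.checkpoint_id;
  }
  return ordered;
}
```

The rows can arrive in any order from the database; the chain walk recovers the execution order without needing a sort on created_at.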

Human-in-the-Loop: Approval Gates for High-Risk Actions

Some agent actions are too risky to execute without human review. Deleting files, sending emails, making API calls with side effects, or generating legal documents β€” these need a human approval gate before execution. LangGraph's interrupt mechanism provides this: the graph pauses at a specific node, saves the state, and waits for a resume signal with the human's decision.

The pattern: the agent proposes an action, the graph interrupts and presents the proposal to the user, the user approves or rejects, and the graph resumes with the decision in the state. The conditional edge after the interrupt node routes based on the approval status β€” execute if approved, revise if rejected, terminate if the user cancels.

The production consideration: the interrupt-resume cycle must be atomic. The state at the interrupt point must be exactly what the user sees, and the resume must restore that exact state. If the state changes between interrupt and resume (e.g., another process modifies the database), the agent may execute an action based on stale context.

The UX challenge is presenting the proposal clearly. The user needs to understand what the agent wants to do, why, and what the consequences are. A raw JSON dump of the proposed action is not sufficient. The agent should generate a human-readable summary of the proposed action, and the UI should present it with approve/reject buttons and an optional feedback field for rejections.

io/thecodeforge/multi-agent/components/human-approval.tsx Β· TSX
'use client';

import { useState } from 'react';
import { Button } from '@/components/ui/button';
import { Card, CardContent, CardDescription, CardHeader, CardTitle } from '@/components/ui/card';
import { Textarea } from '@/components/ui/textarea';

interface ApprovalRequest {
  threadId: string;
  nodeName: string;
  proposal: string;
  riskLevel: 'low' | 'medium' | 'high';
  actionType: string;
  details: Record<string, unknown>;
}

interface HumanApprovalProps {
  request: ApprovalRequest;
  onApprove: (threadId: string, feedback?: string) => Promise<void>;
  onReject: (threadId: string, feedback: string) => Promise<void>;
}

// Human-in-the-loop approval UI
// The graph pauses at an interrupt node and waits for this component to send a resume signal
export function HumanApproval({ request, onApprove, onReject }: HumanApprovalProps) {
  const [feedback, setFeedback] = useState('');
  const [isSubmitting, setIsSubmitting] = useState(false);

  const riskColors = {
    low: 'bg-green-500/10 text-green-700 border-green-500/20',
    medium: 'bg-yellow-500/10 text-yellow-700 border-yellow-500/20',
    high: 'bg-red-500/10 text-red-700 border-red-500/20',
  };

  const handleApprove = async () => {
    setIsSubmitting(true);
    try {
      await onApprove(request.threadId, feedback || undefined);
    } finally {
      setIsSubmitting(false);
    }
  };

  const handleReject = async () => {
    if (!feedback.trim()) {
      return; // Rejection requires feedback β€” the agent needs to know why
    }
    setIsSubmitting(true);
    try {
      await onReject(request.threadId, feedback);
    } finally {
      setIsSubmitting(false);
    }
  };

  return (
    <Card className="border-l-4 border-l-yellow-500">
      <CardHeader>
        <div className="flex items-center justify-between">
          <CardTitle className="text-lg">Approval Required</CardTitle>
          <span className={`rounded-full px-3 py-1 text-xs font-medium border ${riskColors[request.riskLevel]}`}>
            {request.riskLevel.toUpperCase()} RISK
          </span>
        </div>
        <CardDescription>
          The agent wants to perform: <strong>{request.actionType}</strong>
        </CardDescription>
      </CardHeader>
      <CardContent className="space-y-4">
        {/* Human-readable proposal β€” not raw JSON */}
        <div className="rounded-md bg-muted p-4">
          <p className="text-sm whitespace-pre-wrap">{request.proposal}</p>
        </div>

        {/* Feedback field β€” required for rejection, optional for approval */}
        <div className="space-y-2">
          <label className="text-sm font-medium">Feedback (required for rejection)</label>
          <Textarea
            value={feedback}
            onChange={(e) => setFeedback(e.target.value)}
            placeholder="Explain why you are rejecting or provide additional context..."
            rows={3}
          />
        </div>

        {/* Action buttons */}
        <div className="flex gap-3 justify-end">
          <Button
            variant="outline"
            onClick={handleReject}
            disabled={isSubmitting || !feedback.trim()}
          >
            Reject
          </Button>
          <Button
            onClick={handleApprove}
            disabled={isSubmitting}
          >
            Approve
          </Button>
        </div>
      </CardContent>
    </Card>
  );
}
β–Ά Output
Human-in-the-loop approval UI: risk level indicator, human-readable proposal, feedback field, approve/reject buttons
Mental Model
Interrupt-Resume Mental Model
Think of the interrupt node like a checkpoint at airport security. The agent (traveler) reaches the checkpoint with a proposal (luggage). The graph pauses. The human (security officer) inspects the proposal and either waves the agent through (approve) or sends it back for inspection (reject). The graph resumes from the exact checkpoint β€” nothing changes between pause and resume.
  • Interrupt node pauses the graph and saves the state β€” the agent's proposal is frozen in time
  • Human reviews the proposal in the UI β€” approve or reject with feedback
  • Resume signal carries the decision back to the graph β€” conditional edge routes based on approval
  • The state between interrupt and resume must be atomic β€” no external modifications
  • Rejection feedback goes back to the agent as context β€” it revises and proposes again
πŸ“Š Production Insight
Human-in-the-loop pauses the graph at a specific node β€” state is frozen, proposal is presented to the user.
The interrupt-resume cycle must be atomic β€” state changes between pause and resume cause stale context execution.
Rule: rejection requires feedback β€” the agent needs to know why it was rejected to revise correctly.
🎯 Key Takeaway
Human-in-the-loop pauses the graph at an interrupt node β€” the agent proposes, the human decides, the graph resumes.
Rejection requires feedback β€” the agent needs to know why to revise correctly.
Punchline: if your agent can delete data, send emails, or make payments without human review, you do not have a production system β€” you have a liability.
Human-in-the-Loop Decisions
If: Agent proposes a read-only action (search, summarize)
β†’ Use: No approval needed β€” execute directly, no interrupt node
If: Agent proposes a mutation with minor impact (draft email, update record)
β†’ Use: Low-risk approval β€” interrupt with auto-approve after a 30-second timeout
If: Agent proposes a high-impact action (delete data, send email, make payment)
β†’ Use: High-risk approval β€” interrupt, wait for explicit human approval, no auto-approve
If: Agent proposes an action the user previously rejected
β†’ Use: Show rejection history in the UI β€” user sees what was rejected and why before deciding again
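The routing after the interrupt β€” execute if approved, revise if rejected, terminate if the user cancels β€” is just a conditional-edge function over the state. A minimal sketch; the state field names and node names are illustrative assumptions, not LangGraph API:

```typescript
// Illustrative state slice β€” field names are assumptions for this sketch
interface ApprovalState {
  approved: boolean;
  cancelled: boolean;
  revisionCount: number;
}

const MAX_REVISIONS = 3;

// Conditional edge after the interrupt node: returns the next node name
function routeAfterApproval(state: ApprovalState): 'execute' | 'revise' | 'end' {
  if (state.cancelled) return 'end';                      // user cancelled β€” terminate
  if (state.approved) return 'execute';                   // explicit approval β€” run the action
  if (state.revisionCount >= MAX_REVISIONS) return 'end'; // guard against endless revise loops
  return 'revise';                                        // rejected with feedback β€” agent revises
}
```

The revision cap belongs here even though this is an approval gate β€” a user who rejects forever is the human version of the always-rejecting critic.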

Streaming Graph Execution to the Client

Multi-agent graph execution can take 10-60 seconds β€” multiple LLM calls, tool executions, and conditional routing add up. Without streaming, the user sees a blank screen for the entire duration. With streaming, the user sees each node's output as it executes: the researcher's findings appear first, then the analyzer's insights, then the writer's report.

LangGraph supports streaming via the graph.stream() method, which yields events as each node completes. Each event contains the node name, the state update, and the metadata. The Next.js Route Handler pipes these events to the client via a ReadableStream, and the client renders them token-by-token.

The production pattern: stream three levels of information. Level 1: node status β€” which agent is currently executing (show a status indicator: 'Researching...', 'Analyzing...', 'Writing...'). Level 2: node output β€” the agent's response as it generates (stream tokens from the LLM call). Level 3: graph metadata β€” iteration count, token usage, and routing decisions (for debugging dashboards).

The UX consideration: do not show raw graph events to users. Transform them into a conversation-like interface where each agent's contribution appears as a message. The user sees a coherent narrative, not a debugging log.

Critical production concern: client disconnections. If the user navigates away or refreshes the page mid-execution, the graph may continue running server-side, consuming tokens without a client to receive the output. Implement AbortSignal handling to cancel graph execution when the client disconnects.

io/thecodeforge/multi-agent/app/api/agent/route.ts Β· TYPESCRIPT
import { NextRequest } from 'next/server';
import { graph } from '@/io/thecodeforge/multi-agent/lib/graphs/research-graph';

// Route Handler: streams graph execution events to the client
// Each node's output appears as it completes β€” no blank screen
export async function POST(req: NextRequest) {
  const { query, threadId } = await req.json();

  if (!query || !threadId) {
    return Response.json({ error: 'query and threadId are required' }, { status: 400 });
  }

  const encoder = new TextEncoder();

  const stream = new ReadableStream({
    async start(controller) {
      try {
        // Stream graph execution β€” yields events as each node completes
        // graph.stream() returns a Promise of an async iterable, so await it
        const graphStream = await graph.stream(
          {
            query,
            tokenBudget: 10000,
            revisionCount: 0,
            approved: false,
            errors: [],
            startTime: Date.now(),
          },
          {
            configurable: { thread_id: threadId },
            // Stream mode: 'updates' yields state updates per node
            streamMode: 'updates',
          },
        );

        for await (const event of graphStream) {
          // Check if client disconnected
          // Note: In production, pass the request's AbortSignal
          // and stop iterating when req.signal.aborted is true

          // Each event is { nodeName: stateUpdate }
          for (const [nodeName, stateUpdate] of Object.entries(event)) {
            const data = JSON.stringify({
              type: 'node_update',
              node: nodeName,
              state: stateUpdate,
              timestamp: Date.now(),
            });

            controller.enqueue(encoder.encode(`data: ${data}\n\n`));
          }
        }

        // Stream complete
        controller.enqueue(encoder.encode(`data: ${JSON.stringify({ type: 'done' })}\n\n`));
        controller.close();
      } catch (error) {
        const errorMessage = error instanceof Error ? error.message : 'Unknown error';
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify({ type: 'error', message: errorMessage })}\n\n`)
        );
        controller.close();
      }
    },
  });

  return new Response(stream, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}
β–Ά Output
Route Handler streams graph execution events via SSE β€” each node's output appears as it completes
πŸ’‘ Pro Tip: Stream Node Status Before Node Output
πŸ“Š Production Insight
Graph execution takes 10-60 seconds β€” without streaming, users see a blank screen and abandon.
Stream three levels: node status (which agent is active), node output (tokens as they generate), graph metadata (iteration count, token usage).
Rule: send a node_started event before execution β€” show a status indicator immediately, not after the first token.
Critical: Handle client disconnections with AbortSignal β€” cancel graph execution to prevent orphaned runs.
🎯 Key Takeaway
Stream graph execution in three levels: node status, node output, graph metadata.
Send node_started events before execution β€” show status indicators immediately, not after the first token.
Handle client disconnections β€” use AbortSignal or background jobs to prevent orphaned executions.
Punchline: if your multi-agent system takes 30 seconds and shows nothing, users assume it is broken β€” stream or lose them.
Streaming Implementation Decisions
If: Simple single-agent response
β†’ Use: Stream LLM tokens directly β€” no graph events needed
If: Multi-agent graph with sequential nodes
β†’ Use: Stream node status + node output β€” user sees each agent's contribution as it completes
If: Multi-agent graph with parallel nodes
β†’ Use: Stream node status for each parallel branch β€” show progress indicators for all active agents
If: Human-in-the-loop node in the graph
β†’ Use: Stream the proposal, then pause the stream β€” resume streaming when the human approves or rejects
If: Graph exceeds serverless timeout (60s+)
β†’ Use: Background job (Inngest/Qstash) β€” trigger graph via webhook, receive callback when complete
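On the client side, the `data: <json>` frames the Route Handler emits have to be split back into events before rendering. A minimal parser sketch β€” the event shapes match the node_update/done/error objects the handler sends, but the buffering logic here is an assumption, not a library API:

```typescript
// Parse a chunk of an SSE stream into JSON events plus leftover partial data.
// Frames are `data: <json>` terminated by a blank line (\n\n), as the handler emits.
function parseSseChunk(buffer: string): { events: unknown[]; rest: string } {
  const events: unknown[] = [];
  const frames = buffer.split('\n\n');
  const rest = frames.pop() ?? ''; // the last piece may be an incomplete frame

  for (const frame of frames) {
    const line = frame.trim();
    if (line.startsWith('data: ')) {
      events.push(JSON.parse(line.slice('data: '.length)));
    }
  }
  return { events, rest };
}
```

The caller keeps feeding `rest + nextChunk` back in, so a frame split across two network reads is parsed once it completes instead of throwing on partial JSON.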

Deployment and Observability: LangSmith Tracing in Production

Multi-agent systems are harder to debug than single-agent systems. When a single agent produces bad output, you review one prompt and one response. When a multi-agent graph produces bad output, you must trace the entire execution: which agent was called, in what order, what each agent received, what each agent produced, and where the routing logic sent the output next.

LangSmith provides distributed tracing for LangGraph executions. Each graph run produces a trace with a tree of spans β€” one span per node, one span per LLM call, one span per tool execution. The trace shows the full execution path, the state at each node, the token usage, and the latency. This is essential for debugging production failures.

The production pattern: enable LangSmith tracing in the graph's configuration. Each trace is tagged with metadata β€” user_id, thread_id, graph_name, and environment (staging/production). Use the LangSmith dashboard to filter traces by tag, search for specific node outputs, and compare successful runs against failed runs.

The observability budget matters. LangSmith charges per trace. A multi-agent graph with 5 nodes and 2 retry loops produces 10+ spans per execution. At 1,000 daily executions, that is 10,000+ spans per day. Sample traces in production β€” log 100% in staging, 10% in production, and 100% of error traces.

Cold start considerations for Vercel serverless. Each graph execution is a serverless function invocation. Cold starts add 1-3 seconds to the first node execution. For graphs that exceed the serverless timeout (300 seconds on Pro), use a background job pattern with webhook callbacks.

io/thecodeforge/multi-agent/lib/observability/tracing.ts Β· TYPESCRIPT
import { Client } from 'langsmith';

// LangSmith tracing configuration for production multi-agent systems
// Enable tracing, tag with metadata, sample for cost control

export function createTracingConfig(options: {
  userId: string;
  threadId: string;
  graphName: string;
  environment: 'staging' | 'production';
}) {
  const isProduction = options.environment === 'production';

  // Sample rate: 100% in staging, 10% in production
  // Error traces are always logged (handled in the graph's error handler)
  const shouldTrace = !isProduction || Math.random() < 0.1;

  if (!shouldTrace) {
    return { tracingEnabled: false };
  }

  return {
    tracingEnabled: true,
    // LangSmith callbacks are configured via environment variables
    // LANGCHAIN_TRACING_V2=true
    // LANGCHAIN_API_KEY=...
    // LANGCHAIN_PROJECT=your-project-name
    callbacks: [
      // Metadata tags for filtering in the LangSmith dashboard
      {
        handleLLMStart: async (llm: unknown, prompts: string[], runId: string) => {
          // Tags are set at the run level, not per-span
          // Use the LangSmith client to update the run with metadata
        },
      },
    ],
    metadata: {
      user_id: options.userId,
      thread_id: options.threadId,
      graph_name: options.graphName,
      environment: options.environment,
      // Custom tags for filtering
      tags: [
        options.graphName,
        options.environment,
        `user:${options.userId}`,
      ],
    },
  };
}

// Error trace logger β€” always logs 100% of errors regardless of sample rate
export async function logErrorTrace(
  client: Client,
  error: Error,
  context: {
    userId: string;
    threadId: string;
    graphName: string;
    nodeName: string;
    state: Record<string, unknown>;
  },
) {
  await client.createRun({
    name: `error:${context.graphName}:${context.nodeName}`,
    runType: 'chain',
    inputs: {
      error: error.message,
      stack: error.stack,
      state: context.state,
    },
    tags: ['error', context.graphName, context.nodeName],
    metadata: {
      user_id: context.userId,
      thread_id: context.threadId,
      graph_name: context.graphName,
      node_name: context.nodeName,
      environment: process.env.NODE_ENV,
    },
  });
}
β–Ά Output
LangSmith tracing: metadata tags, sample rates (10% production), error trace logging (100%), and cost control
⚠ LangSmith Traces Are Not Free β€” Sample in Production
Each LangSmith trace costs money based on the number of spans. A multi-agent graph with 5 nodes and retry loops produces 10+ spans per execution. At 1,000 daily executions, that is 10,000+ spans per day. Sample at 10% in production β€” log 100% of error traces, 100% in staging, and 10% of successful production traces.
πŸ“Š Production Insight
Multi-agent debugging requires distributed tracing β€” one bad output means tracing 5+ nodes, 10+ spans.
LangSmith charges per trace β€” sample at 10% in production, log 100% of errors.
Rule: tag every trace with user_id, thread_id, graph_name, and environment β€” filtering without tags is impossible at scale.
Vercel cold starts add 1-3s to first node β€” use background jobs for graphs exceeding serverless timeout.
🎯 Key Takeaway
LangSmith provides distributed tracing for multi-agent graphs β€” essential for debugging production failures.
Sample traces in production (10%) but log 100% of errors β€” balance cost and visibility.
Tag traces with user_id, thread_id, graph_name, environment β€” filtering is impossible without tags.
Punchline: if you cannot trace which agent did what in what order, you cannot debug your multi-agent system β€” tracing is not optional.
Observability Decisions
If: Development and staging environments
β†’ Use: 100% trace logging β€” full visibility for debugging, cost is not a concern
If: Production with low traffic (<100 executions/day)
β†’ Use: 100% trace logging β€” cost is manageable, full visibility needed
If: Production with high traffic (>1,000 executions/day)
β†’ Use: 10% sample rate + 100% error traces β€” balance cost and visibility
If: Need to debug a specific user's bad output
β†’ Use: Filter LangSmith by thread_id β€” find the exact trace for that execution
If: Graph execution exceeds serverless timeout
β†’ Use: Inngest/Qstash background jobs β€” trigger via webhook, callback on completion
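One weakness of the Math.random() sampling in the tracing config: it decides per execution, so a multi-turn thread can end up half-traced. A deterministic per-thread variant hashes the thread_id into [0, 1), keeping whole conversations together. This is a sketch of an alternative, not LangSmith behavior; the FNV-1a hash is an illustrative choice:

```typescript
// Deterministic trace sampling: hash the thread_id so every execution of
// the same thread makes the same keep/drop decision.
// FNV-1a 32-bit hash β€” an illustrative choice, nothing LangSmith prescribes.
function threadSample(threadId: string, sampleRate: number): boolean {
  let hash = 0x811c9dc5;
  for (let i = 0; i < threadId.length; i++) {
    hash ^= threadId.charCodeAt(i);
    hash = Math.imul(hash, 0x01000193) >>> 0;
  }
  // Map the 32-bit hash into [0, 1) and compare against the sample rate
  return hash / 0x100000000 < sampleRate;
}
```

With this in place of `Math.random() < 0.1`, filtering LangSmith by thread_id either shows the full conversation or nothing β€” never a gap-toothed trace.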

Testing Multi-Agent Graphs

Testing multi-agent systems requires a different strategy than single-agent tests. You need to verify the graph structure, state transitions, loop termination, and end-to-end behavior. Three testing layers address different failure modes.

Unit tests: test individual nodes in isolation. Mock the LLM client and verify that the node transforms input state to output state correctly. Use a test runner like Vitest to assert that researchNode returns the expected keys (research, currentStep, tokenBudget) for a given input state.

Integration tests: test state transitions and routing. Run the graph with a fixed thread_id and verify that conditional edges route correctly. Test loop termination by setting maxIterations to a low value (e.g., 2) and asserting that the graph terminates. Use a test Supabase database with seeded state.

End-to-end tests: test the full user journey. Simulate a user request end-to-end and assert on the final state. Use LangSmith mock clients to record traces for debugging. Verify that the final output contains expected content and that the token budget was not exceeded.

Visual testing: verify graph structure. Use graph.getGraph().drawMermaidPng() to generate a visualization of the graph and assert that it matches the expected topology. This catches structural bugs like missing edges or unreachable nodes.

io/thecodeforge/multi-agent/lib/graphs/research-graph.test.ts Β· TYPESCRIPT
import { describe, it, expect, vi } from 'vitest';
import { graph } from './research-graph';

// Mock the LLM to return predictable output
vi.mock('@langchain/openai', () => ({
  ChatOpenAI: vi.fn().mockImplementation(() => ({
    invoke: vi.fn().mockResolvedValue({
      content: 'Mocked research output',
      usage_metadata: { total_tokens: 100 },
    }),
  })),
}));

describe('Research Graph', () => {
  const threadId = 'test-thread-123';

  it('should execute the full graph and produce a final report', async () => {
    const initialState = {
      query: 'What is LangGraph?',
      tokenBudget: 10000,
      revisionCount: 0,
      approved: false,
      errors: [],
      startTime: Date.now(),
    };

    const result = await graph.invoke(initialState, {
      configurable: { thread_id: threadId },
    });

    expect(result.finalReport).toBeDefined();
    expect(result.currentStep).toBe('report_complete');
  });

  it('should terminate after maxIterations to prevent infinite loops', async () => {
    const initialState = {
      query: 'Test query',
      tokenBudget: 10000,
      revisionCount: 0,
      approved: false, // Always triggers revision loop
      errors: [],
      startTime: Date.now(),
    };

    // Run with very low maxIterations to test termination
    // In real tests, compile the graph with maxIterations: 3
    // Here we just verify the graph eventually terminates
    let iterations = 0;
    const stream = await graph.stream(initialState, {
      configurable: { thread_id: `${threadId}-loop-test` },
    });
    for await (const _ of stream) {
      iterations++;
      if (iterations > 20) {
        throw new Error('Graph did not terminate β€” infinite loop detected');
      }
    }

    expect(iterations).toBeLessThanOrEqual(20);
  });

  it('should persist state to Supabase after each node', async () => {
    // This test requires a test Supabase instance
    // Verify that after each node execution, a checkpoint is created
    const initialState = {
      query: 'State persistence test',
      tokenBudget: 5000,
      revisionCount: 0,
      approved: true, // Skip revision loop
      errors: [],
      startTime: Date.now(),
    };

    await graph.invoke(initialState, {
      configurable: { thread_id: `${threadId}-checkpoint-test` },
    });

    // Query Supabase to verify checkpoints were saved
    // const checkpoints = await supabase.from('langgraph_checkpoints')...
    // expect(checkpoints.length).toBeGreaterThan(0);
  });
});
β–Ά Output
Test suite: unit tests for node logic, integration tests for graph termination, checkpoint verification
πŸ’‘ Pro Tip: Test Loop Termination with Low maxIterations
πŸ“Š Production Insight
Test multi-agent graphs at three layers: unit (nodes), integration (routing), end-to-end (full journey).
Use maxIterations in tests to verify loop termination β€” never trust a graph that hasn't been proven to terminate.
Visual testing with getGraph().drawMermaidPng() catches structural bugs before runtime.
🎯 Key Takeaway
Test multi-agent graphs at three layers: unit tests for node logic, integration tests for routing, end-to-end for full flows.
Always test loop termination β€” compile with maxIterations: 2 and assert clean exit.
Use graph visualization to catch structural bugs β€” missing edges, unreachable nodes, wrong topology.
Punchline: if your graph hasn't been tested for termination, it will eventually run forever in production.
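The termination guards these tests exercise β€” iteration cap, revision count, token budget β€” can be concentrated in one routing function so each guard is testable without running the full graph. A sketch under assumed state field names, not the actual research-graph code:

```typescript
// Illustrative state slice for loop-termination guards
interface LoopGuardState {
  revisionCount: number;
  tokensUsed: number;
  tokenBudget: number;
  approved: boolean;
}

// Conditional edge after the reviewer node: revise, or force-finalize when
// any guard trips β€” never rely on the critic approving eventually.
function routeAfterReview(state: LoopGuardState): 'finalize' | 'revise' {
  if (state.approved) return 'finalize';
  if (state.revisionCount >= 3) return 'finalize';              // auto-approve after 3 revisions
  if (state.tokensUsed >= state.tokenBudget) return 'finalize'; // budget exhausted
  return 'revise';
}
```

Because the function is pure, the "always-reject critic" adverse case reduces to asserting that approved: false with revisionCount: 3 still routes to finalize.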
πŸ—‚ Single-Agent vs Multi-Agent Architecture
When to decompose and when to keep it simple
Aspect | Single Agent | Multi-Agent (LangGraph)
Tool count | 1-5 tools β€” manageable selection accuracy | 2-3 tools per agent β€” each specialist has a narrow tool set
System prompt size | 200-500 tokens β€” focused on one task | 200-400 tokens per agent β€” focused prompts, total context shared across agents
Debugging | One prompt, one response β€” easy to trace | Distributed trace across nodes β€” requires LangSmith or equivalent
Self-correction | Agent may retry but has no structured revision loop | Reviewer agent + conditional edge enables structured revision cycles
Human oversight | Difficult to gate specific actions | Interrupt nodes pause at specific points β€” targeted approval gates
Cost | One LLM call per request | Multiple LLM calls per request β€” 3-10x token usage
Latency | 5-15 seconds for single response | 15-60 seconds for full graph execution
Best for | Simple Q&A, single-step tasks, chatbots | Research pipelines, content workflows, multi-step analysis, code generation with review

🎯 Key Takeaways

  • LangGraph models agent workflows as directed graphs β€” nodes execute functions, edges route conditionally, state flows between them. If you cannot draw it as a flowchart, you cannot build it as a graph.
  • Decompose monolith agents into specialists β€” each with 2-3 tools and a focused 200-400 token prompt. If an agent's system prompt exceeds 500 tokens, it is doing too much.
  • Always set maxIterations on graph compilation β€” unbounded cycles burn tokens and produce no output. Add revision_count and token_budget to the state for additional guards.
  • Supabase checkpointer persists graph state after each node β€” survives restarts, enables multi-turn conversations, and supports debugging via SQL queries.
  • Human-in-the-loop nodes pause the graph for high-risk actions β€” rejection requires feedback so the agent can revise. If your agent can delete data without approval, you have a liability.
  • Stream graph execution in three levels: node status, node output, graph metadata. If your system takes 30 seconds and shows nothing, users assume it is broken.
  • Handle client disconnections β€” use AbortSignal to cancel orphaned executions. Use background jobs for graphs exceeding serverless timeouts.

⚠ Common Mistakes to Avoid

    βœ• Building one mega-agent with 15+ tools instead of decomposing into specialists
    Symptom

    Tool selection accuracy drops below 60% β€” the agent confuses similar tools (web_search vs document_search) and selects the wrong one. System prompt exceeds 2,000 tokens, consuming context window that should be used for conversation history.

    Fix

    Decompose into specialist agents with 2-3 tools each. Use a supervisor agent to route tasks. If an agent's system prompt exceeds 500 tokens, it is doing too much β€” split it into two agents with clear boundaries.

    βœ• Not setting maxIterations on the graph compilation
    Symptom

    Two agents enter an infinite debate loop β€” a researcher and a critic cycle endlessly because the critic always finds issues. Token budget exhausted in minutes with no output produced.

    Fix

    Set maxIterations on graph.compile() β€” hard cap on total node executions. Add a revision_count to the state that auto-approves after 3 iterations. Add a token budget check that force-terminates when exceeded.

    βœ• Storing full conversation history in the graph state
    Symptom

    State serialization takes 500ms+ per checkpoint. Supabase writes fail intermittently because the state JSON exceeds the row size limit. Token usage doubles because every node receives the full history as context.

    Fix

    Store only what the graph needs for routing and what agents need as context. Conversation history lives in the checkpointer's write-ahead log, not in the state object. Pass only the relevant slice to each agent.

    βœ• Executing high-risk actions without a human approval gate
    Symptom

    An agent deletes a production database record based on a misinterpreted user request. No approval step, no undo, no audit trail. The action is irreversible.

    Fix

    Add interrupt nodes before any high-risk action (delete, send, pay). The graph pauses, presents the proposal to the user, and resumes only after explicit approval. Rejection requires feedback so the agent can revise.

    βœ• Not enabling LangSmith tracing in production
    Symptom

    A user reports that the agent produced a factually incorrect report. The team cannot reproduce the issue because they have no trace of which agents were called, in what order, or what each agent produced.

    Fix

    Enable LangSmith tracing with metadata tags (user_id, thread_id, graph_name, environment). Sample at 10% in production, log 100% of errors. Filter by thread_id to find the exact execution trace.

    βœ• Using a linear graph when the task requires conditional routing
    Symptom

    Every request goes through all agents in the same order β€” even simple questions that only need one agent waste tokens on unnecessary analysis and writing steps.

    Fix

    Add a supervisor agent that decomposes the task and routes to only the needed specialists. Add conditional edges that skip unnecessary nodes based on the task complexity.

    βœ• Not handling client disconnections during graph execution
    Symptom

    User navigates away or refreshes the page mid-execution. The graph continues running server-side, consuming tokens with no client to receive the output. Orphaned executions pile up.

    Fix

    Implement AbortSignal handling in the stream consumer. Pass the request's AbortSignal to graph.stream() and check abortSignal.aborted. Cancel graph execution when the client disconnects. For long-running graphs, use background jobs (Inngest/Qstash) instead of serverless.

    βœ• Not testing loop termination conditions
    Symptom

    The graph compiles and appears to work in development, but production workloads trigger edge cases (e.g., critic always rejects) that cause infinite loops. The first sign is a spike in token usage.

    Fix

    Write integration tests that compile the graph with maxIterations: 2 and verify it terminates cleanly. Test adverse conditions: always-reject critic, empty tool outputs, timeout mid-execution.

Interview Questions on This Topic

  • Q (Mid-level): Explain the difference between a single-agent system and a multi-agent system. When would you choose one over the other?
    A single-agent system uses one LLM with a set of tools to handle a task. It is simpler, faster (one LLM call), and cheaper (fewer tokens). It works well for simple Q&A, single-step tasks, and chatbots with 1-5 tools. A multi-agent system decomposes a complex task across specialized agents β€” researcher, analyzer, writer, reviewer β€” each with a narrow tool set and focused prompt. It is more reliable for multi-step workflows, enables structured revision cycles, and supports human approval gates. Choose single-agent when the task is simple and the tool count is under 5. Choose multi-agent when the task requires multiple steps, self-correction, or human oversight. The trade-off: multi-agent costs 3-10x more in tokens and takes 15-60 seconds vs 5-15 seconds for single-agent.
  • Q: How does LangGraph prevent infinite loops in a multi-agent system? Walk me through the safeguards you would implement. (Senior)
    LangGraph does not prevent infinite loops by default — it is the developer's responsibility. Three safeguards: (1) maxIterations on graph.compile() — a hard cap on total node executions; the graph terminates when it is exceeded. (2) revision_count in the graph state — increments on each loop iteration and auto-approves after a threshold (e.g., 3 revisions). (3) token_budget in the graph state — tracks cumulative token usage and force-terminates when exceeded. Additionally, critic/evaluator agents need explicit approval thresholds in their system prompts — 'be critical' without a threshold means 'always reject.' Without these safeguards, two agents can debate endlessly, burning tokens with no output.
  • Q: What is the role of a checkpointer in LangGraph, and why is Supabase a good choice for production? (Mid-level)
    A checkpointer persists the graph state after each node execution. Without it, state lives in memory — lost on restart, unavailable for multi-turn conversations, and impossible to debug. The checkpointer stores the current state snapshot, the write-ahead log, and metadata (thread_id, node_name, timestamp). Supabase is a good production choice because it provides Postgres-backed persistence (survives restarts), supports concurrent access (multiple graph instances can share state), and enables SQL queries against historical state (debugging and audit). The key optimization: create a composite index on (thread_id, created_at) — without it, state lookups are O(n) and degrade as the table grows.
  • Q: How would you implement a human-in-the-loop approval gate in a LangGraph multi-agent system? (Senior)
    Add an interrupt node before the high-risk action. The graph pauses at the interrupt, saves the state via the checkpointer, and sends the agent's proposal to the client. The UI presents the proposal with approve/reject buttons. Rejection requires feedback — the agent needs to know why in order to revise. When the user approves or rejects, the client sends a resume signal via graph.updateState() with the decision. The conditional edge after the interrupt routes based on the approval status — execute if approved, revise if rejected. The key requirement: the state between interrupt and resume must be atomic — no external modifications should change the context the agent based its proposal on.
  • Q: What is the supervisor pattern in multi-agent systems, and when would you use it over a sequential pipeline? (Mid-level)
    The supervisor pattern uses an orchestrator agent that receives the user's request, decomposes it into subtasks, routes each subtask to the appropriate specialist agent, and synthesizes the results. The supervisor does not do the work — it routes. Use it when the task requires different specialists depending on the request type (some requests need research + writing, others need only analysis). Use a sequential pipeline when every request goes through the same steps in the same order (research -> analyze -> write -> review). The supervisor pattern is more flexible but adds one extra LLM call for the decomposition step. The sequential pipeline is simpler and cheaper but cannot skip unnecessary steps.
  • Q: How do you handle client disconnections during a long-running graph execution? (Senior)
    Without handling, the graph continues running server-side, consuming tokens with no client to receive output. Implement AbortSignal handling in the stream consumer: pass req.signal (Next.js) to graph.stream() and check abortSignal.aborted in the stream loop. Call graph.cancel(config) to stop execution when aborted. For graphs that exceed serverless timeouts (60-300s), use a background job pattern with Inngest or Qstash — trigger the graph via API, receive a webhook callback when complete.
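    The loop safeguards discussed in these answers condense into a single conditional-edge routing function. The state field names and thresholds below are illustrative; a real graph would define them in its state schema.

```typescript
// Illustrative state shape for a write/review cycle.
interface ReviewState {
  approved: boolean;
  revisionCount: number;
  tokensUsed: number;
}

// Conditional edge after the critic: every non-approval path still has
// a hard exit, so the revision loop cannot run forever.
function routeAfterCritic(state: ReviewState): "publish" | "revise" {
  if (state.approved) return "publish";
  if (state.revisionCount >= 3) return "publish"; // auto-approve: loop breaker
  if (state.tokensUsed > 50_000) return "publish"; // token budget exhausted
  return "revise";
}
```

    Note that the guards fail open (publish) rather than looping: a capped, imperfect answer is cheaper than an endless debate.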

Frequently Asked Questions

Can I use LangGraph with Anthropic Claude instead of OpenAI?

Yes. LangGraph is model-agnostic — it orchestrates functions, not specific LLMs. Swap ChatOpenAI for ChatAnthropic in the node functions. The graph structure, state management, and checkpointing work identically. The only difference is the LLM call itself and the response format. LangChain provides unified interfaces for both providers.
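As a sketch of why the swap stays localized: if node factories depend only on an invoke-style interface, the provider choice is a single constructor call at wiring time. `ChatModel` below is a hypothetical minimal interface standing in for LangChain's chat model classes.

```typescript
// Hypothetical minimal interface; LangChain's BaseChatModel is richer,
// but the dependency-injection shape is the same.
interface ChatModel {
  invoke(messages: string[]): Promise<string>;
}

// The node never names a provider, so swapping ChatOpenAI for
// ChatAnthropic happens only where the model is constructed.
function makeResearcherNode(model: ChatModel) {
  return async (state: { messages: string[] }) => {
    const reply = await model.invoke(state.messages);
    return { messages: [...state.messages, reply] };
  };
}
```

This also makes the node trivially unit-testable with a stubbed model, which matters more as the graph grows.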

How much does a multi-agent system cost compared to a single-agent system?

A multi-agent system typically costs 3-10x more in tokens per request. A single-agent request with GPT-4o costs approximately $0.01-0.03. A multi-agent graph with 4 nodes (supervisor + 3 specialists) costs approximately $0.05-0.15 per request. The cost scales with the number of LLM calls, not the complexity of the task. Mitigate cost by: using gpt-4o-mini for non-critical agents, setting maxTokens per agent, and using conditional routing to skip unnecessary agents.

Do I need LangSmith for production, or can I use other observability tools?

LangSmith is the native tracing tool for LangChain/LangGraph — it provides automatic span creation for every LLM call, tool execution, and graph node. Alternatives exist (Langfuse, OpenLLMetry, custom OpenTelemetry) but require manual instrumentation. LangSmith is recommended for LangGraph projects because the integration is zero-config — set two environment variables and every graph execution is traced automatically.
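For reference, the zero-config setup looks roughly like this (variable names per the LangSmith docs at the time of writing; newer SDK versions also accept LANGSMITH_-prefixed equivalents, and the API key is a placeholder):

```shell
# Enable automatic tracing of every LangChain/LangGraph execution.
export LANGCHAIN_TRACING_V2=true
export LANGCHAIN_API_KEY="your-langsmith-api-key"
# Optional: group traces under a named project.
export LANGCHAIN_PROJECT="multi-agent-prod"
```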

How do I handle rate limiting across multiple agents that all call the same LLM provider?

Each agent's LLM call counts against the same provider rate limit. A graph with 4 agents makes 4 LLM calls per execution — if your rate limit is 500 RPM, each execution consumes 4 of those 500 requests. Implement application-layer rate limiting with a shared token bucket (Upstash Redis) that tracks all LLM calls from all agents. Add retry logic with Retry-After header parsing on 429 responses. Consider staggering agent execution (sequential instead of parallel) if rate limits are tight.
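A minimal in-memory sketch of that shared token bucket. Production would back the counter with Redis (e.g. Upstash) so all serverless instances share it; the capacity and refill numbers here are illustrative.

```typescript
// In-memory token bucket: holds up to `capacity` requests, refilled
// continuously. Every agent's LLM call should pass through tryTake().
class TokenBucket {
  private tokens: number;
  private lastRefill = Date.now();

  constructor(
    private capacity: number,
    private refillPerSecond: number,
  ) {
    this.tokens = capacity;
  }

  tryTake(): boolean {
    const now = Date.now();
    const elapsedSec = (now - this.lastRefill) / 1000;
    this.tokens = Math.min(
      this.capacity,
      this.tokens + elapsedSec * this.refillPerSecond,
    );
    this.lastRefill = now;
    if (this.tokens >= 1) {
      this.tokens -= 1;
      return true;
    }
    return false; // caller should back off, queue, or go sequential
  }
}
```

For 500 RPM, a capacity of 500 with a refill of 500/60 per second approximates the provider's window without ever bursting past it.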

Can I deploy a LangGraph multi-agent system to Vercel serverless?

Yes, with caveats. Each graph execution is a serverless function invocation. Set maxDuration to 60-300 seconds depending on your plan. The Supabase checkpointer persists state between invocations — the graph can pause (human-in-the-loop) and resume in a separate invocation. Cold starts add 1-3 seconds to the first node execution. For graphs that exceed the serverless timeout, use a background job pattern (Inngest, Qstash) with a webhook callback.
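In the App Router, the timeout is a route segment config export. The route path below is hypothetical, and the 300-second ceiling depends on your Vercel plan:

```typescript
// app/api/agents/route.ts — hypothetical route for graph execution.
// Route segment config: raise the serverless timeout for long graph runs.
export const maxDuration = 300; // seconds; plan-dependent ceiling
export const dynamic = "force-dynamic"; // never cache agent responses
```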

How do I test a multi-agent graph?

Test at three layers: (1) Unit tests — test individual nodes in isolation with mocked LLM clients, verify state transformation. (2) Integration tests — run the graph with a fixed thread_id, test conditional edge routing, verify loop termination with low maxIterations. (3) End-to-end tests — simulate full user journeys, assert on final state and output quality. Use graph.getGraph().drawMermaidPng() for visual testing — assert the topology matches expectations to catch missing edges or unreachable nodes.
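Layer 1 as a sketch: a node is just an async function of state, so with the LLM injected it can be verified with plain assertions — no network, no tokens, deterministic. The node shape and names below are illustrative.

```typescript
// Node under test: state in, changed fields out (mirroring how a
// LangGraph node returns a partial state update).
type WriterState = { draft: string; revisionCount: number };

function makeWriterNode(llm: { invoke: (prompt: string) => Promise<string> }) {
  return async (state: WriterState) => {
    const revised = await llm.invoke(`Revise for clarity: ${state.draft}`);
    return { draft: revised, revisionCount: state.revisionCount + 1 };
  };
}

// Unit test with a stubbed LLM, runner-agnostic.
async function testWriterNode() {
  const stub = { invoke: async () => "better draft" };
  const node = makeWriterNode(stub);
  const out = await node({ draft: "rough draft", revisionCount: 0 });
  if (out.draft !== "better draft") throw new Error("draft not replaced");
  if (out.revisionCount !== 1) throw new Error("revision count not bumped");
}
```

The same stubbing pattern scales to layer 2: wire stubbed nodes into the compiled graph and assert on which route the conditional edges actually take.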

What happens if the user disconnects mid-execution?

Without handling, the graph continues running server-side, consuming tokens with no client to receive the output. Implement AbortSignal handling: pass req.signal to graph.stream(), check abortSignal.aborted in the stream loop, and call graph.cancel(config) when aborted. For long-running graphs (>60s), use background jobs (Inngest/Qstash) instead of serverless — trigger via API, receive callback on completion.

Naren, Founder & Author

Developer and founder of TheCodeForge. I built this site because I was tired of tutorials that explain what to type without explaining why it works. Every article here is written to make concepts actually click.

Forged with 🔥 at TheCodeForge.io — Where Developers Are Forged