DRAFT Agentic Design Patterns - Multi-Agent Collaboration

ai, langchain, langgraph, multi-agent, gemini, typescript
By sko X opus 4.1 · 9/20/2025 · 12 min read

Multi-Agent Collaboration enables complex problem-solving by orchestrating multiple specialized AI agents that work together toward a common goal. Instead of relying on a single monolithic agent, this pattern distributes tasks across specialized agents, each with specific capabilities and tools, resulting in more robust and scalable solutions.

Mental Model: Orchestra Conductor Architecture

Think of Multi-Agent Collaboration like conducting an orchestra in your Next.js API routes. Each agent is a specialized musician (violin = researcher, drums = data analyst, piano = writer) with their own instrument (tools/capabilities). The conductor (supervisor agent or orchestrator) coordinates these specialists to create a symphony (complete solution). In Vercel's serverless environment, this means spinning up lightweight, specialized functions that collaborate through shared state in Redis or Upstash, similar to how microservices communicate through message queues but with AI-powered decision-making at each node.
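A toy sketch of that idea, with an in-memory Map standing in for Redis/Upstash and trivial functions standing in for LLM-backed agents (all names here are illustrative, not part of any library):

```typescript
// Toy sketch of collaboration through shared state. In production the Map
// would be Redis/Upstash and each function a serverless agent.
const sharedState = new Map<string, string>();

async function researcher(task: string): Promise<void> {
  // A real agent would call an LLM with tools; here we just record findings
  sharedState.set("research", `findings for: ${task}`);
}

async function writer(task: string): Promise<string> {
  // The writer reads whatever the researcher left in shared state
  const research = sharedState.get("research") ?? "no research available";
  return `Report on "${task}" using ${research}`;
}

async function orchestrate(task: string): Promise<string> {
  await researcher(task); // the conductor cues the first specialist
  return writer(task);    // then the next, reading from the shared score
}
```

The point is that agents never call each other directly; they communicate only through the shared state, which is what makes the pattern fit stateless serverless functions.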

Basic Example: Supervisor-Worker Pattern

Let's build a simple multi-agent system where a supervisor coordinates specialized workers to handle a research task.

1. Define Agent State and Types

// app/lib/agents/types.ts
import { Annotation } from "@langchain/langgraph";
import { BaseMessage } from "@langchain/core/messages";
import { z } from "zod";

// Define the state that flows between agents
export const AgentState = Annotation.Root({
  messages: Annotation<BaseMessage[]>({
    reducer: (x, y) => x.concat(y),
    default: () => []
  }),
  task: Annotation<string>(),
  researchData: Annotation<string>(),
  analysis: Annotation<string>(),
  finalOutput: Annotation<string>(),
  nextAgent: Annotation<string>()
});

export type AgentStateType = typeof AgentState.State;

// Agent configuration schema
export const AgentConfigSchema = z.object({
  name: z.string(),
  role: z.string(),
  temperature: z.number().default(0.7),
  maxTokens: z.number().default(1000)
});

Defines shared state structure using LangGraph's Annotation system with proper TypeScript types.
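The `reducer` on each channel controls how a node's partial return value merges into shared state. Conceptually it works like this (a standalone sketch of the semantics, not LangGraph's internals):

```typescript
// Standalone illustration of channel-reducer semantics (not LangGraph
// internals): each node returns a partial update, and the channel's
// reducer decides how to merge it with the current value.
type Channel<T> = { value: T; reducer: (current: T, update: T) => T };

function applyUpdate<T>(ch: Channel<T>, update: T): Channel<T> {
  return { ...ch, value: ch.reducer(ch.value, update) };
}

// The messages channel concatenates, so no node ever overwrites history
let messages: Channel<string[]> = {
  value: [],
  reducer: (x, y) => x.concat(y)
};

messages = applyUpdate(messages, ["research notes"]);
messages = applyUpdate(messages, ["analysis summary"]);
```

Channels declared without a reducer (like `task` above) behave as last-value-wins instead.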

2. Create Specialized Worker Agents

// app/lib/agents/workers.ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { DynamicStructuredTool } from "@langchain/core/tools";
import { z } from "zod";
import { AgentStateType } from "./types";

// Research Agent with web search capability
export const researchAgent = async (state: AgentStateType) => {
  const model = new ChatGoogleGenerativeAI({
    model: "gemini-2.5-flash",
    temperature: 0.5,
    maxRetries: 2
  });

  const searchTool = new DynamicStructuredTool({
    name: "web_search",
    description: "Search the web for information",
    schema: z.object({
      query: z.string().describe("Search query")
    }),
    func: async ({ query }) => {
      // Mock search - replace with actual API call
      await new Promise(resolve => setTimeout(resolve, 100));
      return `Found information about: ${query}`;
    }
  });

  const prompt = `You are a research specialist. 
    Task: ${state.task}
    Use the search tool to gather relevant information.`;

  // Bind the search tool so the model can actually call it (executing
  // returned tool calls is omitted here for brevity)
  const response = await model.bindTools([searchTool]).invoke(prompt);
  
  return {
    messages: [response],
    researchData: response.content as string
  };
};

// Analysis Agent for data processing
export const analysisAgent = async (state: AgentStateType) => {
  const model = new ChatGoogleGenerativeAI({
    model: "gemini-2.5-flash",
    temperature: 0.3
  });

  const prompt = `You are a data analyst.
    Task: ${state.task}
    Research Data: ${state.researchData}
    
    Analyze this data and provide insights.`;

  const response = await model.invoke(prompt);
  
  return {
    messages: [response],
    analysis: response.content as string
  };
};

// Writer Agent for final output
export const writerAgent = async (state: AgentStateType) => {
  const model = new ChatGoogleGenerativeAI({
    model: "gemini-2.5-pro",
    temperature: 0.7
  });

  const prompt = `You are a professional writer.
    Task: ${state.task}
    Research: ${state.researchData}
    Analysis: ${state.analysis}
    
    Create a comprehensive, well-structured response.`;

  const response = await model.invoke(prompt);
  
  return {
    messages: [response],
    finalOutput: response.content as string
  };
};

Each worker agent specializes in a specific task with appropriate model settings and tools.

3. Implement the Supervisor

// app/lib/agents/supervisor.ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { JsonOutputParser } from "@langchain/core/output_parsers";
import { AgentStateType } from "./types";
import { END } from "@langchain/langgraph";

export const supervisorAgent = async (state: AgentStateType) => {
  const model = new ChatGoogleGenerativeAI({
    model: "gemini-2.5-flash",
    temperature: 0
  });

  // Determine next agent based on current state
  const systemPrompt = `You are a supervisor managing a team of agents.
    Current task: ${state.task}
    
    Available agents:
    - researcher: Gathers information
    - analyst: Analyzes data
    - writer: Creates final output
    
    Based on what's been completed:
    - Research data exists: ${!!state.researchData}
    - Analysis exists: ${!!state.analysis}
    - Final output exists: ${!!state.finalOutput}
    
    Return JSON with: { "nextAgent": "agent_name" or "finish" }`;

  const parser = new JsonOutputParser<{ nextAgent: string }>();
  const response = await model.invoke(systemPrompt);
  const decision = await parser.parse(response.content as string);

  return {
    nextAgent: decision.nextAgent === "finish" ? END : decision.nextAgent,
    messages: [response]
  };
};

Supervisor decides which agent to activate next based on current state.
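Since the routing decision depends only on which state fields are filled in, it can be mirrored by a deterministic pure function. This is a hypothetical fallback (not part of the code above) that is handy when the LLM returns malformed JSON:

```typescript
// Hypothetical deterministic router mirroring the supervisor prompt's
// logic; useful as a fallback when the LLM's JSON fails to parse.
type Progress = {
  researchData?: string;
  analysis?: string;
  finalOutput?: string;
};

function routeNext(p: Progress): "researcher" | "analyst" | "writer" | "finish" {
  if (!p.researchData) return "researcher"; // nothing gathered yet
  if (!p.analysis) return "analyst";        // research done, not analyzed
  if (!p.finalOutput) return "writer";      // analyzed, not written up
  return "finish";                          // everything complete
}
```

Keeping the LLM in the loop is still useful for richer tasks, but a deterministic fallback guarantees the graph always terminates.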

4. Build the Multi-Agent Graph

// app/lib/agents/workflow.ts
import { StateGraph } from "@langchain/langgraph";
import { MemorySaver } from "@langchain/langgraph";
import { START, END } from "@langchain/langgraph";
import { AgentState } from "./types";
import { supervisorAgent } from "./supervisor";
import { researchAgent, analysisAgent, writerAgent } from "./workers";

export function createMultiAgentWorkflow() {
  const workflow = new StateGraph(AgentState)
    // Add nodes for each agent
    .addNode("supervisor", supervisorAgent)
    .addNode("researcher", researchAgent)
    .addNode("analyst", analysisAgent)
    .addNode("writer", writerAgent)
    
    // Define the flow
    .addEdge(START, "supervisor")
    .addConditionalEdges(
      "supervisor",
      // Routing function based on supervisor decision
      (state) => state.nextAgent || END,
      {
        researcher: "researcher",
        analyst: "analyst",
        writer: "writer",
        [END]: END
      }
    )
    
    // Workers return to supervisor
    .addEdge("researcher", "supervisor")
    .addEdge("analyst", "supervisor")
    .addEdge("writer", "supervisor");

  // Compile with memory for conversation persistence
  const memory = new MemorySaver();
  // Note: recursionLimit is per-invocation config passed at invoke time
  // (as the API route does), not a compile option
  return workflow.compile({ checkpointer: memory });
}

LangGraph workflow orchestrates agent interactions with proper state management.

5. Create the API Route

// app/api/multi-agent/route.ts
import { NextRequest, NextResponse } from "next/server";
import { createMultiAgentWorkflow } from "@/lib/agents/workflow";
import { HumanMessage } from "@langchain/core/messages";
import { pick } from "es-toolkit";

export const maxDuration = 300; // 5 minutes for long-running tasks

export async function POST(req: NextRequest) {
  try {
    const { task, threadId } = await req.json();
    
    // Initialize workflow
    const workflow = createMultiAgentWorkflow();
    
    // Execute the multi-agent collaboration
    const result = await workflow.invoke(
      {
        task,
        messages: [new HumanMessage(task)]
      },
      {
        configurable: { thread_id: threadId || "default" },
        recursionLimit: 10
      }
    );
    
    // Return only relevant fields
    const response = pick(result, ["finalOutput", "analysis", "researchData"]);
    
    return NextResponse.json({ 
      success: true, 
      result: response 
    });
  } catch (error) {
    console.error("Multi-agent error:", error);
    return NextResponse.json(
      { error: "Processing failed" },
      { status: 500 }
    );
  }
}

API route handles multi-agent orchestration with Vercel's extended timeout support.

6. Frontend Integration with React Query

// app/components/MultiAgentInterface.tsx
"use client";

import { useMutation } from "@tanstack/react-query";
import { useState } from "react";
import { debounce } from "es-toolkit";

interface MultiAgentResult {
  finalOutput: string;
  analysis: string;
  researchData: string;
}

async function runMultiAgentTask(task: string): Promise<MultiAgentResult> {
  const response = await fetch("/api/multi-agent", {
    method: "POST",
    headers: { "Content-Type": "application/json" },
    body: JSON.stringify({ task, threadId: Date.now().toString() })
  });
  
  if (!response.ok) throw new Error("Agent execution failed");
  const data = await response.json();
  return data.result;
}

export default function MultiAgentInterface() {
  const [task, setTask] = useState("");
  
  const mutation = useMutation({
    mutationFn: runMultiAgentTask,
    onSuccess: (data) => {
      console.log("Task completed:", data);
    }
  });
  
  // Debounce only the mutation; preventDefault must run synchronously,
  // before the browser submits the form
  const debouncedMutate = debounce((t: string) => mutation.mutate(t), 500);
  
  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (task.trim()) {
      debouncedMutate(task);
    }
  };
  
  return (
    <div className="card bg-base-100 shadow-xl">
      <div className="card-body">
        <h2 className="card-title">Multi-Agent Task Processor</h2>
        
        <form onSubmit={handleSubmit}>
          <textarea
            className="textarea textarea-bordered w-full"
            placeholder="Describe your task..."
            value={task}
            onChange={(e) => setTask(e.target.value)}
            rows={4}
          />
          
          <button 
            type="submit" 
            className="btn btn-primary mt-4"
            disabled={mutation.isPending}
          >
            {mutation.isPending ? (
              <span className="loading loading-spinner" />
            ) : (
              "Process Task"
            )}
          </button>
        </form>
        
        {mutation.data && (
          <div className="mt-6 space-y-4">
            <div className="collapse collapse-arrow bg-base-200">
              <input type="checkbox" defaultChecked />
              <div className="collapse-title font-medium">
                Final Output
              </div>
              <div className="collapse-content">
                <p>{mutation.data.finalOutput}</p>
              </div>
            </div>
            
            <div className="collapse collapse-arrow bg-base-200">
              <input type="checkbox" />
              <div className="collapse-title font-medium">
                Analysis
              </div>
              <div className="collapse-content">
                <p>{mutation.data.analysis}</p>
              </div>
            </div>
          </div>
        )}
      </div>
    </div>
  );
}

React component manages multi-agent interaction with proper loading states and result display.

Advanced Example: Parallel Collaboration with Debate

Now let's implement a sophisticated multi-agent system where agents work in parallel and debate to reach consensus.

1. Define Advanced State with Voting

// app/lib/agents/advanced-types.ts
import { Annotation } from "@langchain/langgraph";
import { z } from "zod";

export const DebateOpinion = z.object({
  agent: z.string(),
  opinion: z.string(),
  confidence: z.number().min(0).max(1),
  reasoning: z.string()
});

export const DebateState = Annotation.Root({
  question: Annotation<string>(),
  // Channels with defaults need an explicit last-value reducer
  round: Annotation<number>({ reducer: (_x, y) => y, default: () => 0 }),
  opinions: Annotation<z.infer<typeof DebateOpinion>[]>({
    reducer: (x, y) => [...x, ...y],
    default: () => []
  }),
  consensus: Annotation<string>(),
  hasConverged: Annotation<boolean>({ reducer: (_x, y) => y, default: () => false })
});

export type DebateStateType = typeof DebateState.State;

Advanced state tracks opinions, confidence scores, and consensus status.

2. Implement Parallel Expert Agents

// app/lib/agents/expert-agents.ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { DebateStateType, DebateOpinion } from "./advanced-types";

// Base expert agent factory
function createExpertAgent(expertise: string, model: string = "gemini-2.5-flash") {
  return async (state: DebateStateType): Promise<Partial<DebateStateType>> => {
    const llm = new ChatGoogleGenerativeAI({
      model,
      temperature: 0.7,
      maxRetries: 2
    });
    
    // Get other agents' opinions from previous round
    const otherOpinions = state.opinions
      .filter(op => op.agent !== expertise)
      .slice(-3); // Last 3 opinions from others
    
    const prompt = `You are an expert in ${expertise}.
      Question: ${state.question}
      Round: ${state.round}
      
      ${otherOpinions.length > 0 ? `
      Other experts' opinions:
      ${otherOpinions.map(op => 
        `${op.agent}: ${op.opinion} (confidence: ${op.confidence})`
      ).join('\n')}
      ` : ''}
      
      Provide your opinion with reasoning and confidence (0-1).
      Format: { "opinion": "...", "reasoning": "...", "confidence": 0.8 }`;
    
    const response = await llm.invoke(prompt);
    // Strip any markdown fences before parsing the model's JSON
    const raw = (response.content as string).replace(/```(?:json)?/g, "").trim();
    const parsed = JSON.parse(raw);
    
    return {
      opinions: [{
        agent: expertise,
        opinion: parsed.opinion,
        reasoning: parsed.reasoning,
        confidence: parsed.confidence
      }]
    };
  };
}

// Create specialized experts
export const technicalExpert = createExpertAgent("technical_analysis");
export const businessExpert = createExpertAgent("business_strategy");
export const userExpert = createExpertAgent("user_experience");
export const securityExpert = createExpertAgent("security", "gemini-2.5-pro");

Expert agents provide specialized perspectives with confidence scores.

3. Parallel Execution Coordinator

// app/lib/agents/parallel-coordinator.ts
import { DebateStateType } from "./advanced-types";
import { 
  technicalExpert, 
  businessExpert, 
  userExpert, 
  securityExpert 
} from "./expert-agents";
import { groupBy, meanBy } from "es-toolkit";

export async function parallelExpertAnalysis(
  state: DebateStateType
): Promise<Partial<DebateStateType>> {
  const experts = [
    technicalExpert,
    businessExpert,
    userExpert,
    securityExpert
  ];
  
  // Execute all experts in parallel with timeout
  const expertPromises = experts.map(expert => 
    Promise.race([
      expert(state),
      new Promise<Partial<DebateStateType>>((_, reject) => 
        setTimeout(() => reject(new Error("Timeout")), 10000)
      )
    ])
  );
  
  const results = await Promise.allSettled(expertPromises);
  
  // Collect successful opinions
  const newOpinions = results
    .filter(r => r.status === "fulfilled")
    .flatMap(r => (r as PromiseFulfilledResult<any>).value.opinions || []);
  
  // Check for convergence
  const hasConverged = checkConvergence(newOpinions);
  
  return {
    opinions: newOpinions,
    round: state.round + 1,
    hasConverged
  };
}

function checkConvergence(opinions: any[]): boolean {
  if (opinions.length < 3) return false;
  
  // Group opinions by similarity (simplified)
  const groups = groupBy(opinions, op => 
    op.opinion.toLowerCase().includes("agree") ? "agree" : "disagree"
  );
  
  // Check if majority agrees
  const largestGroup = Object.values(groups)
    .reduce((a, b) => a.length > b.length ? a : b, []);
  
  // Convergence if >75% agree and average confidence >0.7
  const convergenceRatio = largestGroup.length / opinions.length;
  const avgConfidence = meanBy(largestGroup, op => op.confidence);
  
  return convergenceRatio > 0.75 && avgConfidence > 0.7;
}

Coordinator manages parallel execution with timeout handling and convergence detection.

4. Consensus Building with Voting

// app/lib/agents/consensus.ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { DebateStateType } from "./advanced-types";
import { groupBy, maxBy, sortBy } from "es-toolkit";

export async function buildConsensus(
  state: DebateStateType
): Promise<Partial<DebateStateType>> {
  const model = new ChatGoogleGenerativeAI({
    model: "gemini-2.5-pro",
    temperature: 0.3
  });
  
  // Keep each agent's single highest-confidence opinion
  const recentOpinions = Object.values(
    groupBy(state.opinions, op => op.agent)
  ).map(group => maxBy(group, op => op.confidence));
  
  // Sort by confidence
  const sortedOpinions = sortBy(
    recentOpinions,
    op => -(op?.confidence || 0)
  );
  
  const prompt = `Synthesize these expert opinions into a consensus:
    Question: ${state.question}
    
    Expert Opinions:
    ${sortedOpinions.map(op => `
      ${op?.agent} (confidence: ${op?.confidence}):
      Opinion: ${op?.opinion}
      Reasoning: ${op?.reasoning}
    `).join('\n')}
    
    Create a balanced consensus that addresses key points from all experts.
    Weight opinions by confidence scores.`;
  
  const response = await model.invoke(prompt);
  
  return {
    consensus: response.content as string,
    hasConverged: true
  };
}

// Voting mechanism for deadlock resolution
export function majorityVote(opinions: any[]): string {
  const voteGroups = groupBy(opinions, op => op.opinion);
  const weightedVotes = Object.entries(voteGroups).map(([opinion, votes]) => ({
    opinion,
    totalWeight: votes.reduce((sum, v) => sum + v.confidence, 0)
  }));
  
  const winner = maxBy(weightedVotes, v => v.totalWeight);
  return winner?.opinion || "No consensus reached";
}

Consensus building synthesizes diverse opinions with confidence weighting.
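The weighted vote is easy to verify on a small example. This standalone version of the same idea uses a plain Map instead of es-toolkit, with made-up sample opinions:

```typescript
// Standalone confidence-weighted vote (same idea as majorityVote above,
// without es-toolkit). Each distinct opinion's weight is the sum of the
// confidences of the agents who hold it; the heaviest opinion wins.
type Vote = { opinion: string; confidence: number };

function weightedVote(votes: Vote[]): string {
  const weights = new Map<string, number>();
  for (const v of votes) {
    weights.set(v.opinion, (weights.get(v.opinion) ?? 0) + v.confidence);
  }

  let winner = "No consensus reached";
  let best = 0;
  for (const [opinion, weight] of weights) {
    if (weight > best) {
      best = weight;
      winner = opinion;
    }
  }
  return winner;
}
```

Two low-confidence agents can thus outvote one highly confident agent, which is exactly the deadlock-breaking behavior the debate pattern wants.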

5. Complete Debate Workflow

// app/lib/agents/debate-workflow.ts
import { StateGraph, START, END } from "@langchain/langgraph";
import { DebateState } from "./advanced-types";
import { parallelExpertAnalysis } from "./parallel-coordinator";
import { buildConsensus } from "./consensus";

export function createDebateWorkflow() {
  const MAX_ROUNDS = 5;
  
  const workflow = new StateGraph(DebateState)
    .addNode("parallel_experts", parallelExpertAnalysis)
    .addNode("consensus", buildConsensus)
    
    // Start with parallel expert analysis
    .addEdge(START, "parallel_experts")
    
    // Conditional routing based on convergence
    .addConditionalEdges(
      "parallel_experts",
      (state) => {
        if (state.hasConverged || state.round >= MAX_ROUNDS) {
          return "consensus";
        }
        return "parallel_experts"; // Continue debate
      },
      {
        parallel_experts: "parallel_experts",
        consensus: "consensus"
      }
    )
    
    .addEdge("consensus", END);
  
  // recursionLimit is per-invocation config (pass it in the run config
  // alongside streamMode), not a compile option
  return workflow.compile();
}

Workflow implements iterative debate with automatic convergence detection.

6. Streaming API for Real-time Updates

// app/api/debate/route.ts
import { NextRequest } from "next/server";
import { createDebateWorkflow } from "@/lib/agents/debate-workflow";

export const maxDuration = 300;

export async function POST(req: NextRequest) {
  const { question } = await req.json();
  
  const encoder = new TextEncoder();
  const stream = new ReadableStream({
    async start(controller) {
      const workflow = createDebateWorkflow();
      
      try {
        // Stream events as they occur, capturing the consensus from the
        // final streamed state instead of re-running the whole workflow
        let consensus = "";
        for await (const event of workflow.stream(
          { question, round: 0 },
          { streamMode: "values", recursionLimit: 10 }
        )) {
          if (event.consensus) consensus = event.consensus;
          
          const data = JSON.stringify({
            type: "update",
            round: event.round,
            opinions: event.opinions?.slice(-4), // Last 4 opinions
            hasConverged: event.hasConverged
          });
          
          controller.enqueue(
            encoder.encode(`data: ${data}\n\n`)
          );
        }
        
        // Send the final consensus from the last streamed state
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify({
            type: "complete",
            consensus
          })}\n\n`)
        );
      } catch (error) {
        controller.enqueue(
          encoder.encode(`data: ${JSON.stringify({
            type: "error",
            message: "Debate failed"
          })}\n\n`)
        );
      } finally {
        controller.close();
      }
    }
  });
  
  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      "Connection": "keep-alive"
    }
  });
}

Streaming API provides real-time updates during the debate process.
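Server-Sent Events arrive as `data: <json>` lines terminated by blank lines, and a single network read can end mid-frame. A small buffering parser (a sketch following the SSE wire format, independent of the route above) keeps partial lines until the next chunk arrives:

```typescript
// Minimal SSE frame extractor: feed it raw text chunks, get back complete
// `data:` payloads. Any trailing partial line stays buffered until the
// next chunk completes it.
function createSSEParser() {
  let buffer = "";
  return (chunk: string): string[] => {
    buffer += chunk;
    const lines = buffer.split("\n");
    buffer = lines.pop() ?? ""; // last element may be an incomplete line
    return lines
      .filter(line => line.startsWith("data: "))
      .map(line => line.slice(6));
  };
}
```

Without this buffering, a `JSON.parse` on a half-received line throws and kills the client's read loop.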

7. Real-time Debate Visualization

// app/components/DebateVisualization.tsx
"use client";

import { useEffect, useState } from "react";
import { groupBy, sortBy } from "es-toolkit";

interface Opinion {
  agent: string;
  opinion: string;
  confidence: number;
  reasoning: string;
}

export default function DebateVisualization() {
  const [question, setQuestion] = useState("");
  const [isDebating, setIsDebating] = useState(false);
  const [rounds, setRounds] = useState<Opinion[][]>([]);
  const [consensus, setConsensus] = useState<string>("");
  
  const startDebate = async () => {
    setIsDebating(true);
    setRounds([]);
    setConsensus("");
    
    const response = await fetch("/api/debate", {
      method: "POST",
      headers: { "Content-Type": "application/json" },
      body: JSON.stringify({ question })
    });
    
    const reader = response.body?.getReader();
    const decoder = new TextDecoder();
    let buffer = "";
    
    while (reader) {
      const { done, value } = await reader.read();
      if (done) break;
      
      // A network read can end mid-line, so buffer partial SSE frames
      buffer += decoder.decode(value, { stream: true });
      const lines = buffer.split('\n');
      buffer = lines.pop() ?? "";
      
      for (const line of lines) {
        if (line.startsWith('data: ')) {
          const data = JSON.parse(line.slice(6));
          
          if (data.type === "update" && data.opinions) {
            setRounds(prev => [...prev, data.opinions]);
          } else if (data.type === "complete") {
            setConsensus(data.consensus);
            setIsDebating(false);
          }
        }
      }
    }
  };
  
  return (
    <div className="container mx-auto p-4">
      <div className="card bg-base-100 shadow-xl mb-6">
        <div className="card-body">
          <h2 className="card-title">Multi-Agent Debate System</h2>
          
          <input
            type="text"
            placeholder="Enter a question for debate..."
            className="input input-bordered w-full"
            value={question}
            onChange={(e) => setQuestion(e.target.value)}
          />
          
          <button
            className="btn btn-primary"
            onClick={startDebate}
            disabled={isDebating || !question}
          >
            {isDebating ? (
              <>
                <span className="loading loading-spinner" />
                Debating...
              </>
            ) : (
              "Start Debate"
            )}
          </button>
        </div>
      </div>
      
      {/* Debate Rounds Visualization */}
      {rounds.length > 0 && (
        <div className="grid grid-cols-1 md:grid-cols-2 gap-4 mb-6">
          {rounds[rounds.length - 1].map((opinion, idx) => (
            <div key={idx} className="card bg-base-200">
              <div className="card-body">
                <div className="flex justify-between items-center">
                  <h3 className="font-bold">{opinion.agent}</h3>
                  <div className="badge badge-primary">
                    {Math.round(opinion.confidence * 100)}% confident
                  </div>
                </div>
                <p className="text-sm mt-2">{opinion.opinion}</p>
                <progress 
                  className="progress progress-primary w-full" 
                  value={opinion.confidence} 
                  max="1"
                />
              </div>
            </div>
          ))}
        </div>
      )}
      
      {/* Consensus Result */}
      {consensus && (
        <div className="alert alert-success">
          <div>
            <h3 className="font-bold">Consensus Reached:</h3>
            <p>{consensus}</p>
          </div>
        </div>
      )}
    </div>
  );
}

Real-time visualization shows debate progress with confidence indicators.

Conclusion

Multi-Agent Collaboration transforms complex problem-solving by distributing tasks across specialized agents. The supervisor-worker pattern provides centralized control well suited to customer service and research tasks, while parallel debate systems excel at gathering diverse perspectives for critical decisions. Key considerations include managing coordination overhead (small teams coordinate far more cheaply than large ones), implementing proper timeout handling for serverless environments, and using confidence-weighted voting for consensus building. With LangGraph's stateful orchestration and Vercel's serverless infrastructure, you can build production-ready multi-agent systems whose sophisticated coordination patterns improve both robustness and output quality over a single monolithic agent.