DRAFT Agentic Design Patterns - Human-in-the-Loop

ai · langchain · langgraph · hitl · typescript · vercel
By sko X opus 4.1 · 9/21/2025 · 12 min read

Build intelligent agents that know when to ask for help. This guide shows how to implement Human-in-the-Loop (HITL) patterns in TypeScript using LangGraph and LangChain on Vercel's serverless platform, enabling agents to seamlessly collaborate with humans for critical decisions.

Mental Model: The Air Traffic Control Pattern

Think of HITL like air traffic control at a busy airport. AI agents are like autopilot systems - they handle routine flights efficiently, managing thousands of operations. But when storms hit, emergencies arise, or complex situations emerge, human controllers step in. The system doesn't stop; instead, it seamlessly transitions control while maintaining all context. In serverless environments, this becomes even more crucial - you need to pause workflows, store state, notify humans, and resume execution, all within platform constraints like Vercel's execution limits.

Basic Example: Approval-Based Customer Support Agent

1. Install HITL Dependencies

npm install @langchain/langgraph @langchain/core @langchain/google-genai
npm install @upstash/redis @upstash/qstash uuid
npm install zod es-toolkit

Adds LangGraph for workflow orchestration, Upstash for state persistence and queue management, and es-toolkit for utility functions.
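
The snippets below read their credentials from environment variables. A typical local setup might look like the following (variable names come from the libraries' defaults: Redis.fromEnv() reads the UPSTASH_REDIS_REST_* pair and the Gemini client falls back to GOOGLE_API_KEY; VERCEL_URL is injected automatically on Vercel):

# .env.local (placeholder values)
GOOGLE_API_KEY=your-gemini-api-key
UPSTASH_REDIS_REST_URL=https://your-db.upstash.io
UPSTASH_REDIS_REST_TOKEN=your-redis-token
QSTASH_TOKEN=your-qstash-token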

2. Create Basic HITL Workflow with Interrupt

// lib/hitl-workflow.ts
import { MemorySaver, Annotation, interrupt, Command, StateGraph, START, END } from "@langchain/langgraph";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage, AIMessage } from "@langchain/core/messages";
import { Redis } from "@upstash/redis";
import { debounce } from "es-toolkit";

const redis = Redis.fromEnv();

// Define workflow state
const StateAnnotation = Annotation.Root({
  messages: Annotation<Array<HumanMessage | AIMessage>>({
    // Append incoming messages after the existing ones
    reducer: (existing, update) => [...existing, ...update],
    default: () => []
  }),
  customerQuery: Annotation<string>(),
  agentResponse: Annotation<string>(),
  humanFeedback: Annotation<string | undefined>(),
  approved: Annotation<boolean | undefined>()
});

// AI processing node
async function processQuery(state: typeof StateAnnotation.State) {
  const model = new ChatGoogleGenerativeAI({
    modelName: "gemini-2.5-flash",
    temperature: 0.3
  });

  const response = await model.invoke(state.messages);
  
  return {
    agentResponse: response.content as string,
    messages: [response]
  };
}

// Human approval node with interrupt
async function humanApproval(state: typeof StateAnnotation.State): Promise<Command> {
  // Check if this involves sensitive actions
  const sensitiveKeywords = ["refund", "cancel", "delete", "payment"];
  const needsApproval = sensitiveKeywords.some(keyword => 
    state.agentResponse.toLowerCase().includes(keyword)
  );

  if (!needsApproval) {
    return new Command({
      goto: "send_response",
      update: { approved: true }
    });
  }

  // Interrupt workflow for human review
  const decision = await interrupt({
    question: "Approve this response?",
    agentResponse: state.agentResponse,
    customerQuery: state.customerQuery,
    requiresApproval: true
  });

  if (decision.approved) {
    return new Command({
      goto: "send_response",
      update: { 
        approved: true,
        humanFeedback: decision.feedback 
      }
    });
  }

  return new Command({
    goto: "revise_response",
    update: { 
      approved: false,
      humanFeedback: decision.feedback 
    }
  });
}

// Create and compile workflow
// Note: sendResponse and reviseResponse are simple nodes assumed to be
// defined alongside the ones above (deliver the reply / ask the model to
// revise it using the human feedback).
export function createHITLWorkflow() {
  const workflow = new StateGraph(StateAnnotation)
    .addNode("process_query", processQuery)
    // humanApproval routes via Command, so its possible targets are declared here
    .addNode("human_approval", humanApproval, {
      ends: ["send_response", "revise_response"]
    })
    .addNode("send_response", sendResponse)
    .addNode("revise_response", reviseResponse)
    .addEdge(START, "process_query")
    .addEdge("process_query", "human_approval")
    .addEdge("revise_response", "human_approval")
    .addEdge("send_response", END);

  const checkpointer = new MemorySaver();
  return workflow.compile({ checkpointer });
}

Creates a workflow where AI responses containing sensitive actions trigger human review. Note that MemorySaver keeps checkpoints in the function's memory; to resume across separate serverless invocations you would swap in a persistent checkpointer (for example one backed by Postgres or Redis).
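
To see the interrupt round-trip in isolation, here is a minimal usage sketch (the thread_id and resume payload are illustrative; the API routes in the next step do the same thing over HTTP):

// Illustrative local round-trip through the compiled graph
import { Command } from "@langchain/langgraph";
import { HumanMessage } from "@langchain/core/messages";
import { createHITLWorkflow } from "@/lib/hitl-workflow";

const graph = createHITLWorkflow();
const config = { configurable: { thread_id: "session-123", checkpoint_ns: "hitl" } };

// First run: the graph pauses inside humanApproval when interrupt() fires.
await graph.invoke(
  {
    customerQuery: "Please refund my last order",
    messages: [new HumanMessage("Please refund my last order")]
  },
  config
);

// Later: a reviewer's decision resumes execution from the paused node.
await graph.invoke(
  new Command({ resume: { approved: true, feedback: "Looks fine" } }),
  config
);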

3. Serverless API Routes for HITL

// app/api/hitl/chat/route.ts
import { NextRequest } from "next/server";
import { createHITLWorkflow } from "@/lib/hitl-workflow";
import { Client } from "@upstash/qstash";
import { v4 as uuidv4 } from "uuid";

export const maxDuration = 10; // Hobby plan limit

const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

export async function POST(req: NextRequest) {
  const { message, sessionId = uuidv4() } = await req.json();
  
  const workflow = createHITLWorkflow();
  const config = {
    configurable: {
      thread_id: sessionId,
      checkpoint_ns: "hitl"
    }
  };

  // Start workflow execution
  const initialState = {
    customerQuery: message,
    messages: [{ role: "human", content: message }]
  };

  // Queue the workflow execution
  await qstash.publishJSON({
    url: `https://${process.env.VERCEL_URL}/api/hitl/process`,
    body: { 
      sessionId,
      state: initialState,
      config 
    },
    retries: 3
  });

  return Response.json({
    sessionId,
    status: "processing",
    message: "Your request is being processed"
  });
}

// app/api/hitl/resume/route.ts
import { NextRequest } from "next/server";
import { Command } from "@langchain/langgraph";
import { createHITLWorkflow } from "@/lib/hitl-workflow";

export async function POST(req: NextRequest) {
  const { sessionId, decision } = await req.json();

  const workflow = createHITLWorkflow();
  const config = {
    configurable: {
      thread_id: sessionId,
      checkpoint_ns: "hitl"
    }
  };

  // Resume from interrupt
  const result = await workflow.invoke(
    new Command({ resume: decision }),
    config
  );

  return Response.json(result);
}

Implements queue-based processing to handle Vercel's execution limits, with separate endpoints for starting and resuming workflows.
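
The QStash message above targets an /api/hitl/process worker route that is not shown. A minimal sketch, assuming the interrupt payload is mirrored into the session:{id} Redis hash that the status stream below reads (in production you would also verify the QStash request signature):

// app/api/hitl/process/route.ts (assumed worker route invoked by QStash)
import { NextRequest } from "next/server";
import { Redis } from "@upstash/redis";
import { createHITLWorkflow } from "@/lib/hitl-workflow";

const redis = Redis.fromEnv();

export const maxDuration = 60;

export async function POST(req: NextRequest) {
  const { sessionId, state, config } = await req.json();

  const workflow = createHITLWorkflow();
  const result = await workflow.invoke(state, config);

  // If the graph stopped at interrupt(), expose the pending review to the UI;
  // otherwise mark the session as completed.
  const snapshot = await workflow.getState(config);
  if (snapshot.next.length > 0) {
    await redis.hset(`session:${sessionId}`, {
      status: "pending_approval",
      interrupt: JSON.stringify({
        customerQuery: state.customerQuery,
        agentResponse: result.agentResponse
      })
    });
  } else {
    await redis.hset(`session:${sessionId}`, {
      status: "completed",
      response: result.agentResponse
    });
  }

  return Response.json({ ok: true });
}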

4. Real-time Status Updates with SSE

// app/api/hitl/status/[sessionId]/route.ts
import { NextRequest } from "next/server";
import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();

export async function GET(
  req: NextRequest,
  { params }: { params: { sessionId: string }}
) {
  const encoder = new TextEncoder();
  
  const stream = new ReadableStream({
    async start(controller) {
      const sendUpdate = async () => {
        const state = await redis.hgetall(`session:${params.sessionId}`);
        
        if (state?.interrupt) {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({
              type: "approval_needed",
              data: state.interrupt
            })}\n\n`)
          );
        }
        
        if (state?.status === "completed") {
          controller.enqueue(
            encoder.encode(`data: ${JSON.stringify({
              type: "completed",
              response: state.response
            })}\n\n`)
          );
          controller.close();
        } else {
          setTimeout(sendUpdate, 1000);
        }
      };
      
      await sendUpdate();
    }
  });

  return new Response(stream, {
    headers: {
      "Content-Type": "text/event-stream",
      "Cache-Control": "no-cache",
      "Connection": "keep-alive"
    }
  });
}

Provides real-time updates to clients about workflow status and approval requests using Server-Sent Events.

5. React UI for Human Approval

// components/HITLInterface.tsx
"use client";

import { useState, useEffect } from "react";
import { useMutation } from "@tanstack/react-query";

export default function HITLInterface({ sessionId }: { sessionId: string }) {
  const [approvalRequest, setApprovalRequest] = useState<any>(null);
  const [status, setStatus] = useState("waiting");

  // Listen for approval requests
  useEffect(() => {
    const eventSource = new EventSource(`/api/hitl/status/${sessionId}`);
    
    eventSource.onmessage = (event) => {
      const data = JSON.parse(event.data);
      
      if (data.type === "approval_needed") {
        setApprovalRequest(data.data);
        setStatus("pending_approval");
      } else if (data.type === "completed") {
        setStatus("completed");
      }
    };

    return () => eventSource.close();
  }, [sessionId]);

  const approveMutation = useMutation({
    mutationFn: async (decision: any) => {
      const response = await fetch("/api/hitl/resume", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify({ sessionId, decision })
      });
      return response.json();
    }
  });

  if (status === "pending_approval" && approvalRequest) {
    return (
      <div className="card bg-base-100 shadow-xl">
        <div className="card-body">
          <h2 className="card-title">Human Approval Required</h2>
          
          <div className="divider">Customer Query</div>
          <p className="text-sm">{approvalRequest.customerQuery}</p>
          
          <div className="divider">Agent Response</div>
          <div className="bg-base-200 p-4 rounded">
            <p>{approvalRequest.agentResponse}</p>
          </div>
          
          <div className="form-control">
            <label className="label">
              <span className="label-text">Feedback (optional)</span>
            </label>
            <textarea 
              className="textarea textarea-bordered"
              placeholder="Add feedback for revision..."
              id="feedback"
            />
          </div>
          
          <div className="card-actions justify-end mt-4">
            <button 
              className="btn btn-error"
              onClick={() => {
                const feedback = (document.getElementById("feedback") as HTMLTextAreaElement).value;
                approveMutation.mutate({ 
                  approved: false, 
                  feedback 
                });
              }}
            >
              Reject & Revise
            </button>
            <button 
              className="btn btn-success"
              onClick={() => approveMutation.mutate({ approved: true })}
            >
              Approve
            </button>
          </div>
        </div>
      </div>
    );
  }

  return (
    <div className="alert alert-info">
      <span>Status: {status}</span>
    </div>
  );
}

React component providing an intuitive interface for human reviewers to approve or reject agent responses.

Advanced Example: Multi-Agent System with Escalation Patterns

1. Complex HITL Architecture with Fallbacks

// lib/advanced-hitl-system.ts
import { StateGraph, Annotation, Command, interrupt, MemorySaver, START, END } from "@langchain/langgraph";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { Redis } from "@upstash/redis";
import { Client } from "@upstash/qstash";
import { groupBy, sortBy, pick } from "es-toolkit";
import { z } from "zod";

const redis = Redis.fromEnv();
const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

// Enhanced state with confidence and escalation tracking
const EnhancedStateAnnotation = Annotation.Root({
  query: Annotation<string>(),
  context: Annotation<Record<string, any>>(),
  agentResponse: Annotation<string>(),
  confidence: Annotation<number>(),
  escalationLevel: Annotation<number>({
    reducer: (_current, update) => update,
    default: () => 0
  }),
  reviewHistory: Annotation<Array<{
    level: number;
    reviewer: string;
    decision: string;
    feedback?: string;
    timestamp: Date;
  }>>({
    reducer: (_current, update) => update,
    default: () => []
  }),
  finalResponse: Annotation<string | undefined>()
});

// Confidence assessment node
async function assessConfidence(state: typeof EnhancedStateAnnotation.State) {
  const model = new ChatGoogleGenerativeAI({
    modelName: "gemini-2.5-pro",
    temperature: 0
  });

  const prompt = `
    Assess the confidence level for this response.
    Query: ${state.query}
    Response: ${state.agentResponse}
    
    Provide a confidence score from 0-1 and identify any risks.
    Format: { "confidence": 0.X, "risks": ["risk1", "risk2"] }
  `;

  const result = await model.invoke(prompt);
  // The prompt asks for bare JSON; strip any stray code fences before parsing.
  const assessment = JSON.parse(
    (result.content as string).replace(/```(json)?/g, "").trim()
  );

  return {
    confidence: assessment.confidence,
    context: { ...state.context, risks: assessment.risks }
  };
}

// Hierarchical escalation system
class EscalationManager {
  private levels = [
    { threshold: 0.8, handler: "auto_approve", timeout: 0 },
    { threshold: 0.6, handler: "peer_review", timeout: 300000 }, // 5 min
    { threshold: 0.4, handler: "supervisor_review", timeout: 600000 }, // 10 min
    { threshold: 0.0, handler: "expert_panel", timeout: 1800000 } // 30 min
  ];

  async determineEscalation(confidence: number, currentLevel: number) {
    // Levels are ordered by descending threshold; pick the first one the
    // confidence score clears (e.g. 0.7 clears 0.6, so it goes to peer_review).
    const appropriateLevel = this.levels.find(l => confidence >= l.threshold);

    if (!appropriateLevel) return "auto_approve";

    // Check if we need to escalate further
    const levelIndex = this.levels.findIndex(
      l => l.handler === appropriateLevel.handler
    );

    if (levelIndex > currentLevel) {
      await this.notifyReviewers(appropriateLevel.handler);
      return appropriateLevel.handler;
    }

    return appropriateLevel.handler;
  }

  private async notifyReviewers(level: string) {
    // Send notifications via different channels based on urgency
    const notifications = {
      peer_review: { channel: "slack", urgency: "normal" },
      supervisor_review: { channel: "slack", urgency: "high" },
      expert_panel: { channel: "pagerduty", urgency: "critical" }
    };

    const config = notifications[level];
    if (config) {
      await qstash.publishJSON({
        url: `${process.env.NOTIFICATION_WEBHOOK}`,
        body: {
          level,
          channel: config.channel,
          urgency: config.urgency,
          timestamp: new Date()
        }
      });
    }
  }
}

// Human review with timeout and fallback
async function humanReviewWithFallback(
  state: typeof EnhancedStateAnnotation.State
): Promise<Command> {
  const escalationManager = new EscalationManager();
  const handler = await escalationManager.determineEscalation(
    state.confidence,
    state.escalationLevel
  );

  if (handler === "auto_approve") {
    return new Command({
      goto: "finalize_response",
      update: { finalResponse: state.agentResponse }
    });
  }

  // Set up timeout for human response
  const timeoutMs = 300000; // 5 minutes default
  const startTime = Date.now();
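  // Note: interrupt() does not block in-process - it pauses the graph and the
  // decision arrives on a later invocation via Command({ resume }). The
  // Promise.race below is illustrative; in a deployed system the timeout is
  // usually enforced outside the graph, e.g. a delayed QStash message that
  // resumes with a rejection (see the sketch after this code).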

  try {
    const decision = await Promise.race([
      interrupt({
        level: handler,
        query: state.query,
        response: state.agentResponse,
        confidence: state.confidence,
        risks: state.context.risks
      }),
      new Promise((_, reject) => 
        setTimeout(() => reject(new Error("Timeout")), timeoutMs)
      )
    ]);

    // Record review history
    const reviewRecord = {
      level: state.escalationLevel,
      reviewer: decision.reviewer || handler,
      decision: decision.approved ? "approved" : "rejected",
      feedback: decision.feedback,
      timestamp: new Date()
    };

    if (decision.approved) {
      return new Command({
        goto: "finalize_response",
        update: {
          finalResponse: decision.editedResponse || state.agentResponse,
          reviewHistory: [...state.reviewHistory, reviewRecord]
        }
      });
    } else {
      return new Command({
        goto: "revise_with_feedback",
        update: {
          reviewHistory: [...state.reviewHistory, reviewRecord],
          escalationLevel: state.escalationLevel + 1
        }
      });
    }
  } catch (error) {
    // Let LangGraph's own interrupt signal propagate so the graph can pause;
    // only genuine timeouts fall through to the fallback logic below.
    if ((error as Error)?.name === "GraphInterrupt") throw error;

    // Timeout occurred - implement fallback
    console.warn(`Review timeout at level ${handler}`);
    
    if (state.escalationLevel < 2) {
      // Escalate to next level
      return new Command({
        goto: "human_review",
        update: { escalationLevel: state.escalationLevel + 1 }
      });
    } else {
      // Final fallback - use safe default
      return new Command({
        goto: "apply_safe_default",
        update: {
          finalResponse: "I need more time to provide an accurate response. A specialist will contact you shortly."
        }
      });
    }
  }
}

// Create advanced workflow
// Note: generateResponse, reviseWithFeedback, applySafeDefault and
// finalizeResponse are assumed to be defined alongside the nodes above.
export function createAdvancedHITLWorkflow() {
  const workflow = new StateGraph(EnhancedStateAnnotation)
    .addNode("generate_response", generateResponse)
    .addNode("assess_confidence", assessConfidence)
    // humanReviewWithFallback routes via Command, so declare its targets here
    .addNode("human_review", humanReviewWithFallback, {
      ends: ["finalize_response", "revise_with_feedback", "apply_safe_default", "human_review"]
    })
    .addNode("revise_with_feedback", reviseWithFeedback)
    .addNode("apply_safe_default", applySafeDefault)
    .addNode("finalize_response", finalizeResponse)
    .addEdge(START, "generate_response")
    .addEdge("generate_response", "assess_confidence")
    .addEdge("assess_confidence", "human_review")
    .addEdge("revise_with_feedback", "assess_confidence")
    .addEdge("apply_safe_default", END)
    .addEdge("finalize_response", END);

  return workflow.compile({
    checkpointer: new MemorySaver()
  });
}

Implements a sophisticated escalation hierarchy with timeouts, fallbacks, and multi-level review based on confidence scores.
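
Because interrupt() pauses the graph rather than blocking in-process, review timeouts in a serverless deployment are usually enforced outside the graph. A sketch of one way to do it, assuming the resume endpoint from the basic example and a hypothetical timedOut flag in the decision payload:

// lib/review-timeout.ts (hypothetical helper, not part of the graph)
import { Client } from "@upstash/qstash";

const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

// Schedule a delayed message that resumes the paused graph with a
// "timed out" decision; if a reviewer already responded, the resume route
// can detect the finished thread and drop the late message.
export async function scheduleReviewTimeout(sessionId: string, timeoutSeconds: number) {
  await qstash.publishJSON({
    url: `https://${process.env.VERCEL_URL}/api/hitl/resume`,
    body: {
      sessionId,
      decision: { approved: false, feedback: "Review timed out", timedOut: true }
    },
    delay: timeoutSeconds
  });
}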

2. Distributed HITL Processing for Scale

// lib/distributed-hitl.ts
import { Client } from "@upstash/qstash";
import { Redis } from "@upstash/redis";
import { chunk, partition, map } from "es-toolkit";

const redis = Redis.fromEnv();
const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

export class DistributedHITLProcessor {
  private maxBatchSize = 10;
  private maxConcurrent = 5;

  async processLargeWorkload(tasks: Array<any>, workflowId: string) {
    // Partition tasks by priority
    const [highPriority, normalPriority] = partition(
      tasks,
      t => t.priority === "high"
    );

    // Process high priority immediately
    await this.processBatch(highPriority, workflowId, "high");

    // Queue normal priority in chunks
    const chunks = chunk(normalPriority, this.maxBatchSize);
    
    for (let i = 0; i < chunks.length; i++) {
      await qstash.publishJSON({
        url: `https://${process.env.VERCEL_URL}/api/hitl/batch-process`,
        body: {
          workflowId,
          batchId: i,
          tasks: chunks[i]
        },
        delay: i * 5 // Stagger by 5 seconds
      });
    }

    // Store workflow metadata
    await redis.hset(`workflow:${workflowId}`, {
      totalTasks: tasks.length,
      totalBatches: chunks.length,
      startTime: Date.now(),
      status: "processing"
    });
  }

  // Public so the batch-process API route below can call it directly
  async processBatch(
    tasks: Array<any>,
    workflowId: string,
    priority: string
  ) {
    const results = await Promise.allSettled(
      tasks.map(task => this.processTask(task, priority))
    );

    // Store results
    await redis.hset(`workflow:${workflowId}:results`, {
      [`batch_${Date.now()}`]: JSON.stringify(results)
    });

    // Update progress
    await redis.hincrby(`workflow:${workflowId}`, "completed", tasks.length);
  }

  private async processTask(task: any, priority: string) {
    // Implement task processing with appropriate HITL based on priority
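    // requestImmediateReview and processWithOptionalReview are assumed to be
    // implemented elsewhere in this class (omitted here for brevity).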
    if (priority === "high") {
      // Direct human review
      return await this.requestImmediateReview(task);
    } else {
      // AI with optional escalation
      return await this.processWithOptionalReview(task);
    }
  }
}

// API route for batch processing
// app/api/hitl/batch-process/route.ts
import { NextRequest } from "next/server";
import { Redis } from "@upstash/redis";
import { Client } from "@upstash/qstash";
import { DistributedHITLProcessor } from "@/lib/distributed-hitl";

const redis = Redis.fromEnv();
const qstash = new Client({ token: process.env.QSTASH_TOKEN! });

export const maxDuration = 300; // Pro plan with Fluid Compute

export async function POST(req: NextRequest) {
  const { workflowId, batchId, tasks } = await req.json();

  const processor = new DistributedHITLProcessor();
  
  try {
    await processor.processBatch(tasks, workflowId, "normal");
    
    // Check if all batches complete
    const workflow = await redis.hgetall(`workflow:${workflowId}`);
    const completed = Number(workflow?.completed ?? 0);
    const total = Number(workflow?.totalTasks ?? 0);
    
    if (completed >= total) {
      await redis.hset(`workflow:${workflowId}`, {
        status: "completed",
        endTime: Date.now()
      });
      
      // Notify completion (notifyCompletion is an assumed helper, e.g. a
      // Slack or email webhook call)
      await notifyCompletion(workflowId);
    }
    
    return Response.json({ success: true, batchId });
  } catch (error) {
    console.error(`Batch ${batchId} failed:`, error);
    
    // Retry logic
    await qstash.publishJSON({
      url: `https://${process.env.VERCEL_URL}/api/hitl/batch-process`,
      body: { workflowId, batchId, tasks },
      delay: 60 // Retry after 1 minute
    });
    
    return Response.json({ error: "Processing failed, retrying" });
  }
}

Scales HITL processing across multiple serverless functions with priority queuing and batch management.
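
As a usage sketch (the task shape is illustrative), kicking off a mixed-priority workload looks like this:

// Hypothetical invocation: high-priority tasks get immediate review,
// the rest are chunked and queued through QStash.
import { v4 as uuidv4 } from "uuid";
import { DistributedHITLProcessor } from "@/lib/distributed-hitl";

const processor = new DistributedHITLProcessor();

await processor.processLargeWorkload(
  [
    { id: "t1", priority: "high", query: "Refund request over $500" },
    { id: "t2", priority: "normal", query: "Update shipping address" },
    { id: "t3", priority: "normal", query: "Explain an invoice line item" }
  ],
  uuidv4()
);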

3. UI Dashboard for HITL Management

// components/HITLDashboard.tsx
"use client";

import { useQuery, useMutation } from "@tanstack/react-query";
import { useState, useEffect } from "react";
import { groupBy, sortBy } from "es-toolkit";

interface ReviewTask {
  id: string;
  query: string;
  response: string;
  confidence: number;
  level: string;
  timestamp: Date;
  deadline?: Date;
}

export default function HITLDashboard() {
  const [selectedTask, setSelectedTask] = useState<ReviewTask | null>(null);
  const [filter, setFilter] = useState("all");

  // Fetch pending reviews
  const { data: tasks = [], refetch } = useQuery({
    queryKey: ["pending-reviews"],
    queryFn: async () => {
      const response = await fetch("/api/hitl/pending");
      return response.json();
    },
    refetchInterval: 5000 // Poll every 5 seconds
  });

  // Group tasks by urgency
  const groupedTasks = groupBy(
    sortBy(tasks, [(t: ReviewTask) => t.confidence]),
    t => {
      if (t.confidence < 0.4) return "critical";
      if (t.confidence < 0.6) return "high";
      if (t.confidence < 0.8) return "medium";
      return "low";
    }
  );

  const reviewMutation = useMutation({
    mutationFn: async (decision: any) => {
      const response = await fetch("/api/hitl/review", {
        method: "POST",
        headers: { "Content-Type": "application/json" },
        body: JSON.stringify(decision)
      });
      return response.json();
    },
    onSuccess: () => {
      setSelectedTask(null);
      refetch();
    }
  });

  return (
    <div className="container mx-auto p-4">
      <div className="grid grid-cols-1 lg:grid-cols-3 gap-4">
        {/* Task List */}
        <div className="lg:col-span-1">
          <div className="card bg-base-100 shadow">
            <div className="card-body">
              <h2 className="card-title">Pending Reviews</h2>
              
              {/* Filter Tabs */}
              <div className="tabs tabs-boxed mb-4">
                <button 
                  className={`tab ${filter === "all" ? "tab-active" : ""}`}
                  onClick={() => setFilter("all")}
                >
                  All ({tasks.length})
                </button>
                <button 
                  className={`tab ${filter === "critical" ? "tab-active" : ""}`}
                  onClick={() => setFilter("critical")}
                >
                  Critical ({groupedTasks.critical?.length || 0})
                </button>
                <button 
                  className={`tab ${filter === "high" ? "tab-active" : ""}`}
                  onClick={() => setFilter("high")}
                >
                  High ({groupedTasks.high?.length || 0})
                </button>
              </div>
              
              {/* Task Items */}
              <div className="space-y-2">
                {(filter === "all" ? tasks : groupedTasks[filter] || [])
                  .map(task => (
                    <div 
                      key={task.id}
                      className={`card card-compact bg-base-200 cursor-pointer hover:bg-base-300 ${
                        selectedTask?.id === task.id ? "ring-2 ring-primary" : ""
                      }`}
                      onClick={() => setSelectedTask(task)}
                    >
                      <div className="card-body">
                        <div className="flex justify-between items-start">
                          <div className="flex-1">
                            <p className="text-sm truncate">{task.query}</p>
                            <div className="flex gap-2 mt-1">
                              <span className={`badge badge-sm ${
                                task.confidence < 0.4 ? "badge-error" :
                                task.confidence < 0.6 ? "badge-warning" :
                                "badge-success"
                              }`}>
                                {(task.confidence * 100).toFixed(0)}%
                              </span>
                              <span className="badge badge-sm badge-outline">
                                {task.level}
                              </span>
                            </div>
                          </div>
                          {task.deadline && (
                            <span className="text-xs text-warning">
                              {new Date(task.deadline).toLocaleTimeString()}
                            </span>
                          )}
                        </div>
                      </div>
                    </div>
                  ))}
              </div>
            </div>
          </div>
        </div>

        {/* Review Panel */}
        <div className="lg:col-span-2">
          {selectedTask ? (
            <div className="card bg-base-100 shadow">
              <div className="card-body">
                <h2 className="card-title">Review Task</h2>
                
                {/* Task Details */}
                <div className="space-y-4">
                  <div>
                    <label className="label">
                      <span className="label-text font-semibold">Customer Query</span>
                    </label>
                    <div className="bg-base-200 p-3 rounded">
                      {selectedTask.query}
                    </div>
                  </div>
                  
                  <div>
                    <label className="label">
                      <span className="label-text font-semibold">AI Response</span>
                      <span className="label-text-alt">
                        Confidence: {(selectedTask.confidence * 100).toFixed(1)}%
                      </span>
                    </label>
                    <div className="bg-base-200 p-3 rounded">
                      <textarea
                        className="textarea w-full h-32"
                        defaultValue={selectedTask.response}
                        id="edited-response"
                      />
                    </div>
                  </div>
                  
                  <div>
                    <label className="label">
                      <span className="label-text font-semibold">Review Feedback</span>
                    </label>
                    <textarea
                      className="textarea textarea-bordered w-full"
                      placeholder="Provide feedback for the agent..."
                      id="feedback"
                    />
                  </div>
                  
                  {/* Quick Actions */}
                  <div className="flex gap-2">
                    <button className="btn btn-sm btn-outline">
                      Request More Context
                    </button>
                    <button className="btn btn-sm btn-outline">
                      View History
                    </button>
                    <button className="btn btn-sm btn-outline">
                      Escalate
                    </button>
                  </div>
                </div>
                
                {/* Action Buttons */}
                <div className="card-actions justify-end mt-6">
                  <button 
                    className="btn btn-error"
                    onClick={() => {
                      reviewMutation.mutate({
                        taskId: selectedTask.id,
                        approved: false,
                        feedback: (document.getElementById("feedback") as HTMLTextAreaElement).value
                      });
                    }}
                  >
                    Reject
                  </button>
                  <button 
                    className="btn btn-warning"
                    onClick={() => {
                      reviewMutation.mutate({
                        taskId: selectedTask.id,
                        approved: true,
                        editedResponse: (document.getElementById("edited-response") as HTMLTextAreaElement).value,
                        feedback: (document.getElementById("feedback") as HTMLTextAreaElement).value
                      });
                    }}
                  >
                    Approve with Edits
                  </button>
                  <button 
                    className="btn btn-success"
                    onClick={() => {
                      reviewMutation.mutate({
                        taskId: selectedTask.id,
                        approved: true
                      });
                    }}
                  >
                    Approve
                  </button>
                </div>
              </div>
            </div>
          ) : (
            <div className="card bg-base-100 shadow">
              <div className="card-body">
                <div className="text-center py-8">
                  <p className="text-base-content/60">
                    Select a task to review
                  </p>
                </div>
              </div>
            </div>
          )}
        </div>
      </div>
      
      {/* Statistics */}
      <div className="stats shadow mt-6">
        <div className="stat">
          <div className="stat-title">Total Pending</div>
          <div className="stat-value">{tasks.length}</div>
        </div>
        <div className="stat">
          <div className="stat-title">Critical</div>
          <div className="stat-value text-error">
            {groupedTasks.critical?.length || 0}
          </div>
        </div>
        <div className="stat">
          <div className="stat-title">Avg Response Time</div>
          <div className="stat-value">2.3m</div>
        </div>
        <div className="stat">
          <div className="stat-title">Approval Rate</div>
          <div className="stat-value">87%</div>
        </div>
      </div>
    </div>
  );
}

Comprehensive dashboard for managing HITL tasks with priority filtering, inline editing, and performance metrics.
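
The dashboard polls /api/hitl/pending and posts decisions to /api/hitl/review, neither of which is shown above. A minimal sketch of the pending route, assuming each interrupt is mirrored into Redis under a pending:{taskId} key when it fires:

// app/api/hitl/pending/route.ts (assumed backing route for the dashboard)
import { Redis } from "@upstash/redis";

const redis = Redis.fromEnv();

export async function GET() {
  // Each pending review is assumed to be stored as a JSON blob when the
  // corresponding interrupt() fires, e.g. redis.set(`pending:${taskId}`, task).
  const keys = await redis.keys("pending:*");
  const tasks = keys.length > 0 ? await redis.mget(...keys) : [];
  return Response.json(tasks);
}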

Conclusion

Human-in-the-Loop patterns transform autonomous agents into collaborative systems that leverage both AI efficiency and human judgment. The key to successful HITL implementation in serverless environments lies in managing state persistence, handling asynchronous workflows, and providing intuitive interfaces for human reviewers. By combining LangGraph's interrupt capabilities with Vercel's serverless infrastructure and modern UI patterns, we create systems that are both powerful and trustworthy.

The patterns shown here - from basic approval workflows to sophisticated multi-level escalation systems - demonstrate that HITL isn't a limitation but an enhancement. It enables deployment of AI agents in critical domains where pure automation would be irresponsible, while maintaining the scalability and cost-efficiency of serverless architectures. As AI capabilities continue to advance, HITL remains essential for ensuring systems operate within ethical boundaries and organizational policies.