Agentic Design Patterns - Tool Use

ai-agents · langchain · langgraph · tool-use · function-calling · typescript

By sko X opus 4.1 · 9/21/2025 · 13 min read

Tool Use (Function Calling) enables AI agents to interact with external systems, APIs, and databases, transforming them from passive responders to active executors. This guide demonstrates implementing production-ready Tool Use patterns with LangChain, LangGraph, and Next.js 15 on Vercel's serverless platform.

Mental Model: AI as an Operating System

Think of Tool Use like system calls in an operating system. Just as programs use syscalls to interact with hardware and resources, AI agents use tools to interact with external systems. The LLM acts as the "kernel" that decides which tools to invoke based on the user's intent. LangChain provides the "driver layer" that standardizes tool interfaces, while LangGraph manages the "process scheduling" for complex multi-tool workflows. This architecture enables agents to perform actions like database queries, API calls, and calculations - similar to how an OS orchestrates file I/O, network requests, and CPU operations.
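The kernel analogy can be made concrete in a few lines of plain TypeScript. This is an illustrative sketch only: `llmDecide` is a hypothetical stand-in for the model's tool-selection step, and the tool names are made up.

```typescript
// A minimal sketch of the "kernel" dispatch loop: the model proposes a
// tool call and the runtime executes it, like syscall dispatch.
type ToolCall = { name: string; args: Record<string, unknown> };
type Tool = (args: Record<string, unknown>) => string;

const toolTable: Record<string, Tool> = {
  // "Drivers": each tool exposes the same uniform interface
  get_time: () => new Date().toISOString(),
  echo: (args) => String(args.text ?? ''),
};

// Hypothetical stand-in for the LLM's decision step
function llmDecide(intent: string): ToolCall | null {
  if (intent.includes('time')) return { name: 'get_time', args: {} };
  if (intent.startsWith('say ')) return { name: 'echo', args: { text: intent.slice(4) } };
  return null; // no tool needed: answer directly
}

function dispatch(intent: string): string {
  const call = llmDecide(intent);
  if (!call) return `LLM answers directly: ${intent}`;
  const tool = toolTable[call.name];
  if (!tool) throw new Error(`Unknown tool: ${call.name}`);
  return tool(call.args); // the "syscall" executes outside the model
}
```

The point of the table-plus-dispatch shape is that adding a tool never changes the loop, only the table - the same property the LangChain tool registry provides below.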

Basic Tool Implementation

1. Define Tools with Zod Schemas

// lib/tools/weather-tool.ts
import { z } from 'zod';
import { tool } from '@langchain/core/tools';
import { get } from 'es-toolkit/compat'; // lodash-style path access lives in the compat entry point
import { isString } from 'es-toolkit/predicate';

export const weatherTool = tool(
  async ({ location, unit = 'celsius' }) => {
    // Simulate API call - replace with real weather API
    const response = await fetch(
      `https://api.weather.example/current?q=${location}&units=${unit}`
    );
    
    const data = await response.json();
    
    // Use es-toolkit for safe data access
    const temperature = get(data, 'main.temp');
    const description = get(data, 'weather[0].description');
    
    if (!isString(description)) {
      throw new Error('Invalid weather data received');
    }
    
    return `Current weather in ${location}: ${temperature}°${
      unit === 'celsius' ? 'C' : 'F'
    }, ${description}`;
  },
  {
    name: 'get_weather',
    description: 'Get current weather for a location',
    schema: z.object({
      location: z.string().describe('City name or location'),
      unit: z.enum(['celsius', 'fahrenheit']).optional(),
    }),
  }
);

Defines a weather tool whose Zod schema provides both runtime validation of tool inputs and inferred static TypeScript types from a single definition.
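Conceptually, the runtime check Zod performs for this schema looks like the following hand-rolled validator (a simplification: Zod also handles error aggregation, coercion, and type inference).

```typescript
// Simplified stand-in for what Zod does with the weather schema:
// reject calls whose arguments don't match the declared shape.
type WeatherArgs = { location: string; unit?: 'celsius' | 'fahrenheit' };

function parseWeatherArgs(input: unknown): WeatherArgs {
  const obj = input as Record<string, unknown>;
  if (typeof obj?.location !== 'string') {
    throw new Error('location must be a string');
  }
  if (obj.unit !== undefined && obj.unit !== 'celsius' && obj.unit !== 'fahrenheit') {
    throw new Error('unit must be celsius or fahrenheit');
  }
  return { location: obj.location, unit: obj.unit as WeatherArgs['unit'] };
}
```

This gate matters because tool arguments come from model output, not from your own code: the LLM can and will emit malformed arguments, and the schema is what turns that into a recoverable error.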

2. Create Calculator Tool

// lib/tools/calculator-tool.ts
import { z } from 'zod';
import { tool } from '@langchain/core/tools';
import { evaluate } from 'mathjs'; // es-toolkit has no expression evaluator
import { attempt } from 'es-toolkit/compat'; // lodash-style attempt returns the Error

export const calculatorTool = tool(
  async ({ expression }) => {
    // attempt runs the thunk and returns the thrown Error instead of
    // propagating it, so invalid expressions fail gracefully
    const result = attempt(() => evaluate(expression));
    
    if (result instanceof Error) {
      return `Error: Invalid expression "${expression}"`;
    }
    
    return `Result: ${result}`;
  },
  {
    name: 'calculator',
    description: 'Perform mathematical calculations',
    schema: z.object({
      expression: z.string().describe('Mathematical expression to evaluate'),
    }),
  }
);

Implements a calculator tool with error handling using es-toolkit's attempt pattern for safe execution.
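The attempt pattern is worth seeing in isolation. This dependency-free sketch reimplements a lodash-style `attempt` (return the Error rather than throw), with a hypothetical `safeDivide` as the wrapped operation:

```typescript
// The attempt pattern in isolation: run a thunk and return either its
// value or the Error, instead of throwing.
function attempt<T>(fn: () => T): T | Error {
  try {
    return fn();
  } catch (err) {
    return err instanceof Error ? err : new Error(String(err));
  }
}

// Hypothetical wrapped operation mirroring the calculator tool's shape
function safeDivide(a: number, b: number): string {
  const result = attempt(() => {
    if (b === 0) throw new Error('division by zero');
    return a / b;
  });
  return result instanceof Error ? `Error: ${result.message}` : `Result: ${result}`;
}
```

Returning errors as values is a good default inside tools: the LLM receives a readable error string it can react to, instead of the whole agent run aborting on an exception.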

3. Database Query Tool

// lib/tools/database-tool.ts
import { z } from 'zod';
import { tool } from '@langchain/core/tools';
import { sql } from '@vercel/postgres';
import { chunk } from 'es-toolkit/array';

export const databaseTool = tool(
  async ({ query, parameters = {} }) => {
    try {
      // Allow only read-only queries; a keyword blocklist is too easy
      // to bypass, so reject anything that is not a SELECT outright
      const normalized = query.trim().toLowerCase();
      if (!normalized.startsWith('select')) {
        return 'Error: Only SELECT queries are allowed';
      }
      
      // Execute query with parameters
      const result = await sql.query(query, Object.values(parameters));
      
      // Process results in chunks for large datasets
      const rows = result.rows;
      const chunks = chunk(rows, 100);
      
      // Return first chunk for serverless memory limits
      return JSON.stringify({
        rowCount: result.rowCount,
        data: chunks[0] || [],
        hasMore: chunks.length > 1,
      });
    } catch (error) {
      return `Database error: ${error instanceof Error ? error.message : String(error)}`;
    }
  },
  {
    name: 'query_database',
    description: 'Execute safe SELECT queries on the database',
    schema: z.object({
      query: z.string().describe('SQL SELECT query'),
      parameters: z.record(z.any()).optional().describe('Query parameters'),
    }),
  }
);

Creates a database tool with a SELECT-only allowlist and chunked result processing sized for serverless memory constraints.
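The pagination logic can be exercised on its own. Here `chunk` is inlined for self-containment (es-toolkit's `chunk` behaves the same way), and `firstPage` is a hypothetical helper mirroring the tool's return shape:

```typescript
// Split rows into fixed-size pages and return only the first page,
// flagging whether more data exists - the pattern the database tool
// uses to stay inside serverless memory limits.
function chunk<T>(arr: T[], size: number): T[][] {
  const out: T[][] = [];
  for (let i = 0; i < arr.length; i += size) out.push(arr.slice(i, i + size));
  return out;
}

function firstPage<T>(rows: T[], pageSize = 100) {
  const pages = chunk(rows, pageSize);
  return {
    rowCount: rows.length,       // total rows matched
    data: pages[0] ?? [],        // only the first page is returned
    hasMore: pages.length > 1,   // signal that the agent should paginate
  };
}
```

The `hasMore` flag is what lets the LLM decide to issue a follow-up query with an OFFSET rather than silently believing it saw the whole result set.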

4. Bind Tools to LLM

// lib/agents/tool-agent.ts
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
import { weatherTool } from '@/lib/tools/weather-tool';
import { calculatorTool } from '@/lib/tools/calculator-tool';
import { databaseTool } from '@/lib/tools/database-tool';
import { flow as pipe } from 'es-toolkit/function'; // es-toolkit's left-to-right composition is named flow

export function createToolAgent() {
  const model = new ChatGoogleGenerativeAI({
    modelName: 'gemini-2.5-pro',
    temperature: 0,
    streaming: true,
  });
  
  const tools = [weatherTool, calculatorTool, databaseTool];
  
  // Bind tools to the model
  const modelWithTools = model.bindTools(tools);
  
  return {
    model: modelWithTools,
    tools,
    // Helper to format tool calls
    formatToolCall: pipe(
      (call: any) => call.args,
      JSON.stringify
    ),
  };
}

Binds multiple tools to an LLM, enabling it to decide which tools to use based on user queries.

5. API Route with Tool Execution

// app/api/tools/route.ts
import { NextResponse } from 'next/server';
import { createToolAgent } from '@/lib/agents/tool-agent';
import { HumanMessage } from '@langchain/core/messages';
export const runtime = 'nodejs';
export const maxDuration = 777; // Safety buffer under the 800s limit

// Note: a debounced function returns void rather than the handler's
// result, so request processing must stay a plain async function
async function processRequest(message: string) {
  const { model, tools } = createToolAgent();
  
  // Get LLM response with tool calls
  const response = await model.invoke([
    new HumanMessage(message)
  ]);
  
  // Execute tool calls if present
  if (response.tool_calls && response.tool_calls.length > 0) {
    const toolResults = await Promise.all(
      response.tool_calls.map(async (toolCall) => {
        const tool = tools.find(t => t.name === toolCall.name);
        if (!tool) return null;
        
        const result = await tool.invoke(toolCall.args);
        return {
          tool: toolCall.name,
          result,
        };
      })
    );
    
    return { toolResults, message: response.content };
  }
  
  return { message: response.content };
}

export async function POST(req: Request) {
  try {
    const { message } = await req.json();
    const result = await processRequest(message);
    
    return NextResponse.json(result);
  } catch (error) {
    return NextResponse.json(
      { error: 'Tool execution failed' },
      { status: 500 }
    );
  }
}

Implements a serverless API route that forwards the message to the model, executes any requested tool calls, and returns the combined results.

Advanced Tool Orchestration

1. Parallel Tool Execution

// lib/orchestration/parallel-tools.ts
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';

export class ParallelToolOrchestrator {
  // bindTools returns a bound runnable rather than the chat model itself
  private model: ReturnType<ChatGoogleGenerativeAI['bindTools']>;
  private tools: Map<string, any>;
  
  constructor(tools: any[]) {
    const base = new ChatGoogleGenerativeAI({
      modelName: 'gemini-2.5-pro',
      temperature: 0,
    });
    
    this.tools = new Map(tools.map(t => [t.name, t]));
    this.model = base.bindTools(tools);
  }
  
  async executeParallel(message: string) {
    // Get tool calls from LLM
    const response = await this.model.invoke([
      { role: 'user', content: message }
    ]);
    
    if (!response.tool_calls?.length) {
      return { message: response.content };
    }
    
    // Group tools by dependency (independent tools can run in parallel)
    const toolGroups = this.groupIndependentTools(response.tool_calls);
    
    // Execute each group in parallel
    const results = [];
    for (const group of toolGroups) {
      const groupResults = await Promise.all(
        group.map(async (call) => {
          const tool = this.tools.get(call.name);
          if (!tool) return null;
          
          const startTime = Date.now();
          const result = await tool.invoke(call.args);
          const duration = Date.now() - startTime;
          
          return {
            tool: call.name,
            result,
            duration,
            timestamp: new Date().toISOString(),
          };
        })
      );
      results.push(...groupResults.filter(Boolean));
    }
    
    return {
      message: response.content,
      toolResults: results,
      parallelGroups: toolGroups.length,
      // longest single tool call; groups themselves run sequentially
      maxToolDuration: Math.max(...results.map(r => r.duration)),
    };
  }
  
  private groupIndependentTools(toolCalls: any[]) {
    // Simple heuristic: tools with no shared parameters can run in parallel
    const groups: any[][] = [];
    const used = new Set<number>();
    
    toolCalls.forEach((call, i) => {
      if (used.has(i)) return;
      
      const group = [call];
      used.add(i);
      
      // Find other independent tools
      toolCalls.forEach((other, j) => {
        if (i !== j && !used.has(j) && this.areIndependent(call, other)) {
          group.push(other);
          used.add(j);
        }
      });
      
      groups.push(group);
    });
    
    return groups;
  }
  
  private areIndependent(tool1: any, tool2: any): boolean {
    // Tools are independent if they don't share data dependencies
    const args1 = JSON.stringify(tool1.args);
    const args2 = JSON.stringify(tool2.args);
    
    // Simple check: no shared values
    return !Object.values(tool1.args).some(v => 
      args2.includes(String(v))
    );
  }
}

Implements parallel tool execution with dependency analysis to maximize throughput in serverless environments.
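Extracted from the class, the independence heuristic is easy to test in isolation (the same logic as `areIndependent` and `groupIndependentTools` above):

```typescript
type Call = { name: string; args: Record<string, unknown> };

// Two calls are treated as independent when no argument value of one
// appears anywhere in the other's serialized arguments.
function areIndependent(a: Call, b: Call): boolean {
  const bArgs = JSON.stringify(b.args);
  return !Object.values(a.args).some(v => bArgs.includes(String(v)));
}

// Greedily pack mutually independent calls into groups that can each
// be executed with a single Promise.all.
function groupIndependent(calls: Call[]): Call[][] {
  const groups: Call[][] = [];
  const used = new Set<number>();
  calls.forEach((call, i) => {
    if (used.has(i)) return;
    const group = [call];
    used.add(i);
    calls.forEach((other, j) => {
      if (j !== i && !used.has(j) && areIndependent(call, other)) {
        group.push(other);
        used.add(j);
      }
    });
    groups.push(group);
  });
  return groups;
}
```

Note this is a deliberately coarse heuristic: a shared literal like "Tokyo" forces sequential execution even when the calls don't actually feed each other, trading some parallelism for safety.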

2. Tool Chain with Error Recovery

// lib/orchestration/tool-chain.ts
import { StateGraph, END } from '@langchain/langgraph';
import { BaseMessage } from '@langchain/core/messages';
import { retry } from 'es-toolkit/promise';

interface ChainState {
  messages: BaseMessage[];
  toolResults: any[];
  errors: any[];
  retryCount: number;
}

export function createToolChain(tools: any[]) {
  const graph = new StateGraph<ChainState>({
    channels: {
      messages: {
        value: (x, y) => [...x, ...y],
        default: () => [],
      },
      toolResults: {
        value: (x, y) => [...x, ...y],
        default: () => [],
      },
      errors: {
        value: (x, y) => [...x, ...y],
        default: () => [],
      },
      retryCount: {
        value: (x, y) => y,
        default: () => 0,
      },
    },
  });
  
  // Execute tool with retry logic
  graph.addNode('execute_tool', async (state) => {
    const currentTool = tools[state.toolResults.length];
    if (!currentTool) {
      return { ...state };
    }
    
    try {
      const result = await retry(
        async () =>
          currentTool.invoke(state.messages[state.messages.length - 1]),
        // es-toolkit's retry options are { retries, delay }; there is
        // no onRetry hook
        { retries: 3, delay: 1000 }
      );
      
      return {
        toolResults: [{ 
          tool: currentTool.name, 
          result,
          success: true 
        }],
      };
    } catch (error) {
      const message = error instanceof Error ? error.message : String(error);
      console.log(`Tool ${currentTool.name} failed after retries:`, message);
      return {
        errors: [{
          tool: currentTool.name,
          error: message,
        }],
        retryCount: state.retryCount + 1,
      };
    }
  });
  
  // Fallback node for errors
  graph.addNode('handle_error', async (state) => {
    const lastError = state.errors[state.errors.length - 1];
    
    // Try alternative tool or graceful degradation
    const fallbackResult = {
      tool: 'fallback',
      result: `Failed to execute ${lastError.tool}. Using cached result.`,
      fallback: true,
    };
    
    return {
      toolResults: [fallbackResult],
    };
  });
  
  // Pass-through node: the actual routing decision is made by the
  // conditional edges below, so the node itself updates nothing
  graph.addNode('check_status', async () => {
    return {};
  });
  
  // Define edges
  graph.setEntryPoint('execute_tool');
  graph.addEdge('execute_tool', 'check_status');
  graph.addEdge('handle_error', 'execute_tool');
  graph.addConditionalEdges('check_status', async (state) => {
    if (state.errors.length > 0 && state.retryCount < 3) {
      return 'handle_error';
    }
    if (state.toolResults.length < tools.length) {
      return 'execute_tool';
    }
    return END;
  });
  
  return graph.compile();
}

Creates a resilient tool chain with automatic retry logic and fallback mechanisms for production reliability.
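The retry behavior the chain depends on can be sketched as a standalone helper - a simplified, dependency-free equivalent of the retry call used above:

```typescript
// Minimal retry-with-delay helper: try up to `retries` times, pausing
// `delayMs` between attempts, and rethrow the last error on exhaustion.
async function retryWithDelay<T>(
  fn: () => Promise<T>,
  retries: number,
  delayMs: number
): Promise<T> {
  let lastError: unknown;
  for (let attempt = 1; attempt <= retries; attempt++) {
    try {
      return await fn();
    } catch (err) {
      lastError = err;
      if (attempt < retries) {
        // Wait before the next attempt (exponential backoff would
        // multiply delayMs here)
        await new Promise(res => setTimeout(res, delayMs));
      }
    }
  }
  throw lastError;
}
```

Keeping the delay between attempts matters in serverless: most transient tool failures (rate limits, cold upstreams) resolve within a second, so an immediate retry mostly burns the error budget.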

3. Dynamic Tool Selection

// lib/orchestration/dynamic-tools.ts
import { ChatGoogleGenerativeAI } from '@langchain/google-genai';
import { z } from 'zod';
import { tool } from '@langchain/core/tools';
import { maxBy } from 'es-toolkit/array';
import { memoize } from 'es-toolkit/function';

export class DynamicToolSelector {
  private model: ChatGoogleGenerativeAI;
  private toolRegistry: Map<string, any>;
  private performanceMetrics: Map<string, any>;
  
  constructor() {
    this.model = new ChatGoogleGenerativeAI({
      modelName: 'gemini-2.5-flash',
      temperature: 0,
    });
    this.toolRegistry = new Map();
    this.performanceMetrics = new Map();
  }
  
  // Register tool with metadata
  registerTool(toolDef: any, metadata: {
    cost: number;
    latency: number;
    reliability: number;
    capabilities: string[];
  }) {
    this.toolRegistry.set(toolDef.name, {
      tool: toolDef,
      metadata,
    });
  }
  
  // Cache selection per (requirement, constraints) pair; es-toolkit's
  // memoize caches by a single argument, so a small hand-rolled cache
  // keyed on both values is simpler here
  private selectionCache = new Map<string, Promise<any>>();
  
  private selectOptimalTool(requirement: string, constraints: any) {
    const cacheKey = `${requirement}-${JSON.stringify(constraints)}`;
    if (!this.selectionCache.has(cacheKey)) {
      this.selectionCache.set(
        cacheKey,
        this.findBestTool(requirement, constraints)
      );
    }
    return this.selectionCache.get(cacheKey)!;
  }
  
  private async findBestTool(requirement: string, constraints: any) {
    const availableTools = Array.from(this.toolRegistry.values());
    
    // Score each tool against the requirement and constraints
    const scoredTools = availableTools.map((entry) => ({
      ...entry,
      score: this.calculateToolScore(entry, requirement, constraints),
    }));
    
    // Select the highest-scoring tool
    return maxBy(scoredTools, (t) => t.score)?.tool;
  }
  
  private calculateToolScore(
    entry: any,
    requirement: string,
    constraints: any
  ): number {
    const { metadata } = entry;
    let score = 0;
    
    // Match capabilities
    const capabilityMatch = metadata.capabilities.some((cap: string) =>
      requirement.toLowerCase().includes(cap.toLowerCase())
    );
    if (capabilityMatch) score += 40;
    
    // Consider constraints
    if (constraints.maxLatency && metadata.latency <= constraints.maxLatency) {
      score += 20;
    }
    if (constraints.maxCost && metadata.cost <= constraints.maxCost) {
      score += 20;
    }
    
    // Reliability bonus
    score += metadata.reliability * 20;
    
    // Historical performance
    const metrics = this.performanceMetrics.get(entry.tool.name);
    if (metrics?.successRate) {
      score += metrics.successRate * 10;
    }
    
    return score;
  }
  
  async executeDynamic(
    requirement: string,
    constraints: {
      maxLatency?: number;
      maxCost?: number;
      preferredTools?: string[];
    } = {}
  ) {
    // Select optimal tool
    const selectedTool = await this.selectOptimalTool(
      requirement,
      constraints
    );
    
    if (!selectedTool) {
      throw new Error('No suitable tool found for requirement');
    }
    
    const startTime = Date.now();
    try {
      // Execute selected tool
      const result = await selectedTool.invoke({ query: requirement });
      
      // Update performance metrics
      this.updateMetrics(selectedTool.name, {
        success: true,
        latency: Date.now() - startTime,
      });
      
      return {
        tool: selectedTool.name,
        result,
        metrics: {
          latency: Date.now() - startTime,
          cost: this.toolRegistry.get(selectedTool.name)?.metadata.cost,
        },
      };
    } catch (error) {
      // Update failure metrics
      this.updateMetrics(selectedTool.name, {
        success: false,
        error: error.message,
      });
      
      throw error;
    }
  }
  
  private updateMetrics(toolName: string, metrics: any) {
    const existing = this.performanceMetrics.get(toolName) || {
      totalCalls: 0,
      successfulCalls: 0,
      totalLatency: 0,
    };
    
    existing.totalCalls++;
    if (metrics.success) {
      existing.successfulCalls++;
      existing.totalLatency += metrics.latency;
    }
    
    existing.successRate = existing.successfulCalls / existing.totalCalls;
    existing.avgLatency = existing.totalLatency / existing.successfulCalls;
    
    this.performanceMetrics.set(toolName, existing);
  }
}

Implements dynamic tool selection based on requirements, constraints, and historical performance metrics.
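The scoring weights are easiest to verify with the metadata pulled out into a plain function - the same weighting as `calculateToolScore` above, with `ToolMeta` as an illustrative type:

```typescript
type ToolMeta = {
  cost: number;        // relative cost per call
  latency: number;     // expected latency in ms
  reliability: number; // 0..1
  capabilities: string[];
};

// Capability match dominates (40 pts); constraint fit adds 20 pts each;
// reliability contributes up to 20 pts.
function scoreTool(
  meta: ToolMeta,
  requirement: string,
  constraints: { maxLatency?: number; maxCost?: number }
): number {
  let score = 0;
  if (meta.capabilities.some(c => requirement.toLowerCase().includes(c.toLowerCase()))) {
    score += 40;
  }
  if (constraints.maxLatency !== undefined && meta.latency <= constraints.maxLatency) {
    score += 20;
  }
  if (constraints.maxCost !== undefined && meta.cost <= constraints.maxCost) {
    score += 20;
  }
  score += meta.reliability * 20;
  return score;
}
```

Because the capability match is worth twice any single constraint, a relevant-but-slow tool still outranks a fast-but-irrelevant one - usually the behavior you want from a selector.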

4. Stream Tool Results

// app/api/stream-tools/route.ts
import { createToolAgent } from '@/lib/agents/tool-agent';
import { ParallelToolOrchestrator } from '@/lib/orchestration/parallel-tools';

export const runtime = 'nodejs';
export const maxDuration = 777;

export async function POST(req: Request) {
  const { message } = await req.json();
  
  const encoder = new TextEncoder();
  const stream = new TransformStream();
  const writer = stream.writable.getWriter();
  
  const { model, tools } = createToolAgent();
  const orchestrator = new ParallelToolOrchestrator(tools);
  
  // Process in background
  (async () => {
    try {
      // Stream initial thinking
      await writer.write(
        encoder.encode(`data: ${JSON.stringify({
          type: 'thinking',
          content: 'Analyzing request and selecting tools...'
        })}\n\n`)
      );
      
      // Get tool calls from LLM
      const response = await model.invoke([
        { role: 'user', content: message }
      ]);
      
      if (response.tool_calls) {
        // Stream tool execution updates
        for (const call of response.tool_calls) {
          await writer.write(
            encoder.encode(`data: ${JSON.stringify({
              type: 'tool_start',
              tool: call.name,
              args: call.args
            })}\n\n`)
          );
          
          const tool = tools.find(t => t.name === call.name);
          if (tool) {
            const result = await tool.invoke(call.args);
            
            await writer.write(
              encoder.encode(`data: ${JSON.stringify({
                type: 'tool_complete',
                tool: call.name,
                result
              })}\n\n`)
            );
          }
        }
      }
      
      // Stream final response
      await writer.write(
        encoder.encode(`data: ${JSON.stringify({
          type: 'complete',
          content: response.content
        })}\n\n`)
      );
      
    } catch (error) {
      await writer.write(
        encoder.encode(`data: ${JSON.stringify({
          type: 'error',
          error: error.message
        })}\n\n`)
      );
    } finally {
      await writer.close();
    }
  })();
  
  return new Response(stream.readable, {
    headers: {
      'Content-Type': 'text/event-stream',
      'Cache-Control': 'no-cache',
      'Connection': 'keep-alive',
    },
  });
}

Implements server-sent events to stream tool execution progress in real-time.
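The wire format above is simply `data: <json>` blocks separated by blank lines. These two helpers (hypothetical names, not part of the route) make the framing explicit and could back both the route and a client-side parser:

```typescript
// Encode one SSE event: a "data:" field followed by a blank line.
function formatSSE(payload: object): string {
  return `data: ${JSON.stringify(payload)}\n\n`;
}

// Decode a buffer of SSE events back into the JSON payloads.
function parseSSE(raw: string): any[] {
  return raw
    .split('\n\n')                               // events end with a blank line
    .filter(block => block.startsWith('data: ')) // keep only data fields
    .map(block => JSON.parse(block.slice('data: '.length)));
}
```

Centralizing the framing like this keeps the server and client from drifting apart on details such as the trailing `\n\n`, which is what actually delimits events in the SSE protocol.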

5. Frontend Integration with TanStack Query

// components/ToolInterface.tsx
'use client';

import { useMutation } from '@tanstack/react-query';
import { useState, useCallback } from 'react';
import { debounce } from 'es-toolkit/function';

interface ToolResult {
  tool: string;
  result: any;
  duration?: number;
}

export default function ToolInterface() {
  const [input, setInput] = useState('');
  const [streamedResults, setStreamedResults] = useState<ToolResult[]>([]);
  const [isStreaming, setIsStreaming] = useState(false);
  
  const streamTools = useCallback(
    debounce(async (message: string) => {
      setIsStreaming(true);
      setStreamedResults([]);
      
      // EventSource only supports GET, so POST the message and read
      // the SSE stream from the response body instead
      const response = await fetch('/api/stream-tools', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message }),
      });
      
      const reader = response.body!.getReader();
      const decoder = new TextDecoder();
      let buffer = '';
      
      while (true) {
        const { done, value } = await reader.read();
        if (done) break;
        buffer += decoder.decode(value, { stream: true });
        
        // SSE events are separated by blank lines; keep any partial
        // event in the buffer for the next chunk
        const events = buffer.split('\n\n');
        buffer = events.pop() ?? '';
        
        for (const event of events) {
          if (!event.startsWith('data: ')) continue;
          const data = JSON.parse(event.slice('data: '.length));
          
          switch (data.type) {
            case 'tool_start':
              setStreamedResults(prev => [...prev, {
                tool: data.tool,
                result: 'Executing...',
              }]);
              break;
              
            case 'tool_complete':
              setStreamedResults(prev =>
                prev.map(r =>
                  r.tool === data.tool
                    ? { ...r, result: data.result }
                    : r
                )
              );
              break;
              
            case 'error':
              console.error('Stream error:', data.error);
              break;
          }
        }
      }
      
      setIsStreaming(false);
    }, 300),
    []
  );
  
  // Non-streaming fallback: the same request via a TanStack Query mutation
  const executeTool = useMutation({
    mutationFn: async (message: string) => {
      const response = await fetch('/api/tools', {
        method: 'POST',
        headers: { 'Content-Type': 'application/json' },
        body: JSON.stringify({ message }),
      });
      return response.json();
    },
  });
  
  const handleSubmit = (e: React.FormEvent) => {
    e.preventDefault();
    if (input.trim()) {
      streamTools(input);
    }
  };
  
  return (
    <div className="card bg-base-100 shadow-xl">
      <div className="card-body">
        <h2 className="card-title">AI Tool Executor</h2>
        
        <form onSubmit={handleSubmit}>
          <div className="form-control">
            <label className="label">
              <span className="label-text">What would you like me to do?</span>
            </label>
            <textarea
              className="textarea textarea-bordered h-24"
              placeholder="Calculate the square root of 144, then check weather in Tokyo"
              value={input}
              onChange={(e) => setInput(e.target.value)}
            />
          </div>
          
          <div className="form-control mt-4">
            <button 
              type="submit" 
              className="btn btn-primary"
              disabled={isStreaming || !input.trim()}
            >
              {isStreaming ? (
                <>
                  <span className="loading loading-spinner"></span>
                  Executing Tools...
                </>
              ) : 'Execute'}
            </button>
          </div>
        </form>
        
        {streamedResults.length > 0 && (
          <div className="mt-6">
            <h3 className="text-lg font-semibold mb-3">Tool Results:</h3>
            <div className="space-y-3">
              {streamedResults.map((result, idx) => (
                <div key={idx} className="alert alert-info">
                  <div>
                    <span className="font-bold">{result.tool}:</span>
                    <pre className="mt-2 text-sm">{
                      typeof result.result === 'object' 
                        ? JSON.stringify(result.result, null, 2)
                        : result.result
                    }</pre>
                  </div>
                </div>
              ))}
            </div>
          </div>
        )}
      </div>
    </div>
  );
}

React component with real-time streaming of tool execution results using server-sent events.

6. Tool Monitoring Dashboard

// app/tools/dashboard/page.tsx
'use client';

import { useQuery } from '@tanstack/react-query';

interface ToolMetrics {
  name: string;
  calls: number;
  avgLatency: number;
  successRate: number;
  lastUsed: string;
}

export default function ToolDashboard() {
  const { data: liveMetrics } = useQuery<ToolMetrics[]>({
    queryKey: ['tool-metrics'],
    queryFn: async () => {
      const res = await fetch('/api/tools/metrics');
      return res.json();
    },
    refetchInterval: 5000, // Poll every 5 seconds
  });
  
  // Read straight from the query cache; no mirrored local state needed
  const metrics = liveMetrics ?? [];
  
  const totalCalls = metrics.reduce((sum, m) => sum + m.calls, 0);
  const avgSuccessRate = metrics.length > 0
    ? metrics.reduce((sum, m) => sum + m.successRate, 0) / metrics.length
    : 0;
  
  return (
    <div className="container mx-auto p-6">
      <h1 className="text-4xl font-bold mb-8">Tool Monitoring Dashboard</h1>
      
      <div className="stats shadow mb-8">
        <div className="stat">
          <div className="stat-title">Total Tool Calls</div>
          <div className="stat-value">{totalCalls}</div>
          <div className="stat-desc">Across all tools</div>
        </div>
        
        <div className="stat">
          <div className="stat-title">Average Success Rate</div>
          <div className="stat-value">{(avgSuccessRate * 100).toFixed(1)}%</div>
          <div className="stat-desc">System reliability</div>
        </div>
        
        <div className="stat">
          <div className="stat-title">Active Tools</div>
          <div className="stat-value">{metrics.length}</div>
          <div className="stat-desc">Currently registered</div>
        </div>
      </div>
      
      <div className="overflow-x-auto">
        <table className="table table-zebra w-full">
          <thead>
            <tr>
              <th>Tool Name</th>
              <th>Calls</th>
              <th>Avg Latency</th>
              <th>Success Rate</th>
              <th>Last Used</th>
              <th>Status</th>
            </tr>
          </thead>
          <tbody>
            {metrics.map((metric) => (
              <tr key={metric.name}>
                <td className="font-medium">{metric.name}</td>
                <td>{metric.calls}</td>
                <td>{metric.avgLatency.toFixed(0)}ms</td>
                <td>
                  <div className="flex items-center gap-2">
                    <progress 
                      className="progress progress-success w-20" 
                      value={metric.successRate * 100} 
                      max="100"
                    />
                    <span>{(metric.successRate * 100).toFixed(1)}%</span>
                  </div>
                </td>
                <td>{new Date(metric.lastUsed).toLocaleString()}</td>
                <td>
                  <div className={`badge ${
                    metric.successRate > 0.95 ? 'badge-success' :
                    metric.successRate > 0.8 ? 'badge-warning' :
                    'badge-error'
                  }`}>
                    {metric.successRate > 0.95 ? 'Healthy' :
                     metric.successRate > 0.8 ? 'Degraded' :
                     'Critical'}
                  </div>
                </td>
              </tr>
            ))}
          </tbody>
        </table>
      </div>
    </div>
  );
}

Real-time monitoring dashboard for tracking tool performance, success rates, and system health.

Conclusion

Tool Use patterns transform AI agents from passive responders into active executors that interact with external systems. This guide demonstrated production-ready patterns - parallel execution, error recovery, dynamic tool selection, and real-time streaming - optimized for Vercel's serverless platform, with maxDuration set to 777 seconds as a buffer under the 800-second limit. Combining LangChain's standardized tool interfaces, LangGraph's orchestration, and es-toolkit's utilities gives a robust foundation for AI agents that reliably execute complex, multi-step workflows in production.