DRAFT Agentic Design Patterns - Reasoning Techniques
Learn how to implement advanced reasoning techniques including Chain-of-Thought (CoT), Tree-of-Thought (ToT), and Self-Reflection patterns using LangChain, LangGraph, and TypeScript in serverless environments.
Mental Model: The Symphony Orchestra
Think of reasoning techniques as a symphony orchestra where different sections (reasoning patterns) work together to create complex harmonies (solutions). The conductor (LangGraph) coordinates multiple instruments (agents) - strings handle linear reasoning (CoT), woodwinds explore variations (ToT), brass provides feedback (Self-Reflection), and percussion maintains rhythm (state management). Just as musicians can play solo or in ensemble, agents can reason independently or collaborate for richer outcomes.
Basic Example: Chain-of-Thought Reasoning
1. Setup the Basic CoT Agent
// app/agents/cot-agent/route.ts
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { StateGraph, Annotation } from "@langchain/langgraph";
import { HumanMessage, BaseMessage } from "@langchain/core/messages";
// Define the reasoning state
const ReasoningState = Annotation.Root({
messages: Annotation<BaseMessage[]>({
reducer: (x, y) => x.concat(y),
default: () => [],
}),
reasoning_steps: Annotation<string[]>({
reducer: (x, y) => x.concat(y),
default: () => [],
}),
final_answer: Annotation<string>({
reducer: (_, y) => y,
default: () => "",
}),
});
// Create the reasoning chain
const model = new ChatGoogleGenerativeAI({
modelName: "gemini-pro",
temperature: 0.7,
});
// CoT reasoning node
async function reasoningNode(state: typeof ReasoningState.State) {
const cotPrompt = `You are a reasoning agent. Break down the problem step by step.
Problem: ${state.messages[state.messages.length - 1].content}
Think through this step-by-step:
1. First, identify what we're trying to solve
2. Break it into smaller parts
3. Solve each part
4. Combine the solutions
Format your response as:
STEP 1: [reasoning]
STEP 2: [reasoning]
...
FINAL ANSWER: [answer]`;
const response = await model.invoke([
new HumanMessage(cotPrompt)
]);
// Parse the response
const content = response.content as string;
const steps = content.match(/STEP \d+: (.+)/g) || [];
const finalAnswer = content.match(/FINAL ANSWER: (.+)/)?.[1] || "";
return {
messages: [response],
reasoning_steps: steps,
final_answer: finalAnswer,
};
}
// Build the graph
const workflow = new StateGraph(ReasoningState)
.addNode("reason", reasoningNode)
.addEdge("__start__", "reason")
.addEdge("reason", "__end__");
const app = workflow.compile();
// API Route Handler
export async function POST(request: Request) {
const { question } = await request.json();
const result = await app.invoke({
messages: [new HumanMessage(question)],
});
return Response.json({
reasoning_steps: result.reasoning_steps,
final_answer: result.final_answer,
});
}
The basic CoT agent breaks down problems into sequential reasoning steps. Each step builds on the previous one, creating a logical chain of thought.
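To smoke-test the chain without the UI, you can invoke the compiled graph directly. A minimal sketch, assuming the graph construction above is factored into a shared module (Next.js route files may only export HTTP handlers) and that GOOGLE_API_KEY is set; the module path and sample question are hypothetical:
// scripts/try-cot.ts (hypothetical smoke test)
import { HumanMessage } from "@langchain/core/messages";
// Hypothetical shared module exporting the compiled `app` graph from above
import { app } from "../app/agents/cot-agent/graph";

async function main() {
  const result = await app.invoke({
    messages: [new HumanMessage("A train covers 120 km in 1.5 hours. What is its average speed?")],
  });
  // Each entry is one "STEP n: ..." line parsed from the model output
  for (const step of result.reasoning_steps) console.log(step);
  console.log("FINAL:", result.final_answer);
}

main().catch(console.error);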
2. Frontend Integration with React Query
// app/components/ReasoningAgent.tsx
'use client';
import { useMutation } from '@tanstack/react-query';
import { useState } from 'react';
import { map } from 'es-toolkit';
interface ReasoningResponse {
reasoning_steps: string[];
final_answer: string;
}
async function queryReasoning(question: string): Promise<ReasoningResponse> {
  const response = await fetch('/agents/cot-agent', {
    method: 'POST',
    headers: { 'Content-Type': 'application/json' },
    body: JSON.stringify({ question }),
  });
  if (!response.ok) {
    throw new Error(`Reasoning request failed: ${response.status}`);
  }
  return response.json();
}
export function ReasoningAgent() {
const [question, setQuestion] = useState('');
const mutation = useMutation({
mutationFn: queryReasoning,
});
return (
<div className="card bg-base-100 shadow-xl">
<div className="card-body">
<h2 className="card-title">Chain-of-Thought Reasoning</h2>
<textarea
className="textarea textarea-bordered"
placeholder="Ask a complex question..."
value={question}
onChange={(e) => setQuestion(e.target.value)}
/>
<button
className="btn btn-primary"
onClick={() => mutation.mutate(question)}
disabled={mutation.isPending}
>
{mutation.isPending ? 'Reasoning...' : 'Start Reasoning'}
</button>
{mutation.isSuccess && (
<div className="mt-4 space-y-2">
<div className="text-sm opacity-70">Reasoning Steps:</div>
{map(mutation.data.reasoning_steps, (step, idx) => (
<div key={idx} className="p-2 bg-base-200 rounded">
{step}
</div>
))}
<div className="divider"></div>
<div className="alert alert-success">
<span>{mutation.data.final_answer}</span>
</div>
</div>
)}
</div>
</div>
);
}
React Query manages the request lifecycle (pending, success, error), and the UI renders each parsed reasoning step alongside the final answer once the response arrives.
Advanced Example: Multi-Agent Reasoning with ToT and Self-Reflection
1. Tree-of-Thought with Parallel Exploration
// app/agents/advanced-reasoning/tree-of-thought.ts
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";
import { z } from "zod";
import { maxBy, sortBy, filter, map } from "es-toolkit";
import { RunnableConfig } from "@langchain/core/runnables";
// Define thought node structure
const ThoughtSchema = z.object({
id: z.string(),
content: z.string(),
score: z.number(),
parent_id: z.string().optional(),
depth: z.number(),
});
type Thought = z.infer<typeof ThoughtSchema>;
// State definition with thoughts tree
const ToTState = Annotation.Root({
problem: Annotation<string>(),
thoughts: Annotation<Thought[]>({
reducer: (x, y) => [...x, ...y],
default: () => [],
}),
current_depth: Annotation<number>({
reducer: (_, y) => y,
default: () => 0,
}),
best_solution: Annotation<string>({
reducer: (_, y) => y,
default: () => "",
}),
exploration_paths: Annotation<string[][]>({
reducer: (x, y) => [...x, ...y],
default: () => [],
}),
});
const model = new ChatGoogleGenerativeAI({
modelName: "gemini-pro",
temperature: 0.8,
});
// Generate multiple thoughts in parallel
async function generateThoughts(
state: typeof ToTState.State,
config: RunnableConfig
): Promise<Partial<typeof ToTState.State>> {
const parentThoughts = filter(
state.thoughts,
(t) => t.depth === state.current_depth - 1
);
// If no parent thoughts, use the original problem
const prompts = parentThoughts.length > 0
? map(parentThoughts, (parent) => ({
parent_id: parent.id,
prompt: `Given this reasoning step: "${parent.content}"
Generate 3 different ways to proceed with solving: ${state.problem}
Be creative and explore different approaches.`,
}))
: [{
parent_id: "root",
prompt: `Problem: ${state.problem}
Generate 3 different initial approaches to solve this problem.`,
}];
// Generate thoughts in parallel
const thoughtPromises = map(prompts, async ({ parent_id, prompt }) => {
const response = await model.invoke([new HumanMessage(prompt)]);
const content = response.content as string;
// Parse multiple thoughts from response
const thoughtTexts = content.split('\n').filter(line => line.trim());
return map(thoughtTexts.slice(0, 3), (text, idx) => ({
id: `${parent_id}-${state.current_depth}-${idx}`,
content: text,
score: 0,
parent_id,
depth: state.current_depth,
}));
});
const allThoughts = (await Promise.all(thoughtPromises)).flat();
return {
thoughts: allThoughts,
};
}
// Score thoughts based on quality
async function scoreThoughts(
state: typeof ToTState.State
): Promise<Partial<typeof ToTState.State>> {
const currentThoughts = filter(
state.thoughts,
(t) => t.depth === state.current_depth && t.score === 0
);
const scoringPrompt = `Rate these solution approaches for the problem: "${state.problem}"
Score each from 0-10 based on:
- Logical coherence
- Progress toward solution
- Creativity
Approaches:
${map(currentThoughts, (t, i) => `${i + 1}. ${t.content}`).join('\n')}
Return scores as: [score1, score2, ...]`;
const response = await model.invoke([new HumanMessage(scoringPrompt)]);
const scores = JSON.parse((response.content as string).match(/\[[\d,\s]+\]/)?.[0] || "[]");
const scoredThoughts = map(currentThoughts, (thought, idx) => ({
...thought,
score: scores[idx] || 0,
}));
// Update only the scored thoughts
const updatedThoughts = map(state.thoughts, (t) => {
const scored = scoredThoughts.find(st => st.id === t.id);
return scored || t;
});
return {
thoughts: updatedThoughts,
};
}
// Prune low-scoring branches
async function pruneThoughts(
state: typeof ToTState.State
): Promise<Partial<typeof ToTState.State>> {
const currentThoughts = filter(
state.thoughts,
(t) => t.depth === state.current_depth
);
// Keep top 40% of thoughts
const sorted = sortBy(currentThoughts, [(t) => -t.score]);
const keepCount = Math.max(2, Math.floor(sorted.length * 0.4));
const keepIds = new Set(map(sorted.slice(0, keepCount), t => t.id));
const prunedThoughts = filter(state.thoughts, (t) =>
  t.depth < state.current_depth || keepIds.has(t.id)
);
// Advance the frontier so the next generate pass expands the surviving thoughts;
// without this increment the graph would loop forever at depth 0
return {
  thoughts: prunedThoughts,
  current_depth: state.current_depth + 1,
};
}
// Check if we should continue exploration
function shouldContinue(state: typeof ToTState.State): "expand" | "synthesize" {
const MAX_DEPTH = 3;
return state.current_depth < MAX_DEPTH ? "expand" : "synthesize";
}
// Synthesize final solution from best path
async function synthesizeSolution(
state: typeof ToTState.State
): Promise<Partial<typeof ToTState.State>> {
// Find the best leaf at the deepest explored level (prune incremented
// current_depth after the final expansion, so leaves sit one level below it)
const leafThoughts = filter(
  state.thoughts,
  (t) => t.depth === state.current_depth - 1
);
const bestLeaf = maxBy(leafThoughts, (t) => t.score);
if (!bestLeaf) {
  return { best_solution: "No solution found" };
}
// Trace back to root
const path: Thought[] = [];
let current: Thought | undefined = bestLeaf;
while (current) {
path.unshift(current);
current = state.thoughts.find(t => t.id === current?.parent_id);
}
const synthesisPrompt = `Synthesize a complete solution from this reasoning path:
${map(path, (t, i) => `Step ${i + 1}: ${t.content}`).join('\n')}
Provide a clear, comprehensive answer to: ${state.problem}`;
const response = await model.invoke([new HumanMessage(synthesisPrompt)]);
return {
best_solution: response.content as string,
exploration_paths: [map(path, t => t.content)],
};
}
// Build the Tree-of-Thought workflow
export function createToTWorkflow() {
const workflow = new StateGraph(ToTState)
.addNode("generate", generateThoughts)
.addNode("score", scoreThoughts)
.addNode("prune", pruneThoughts)
.addNode("synthesize", synthesizeSolution)
.addEdge("__start__", "generate")
.addEdge("generate", "score")
.addEdge("score", "prune")
.addConditionalEdges(
"prune",
shouldContinue,
{
expand: "generate",
synthesize: "synthesize",
}
)
.addEdge("synthesize", "__end__");
return workflow.compile();
}
Tree-of-Thought explores multiple reasoning paths in parallel, scoring and pruning to find optimal solutions.
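A minimal driver for the compiled workflow, shown as a sketch; the import path and sample problem are illustrative:
// Hypothetical driver for the Tree-of-Thought workflow
import { createToTWorkflow } from "@/app/agents/advanced-reasoning/tree-of-thought";

async function main() {
  const tot = createToTWorkflow();
  const result = await tot.invoke({
    problem: "Design a fair scheduling policy for a shared GPU cluster",
    thoughts: [],
    current_depth: 0,
  });
  console.log(result.best_solution);
  // exploration_paths[0] holds the root-to-best-leaf chain of thoughts
  console.log(result.exploration_paths[0]);
}

main().catch(console.error);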
2. Self-Reflection Agent for Solution Refinement
// app/agents/advanced-reasoning/self-reflection.ts
import { StateGraph, Annotation } from "@langchain/langgraph";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";
import { last } from "es-toolkit";
// Reflection state with critique history
const ReflectionState = Annotation.Root({
problem: Annotation<string>(),
current_solution: Annotation<string>({
reducer: (_, y) => y,
}),
critiques: Annotation<Array<{
critique: string;
suggestions: string[];
score: number;
}>>({
reducer: (x, y) => [...x, ...y],
default: () => [],
}),
iteration: Annotation<number>({
reducer: (_, y) => y,
default: () => 0,
}),
final_solution: Annotation<string>({
reducer: (_, y) => y,
default: () => "",
}),
});
const model = new ChatGoogleGenerativeAI({
modelName: "gemini-pro",
temperature: 0.7,
});
// Actor: Generate initial solution
async function generateSolution(
state: typeof ReflectionState.State
): Promise<Partial<typeof ReflectionState.State>> {
const lastCritique = last(state.critiques);
const prompt = lastCritique
? `Problem: ${state.problem}
Previous solution: ${state.current_solution}
Critique: ${lastCritique.critique}
Suggestions: ${lastCritique.suggestions.join(', ')}
Generate an improved solution addressing these points.`
: `Solve this problem: ${state.problem}
Provide a detailed solution with clear reasoning.`;
const response = await model.invoke([new HumanMessage(prompt)]);
return {
current_solution: response.content as string,
iteration: state.iteration + 1,
};
}
// Evaluator: Critique the solution
async function evaluateSolution(
state: typeof ReflectionState.State
): Promise<Partial<typeof ReflectionState.State>> {
const critiquePrompt = `Evaluate this solution critically:
Problem: ${state.problem}
Solution: ${state.current_solution}
Provide:
1. A detailed critique identifying weaknesses
2. Specific suggestions for improvement
3. A score from 0-10
Format as JSON: {
"critique": "...",
"suggestions": ["...", "..."],
"score": 0
}`;
const response = await model.invoke([new HumanMessage(critiquePrompt)]);
const raw = response.content as string;
try {
  // Models often wrap JSON in code fences; extract the first object literal
  const jsonText = raw.match(/\{[\s\S]*\}/)?.[0] ?? raw;
  const evaluation = JSON.parse(jsonText);
  return {
    critiques: [evaluation],
  };
} catch {
  // Fallback if JSON parsing fails
  return {
    critiques: [{
      critique: raw,
      suggestions: [],
      score: 5,
    }],
  };
}
}
// Check if solution is good enough
function shouldContinueReflection(
state: typeof ReflectionState.State
): "reflect" | "finalize" {
const MAX_ITERATIONS = 3;
const GOOD_SCORE = 8;
const lastCritique = last(state.critiques);
if (state.iteration >= MAX_ITERATIONS) {
return "finalize";
}
if (lastCritique && lastCritique.score >= GOOD_SCORE) {
return "finalize";
}
return "reflect";
}
// Finalize the best solution
async function finalizeSolution(
state: typeof ReflectionState.State
): Promise<Partial<typeof ReflectionState.State>> {
return {
final_solution: state.current_solution,
};
}
// Build the self-reflection workflow
export function createReflectionWorkflow() {
const workflow = new StateGraph(ReflectionState)
.addNode("generate", generateSolution)
.addNode("evaluate", evaluateSolution)
.addNode("finalize", finalizeSolution)
.addEdge("__start__", "generate")
.addEdge("generate", "evaluate")
.addConditionalEdges(
"evaluate",
shouldContinueReflection,
{
reflect: "generate",
finalize: "finalize",
}
)
.addEdge("finalize", "__end__");
return workflow.compile();
}
Self-reflection iteratively improves solutions through critique and refinement cycles.
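As with the ToT workflow, the compiled graph can be exercised directly. A sketch with an illustrative import path and problem:
// Hypothetical driver for the self-reflection workflow
import { createReflectionWorkflow } from "@/app/agents/advanced-reasoning/self-reflection";

async function main() {
  const reflect = createReflectionWorkflow();
  const result = await reflect.invoke({
    problem: "Design a retry policy for flaky HTTP calls",
    current_solution: "",
    critiques: [],
    iteration: 0,
  });
  console.log(`Converged after ${result.iteration} iteration(s)`);
  console.log(result.final_solution);
}

main().catch(console.error);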
3. Orchestrating Multiple Reasoning Patterns
// app/agents/advanced-reasoning/orchestrator.ts
import { StateGraph, Annotation } from "@langchain/langgraph";
import { createToTWorkflow } from "./tree-of-thought";
import { createReflectionWorkflow } from "./self-reflection";
import { ChatGoogleGenerativeAI } from "@langchain/google-genai";
import { HumanMessage } from "@langchain/core/messages";
import { map } from "es-toolkit";
// Master orchestrator state
const OrchestratorState = Annotation.Root({
problem: Annotation<string>(),
problem_type: Annotation<string>({
reducer: (_, y) => y,
default: () => "unknown",
}),
tot_result: Annotation<{
solution: string;
paths: string[][];
}>(),
reflection_result: Annotation<{
solution: string;
iterations: number;
}>(),
final_answer: Annotation<string>({
reducer: (_, y) => y,
default: () => "",
}),
reasoning_method: Annotation<string>({
reducer: (_, y) => y,
default: () => "",
}),
});
const model = new ChatGoogleGenerativeAI({
modelName: "gemini-pro",
temperature: 0.3,
});
// Classify problem to choose reasoning strategy
async function classifyProblem(
state: typeof OrchestratorState.State
): Promise<Partial<typeof OrchestratorState.State>> {
const classificationPrompt = `Classify this problem to determine the best reasoning approach:
Problem: ${state.problem}
Categories:
- "exploratory": Problems requiring multiple solution paths (use Tree-of-Thought)
- "refinement": Problems needing iterative improvement (use Self-Reflection)
- "hybrid": Complex problems needing both exploration and refinement
Return only the category name.`;
const response = await model.invoke([new HumanMessage(classificationPrompt)]);
// Strip quotes or punctuation the model may add around the category name
const problemType = (response.content as string).toLowerCase().replace(/[^a-z]/g, "");
return {
  problem_type: problemType,
};
}
// Run Tree-of-Thought reasoning
async function runToT(
state: typeof OrchestratorState.State
): Promise<Partial<typeof OrchestratorState.State>> {
const totWorkflow = createToTWorkflow();
const result = await totWorkflow.invoke({
problem: state.problem,
thoughts: [],
current_depth: 0,
});
return {
tot_result: {
solution: result.best_solution,
paths: result.exploration_paths,
},
reasoning_method: "Tree-of-Thought",
};
}
// Run Self-Reflection reasoning
async function runReflection(
state: typeof OrchestratorState.State
): Promise<Partial<typeof OrchestratorState.State>> {
const reflectionWorkflow = createReflectionWorkflow();
const result = await reflectionWorkflow.invoke({
problem: state.problem,
current_solution: "",
critiques: [],
iteration: 0,
});
return {
reflection_result: {
solution: result.final_solution,
iterations: result.iteration,
},
reasoning_method: "Self-Reflection",
};
}
// Combine results from multiple reasoning methods
async function synthesizeResults(
state: typeof OrchestratorState.State
): Promise<Partial<typeof OrchestratorState.State>> {
const solutions: Array<{ method: string; solution: string }> = [];
if (state.tot_result) {
solutions.push({
method: "Tree-of-Thought",
solution: state.tot_result.solution,
});
}
if (state.reflection_result) {
solutions.push({
method: "Self-Reflection",
solution: state.reflection_result.solution,
});
}
if (solutions.length === 1) {
return {
final_answer: solutions[0].solution,
};
}
// If we have multiple solutions, synthesize them
const synthesisPrompt = `Synthesize these solutions into a comprehensive answer:
Problem: ${state.problem}
${map(solutions, s => `${s.method} Solution:\n${s.solution}`).join('\n\n')}
Create a final answer that combines the best insights from each approach.`;
const response = await model.invoke([new HumanMessage(synthesisPrompt)]);
return {
final_answer: response.content as string,
reasoning_method: "Hybrid (ToT + Self-Reflection)",
};
}
// Route based on problem type
function routeReasoning(state: typeof OrchestratorState.State): string | string[] {
switch (state.problem_type) {
case "exploratory":
return "tot";
case "refinement":
return "reflection";
case "hybrid":
return ["tot", "reflection"];
default:
return "tot"; // Default fallback
}
}
// Create the master orchestrator
export function createReasoningOrchestrator() {
const workflow = new StateGraph(OrchestratorState)
.addNode("classify", classifyProblem)
.addNode("tot", runToT)
.addNode("reflection", runReflection)
.addNode("synthesize", synthesizeResults)
.addEdge("__start__", "classify")
.addConditionalEdges(
"classify",
routeReasoning,
{
tot: "tot",
reflection: "reflection",
}
)
.addEdge("tot", "synthesize")
.addEdge("reflection", "synthesize")
.addEdge("synthesize", "__end__");
return workflow.compile();
}
// app/agents/advanced-reasoning/orchestrator/route.ts
// Kept in its own file: Next.js route modules may only export HTTP handlers
import { createReasoningOrchestrator } from "../orchestrator";

export async function POST(request: Request) {
  const { problem } = await request.json();
  const orchestrator = createReasoningOrchestrator();
// Create streaming response
const encoder = new TextEncoder();
const stream = new ReadableStream({
async start(controller) {
  try {
    let finalState: any;
    // Single pass: report progress AND capture the final state from the
    // stream (calling invoke() afterwards would rerun every LLM call)
    for await (const event of orchestrator.streamEvents(
      { problem },
      { version: "v2" }
    )) {
      if (event.event === "on_chain_stream") {
        const chunk = encoder.encode(
          `data: ${JSON.stringify({
            type: "progress",
            node: event.name,
            data: event.data,
          })}\n\n`
        );
        controller.enqueue(chunk);
      }
      // The compiled graph's own on_chain_end fires last and carries the
      // full output state, so the final matching event wins
      if (event.event === "on_chain_end" && event.data?.output?.final_answer) {
        finalState = event.data.output;
      }
    }
    const finalChunk = encoder.encode(
      `data: ${JSON.stringify({
        type: "complete",
        answer: finalState?.final_answer,
        method: finalState?.reasoning_method,
        tot_result: finalState?.tot_result,
        reflection_result: finalState?.reflection_result,
      })}\n\n`
    );
    controller.enqueue(finalChunk);
    controller.close();
  } catch (error) {
    controller.error(error);
  }
},
});
return new Response(stream, {
headers: {
'Content-Type': 'text/event-stream',
'Cache-Control': 'no-cache',
'Connection': 'keep-alive',
},
});
}
The orchestrator intelligently selects and combines reasoning strategies based on problem characteristics.
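For tests or batch jobs where streaming is unnecessary, the compiled orchestrator can also be invoked directly. A sketch; the import path and problem are illustrative:
// Hypothetical direct invocation of the orchestrator (no SSE)
import { createReasoningOrchestrator } from "@/app/agents/advanced-reasoning/orchestrator";

async function main() {
  const orchestrator = createReasoningOrchestrator();
  const result = await orchestrator.invoke({
    problem: "Plan a zero-downtime schema migration for a high-traffic service",
  });
  console.log(result.reasoning_method); // e.g. "Tree-of-Thought"
  console.log(result.final_answer);
}

main().catch(console.error);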
4. Frontend with Real-time Streaming
// app/components/AdvancedReasoningAgent.tsx
'use client';
import { useState, useCallback } from 'react';
import { map } from 'es-toolkit';
interface StreamEvent {
type: 'progress' | 'complete';
node?: string;
data?: any;
answer?: string;
method?: string;
tot_result?: any;
reflection_result?: any;
}
export function AdvancedReasoningAgent() {
const [problem, setProblem] = useState('');
const [events, setEvents] = useState<StreamEvent[]>([]);
const [isStreaming, setIsStreaming] = useState(false);
const startReasoning = useCallback(async () => {
setIsStreaming(true);
setEvents([]);
try {
const response = await fetch('/agents/advanced-reasoning/orchestrator', {
method: 'POST',
headers: { 'Content-Type': 'application/json' },
body: JSON.stringify({ problem }),
});
const reader = response.body?.getReader();
const decoder = new TextDecoder();
if (!reader) throw new Error('No reader available');
// Buffer partial chunks: SSE events can be split across network reads
let buffer = '';
while (true) {
  const { done, value } = await reader.read();
  if (done) break;
  buffer += decoder.decode(value, { stream: true });
  const lines = buffer.split('\n');
  buffer = lines.pop() ?? ''; // keep any trailing partial line
  for (const line of lines) {
    if (line.startsWith('data: ')) {
      const data = JSON.parse(line.slice(6));
      setEvents(prev => [...prev, data]);
    }
  }
}
} catch (error) {
console.error('Streaming error:', error);
} finally {
setIsStreaming(false);
}
}, [problem]);
const completeEvent = events.find(e => e.type === 'complete');
const progressEvents = events.filter(e => e.type === 'progress');
return (
<div className="max-w-4xl mx-auto p-6 space-y-6">
<div className="card bg-base-100 shadow-xl">
<div className="card-body">
<h2 className="card-title">Advanced Multi-Agent Reasoning</h2>
<textarea
className="textarea textarea-bordered h-32"
placeholder="Enter a complex problem that requires deep reasoning..."
value={problem}
onChange={(e) => setProblem(e.target.value)}
/>
<button
className={`btn btn-primary ${isStreaming ? 'loading' : ''}`}
onClick={startReasoning}
disabled={isStreaming || !problem}
>
{isStreaming ? 'Reasoning in Progress...' : 'Start Advanced Reasoning'}
</button>
</div>
</div>
{progressEvents.length > 0 && (
<div className="card bg-base-200">
<div className="card-body">
<h3 className="font-semibold">Reasoning Progress</h3>
<div className="space-y-2">
{map(progressEvents, (event, idx) => (
<div key={idx} className="flex items-center gap-2">
<div className="badge badge-primary">{event.node}</div>
<span className="text-sm opacity-70">Processing...</span>
</div>
))}
</div>
</div>
</div>
)}
{completeEvent && (
<div className="space-y-4">
<div className="card bg-success text-success-content">
<div className="card-body">
<h3 className="card-title">Final Answer</h3>
<p className="whitespace-pre-wrap">{completeEvent.answer}</p>
<div className="card-actions justify-end">
<div className="badge badge-outline">
Method: {completeEvent.method}
</div>
</div>
</div>
</div>
{completeEvent.tot_result && (
<div className="card bg-base-100">
<div className="card-body">
<h4 className="font-semibold">Tree-of-Thought Exploration</h4>
<div className="text-sm space-y-1">
{map(completeEvent.tot_result.paths?.[0] ?? [], (step, idx) => (
<div key={idx} className="pl-4 border-l-2 border-primary">
Step {idx + 1}: {step}
</div>
))}
</div>
</div>
</div>
)}
{completeEvent.reflection_result && (
<div className="card bg-base-100">
<div className="card-body">
<h4 className="font-semibold">Self-Reflection Process</h4>
<p className="text-sm opacity-70">
Refined through {completeEvent.reflection_result.iterations} iterations
</p>
</div>
</div>
)}
</div>
)}
</div>
);
}
The frontend provides real-time feedback on the reasoning process with progressive updates.
Performance Optimization
// app/lib/reasoning-cache.ts
import { LRUCache } from 'lru-cache';
import { createHash } from 'crypto';
// Exact-match cache for reasoning results, keyed by a hash of problem + method
const reasoningCache = new LRUCache<string, any>({
max: 100,
ttl: 1000 * 60 * 60, // 1 hour
updateAgeOnGet: true,
});
export function getCacheKey(problem: string, method: string): string {
return createHash('sha256')
.update(`${problem}-${method}`)
.digest('hex');
}
export async function withCache<T>(
key: string,
fn: () => Promise<T>
): Promise<T> {
const cached = reasoningCache.get(key);
if (cached) {
console.log(`Cache hit for key: ${key}`);
return cached;
}
const result = await fn();
reasoningCache.set(key, result);
return result;
}
// Parallel processing helper with bounded concurrency
export async function parallelReasoning<T>(
  tasks: Array<() => Promise<T>>,
  maxConcurrency: number = 3
): Promise<T[]> {
  const results: T[] = new Array(tasks.length);
  const executing: Promise<void>[] = [];
  for (const [index, task] of tasks.entries()) {
    const promise = task().then(result => {
      results[index] = result; // write by index to preserve input order
      executing.splice(executing.indexOf(promise), 1); // remove self once settled
    });
    executing.push(promise);
    if (executing.length >= maxConcurrency) {
      // Wait for any in-flight task to settle before launching the next
      await Promise.race(executing);
    }
  }
  await Promise.all(executing);
  return results;
}
Exact-match caching avoids re-running identical reasoning requests, and bounded parallelism cuts latency without flooding model rate limits.
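A sketch of how the two helpers compose, assuming the workflow factories from earlier sections; the import paths and problem string are illustrative:
// Hypothetical: run both reasoning methods concurrently, each behind the cache
import { getCacheKey, withCache, parallelReasoning } from "@/app/lib/reasoning-cache";
import { createToTWorkflow } from "@/app/agents/advanced-reasoning/tree-of-thought";
import { createReflectionWorkflow } from "@/app/agents/advanced-reasoning/self-reflection";

export async function solveWithBothMethods(problem: string) {
  // parallelReasoning preserves input order, so destructuring is safe
  const [totResult, reflectionResult] = await parallelReasoning([
    () => withCache(getCacheKey(problem, "tot"), () =>
      createToTWorkflow().invoke({ problem, thoughts: [], current_depth: 0 })),
    () => withCache(getCacheKey(problem, "reflection"), () =>
      createReflectionWorkflow().invoke({ problem, current_solution: "", critiques: [], iteration: 0 })),
  ], 2);
  return { totResult, reflectionResult };
}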
Conclusion
Reasoning techniques transform agents from simple responders to intelligent problem solvers. Start with basic Chain-of-Thought for linear problems, then progressively add Tree-of-Thought for exploration and Self-Reflection for refinement. The orchestrator pattern enables dynamic selection of reasoning strategies based on problem characteristics. Remember to implement caching, streaming, and parallel processing for production performance. These patterns, combined with LangGraph's stateful management and TypeScript's type safety, create robust reasoning systems capable of tackling complex real-world problems in serverless environments.