Agents are autonomous AI units that have their own LLM provider, can execute tools, and are themselves callable as tools. They enable sophisticated multi-agent architectures where specialized agents can be composed and orchestrated.
Agents build on the MCP Tools specification—each agent is automatically exposed as an invoke_<agent-id> tool that any MCP client can call.
## Why Agents?
In the Model Context Protocol, agents serve a distinct purpose from tools, resources, and prompts:
| Aspect | Agent | Tool | Resource | Prompt |
|---|---|---|---|---|
| Purpose | Autonomous AI execution | Execute actions | Provide data | Provide templated instructions |
| Has LLM | Yes (own LLM provider) | No | No | No |
| Direction | Model or user triggers execution | Model triggers execution | Model pulls data | Model uses messages |
| Side effects | Yes (LLM calls, tool execution) | Yes (mutations, API calls) | No (read-only) | No (message generation) |
| Use case | Complex reasoning, multi-step tasks | Actions, integrations | Context loading | Conversation templates |
Agents are ideal for:
- Complex reasoning — tasks requiring multiple LLM calls and tool use
- Specialized expertise — domain-specific agents (research, writing, coding)
- Orchestration — coordinating multiple sub-agents for complex workflows
- Isolation — agents with their own tools, resources, and providers
## Creating Agents
### Class Style (Default Behavior)
The simplest agent requires no execute() method. The default behavior automatically:
- Runs the execution loop with the LLM
- Connects tools and executes them as needed
- Sends notifications on tool calls and output
```typescript
import { Agent, AgentContext } from '@frontmcp/sdk';
import { z } from 'zod';

@Agent({
  name: 'research-agent',
  description: 'Researches topics and compiles summaries',
  systemInstructions: 'You are a research assistant. Search for information and provide concise summaries.',
  inputSchema: {
    topic: z.string().describe('Topic to research'),
  },
  llm: {
    adapter: 'openai',
    model: 'gpt-4-turbo',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
  tools: [WebSearchTool, SummarizeTool],
})
class ResearchAgent extends AgentContext {} // No execute() needed!
```
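Under the hood, the default execute() runs a tool-calling loop. The following is a minimal sketch in plain TypeScript — illustrative only; runAgentLoop and its types are hypothetical names, and the real loop inside AgentContext also handles streaming, timeouts, and notifications:

```typescript
// Hypothetical sketch of the default agent loop: call the LLM, execute any
// requested tools, feed results back, and repeat until the LLM stops
// requesting tools or the iteration budget runs out.
type ToolCall = { name: string; args: Record<string, unknown> };
type Completion = { content: string; toolCalls?: ToolCall[] };

async function runAgentLoop(
  llm: (history: string[]) => Promise<Completion>,
  tools: Record<string, (args: Record<string, unknown>) => Promise<string>>,
  input: string,
  maxIterations = 10,
): Promise<string> {
  const history = [input];
  for (let i = 0; i < maxIterations; i++) {
    const completion = await llm(history);
    if (!completion.toolCalls?.length) {
      return completion.content; // LLM produced a final answer
    }
    for (const call of completion.toolCalls) {
      const result = await tools[call.name](call.args); // execute requested tool
      history.push(`tool ${call.name} -> ${result}`);   // feed result back to the LLM
    }
  }
  throw new Error('maxIterations exceeded');
}
```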
### Class Style (Custom Behavior)
Override execute() only when you need custom pre/post processing:
```typescript
@Agent({
  name: 'custom-agent',
  description: 'Agent with custom logic',
  systemInstructions: 'You are a helpful assistant.',
  inputSchema: {
    query: z.string(),
  },
  llm: {
    adapter: 'openai',
    model: 'gpt-4-turbo',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
})
class CustomAgent extends AgentContext {
  async execute(input: { query: string }) {
    // Custom pre-processing
    await this.notify('Starting custom processing...', 'info');

    // Call the default agent loop
    const result = await super.execute(input);

    // Custom post-processing
    return { ...result, customField: 'added' };
  }
}
```
### Function Style
For simpler agents, use the functional builder:
```typescript
import { agent } from '@frontmcp/sdk';
import { z } from 'zod';

const EchoAgent = agent({
  name: 'echo-agent',
  description: 'Echoes back the input message',
  inputSchema: {
    message: z.string(),
  },
  llm: {
    adapter: 'openai',
    model: 'gpt-3.5-turbo',
    apiKey: { env: 'OPENAI_API_KEY' },
  },
})(({ message }) => ({ echoed: `Echo: ${message}` }));
```
## Registering Agents
Add agents to your app via the agents array:
```typescript
import { App } from '@frontmcp/sdk';

@App({
  id: 'my-app',
  name: 'My Application',
  agents: [ResearchAgent, CalculatorAgent, WriterAgent],
})
class MyApp {}
```
Each agent is automatically exposed as a tool:
```
invoke_research-agent
invoke_calculator-agent
invoke_writer-agent
```
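Assuming tool names follow the invoke_<agent-id> convention shown above, deriving a tool name is a simple prefix operation. The helper below is hypothetical (not part of the SDK) and only illustrates the naming rule:

```typescript
// Hypothetical helper: derive the exposed MCP tool name from an agent id,
// following the invoke_<agent-id> convention used by FrontMCP.
function agentToolName(agentId: string): string {
  return `invoke_${agentId}`;
}
```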
## LLM Configuration
Agents require an LLM configuration. FrontMCP uses LangChain as the standard adapter layer, providing consistent APIs across all LLM providers with built-in retry logic, streaming support, and token tracking.
### Using LangChain Adapters
First, install the LangChain package for your provider:
```bash
# For OpenAI
npm install @langchain/openai

# For Anthropic
npm install @langchain/anthropic

# For Google
npm install @langchain/google-genai

# For Mistral
npm install @langchain/mistralai
```
#### OpenAI
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { LangChainAdapter } from '@frontmcp/sdk';

const adapter = new LangChainAdapter({
  model: new ChatOpenAI({
    model: 'gpt-4-turbo',
    openAIApiKey: process.env.OPENAI_API_KEY,
  }),
});

@Agent({
  name: 'research-agent',
  llm: { adapter },
  // ...
})
class ResearchAgent extends AgentContext {}
```
#### Anthropic
```typescript
import { ChatAnthropic } from '@langchain/anthropic';
import { LangChainAdapter } from '@frontmcp/sdk';

const adapter = new LangChainAdapter({
  model: new ChatAnthropic({
    model: 'claude-3-opus-20240229',
    anthropicApiKey: process.env.ANTHROPIC_API_KEY,
  }),
});

@Agent({
  name: 'claude-agent',
  llm: { adapter },
  // ...
})
class ClaudeAgent extends AgentContext {}
```
#### OpenRouter
Access 100+ models through OpenRouter using the OpenAI-compatible API:
```typescript
import { ChatOpenAI } from '@langchain/openai';
import { LangChainAdapter } from '@frontmcp/sdk';

const adapter = new LangChainAdapter({
  model: new ChatOpenAI({
    model: 'anthropic/claude-3-opus', // Any OpenRouter model
    openAIApiKey: process.env.OPENROUTER_API_KEY,
    configuration: {
      baseURL: 'https://openrouter.ai/api/v1',
    },
  }),
});

@Agent({
  name: 'openrouter-agent',
  llm: { adapter },
  // ...
})
class OpenRouterAgent extends AgentContext {}
```
### Custom Adapter
Implement AgentLlmAdapter for providers not covered by LangChain:
```typescript
import { AgentLlmAdapter, AgentCompletion, AgentPrompt, AgentToolDefinition } from '@frontmcp/sdk';

const myAdapter: AgentLlmAdapter = {
  async completion(prompt: AgentPrompt, tools?: AgentToolDefinition[]): Promise<AgentCompletion> {
    // Call your LLM provider
    const response = await myLlmClient.chat({
      messages: prompt.messages,
      system: prompt.system,
      tools: tools?.map((t) => ({ name: t.name, description: t.description })),
    });

    return {
      content: response.text,
      finishReason: response.hasToolCalls ? 'tool_calls' : 'stop',
      toolCalls: response.toolCalls,
    };
  },
};

@Agent({
  name: 'custom-agent',
  llm: { adapter: myAdapter },
})
class CustomAgent extends AgentContext {}
```
## Agent-Scoped Components
Agents can have their own isolated tools, resources, prompts, and providers:
```typescript
@Agent({
  name: 'isolated-agent',
  description: 'Agent with its own scope',

  // Agent-scoped tools (only this agent can use them)
  tools: [PrivateTool],

  // Agent-scoped resources
  resources: [AgentConfig],

  // Agent-scoped prompts
  prompts: [AgentInstructions],

  // Agent-scoped providers
  providers: [DatabaseService],

  llm: { ... },
})
class IsolatedAgent extends AgentContext { ... }
```
## Swarm Configuration
Control agent visibility for multi-agent coordination:
```typescript
@Agent({
  name: 'orchestrator-agent',
  description: 'Coordinates other agents',
  swarm: {
    canSeeOtherAgents: true,                 // Can this agent invoke others?
    visibleAgents: ['worker-a', 'worker-b'], // Which agents can it see?
    isVisible: true,                         // Can other agents see this one?
    maxCallDepth: 3,                         // Max nested agent invocations
  },
  llm: { ... },
})
class OrchestratorAgent extends AgentContext {
  async execute({ task }: { task: string }) {
    // Can invoke visible agents
    const result = await this.invokeAgent('worker-a', { data: task });
    return { orchestrated: result };
  }
}
```
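Conceptually, maxCallDepth prevents runaway recursion when agents invoke each other. The sketch below illustrates how such a guard might work in plain TypeScript — the names are hypothetical, and the SDK enforces this internally:

```typescript
// Hypothetical sketch of a maxCallDepth guard: each nested invocation
// carries the current depth, and the guard rejects calls past the limit.
function invokeWithDepth(
  invoke: (agent: string, depth: number) => string,
  agent: string,
  depth: number,
  maxCallDepth: number,
): string {
  if (depth >= maxCallDepth) {
    throw new Error(`maxCallDepth ${maxCallDepth} exceeded invoking ${agent}`);
  }
  return invoke(agent, depth + 1); // proceed one level deeper
}
```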
### Visibility Patterns
Orchestrator Pattern — A central agent coordinates specialized workers:
```typescript
// Workers (can't see each other)
@Agent({ name: 'research-worker', swarm: { isVisible: true, canSeeOtherAgents: false }, ... })
class ResearchWorker extends AgentContext { ... }

@Agent({ name: 'writer-worker', swarm: { isVisible: true, canSeeOtherAgents: false }, ... })
class WriterWorker extends AgentContext { ... }

// Orchestrator (can see and invoke workers)
@Agent({
  name: 'orchestrator',
  swarm: { canSeeOtherAgents: true, visibleAgents: ['research-worker', 'writer-worker'] },
  ...
})
class Orchestrator extends AgentContext { ... }
```
## Execution Configuration
Control agent execution behavior:
```typescript
@Agent({
  name: 'long-running-agent',
  execution: {
    maxIterations: 10,         // Max tool call rounds (default: 10)
    timeout: 120000,           // Timeout in ms (default: 120000)
    enableStreaming: false,    // Stream responses
    enableNotifications: true, // Send progress notifications
    enableAutoProgress: false, // Auto progress during LLM calls (default: false)
    inheritParentTools: true,  // Include parent scope tools (default: true)
    useToolFlow: true,         // Execute tools through call-tool flow (default: true)
  },
  llm: { ... },
})
class LongRunningAgent extends AgentContext { ... }
```
By default, agents execute tools through the full call-tool flow, which includes:
- Plugin hooks (caching, rate limiting, audit logging)
- Authorization checks
- Tool middleware and transformations
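Conceptually, the call-tool flow wraps the raw tool executor in a chain of hooks, each of which can run logic before and after the call. The sketch below shows the general middleware-composition pattern — the types and names are hypothetical, not the SDK's actual plugin API:

```typescript
// Hypothetical sketch of a call-tool flow: each hook wraps the next layer,
// so caching, rate limiting, authorization, and audit logging can all run
// around the raw executor.
type ToolExec = (args: Record<string, unknown>) => Promise<unknown>;
type Hook = (next: ToolExec) => ToolExec;

// Compose hooks around a raw executor; the first hook in the array
// becomes the outermost layer.
function withHooks(raw: ToolExec, hooks: Hook[]): ToolExec {
  return hooks.reduceRight((next, hook) => hook(next), raw);
}
```

Disabling useToolFlow corresponds to calling the raw executor directly, skipping every layer in the chain.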
For performance-critical scenarios, you can disable flow execution:
```typescript
@Agent({
  name: 'fast-agent',
  execution: {
    useToolFlow: false, // Direct execution, bypasses plugins
  },
  llm: { ... },
})
class FastAgent extends AgentContext { ... }
```
Setting useToolFlow: false bypasses all plugin hooks and middleware. Only use this when you need maximum performance and don’t require plugin features.
## Overriding Behavior
Customize agent behavior by overriding methods in AgentContext:
```typescript
@Agent({ name: 'custom-agent', llm: { ... } })
class CustomAgent extends AgentContext {
  // Override LLM completion for custom logic
  protected override async completion(prompt, tools, options) {
    this.logger.info('Making LLM request...');
    const result = await super.completion(prompt, tools, options);
    this.logger.info(`LLM returned: ${result.finishReason}`);
    return result;
  }

  // Override tool execution for logging/caching
  protected override async executeTool(name: string, args: Record<string, unknown>) {
    this.logger.info(`Executing tool: ${name}`);
    return super.executeTool(name, args);
  }

  async execute(input: { query: string }) {
    return { result: '...' };
  }
}
```
## Progress Notifications
Keep users informed during long operations using manual or automatic notifications.
### Manual Notifications
Use this.notify() to send custom messages at specific points:
```typescript
@Agent({ name: 'notifying-agent', llm: { ... } })
class NotifyingAgent extends AgentContext {
  async execute({ task }: { task: string }) {
    await this.notify('Starting task...', 'info');

    // Step 1
    await this.notify('Gathering data...', 'info');
    const data = await this.gatherData();

    // Step 2
    await this.notify('Processing...', 'info');
    const result = await this.process(data);

    await this.notify('Complete!', 'info');
    return { result };
  }
}
```
Use this.progress() for progress bars when the client provides a progressToken:
```typescript
@Agent({ name: 'progress-agent', llm: { ... } })
class ProgressAgent extends AgentContext {
  async execute({ files }: { files: string[] }) {
    for (let i = 0; i < files.length; i++) {
      await this.progress(i + 1, files.length, `Processing ${files[i]}`);
      await this.processFile(files[i]);
    }
    return { processed: files.length };
  }
}
```
### Automatic Progress (Opt-in)
Enable enableAutoProgress to automatically send progress notifications during the agent execution loop:
```typescript
@Agent({
  name: 'auto-progress-agent',
  execution: {
    enableAutoProgress: true, // Opt-in to automatic progress
  },
  llm: { ... },
})
class AutoProgressAgent extends AgentContext {}
```
When enabled, the agent automatically sends progress updates at these lifecycle points:
| Event | Progress % | Message Example |
|---|---|---|
| LLM call start | 0-80% | "Starting LLM call (iteration 1/10)" |
| LLM response received | varies | "LLM response received (500P + 200C tokens)" |
| Tools identified | - | "Identified 2 tool call(s): search, calculate" |
| Tool execution | varies | "Executing tool 1/2: search" |
| Completion | 100% | "Agent completed" |
Auto progress requires both enableAutoProgress: true and enableNotifications: true (the default).
Progress notifications are only sent if the client includes a progressToken in the request’s _meta field.
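One plausible way to map loop iterations onto the 0-80% band shown in the table is a simple linear scaling, reserving the remaining 20% for tool execution and completion. This is an illustration of the idea, not the SDK's exact formula:

```typescript
// Hypothetical mapping of LLM-call iterations onto the 0-80% progress band;
// the remaining 20% is assumed to cover tool execution and completion.
function llmProgressPercent(iteration: number, maxIterations: number): number {
  return Math.round((iteration / maxIterations) * 80);
}
```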
## Error Handling
Handle errors gracefully in agents:
```typescript
import { AgentExecutionError, AgentTimeoutError, ValidationError } from '@frontmcp/sdk';

@Agent({ name: 'safe-agent', llm: { ... } })
class SafeAgent extends AgentContext {
  async execute(input: { data: string }) {
    try {
      return await this.riskyOperation(input.data);
    } catch (error) {
      if (error instanceof ValidationError) {
        // Return error response with fail()
        this.fail(`Invalid input: ${error.message}`);
      }
      throw error;
    }
  }
}
```
## Nested Agents
Agents can contain other agents, creating hierarchical structures:
```typescript
@Agent({
  name: 'parent-agent',
  description: 'Parent with nested child agents',

  // Nested agents — registered as tools within this agent
  agents: [ChildAgentA, ChildAgentB],

  // Parent's own tools
  tools: [PlanningTool],

  llm: { ... },
})
class ParentAgent extends AgentContext {
  async execute({ task }: { task: string }) {
    // LLM can call: invoke_child-agent-a, invoke_child-agent-b, planning-tool
    return { result: '...' };
  }
}
```