FrontMCP agents are autonomous units that can invoke tools using LLM reasoning. This guide shows you how to create agents that combine LLM capabilities with your MCP tools.
Prerequisites:
  • A working FrontMCP project with tools (see Your First Tool)
  • An API key for an LLM provider (OpenAI, Anthropic, etc.)

What You’ll Build

A weather summary agent that:
  • Accepts a location from the user
  • Uses a weather tool to fetch current conditions
  • Returns a human-friendly summary generated by an LLM

Understanding Agents

Agents in FrontMCP are different from tools:
| Aspect      | Tool                 | Agent                                      |
| ----------- | -------------------- | ------------------------------------------ |
| Execution   | Deterministic code   | LLM-driven reasoning                       |
| Tool Access | N/A                  | Can call other tools                       |
| Response    | Direct result        | Generated content                          |
| Use Case    | Data fetching, CRUD  | Summarization, planning, multi-step tasks  |

Step 1: Create a Tool for the Agent

First, create a tool that the agent can use:
// tools/get-weather.tool.ts
import { Tool, ToolContext } from '@frontmcp/sdk';
import { z } from 'zod';

const inputSchema = {
  location: z.string().describe('City name or location'),
  units: z.enum(['celsius', 'fahrenheit']).optional(),
};

const outputSchema = z.object({
  location: z.string(),
  temperature: z.number(),
  units: z.enum(['celsius', 'fahrenheit']),
  conditions: z.string(),
  humidity: z.number(),
  windSpeed: z.number(),
});

@Tool({
  name: 'get_weather',
  description: 'Get current weather for a location',
  inputSchema,
  outputSchema,
  codecall: {
    visibleInListTools: true,  // Make visible to agents
  },
})
export default class GetWeatherTool extends ToolContext<typeof inputSchema, typeof outputSchema> {
  async execute(input: { location: string; units?: 'celsius' | 'fahrenheit' }) {
    // In production, call a real weather API
    return {
      location: input.location,
      temperature: 22,
      units: input.units || 'celsius',
      conditions: 'sunny',
      humidity: 55,
      windSpeed: 10,
    };
  }
}
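The execute method above returns fixed values so the example stays self-contained. In a real implementation you would fetch from a weather API and map its payload onto the output schema. A minimal sketch of that mapping step (the RawWeather shape and the toWeatherOutput helper are hypothetical, not part of the FrontMCP SDK):

```typescript
// Hypothetical raw payload shape from a weather API; adjust to your provider.
interface RawWeather {
  temp_c: number;
  condition: string;
  humidity: number;
  wind_kph: number;
}

// Pure mapping from a raw API payload to the tool's output schema,
// converting units when the caller asked for fahrenheit.
function toWeatherOutput(
  location: string,
  units: 'celsius' | 'fahrenheit',
  raw: RawWeather,
) {
  const temperature =
    units === 'fahrenheit' ? (raw.temp_c * 9) / 5 + 32 : raw.temp_c;
  return {
    location,
    temperature: Math.round(temperature * 10) / 10,
    units,
    conditions: raw.condition,
    humidity: raw.humidity,
    windSpeed: raw.wind_kph,
  };
}

const sample: RawWeather = {
  temp_c: 22,
  condition: 'sunny',
  humidity: 55,
  wind_kph: 10,
};
console.log(toWeatherOutput('Paris', 'fahrenheit', sample).temperature);
```

Keeping the mapping pure (no I/O) makes it easy to unit-test separately from the HTTP call inside execute.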

Step 2: Create the Agent

// agents/summary.agent.ts
import { Agent, AgentContext } from '@frontmcp/sdk';
import { z } from 'zod';
import GetWeatherTool from '../tools/get-weather.tool';

@Agent({
  name: 'weather-summary',
  systemInstructions: `You are a helpful weather assistant. When asked about weather:
1. Use the get_weather tool to fetch current conditions
2. Provide a friendly, conversational summary of the weather
3. Include practical advice (e.g., "bring an umbrella" for rain)`,

  inputSchema: {
    location: z.string().describe('City name or location'),
    units: z.enum(['celsius', 'fahrenheit']).optional().describe('Temperature units'),
  },

  outputSchema: {
    summary: z.string().describe('Weather summary for the given location'),
  },

  // Configure the LLM provider
  llm: {
    provider: 'openai',
    model: 'gpt-4',
    apiKey: {
      env: 'OPENAI_API_KEY',  // Read from environment variable
    },
  },

  // Make agent visible in tool listings
  codecall: {
    visibleInListTools: true,
  },

  // Tools the agent can use
  tools: [GetWeatherTool],
})
export default class WeatherSummaryAgent extends AgentContext {}

Step 3: Register the Agent

Add the agent to your app:
// apps/weather/index.ts
import { App } from '@frontmcp/sdk';
import GetWeatherTool from './tools/get-weather.tool';
import WeatherSummaryAgent from './agents/summary.agent';

@App({
  id: 'weather',
  name: 'Weather App',
  description: 'Weather information and summaries',
  tools: [GetWeatherTool],
  agents: [WeatherSummaryAgent],
})
export default class WeatherApp {}

Agent Configuration Options

LLM Provider Configuration

llm: {
  // Provider: 'openai' | 'anthropic' | 'azure' | etc.
  provider: 'openai',

  // Model name
  model: 'gpt-4',

  // API key; choose ONE of the following:
  apiKey: {
    env: 'OPENAI_API_KEY',  // Read from an environment variable
  },
  // apiKey: {
  //   value: 'sk-...',  // Inline value (not recommended)
  // },

  // Optional settings
  temperature: 0.7,
  maxTokens: 1000,
}

Tool Visibility

Control how the agent appears to clients:
codecall: {
  // Include in tools/list responses
  visibleInListTools: true,

  // Custom name for tool listing (default: agent name)
  toolName: 'summarize_weather',
}

System Instructions

Guide the agent’s behavior:
systemInstructions: `You are a helpful assistant that...
- Always use available tools before responding
- Provide concise, actionable responses
- If you can't find information, say so clearly`,

How Agents Work

1. Request received: the user calls the agent with input matching inputSchema.
2. LLM processes request: the agent sends the input to the configured LLM along with:
   • System instructions
   • Available tools (as function definitions)
   • User input
3. Tool calls (if needed): if the LLM decides to use a tool, the agent:
   1. Executes the tool with the provided arguments
   2. Returns the result to the LLM
   3. The LLM continues reasoning
4. Response generated: the LLM produces a final response matching outputSchema.
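The loop above can be sketched in simplified form. This is an illustrative sketch of the reasoning cycle, not the SDK's actual internals; the Llm type, runAgent function, and scripted responses are all stand-ins:

```typescript
type ToolCall = { name: string; args: Record<string, unknown> };
type LlmReply = { toolCall?: ToolCall; final?: string };

// Stand-in for a chat-completions call that may request a tool or answer.
type Llm = (messages: string[]) => LlmReply;

// Simplified agent loop: send context to the LLM, execute any tool it
// requests, append the result, and repeat until it returns a final answer.
function runAgent(
  llm: Llm,
  tools: Record<string, (args: Record<string, unknown>) => unknown>,
  systemInstructions: string,
  input: string,
  maxTurns = 5,
): string {
  const messages = [systemInstructions, input];
  for (let turn = 0; turn < maxTurns; turn++) {
    const reply = llm(messages);
    if (reply.final !== undefined) return reply.final;
    if (reply.toolCall) {
      const result = tools[reply.toolCall.name](reply.toolCall.args);
      messages.push(`tool ${reply.toolCall.name} -> ${JSON.stringify(result)}`);
    }
  }
  throw new Error('agent exceeded max turns');
}

// Demo with a scripted "LLM" that calls get_weather once, then answers.
const scriptedLlm: Llm = (messages) =>
  messages.some((m) => m.startsWith('tool get_weather'))
    ? { final: 'Sunny and 22°C in Paris; no umbrella needed.' }
    : { toolCall: { name: 'get_weather', args: { location: 'Paris' } } };

const answer = runAgent(
  scriptedLlm,
  { get_weather: () => ({ temperature: 22, conditions: 'sunny' }) },
  'You are a weather assistant.',
  'What is the weather in Paris?',
);
console.log(answer);
```

Note the maxTurns cap: bounding the loop is what prevents a confused model from calling tools indefinitely.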

Example: Multi-Tool Agent

Create an agent that uses multiple tools:
import { Agent, AgentContext } from '@frontmcp/sdk';
import { z } from 'zod';
import GetWeatherTool from '../tools/get-weather.tool';
import GetForecastTool from '../tools/get-forecast.tool';
import GetAlertsTool from '../tools/get-alerts.tool';

@Agent({
  name: 'travel-advisor',
  systemInstructions: `You are a travel weather advisor. Help users plan their trips by:
1. Checking current weather at the destination
2. Looking at the forecast for their travel dates
3. Alerting them to any weather warnings
4. Providing packing suggestions based on conditions`,

  inputSchema: {
    destination: z.string().describe('Travel destination'),
    departureDate: z.string().describe('Departure date (YYYY-MM-DD)'),
    returnDate: z.string().describe('Return date (YYYY-MM-DD)'),
  },

  outputSchema: {
    recommendation: z.string().describe('Travel weather recommendation'),
    packingList: z.array(z.string()).describe('Suggested items to pack'),
    alerts: z.array(z.string()).optional().describe('Any weather alerts'),
  },

  llm: {
    provider: 'openai',
    model: 'gpt-4',
    apiKey: { env: 'OPENAI_API_KEY' },
  },

  // Multiple tools available to the agent
  tools: [GetWeatherTool, GetForecastTool, GetAlertsTool],
})
export default class TravelAdvisorAgent extends AgentContext {}

Example: Planning Agent

Create an agent that breaks down complex tasks:
@Agent({
  name: 'expense-planner',
  systemInstructions: `You are an expense planning assistant. Help users:
1. Categorize their expenses using available categories
2. Check against budget limits
3. Suggest ways to optimize spending
4. Create expense reports

Always use the tools to get accurate data before making recommendations.`,

  inputSchema: {
    query: z.string().describe('User question about expenses'),
    timeframe: z.enum(['week', 'month', 'quarter', 'year']).optional(),
  },

  outputSchema: {
    response: z.string().describe('Response to the user query'),
    suggestedActions: z.array(z.string()).optional(),
  },

  llm: {
    provider: 'anthropic',
    model: 'claude-3-sonnet-20240229',
    apiKey: { env: 'ANTHROPIC_API_KEY' },
  },

  tools: [
    ListExpensesTool,
    GetCategoriesResource,
    GetBudgetTool,
    CreateReportTool,
  ],
})
export default class ExpensePlannerAgent extends AgentContext {}

Using Agents as Tools

Agents can be used by other agents, creating hierarchical workflows:
@Agent({
  name: 'trip-planner',
  tools: [
    WeatherSummaryAgent,  // Another agent as a tool
    FlightSearchTool,
    HotelSearchTool,
  ],
  // ...
})
export default class TripPlannerAgent extends AgentContext {}

Testing Agents

import { test, expect } from '@frontmcp/testing';

test.use({ server: './src/main.ts' });

test('weather summary agent returns a summary', async ({ mcp }) => {
  const result = await mcp.tools.call('weather:weather-summary', {
    location: 'San Francisco',
    units: 'celsius',
  });

  expect(result).toBeSuccessful();
  expect(result.content[0].text).toBeDefined();
});

Best Practices

Good instructions lead to better agent behavior:
// Good - specific and actionable
systemInstructions: `You are a customer support agent.
1. Always greet the customer
2. Use the search_orders tool to find relevant orders
3. Never share sensitive data like full credit card numbers
4. If you can't help, escalate to a human`,

// Bad - too vague
systemInstructions: 'Help customers with their questions'
Only give agents tools they need:
// Good - focused set of tools
tools: [GetOrderTool, TrackShipmentTool, RefundTool],

// Bad - too many unrelated tools
tools: [...allTools],  // Agent might get confused
Help LLMs produce structured output:
outputSchema: {
  status: z.enum(['success', 'partial', 'failed']),
  summary: z.string().max(500),
  actionsTaken: z.array(z.string()),
  followUp: z.string().optional(),
}
Include error handling in system instructions:
systemInstructions: `...
If a tool call fails:
1. Try an alternative approach if available
2. Inform the user what went wrong
3. Suggest next steps they can take`
Control LLM costs by choosing models deliberately and capping output length:
llm: {
  // Use cheaper models for simple tasks
  model: process.env.AGENT_MODEL || 'gpt-3.5-turbo',
  maxTokens: 500,  // Limit response length
}

Next Steps

  • Agent Reference: full @Agent decorator documentation
  • CodeCall Plugin: advanced agent orchestration
  • Tool UI: rich displays for agent outputs
  • Testing: test agents and their tool usage