Why Use Caching?
Faster Responses
Return cached results instantly without re-executing expensive operations
Reduce Load
Minimize API calls, database queries, and computational overhead
Cost Savings
Lower infrastructure costs by reducing redundant processing
Better UX
Improve perceived performance with instant responses for repeated queries
Installation
How It Works
Cache entries are keyed using a deterministic hash of the tool’s validated input. The same input always produces
the same cache key.
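The plugin's actual hashing is internal, but the principle can be sketched as follows: canonicalize the validated input (sort object keys so field order doesn't matter) and hash the result. Function names here are illustrative, not the plugin's API.

```typescript
import { createHash } from "node:crypto";

// Recursively sort object keys so that logically equal inputs
// serialize to the same string regardless of field order.
function canonicalize(value: unknown): unknown {
  if (Array.isArray(value)) return value.map(canonicalize);
  if (value !== null && typeof value === "object") {
    return Object.keys(value as Record<string, unknown>)
      .sort()
      .reduce((acc, key) => {
        acc[key] = canonicalize((value as Record<string, unknown>)[key]);
        return acc;
      }, {} as Record<string, unknown>);
  }
  return value;
}

// Derive a deterministic cache key from a tool name and its validated input.
function cacheKey(toolName: string, input: unknown): string {
  const payload = JSON.stringify(canonicalize(input));
  return `${toolName}:${createHash("sha256").update(payload).digest("hex")}`;
}
```

Because keys are derived from the input, two calls with the same fields in a different order still hit the same cache entry.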
Quick Start
Basic Setup (In-Memory)
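A minimal registration sketch, based on the options described later on this page. The import path, class name, and exact option names (`store`, `defaultTTL`) are assumptions; check the plugin's README for the real API.

```typescript
// Hypothetical import path and option names -- adjust to the real plugin API.
import { CachePlugin } from "@frontmcp/cache";

const cache = new CachePlugin({
  store: { type: "memory" }, // in-memory store (the default)
  defaultTTL: 300,           // entries live for 5 minutes unless overridden per tool
});
```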
Enable Caching on Tools
Caching is opt-in per tool. Add the `cache` field to your tool metadata:
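For example (the tool name and input shape are illustrative; only the `cache` field follows this page's description):

```typescript
// Illustrative tool definition: `cache: true` uses plugin defaults,
// while the object form allows per-tool overrides.
@Tool({
  name: "get-weather",
  cache: { ttl: 300, slideWindow: true }, // or simply `cache: true`
})
async getWeather(input: { city: string }) {
  // ...expensive lookup, cached per distinct input for 5 minutes
}
```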
Storage Options
In-Memory (Default)
Best for: Single-instance deployments, development, non-critical caching

Redis (Recommended for Production)

Best for: Multi-instance deployments, persistent caching, production environments

Redis enables cache sharing across multiple server instances and persists the cache across restarts.
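A Redis-backed configuration might look like the sketch below. `type: 'redis'` comes from this page; the surrounding field names follow ioredis conventions and are assumptions.

```typescript
// Sketch -- connection option names may differ in the real plugin.
const cache = new CachePlugin({
  store: { type: "redis" },                  // or 'redis-client' with an existing ioredis instance
  redis: { host: "127.0.0.1", port: 6379 },  // connection config, required when type is 'redis'
  defaultTTL: 600,
});
```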
Configuration Options
Plugin-Level Configuration
Configure default behavior when registering the plugin:

- Cache store backend to use
- Default time-to-live in seconds (applies to all cached tools unless overridden)
- Redis connection configuration (required when `type: 'redis'`)
- Existing ioredis client instance (required when `type: 'redis-client'`)
- Tool names or glob patterns to cache; tools matching these patterns use `defaultTTL` unless they have custom cache metadata
- HTTP header name that clients can send to bypass the cache for a specific request; when present with value `'true'` or `'1'`, cache read/write is skipped

Tool-Level Configuration
Configure caching behavior per tool in the `@Tool` or `tool()` metadata:
- Enable caching for this tool: `true` uses plugin defaults, while an object supplies custom configuration
- Time-to-live in seconds for this tool's cache entries (overrides the plugin default)
- `slideWindow`: when `true`, reading from cache refreshes the TTL, keeping frequently accessed entries alive longer

Caching Remote Tools
For remote MCP tools that you don't control (connected via URL), use the `toolPatterns` option to enable caching by name or pattern:
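A sketch of pattern-based caching for remote tools (plugin class and option placement assumed as elsewhere on this page; the remote tool names are made up):

```typescript
// Remote tools matched by these patterns are cached at defaultTTL,
// since their metadata can't be edited to add `cache: true`.
const cache = new CachePlugin({
  defaultTTL: 120,
  toolPatterns: ["github:*", "search-*"],
});
```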
Pattern Syntax
| Pattern | Matches |
|---|---|
| `tool-name` | Exact match only |
| `namespace:*` | All tools in namespace |
| `prefix-*` | Tools starting with prefix |
| `*-suffix` | Tools ending with suffix |
| `api:*:list` | Middle wildcard pattern |
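The plugin's matcher is internal; glob matching with these semantics can be sketched as below, assuming `*` matches any run of characters and everything else is literal.

```typescript
// Convert a glob-style tool pattern into an anchored regular expression:
// escape regex metacharacters in the literal parts, turn each `*` into `.*`.
function matchesPattern(pattern: string, toolName: string): boolean {
  const regex = pattern
    .split("*")
    .map((part) => part.replace(/[.*+?^${}()|[\]\\]/g, "\\$&"))
    .join(".*");
  return new RegExp(`^${regex}$`).test(toolName);
}
```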
Priority Rules
- Tool metadata takes precedence - if a tool has `cache: { ttl: 60 }` metadata, that TTL is used
- Pattern list uses `defaultTTL` - matched tools without metadata use the plugin's default TTL
- Union behavior - a tool is cached if it matches `toolPatterns` OR has `cache` metadata
The `toolPatterns` option is especially useful for remote MCP servers where you can't add `cache: true` to tool metadata directly.

Bypassing Cache

Clients can bypass caching for specific requests by sending a header:

Advanced Usage
Multi-Tenant Caching
Include tenant or user identifiers in your tool inputs to ensure cache isolation:

Session-Scoped Caching
For user-specific data, include session or user identifiers:

Time-Based Invalidation
Use short TTLs for frequently changing data:

Best Practices
1. Only Cache Deterministic Tools
Cache tools whose outputs depend solely on their inputs. Don’t cache tools that:
- Return random data
- Depend on external time-sensitive state
- Have side effects (mutations, API calls that change state)
2. Choose Appropriate TTLs
- Short TTLs (5-60s): Real-time data, frequently changing content
- Medium TTLs (5-30min): User dashboards, reports, analytics
- Long TTLs (hours-days): Static content, configuration, reference data
3. Use Redis for Production
Redis provides:

- Cache persistence across restarts
- Sharing across multiple server instances
- Better memory management with eviction policies
4. Include Scoping in Inputs
Always include tenant IDs, user IDs, or other scoping fields in your tool inputs:
5. Use Sliding Windows for Hot Data
Enable `slideWindow` for frequently accessed data to keep it cached longer:

Cache Behavior Reference
| Behavior | Description |
|---|---|
| Key Derivation | Deterministic hash from validated input. Same input = same cache key |
| Cache Hits | Bypasses tool execution entirely, returns cached result instantly |
| Default TTL | 86400 seconds (1 day) if not specified |
| Sliding Window | Extends TTL on reads when enabled |
| Store Choice | Memory is node-local; Redis enables multi-instance sharing |
| Invalidation | Automatic after TTL expires, or manually by restarting (memory) |
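The sliding-window behavior described above can be illustrated with a minimal in-memory TTL cache. This is a sketch of the concept, not the plugin's implementation; timestamps are injected so the behavior is easy to follow.

```typescript
interface Entry<T> {
  value: T;
  expiresAt: number;
}

// Minimal in-memory TTL cache with an optional sliding window:
// when slideWindow is true, each successful read pushes the expiry forward.
class TtlCache<T> {
  private entries = new Map<string, Entry<T>>();

  constructor(private ttlMs: number, private slideWindow = false) {}

  set(key: string, value: T, now = Date.now()): void {
    this.entries.set(key, { value, expiresAt: now + this.ttlMs });
  }

  get(key: string, now = Date.now()): T | undefined {
    const entry = this.entries.get(key);
    if (!entry) return undefined;
    if (entry.expiresAt <= now) {
      this.entries.delete(key); // expired: evict lazily on read
      return undefined;
    }
    if (this.slideWindow) entry.expiresAt = now + this.ttlMs; // refresh TTL on read
    return entry.value;
  }
}
```

With a 1-second TTL and sliding enabled, an entry read every 900ms stays alive indefinitely, while an untouched entry expires after 1 second.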
Troubleshooting
No cache hits occurring
Possible causes:
- Tool missing `cache: true` in metadata
- Cache store offline or misconfigured
- Input varies slightly (whitespace, order of fields)

Solutions:

- Verify the `cache` field is set in tool metadata
- Check the Redis connection if using the Redis backend
- Ensure input structure is consistent
Stale data being returned
Possible causes:
- TTL too long for data freshness requirements
- Data changed but cache not invalidated
Solutions:

- Reduce TTL for the tool
- Consider input-based cache busting (include a timestamp or version in the input)
- Restart the server to clear the memory cache (or flush Redis)
Cache not shared across instances

The in-memory store is node-local; use the Redis store to share the cache across multiple server instances.
Need to invalidate specific cache entries
Solution:
- Currently, manual invalidation requires custom implementation
- For memory: restart the server
- For Redis: use Redis CLI to delete keys manually
- Consider shorter TTLs or input-based versioning instead
Complete Example
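The original example is not reproduced here; the sketch below combines the pieces from this page. Import path, class name, and several option names are assumptions, as flagged earlier.

```typescript
// Hypothetical end-to-end setup: Redis-backed cache plugin plus one cached tool.
import { CachePlugin } from "@frontmcp/cache";

const cache = new CachePlugin({
  store: { type: "redis" },
  redis: { host: "127.0.0.1", port: 6379 },
  defaultTTL: 300,            // fallback TTL for pattern-matched tools
  toolPatterns: ["github:*"], // cache remote tools by pattern
});

// A local tool opting in with its own TTL and a sliding window.
@Tool({
  name: "get-report",
  cache: { ttl: 1800, slideWindow: true },
})
async getReport(input: { tenantId: string; reportId: string }) {
  // tenantId in the input keeps cache entries isolated per tenant
}
```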
Links & Resources
Source Code
View the cache plugin source code
Demo Application
See caching in action with real examples
Plugin Guide
Learn more about FrontMCP plugins
Redis Documentation
Official Redis documentation