Workflows are DAG-based multi-step pipelines that compose Jobs with dependencies, conditional execution, and parallel step processing. They provide orchestration for complex operations that span multiple jobs.
Workflows execute jobs in dependency order with automatic cycle detection, parallel batching, and per-step error handling.

Why Workflows?

Workflows handle orchestration that individual jobs cannot:
| Aspect | Job | Workflow | Skill |
| --- | --- | --- | --- |
| Purpose | Single unit of work | Multi-step pipeline | AI instruction guide |
| Composition | Standalone | Chains multiple jobs | References tools |
| Dependencies | None | DAG with dependsOn | None |
| Parallelism | Single execution | Bounded concurrent steps | None |
| Triggers | On-demand | Manual, webhook, or event | On-demand |
Workflows are ideal for:
  • Multi-step data pipelines — extract, transform, load sequences
  • Approval chains — sequential steps with conditional logic
  • Parallel processing — fan-out/fan-in patterns with bounded concurrency
  • Event-driven automation — webhook-triggered multi-job flows

Creating Workflows

Workflows are defined declaratively using the @Workflow decorator with a steps array:
import { Workflow } from '@frontmcp/sdk';

@Workflow({
  name: 'greet-and-analyze',
  description: 'Greet a user then analyze the greeting',
  trigger: 'manual',
  steps: [
    {
      id: 'greet',
      jobName: 'greet',
      input: { name: 'World', formal: false },
    },
    {
      id: 'analyze',
      jobName: 'analyze-text',
      dependsOn: ['greet'],
      input: (steps) => ({
        text: steps.get('greet').outputs.message,
        language: 'en',
      }),
    },
  ],
})
class GreetAndAnalyzeWorkflow {}

Function Style

import { workflow } from '@frontmcp/sdk';

const GreetAndAnalyzeWorkflow = workflow({
  name: 'greet-and-analyze',
  description: 'Greet a user then analyze the greeting',
  steps: [
    {
      id: 'greet',
      jobName: 'greet',
      input: { name: 'World' },
    },
    {
      id: 'analyze',
      jobName: 'analyze-text',
      dependsOn: ['greet'],
      input: (steps) => ({
        text: steps.get('greet').outputs.message,
      }),
    },
  ],
});

Registering Workflows

Add workflows to your app via the workflows array:
import { App } from '@frontmcp/sdk';

@App({
  id: 'text-processing',
  name: 'Text Processing',
  jobs: [GreetJob, AnalyzeTextJob],
  workflows: [GreetAndAnalyzeWorkflow],
})
class TextProcessingApp {}
Jobs referenced by workflow steps must be registered in the same app or available in the scope’s job registry.

Workflow Steps

Each step defines a job to execute and how it connects to other steps:
interface WorkflowStep {
  id: string;           // Unique step identifier
  jobName: string;      // Reference to a registered job
  dependsOn?: string[]; // Step IDs that must complete first
  input?: Record<string, unknown> | ((steps) => Record<string, unknown>);
  condition?: (steps) => boolean;
  continueOnError?: boolean;
  timeout?: number;     // Per-step timeout override (ms)
  retry?: JobRetryConfig; // Per-step retry override
}
| Field | Type | Default | Description |
| --- | --- | --- | --- |
| id | string |  | Required. Unique step identifier |
| jobName | string |  | Required. Name of the job to execute |
| dependsOn | string[] | [] | Step IDs that must complete before this step runs |
| input | object \| function | Workflow input | Static input or dynamic callback from previous step outputs |
| condition | function |  | Skip step if returns false |
| continueOnError | boolean | false | Continue workflow if this step fails |
| timeout | number | Job default | Per-step timeout override in ms |
| retry | JobRetryConfig | Job default | Per-step retry override |

Step Dependencies

Steps declare dependencies using dependsOn. The engine validates the DAG before execution:
  • Duplicate ID detection — no two steps can share an ID
  • Missing reference detection — dependsOn must reference existing step IDs
  • Cycle detection — DFS-based cycle detection prevents infinite loops
steps: [
  { id: 'fetch', jobName: 'fetch-data' },
  { id: 'validate', jobName: 'validate-data', dependsOn: ['fetch'] },
  { id: 'transform', jobName: 'transform-data', dependsOn: ['validate'] },
  { id: 'load', jobName: 'load-data', dependsOn: ['transform'] },
]
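The three validation checks above can be sketched as a standalone function. This is an illustrative reconstruction of what DAG validation might look like, not the engine's actual implementation:

```typescript
// Minimal sketch of duplicate-ID, missing-reference, and DFS cycle checks.
interface StepDef {
  id: string;
  dependsOn?: string[];
}

function validateDag(steps: StepDef[]): void {
  // 1. Duplicate ID detection
  const ids = new Set<string>();
  for (const s of steps) {
    if (ids.has(s.id)) throw new Error(`duplicate step id: ${s.id}`);
    ids.add(s.id);
  }

  // 2. Missing reference detection
  for (const s of steps) {
    for (const dep of s.dependsOn ?? []) {
      if (!ids.has(dep)) throw new Error(`step "${s.id}" depends on unknown step: ${dep}`);
    }
  }

  // 3. Cycle detection via depth-first search with visit-state marking
  const deps = new Map(steps.map((s) => [s.id, s.dependsOn ?? []]));
  const state = new Map<string, 'visiting' | 'done'>();
  const visit = (id: string): void => {
    if (state.get(id) === 'done') return;
    if (state.get(id) === 'visiting') throw new Error(`cycle detected at step: ${id}`);
    state.set(id, 'visiting');
    for (const dep of deps.get(id)!) visit(dep);
    state.set(id, 'done');
  };
  for (const s of steps) visit(s.id);
}
```

A workflow such as the fetch → validate → transform → load pipeline above passes all three checks; adding `dependsOn: ['load']` to the `fetch` step would trip the cycle detector.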

Dynamic Input

Steps can compute their input dynamically based on previous step outputs:
steps: [
  {
    id: 'fetch-users',
    jobName: 'fetch-users',
    input: { limit: 100 },
  },
  {
    id: 'enrich',
    jobName: 'enrich-users',
    dependsOn: ['fetch-users'],
    input: (steps) => ({
      users: steps.get('fetch-users').outputs.users,
      source: 'crm',
    }),
  },
]
The steps context provides a get(stepId) method that returns WorkflowStepResult:
interface WorkflowStepResult {
  outputs: Record<string, unknown>;
  state: 'completed' | 'failed' | 'skipped';
}
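Conceptually, the context is a lookup over results recorded so far. The following sketch (illustrative only; the SDK constructs this context for you) shows how a dynamic input callback reads prior outputs:

```typescript
// Illustrative: a result map behind the `steps.get(stepId)` context.
interface WorkflowStepResult {
  outputs: Record<string, unknown>;
  state: 'completed' | 'failed' | 'skipped';
}

class StepContext {
  constructor(private results: Map<string, WorkflowStepResult>) {}

  get(stepId: string): WorkflowStepResult {
    const result = this.results.get(stepId);
    if (!result) throw new Error(`no result recorded for step: ${stepId}`);
    return result;
  }
}

// Simulate the engine having completed 'fetch-users'
const ctx = new StepContext(
  new Map([
    ['fetch-users', { outputs: { users: ['ada', 'grace'] }, state: 'completed' as const }],
  ]),
);

// The dynamic input callback from the example above
const enrichInput = (steps: StepContext) => ({
  users: steps.get('fetch-users').outputs.users,
  source: 'crm',
});
```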

Conditional Steps

Steps can be conditionally skipped based on previous step results:
steps: [
  {
    id: 'check-eligibility',
    jobName: 'check-eligibility',
  },
  {
    id: 'send-notification',
    jobName: 'send-notification',
    dependsOn: ['check-eligibility'],
    condition: (steps) =>
      steps.get('check-eligibility').outputs.eligible === true,
    input: (steps) => ({
      userId: steps.get('check-eligibility').outputs.userId,
    }),
  },
]
When a condition returns false, the step is marked as skipped and downstream steps that depend on it will still execute (skipped steps are treated as completed for dependency resolution).
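Because a skipped step's outputs are empty, downstream dynamic inputs that read them should branch on state first. A hedged sketch, assuming a hypothetical `report` step downstream of the possibly-skipped `send-notification` step:

```typescript
// Illustrative: guard against reading outputs of a skipped upstream step.
interface WorkflowStepResult {
  outputs: Record<string, unknown>;
  state: 'completed' | 'failed' | 'skipped';
}
type StepContext = { get(stepId: string): WorkflowStepResult };

const reportInput = (steps: StepContext) => {
  const notify = steps.get('send-notification');
  return {
    notified: notify.state === 'completed',
    // outputs are empty when the step was skipped, so guard before reading
    messageId: notify.state === 'completed' ? notify.outputs.messageId : null,
  };
};
```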

Parallel Execution

Steps with no mutual dependencies execute in parallel, bounded by maxConcurrency:
@Workflow({
  name: 'parallel-analysis',
  maxConcurrency: 3, // Run up to 3 steps concurrently
  steps: [
    { id: 'fetch', jobName: 'fetch-data' },
    // These three run in parallel after 'fetch' completes
    { id: 'analyze-sentiment', jobName: 'sentiment', dependsOn: ['fetch'] },
    { id: 'analyze-entities', jobName: 'entities', dependsOn: ['fetch'] },
    { id: 'analyze-topics', jobName: 'topics', dependsOn: ['fetch'] },
    // This runs after all three analyses complete
    {
      id: 'aggregate',
      jobName: 'aggregate-results',
      dependsOn: ['analyze-sentiment', 'analyze-entities', 'analyze-topics'],
    },
  ],
})
class ParallelAnalysisWorkflow {}
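The scheduling pattern can be sketched as level-by-level execution with a bounded worker pool. This is a simplified model of the behavior described above, not the engine's actual scheduler:

```typescript
// Illustrative: run ready steps in batches, at most `maxConcurrency` at a time.
interface StepDef {
  id: string;
  dependsOn?: string[];
  run: () => Promise<void>;
}

async function runWithLimit(tasks: Array<() => Promise<void>>, limit: number): Promise<void> {
  const queue = [...tasks];
  const workers = Array.from({ length: Math.min(limit, queue.length) }, async () => {
    while (queue.length > 0) {
      const task = queue.shift()!;
      await task();
    }
  });
  await Promise.all(workers);
}

async function executeSteps(steps: StepDef[], maxConcurrency = 5): Promise<string[]> {
  const done = new Set<string>();
  const order: string[] = [];
  let remaining = [...steps];
  while (remaining.length > 0) {
    // A step is ready once every dependency has completed
    const ready = remaining.filter((s) => (s.dependsOn ?? []).every((d) => done.has(d)));
    if (ready.length === 0) throw new Error('unresolvable dependencies (cycle?)');
    await runWithLimit(
      ready.map((s) => async () => {
        await s.run();
        order.push(s.id);
        done.add(s.id);
      }),
      maxConcurrency,
    );
    remaining = remaining.filter((s) => !done.has(s.id));
  }
  return order;
}
```

With the parallel-analysis shape above, `fetch` runs alone, the three analyses share one batch (capped at three concurrent), and `aggregate` runs last.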

Configuration

| Field | Type | Default | Description |
| --- | --- | --- | --- |
| name | string |  | Required. Unique workflow identifier |
| description | string |  | Human-readable description |
| steps | WorkflowStep[] |  | Required. Ordered step definitions (min 1) |
| trigger | 'manual' \| 'webhook' \| 'event' | 'manual' | How the workflow is triggered |
| webhook | WorkflowWebhookConfig |  | Webhook configuration (when trigger is 'webhook') |
| timeout | number | 600000 | Maximum total execution time in ms (10 min) |
| maxConcurrency | number | 5 | Maximum parallel step concurrency |
| id | string | name | Stable identifier for tracking |
| tags | string[] |  | Categorization tags |
| labels | Record<string, string> |  | Fine-grained key-value labels |
| hideFromDiscovery | boolean | false | Hide from list-workflows |
| permissions | JobPermission[] |  | RBAC permission rules |
| inputSchema | ZodShape |  | Workflow-level input schema |
| outputSchema | ZodShape |  | Workflow-level output schema |

Webhook Configuration

interface WorkflowWebhookConfig {
  path?: string;          // Custom path (default: /workflows/webhook/{name})
  secret?: string;        // Webhook secret for validation
  methods?: ('GET' | 'POST')[]; // Allowed HTTP methods (default: ['POST'])
}
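The page doesn't specify how the secret is checked, so treat the following as a hypothetical sketch of one common pattern (HMAC-SHA256 over the raw body, compared in constant time); the header name, algorithm, and encoding are assumptions, not the SDK's documented scheme:

```typescript
import { createHmac, timingSafeEqual } from 'node:crypto';

// Hypothetical: verify a hex-encoded HMAC-SHA256 signature of the raw body.
function verifySignature(secret: string, rawBody: string, signatureHex: string): boolean {
  const expected = createHmac('sha256', secret).update(rawBody).digest();
  const received = Buffer.from(signatureHex, 'hex');
  // Length check first: timingSafeEqual throws on mismatched lengths
  return received.length === expected.length && timingSafeEqual(received, expected);
}
```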

Triggers

| Trigger | Description |
| --- | --- |
| manual | Triggered via execute-workflow tool or DirectClient |
| webhook | Triggered by HTTP webhook with optional secret validation |
| event | Triggered by internal events |

Execution Flow

  1. Validate the DAG — duplicate IDs, missing dependsOn references, and cycles are rejected before any step runs
  2. Batch ready steps — steps whose dependencies have all completed (or been skipped) form the next batch
  3. Execute in parallel — each batch runs concurrently, bounded by maxConcurrency
  4. Evaluate conditions — a step whose condition returns false is marked as skipped
  5. Record results — per-step outputs and state are stored until all steps finish or the workflow timeout elapses

Error Handling

When a step fails:
  1. Default behavior — the step is marked as failed, and downstream dependents are skipped
  2. With continueOnError: true — the step is marked as failed but treated as completed for dependency resolution, allowing downstream steps to proceed
steps: [
  {
    id: 'optional-enrichment',
    jobName: 'enrich-data',
    continueOnError: true, // Workflow continues even if this fails
  },
  {
    id: 'save',
    jobName: 'save-results',
    dependsOn: ['optional-enrichment'],
    // This will run even if enrichment failed
  },
]
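The dependency-resolution rule can be stated in a few lines. An illustrative sketch of the decision described above (skipped counts as completed; failed blocks dependents unless the failing step set continueOnError):

```typescript
type StepState = 'completed' | 'failed' | 'skipped';

interface FinishedStep {
  state: StepState;
  continueOnError: boolean;
}

// A dependent step may run only if every dependency satisfies this check.
function dependencySatisfied(dep: FinishedStep): boolean {
  if (dep.state === 'failed') return dep.continueOnError;
  return true; // completed and skipped both satisfy dependents
}
```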

MCP Tools

When workflows are enabled, the following MCP tools are automatically registered:
| Tool | Description |
| --- | --- |
| list-workflows | List registered workflows with optional filtering |
| execute-workflow | Execute a workflow (inline or background) |
| get-workflow-status | Get execution status with per-step results |
| register-workflow | Register a dynamic workflow at runtime |
| remove-workflow | Remove a dynamic workflow |

Best Practices

Do:
  • Keep steps focused on a single job each
  • Use dependsOn to make data flow explicit
  • Set continueOnError: true for non-critical steps
  • Use dynamic input functions to pass data between steps
  • Set appropriate maxConcurrency based on resource constraints
Don’t:
  • Create deeply nested dependency chains when parallel execution is possible
  • Skip DAG validation by referencing non-existent step IDs
  • Set maxConcurrency too high for resource-intensive jobs
  • Use workflows for single-step operations (use jobs directly)

Next Steps

Jobs

Define the jobs that workflows orchestrate

@Workflow

Decorator reference

WorkflowRegistry

Registry API reference

DirectClient

Programmatic workflow execution