Skip to main content
The Scoring Gate adds semantic security analysis that detects attack patterns beyond what static AST validation can catch. It analyzes code intent and behavior patterns to identify potential threats.

What It Detects

  • Data exfiltration - List followed by send, or query followed by export sequences
  • Excessive access - High limits, wildcard queries
  • Fan-out attacks - Tool calls inside loops
  • Sensitive data access - Passwords, tokens, PII fields

Basic Configuration

import { Enclave } from '@enclave-vm/core';

// Rule-based scorer (~1ms latency, zero dependencies)
const enclave = new Enclave({
  scoringGate: {
    scorer: 'rule-based',
    blockThreshold: 70,     // Score >= 70 blocks execution
    warnThreshold: 40,      // Score >= 40 logs warning
    failOpen: true,         // Allow if scoring fails (default)
  },
});

Scorer Types

TypeLatencyDependenciesDetection
disabled0msNoneNone
rule-based~1msNoneGood
local-llm~5-10msModel downloadBetter
external-api~100msNetworkBest

Rule-Based Scorer

Fast, zero-dependency scoring using predefined rules:
const enclave = new Enclave({
  scoringGate: {
    scorer: 'rule-based',
    blockThreshold: 70,
    warnThreshold: 40,
  },
});

External API Scorer

Best detection using an external scoring service:
const enclave = new Enclave({
  scoringGate: {
    scorer: 'external-api',
    externalApi: {
      endpoint: 'https://api.example.com/score',
      apiKey: process.env.SCORING_API_KEY,
      timeoutMs: 5000,
      retries: 1,
    },
    blockThreshold: 70,
    warnThreshold: 40,
  },
});

Local LLM Scorer

Balance between speed and detection using a local model:
const enclave = new Enclave({
  scoringGate: {
    scorer: 'local-llm',
    localLlm: {
      modelPath: './models/security-scorer.onnx',
    },
    blockThreshold: 70,
    warnThreshold: 40,
  },
});

Similarity Mode with VectoriaDB

For pattern-matching against known malicious code patterns, use similarity mode with VectoriaDB:
import { Enclave } from '@enclave-vm/core';

const enclave = new Enclave({
  scoringGate: {
    scorer: 'local-llm',
    localLlm: {
      modelId: 'Xenova/all-MiniLM-L6-v2',
      mode: 'similarity',
      vectoriaConfig: {
        threshold: 0.85,    // Similarity threshold (0-1)
        topK: 5,            // Number of results to consider
        modelName: 'Xenova/all-MiniLM-L6-v2', // Optional: override embedding model
      },
    },
    blockThreshold: 70,
    warnThreshold: 40,
  },
});
VectoriaDB Configuration Options:
OptionTypeDefaultDescription
thresholdnumber0.85Similarity threshold (0-1) for considering a match
topKnumber5Maximum number of similar patterns to return
modelNamestringInherits from localLlm.modelIdEmbedding model for similarity computation
Similarity mode requires the optional vectoriadb peer dependency:
npm install vectoriadb
Similarity mode can operate without the Hugging Face transformers pipeline - it will use VectoriaDB for similarity search and fall back to heuristics when needed.

Detection Rules

The rule-based scorer evaluates these patterns:
RuleScoreDescription
SENSITIVE_FIELD35Queries password/token/secret fields
EXCESSIVE_LIMIT25limit > 10,000
WILDCARD_QUERY20query=”*” or filter=
LOOP_TOOL_CALL25callTool inside for/for-of loop
EXFIL_PATTERN50list followed by send or query followed by export sequence
EXTREME_VALUE30Numeric arg > 1,000,000
DYNAMIC_TOOL20Variable tool name (not static string)
BULK_OPERATION15Tool name contains bulk/batch/all
Scores are additive - a script triggering multiple rules accumulates points.

Thresholds

Configure how scores translate to actions:
const enclave = new Enclave({
  scoringGate: {
    scorer: 'rule-based',

    // Block execution if score >= 70
    blockThreshold: 70,

    // Log warning if score >= 40
    warnThreshold: 40,

    // If scoring fails (e.g., API timeout):
    // true = allow execution (fail open)
    // false = block execution (fail closed)
    failOpen: true,
  },
});

Custom Analyzer

Add custom analysis logic:
interface CustomAnalyzer {
  name: string;
  analyze: (code: string, ast: Node) => AnalysisResult;
}

const enclave = new Enclave({
  scoringGate: {
    scorer: 'rule-based',
    customAnalyzers: [
      {
        name: 'company-policy',
        analyze: (code, ast) => {
          let score = 0;
          const signals: string[] = [];

          // Check for company-specific patterns
          if (code.includes('internal:')) {
            score += 30;
            signals.push('INTERNAL_TOOL_ACCESS');
          }

          return { score, signals };
        },
      },
    ],
  },
});

Feature Extraction

The scorer extracts these features for analysis:
  • Tool names - All callTool() targets
  • Arguments - Numeric values, field names, patterns
  • Control flow - Loops containing tool calls
  • Data flow - Variables passed between tool calls
  • Sequences - Order of operations

Handling Scoring Results

const result = await enclave.run(code);

if (!result.success) {
  if (result.error?.code === 'SCORING_BLOCKED') {
    console.log('Blocked by scoring gate');
    console.log('Score:', result.error.data.score);
    console.log('Signals:', result.error.data.signals);
  }
}

Logging and Monitoring

const enclave = new Enclave({
  scoringGate: {
    scorer: 'rule-based',
    blockThreshold: 70,
    warnThreshold: 40,

    // Callback for all scoring results
    onScore: (result) => {
      metrics.recordScore(result.score);

      if (result.score >= 40) {
        logger.warn('Elevated risk score', {
          score: result.score,
          signals: result.signals,
        });
      }
    },
  },
});

Best Practices

  1. Start with warnings - Use warnThreshold to monitor before blocking
  2. Tune thresholds - Adjust based on your false positive rate
  3. Use fail-open cautiously - Only in non-critical paths
  4. Monitor signals - Track which rules trigger most often
  5. Layer with other defenses - Scoring complements AST validation

Breaking Changes

v2.x: VectoriaConfigForScoring API Changes

Removed: indexPath option The indexPath option has been removed from VectoriaConfigForScoring. This option was intended to load pre-built malicious pattern indexes, but VectoriaDB v2.x handles persistence differently using storage adapters. Migration:
// Before (v1.x) - No longer supported
const config = {
  vectoriaConfig: {
    indexPath: '/path/to/malicious-patterns.index', // REMOVED
    threshold: 0.85,
  },
};

// After (v2.x)
const config = {
  vectoriaConfig: {
    threshold: 0.85,
    topK: 5,
    modelName: 'Xenova/all-MiniLM-L6-v2',
  },
};
If you were using indexPath to load pre-built indexes, you’ll need to handle persistence externally using VectoriaDB’s storage adapter APIs (saveToStorage(), MemoryStorageAdapter, FileStorageAdapter, or RedisStorageAdapter). New options in v2.x:
  • topK - Control how many similar patterns to consider (default: 5)
  • modelName - Override the embedding model (defaults to localLlm.modelId)