DeployStack Satellite is an edge worker service that manages MCP servers with dual deployment support: HTTP proxy for external endpoints and stdio subprocess for local MCP servers. This document covers both the current MCP transport implementation and the planned full architecture.

Technical Overview

Edge Worker Pattern

Satellites operate as edge workers similar to GitHub Actions runners, providing:
  • MCP Transport Protocols: SSE, Streamable HTTP, Direct HTTP communication
  • Dual MCP Server Management: HTTP proxy + stdio subprocess support (ready for implementation)
  • Team Isolation: nsjail sandboxing with built-in resource limits (ready for implementation)
  • OAuth 2.1 Resource Server: Token introspection via the Backend
  • Backend Polling Communication: Outbound-only, firewall-friendly
  • Real-Time Event System: Immediate satellite → backend event emission with automatic batching
  • Process Lifecycle Management: Spawn, monitor, terminate MCP servers (ready for implementation)
  • Background Jobs System: Cron-like recurring tasks with automatic error handling

Current Implementation Architecture

MCP SDK Transport Layer

The satellite uses the official @modelcontextprotocol/sdk for all MCP client communication:
┌─────────────────────────────────────────────────────────────────────────────────┐
│                        Official MCP SDK Implementation                         │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                        MCP SDK Server                                   │   │
│  │                                                                         │   │
│  │  • StreamableHTTPServerTransport    • Standard JSON-RPC handling       │   │
│  │  • Automatic session management     • Built-in error responses         │   │
│  │  • Protocol 2025-03-26 compliance   • SSE streaming support            │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                     MCP Client Integration                              │   │
│  │                                                                         │   │
│  │  • StreamableHTTPClientTransport    • External server discovery        │   │
│  │  • Automatic connection cleanup     • Tool discovery caching           │   │
│  │  • Standard MCP method support      • Process communication            │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                    Foundation Infrastructure                            │   │
│  │                                                                         │   │
│  │  • Fastify HTTP Server with JSON Schema validation                     │   │
│  │  • Pino structured logging with operation tracking                     │   │
│  │  • TypeScript + Webpack build system                                   │   │
│  │  • Environment configuration with .env support                        │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────────┘
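A minimal wiring sketch of these layers, assuming the SDK's documented Server and StreamableHTTPServerTransport APIs (the route handler below is illustrative, not the satellite's actual source):
import Fastify from 'fastify';
import { randomUUID } from 'node:crypto';
import { Server } from '@modelcontextprotocol/sdk/server/index.js';
import { StreamableHTTPServerTransport } from '@modelcontextprotocol/sdk/server/streamableHttp.js';

const app = Fastify({ logger: true });
const server = new Server(
  { name: 'deploystack-satellite', version: '0.1.0' },
  { capabilities: { tools: {} } }
);

// One transport shown for brevity; a real deployment keeps one transport
// per session, keyed by the Mcp-Session-Id header.
const transport = new StreamableHTTPServerTransport({
  sessionIdGenerator: () => randomUUID(),
});
await server.connect(transport);

// Hand every /mcp request to the SDK: POST carries JSON-RPC, GET opens the
// SSE stream, DELETE terminates the session.
app.all('/mcp', async (request, reply) => {
  reply.hijack(); // let the SDK write directly to the raw response
  await transport.handleRequest(request.raw, reply.raw, request.body);
});

await app.listen({ port: 3001 });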

MCP Transport Endpoints

Active Endpoints:
  • GET /mcp - Establish SSE stream via MCP SDK
  • POST /mcp - Send JSON-RPC messages via MCP SDK
  • DELETE /mcp - Session termination via MCP SDK
Transport Protocol Support:
MCP Client                    Satellite (MCP SDK)
    │                            │
    │──── POST /mcp ────────────▶│  (Initialize connection)
    │                            │
    │◀─── Session headers ───────│  (Session established)
    │                            │
    │──── POST /mcp ────────────▶│  (JSON-RPC tools/list)
    │                            │
    │◀─── 2 meta-tools ──────────│  (Hierarchical router)

Core SDK Components

MCP Server Wrapper:
  • Official SDK Server integration with Fastify
  • Standard MCP protocol method handlers
  • Automatic session and transport management
  • Integration with existing tool discovery and process management
Client Communication:
  • StreamableHTTPClientTransport for external server communication
  • Automatic connection establishment and cleanup
  • Standard MCP method execution (listTools, callTool)
  • Built-in error handling and retry logic
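A minimal client-side sketch using the SDK's documented API (the endpoint URL is a placeholder):
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StreamableHTTPClientTransport } from '@modelcontextprotocol/sdk/client/streamableHttp.js';

const client = new Client({ name: 'satellite-proxy', version: '0.1.0' });
const transport = new StreamableHTTPClientTransport(
  new URL('https://example.com/mcp') // external MCP server endpoint
);

await client.connect(transport);            // initialize handshake
const { tools } = await client.listTools(); // discovery (cached by the satellite)
const result = await client.callTool({ name: tools[0].name, arguments: {} });
await client.close();                       // automatic connection cleanup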

MCP Protocol Implementation

Supported MCP Methods:
  • initialize - MCP session initialization (SDK automatic)
  • notifications/initialized - Client initialization complete
  • tools/list - List available meta-tools (hierarchical router: 2 tools only)
  • tools/call - Execute meta-tools or route to actual MCP servers
  • resources/list - List available resources (returns empty array)
  • resources/templates/list - List resource templates (returns empty array)
  • prompts/list - List available prompts (returns empty array)
Hierarchical Router: The satellite exposes only 2 meta-tools to MCP clients (discover_mcp_tools and execute_mcp_tool) instead of all available tools. This solves the MCP context window consumption problem by reducing token usage by 95%+. See Hierarchical Router Implementation for details.
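Conceptually, a tools/list response therefore carries just two entries. The sketch below abbreviates the input schemas; see Hierarchical Router Implementation for the real definitions:
const toolsListResult = {
  tools: [
    {
      name: 'discover_mcp_tools',
      description: 'Search and list tools available on installed MCP servers',
      inputSchema: { type: 'object', properties: {} }, // filter params elided
    },
    {
      name: 'execute_mcp_tool',
      description: 'Execute a named tool on a specific MCP server',
      inputSchema: { type: 'object', properties: {} }, // tool name + arguments elided
    },
  ],
};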
For detailed information about internal tool discovery and caching, see Tool Discovery Implementation.
Error Handling:
  • Standard JSON-RPC 2.0 compliant error responses via SDK
  • Automatic HTTP status code mapping
  • Structured error logging with operation tracking
  • Built-in session validation and error reporting

Planned Full Architecture

Three-Tier System Design

┌─────────────────────────────────────────────────────────────────────────────────┐
│                        MCP Client Layer                                        │
│                     (VS Code, Claude, etc.)                                    │
│                                                                                 │
│  Connects via: SSE, Streamable HTTP, Direct HTTP Tools                        │
└─────────────────────────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────────────────────────┐
│                      Satellite Layer                                           │
│                   (Edge Processing)                                            │
│                                                                                 │
│  ┌─────────────────────────────────────────┐                                   │
│  │        Global Satellite                 │                                   │
│  │  (Operated by DeployStack Team)         │                                   │
│  │      (Serves All Teams)                 │                                   │
│  └─────────────────────────────────────────┘                                   │
│                                                                                 │
│  ┌─────────────────────────────────────────┐                                   │
│  │        Team Satellite                   │                                   │
│  │   (Customer-Deployed)                   │                                   │
│  │   (Serves Single Team)                  │                                   │
│  └─────────────────────────────────────────┘                                   │
└─────────────────────────────────────────────────────────────────────────────────┘


┌─────────────────────────────────────────────────────────────────────────────────┐
│                       Backend Layer                                            │
│                  (Central Management)                                          │
│                                                                                 │
│  ┌─────────────────────────────────────────────────────────────────────────┐   │
│  │                    DeployStack Backend                                  │   │
│  │                  (cloud.deploystack.io)                                │   │
│  │                                                                         │   │
│  │  • Command orchestration    • Configuration management                 │   │
│  │  • Status monitoring        • Team & role management                   │   │
│  │  • Usage analytics          • Security & compliance                    │   │
│  └─────────────────────────────────────────────────────────────────────────┘   │
└─────────────────────────────────────────────────────────────────────────────────┘

Satellite Internal Architecture (Planned)

Each satellite instance will contain five core components:
┌─────────────────────────────────────────────────────────────────┐
│                    Satellite Instance                           │
│                                                                 │
│  ┌─────────────────┐    ┌─────────────────┐                   │
│  │  HTTP Proxy     │    │  MCP Server     │                   │
│  │    Router       │    │    Manager      │                   │
│  │                 │    │                 │                   │
│  │ • Team-aware    │    │ • Process       │                   │
│  │ • OAuth 2.1     │    │   Lifecycle     │                   │
│  │ • Load Balance  │    │ • stdio Comm    │                   │
│  └─────────────────┘    └─────────────────┘                   │
│                                                                 │
│  ┌─────────────────┐    ┌─────────────────┐                   │
│  │  Team Resource  │    │   Backend       │                   │
│  │    Manager      │    │ Communicator    │                   │
│  │                 │    │                 │                   │
│  │ • Namespaces    │    │ • HTTP Polling  │                   │
│  │ • cgroups       │    │ • Config Sync   │                   │
│  │ • Isolation     │    │ • Status Report │                   │
│  └─────────────────┘    └─────────────────┘                   │
│                                                                 │
│  ┌─────────────────────────────────────────┐                   │
│  │        Communication Manager            │                   │
│  │                                         │                   │
│  │ • JSON-RPC stdio    • HTTP Proxy       │                   │
│  │ • Process IPC       • Client Routing   │                   │
│  └─────────────────────────────────────────┘                   │
└─────────────────────────────────────────────────────────────────┘

Deployment Models

Global Satellites

Operated by DeployStack Team:
  • Infrastructure: Cloud-hosted (AWS, GCP, Azure)
  • Scope: Serve all teams with resource isolation
  • Scaling: Auto-scaling based on demand
  • Management: Centralized by DeployStack operations
  • Use Case: Teams wanting shared infrastructure
Architecture Benefits:
  • Zero Installation: URL-based configuration
  • Instant Availability: No setup or deployment required
  • Automatic Updates: Invisible to users
  • Global Scale: Multi-region deployment

Team Satellites

Customer-Deployed:
  • Infrastructure: Customer’s corporate networks
  • Scope: Single team exclusive access
  • Scaling: Customer-controlled resources
  • Management: Team administrators
  • Use Case: Internal resource access, compliance requirements
Architecture Benefits:
  • Internal Access: Company databases, APIs, file systems
  • Data Sovereignty: Data never leaves corporate network
  • Complete Control: Customer owns infrastructure
  • Compliance Ready: Meets enterprise security requirements

Communication Patterns

Client-to-Satellite Communication

Multiple Transport Protocols:
  • SSE (Server-Sent Events): Real-time streaming with session management
  • Streamable HTTP: Chunked responses with optional sessions
  • Direct HTTP Tools: Standard REST API calls
Current Implementation:
MCP Client                    Satellite
    │                            │
    │──── GET /sse ─────────────▶│  (Establish SSE connection)
    │                            │
    │◀─── event: endpoint ───────│  (Session URL + heartbeat)
    │                            │
    │──── POST /message ────────▶│  (JSON-RPC via session)
    │                            │
    │◀─── Response via SSE ──────│  (Stream JSON-RPC response)
Session Management:
  • Session ID: 32-byte cryptographically secure identifier
  • Timeout: 30-minute automatic cleanup
  • Activity Tracking: Updated on each message
  • State Management: Client info and initialization status
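A sketch of this session scheme; the constants match the specifications above, while the map-based store and names are illustrative:
import { randomBytes } from 'node:crypto';

interface SessionState {
  initialized: boolean; // set once the client completes initialization
  lastActivity: number; // updated on each message (activity tracking)
}

const SESSION_TIMEOUT_MS = 30 * 60 * 1000; // 30-minute automatic cleanup
const sessions = new Map<string, SessionState>();

function createSession(): string {
  const id = randomBytes(32).toString('base64url'); // 32-byte secure identifier
  sessions.set(id, { initialized: false, lastActivity: Date.now() });
  return id;
}

// Sweep idle sessions periodically
setInterval(() => {
  const now = Date.now();
  for (const [id, state] of sessions) {
    if (now - state.lastActivity > SESSION_TIMEOUT_MS) sessions.delete(id);
  }
}, 60_000);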

Satellite-to-Backend Communication

HTTP Polling Pattern:
Satellite                                      Backend
    │                                              │
    │──── GET /api/satellites/{id}/commands ─────▶│  (Poll for commands)
    │                                              │
    │◀─── Commands Response ──────────────────────│  (Configuration, tasks)
    │                                              │
    │──── POST /api/satellites/{id}/heartbeat ───▶│  (Report status, metrics)
    │                                              │
    │◀─── Acknowledgment ─────────────────────────│  (Confirm receipt)
Communication Features:
  • Outbound Only: Firewall-friendly
  • Priority-Based Polling: Four modes (immediate/high/normal/slow) with automatic transitions
  • Command Queue: Priority-based task processing with expiration and correlation IDs
  • Status Reporting: Real-time health and metrics every 30 seconds
  • Configuration Sync: Dynamic MCP server configuration updates
  • Error Recovery: Exponential backoff with maximum 5-minute intervals
  • 3-Second Response Time: Immediate priority commands enable near real-time responses
For complete implementation details, see Backend Polling Implementation.
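A condensed sketch of that loop; the endpoint paths, 3-second cadence, and 5-minute backoff cap come from the list above, while the fetch-based client and helper names are illustrative:
function sleep(ms: number): Promise<void> {
  return new Promise((resolve) => setTimeout(resolve, ms));
}

const BACKEND_URL = process.env.BACKEND_URL ?? 'https://cloud.deploystack.io';
const SATELLITE_ID = process.env.SATELLITE_ID ?? 'sat-local';

async function pollOnce(): Promise<void> {
  const res = await fetch(`${BACKEND_URL}/api/satellites/${SATELLITE_ID}/commands`);
  if (!res.ok) throw new Error(`poll failed: ${res.status}`);
  const commands = await res.json();
  // ...dispatch commands by priority, honoring expiration and correlation IDs...
}

async function pollLoop(): Promise<void> {
  let backoffMs = 1_000;
  for (;;) {
    try {
      await pollOnce();
      backoffMs = 1_000;  // healthy: reset backoff
      await sleep(3_000); // immediate-priority cadence (other modes poll slower)
    } catch {
      await sleep(backoffMs);                       // error recovery
      backoffMs = Math.min(backoffMs * 2, 300_000); // exponential, 5-minute cap
    }
  }
}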

Real-Time Event System

The satellite emits typed events for status changes, logs, and tool metadata. Events enable real-time monitoring without polling.
Difference from Heartbeat:
  • Heartbeat (every 30s): Aggregate metrics, system health, resource usage
  • Events (immediate): Point-in-time status updates, precise timestamps
See Event Emission for complete event types, payloads, and batching configuration.
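A sketch of the emit-and-batch pattern; the event shape, endpoint path, and flush window are assumptions, so see Event Emission for the real configuration:
interface SatelliteEvent {
  type: string;      // one of the typed event kinds (status, log, tool metadata)
  timestamp: string; // precise point-in-time, unlike heartbeat aggregates
  payload: unknown;
}

const buffer: SatelliteEvent[] = [];
let flushTimer: ReturnType<typeof setTimeout> | undefined;

export function emitEvent(event: SatelliteEvent): void {
  buffer.push(event);
  if (flushTimer) return;
  flushTimer = setTimeout(async () => {
    flushTimer = undefined;
    const batch = buffer.splice(0, buffer.length);
    await fetch(`${process.env.BACKEND_URL}/api/satellites/${process.env.SATELLITE_ID}/events`, {
      method: 'POST',
      headers: { 'content-type': 'application/json' },
      body: JSON.stringify({ events: batch }),
    });
  }, 100); // a short flush window keeps emission effectively immediate
}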

Status Tracking System

The satellite tracks MCP server installation health through an 11-state status system that drives tool availability and automatic recovery.
Status Values:
  • Installation lifecycle: provisioning, command_received, connecting, discovering_tools, syncing_tools
  • Healthy state: online (tools available)
  • Configuration changes: restarting
  • Failure states: offline, error, requires_reauth, permanently_failed
Status Integration:
  • Tool Filtering: Tools from non-online servers hidden from discovery
  • Auto-Recovery: Offline servers auto-recover when responsive
  • Event Emission: Status changes emitted immediately to backend
See Status Tracking for complete status lifecycle and transitions.
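The status set maps naturally onto a TypeScript union. The type below uses exactly the values listed above; the predicate name is illustrative:
type InstallationStatus =
  | 'provisioning' | 'command_received' | 'connecting'
  | 'discovering_tools' | 'syncing_tools'                           // installation lifecycle
  | 'online'                                                        // healthy: tools available
  | 'restarting'                                                    // configuration changes
  | 'offline' | 'error' | 'requires_reauth' | 'permanently_failed'; // failure states

function toolsVisible(status: InstallationStatus): boolean {
  return status === 'online'; // tools from non-online servers are hidden from discovery
}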

Log Capture System

The satellite captures and batches two types of logs for debugging and monitoring: server logs (stderr output) and request logs (tool execution with full request/response data). See Log Capture for buffering implementation, batching configuration, backend storage limits, and privacy controls.

Security Architecture

Current Security (No Authentication)

Session-Based Isolation:
  • Cryptographic Session IDs: 32-byte secure identifiers
  • Session Timeout: 30-minute automatic cleanup
  • Activity Tracking: Prevents session hijacking
  • Error Handling: Secure error responses

Planned Security Features

Team Isolation:
  • Linux Namespaces: PID, network, filesystem isolation
  • Process Groups: Separate process trees per team
  • User Isolation: Dedicated system users per team
Resource Management:
  • cgroups v2: CPU and memory limits
  • Resource Quotas: 0.1 CPU cores, 100MB RAM per process
  • Automatic Cleanup: 5-minute idle timeout
Authentication & Authorization:
  • OAuth 2.1 Resource Server: Backend token validation
  • Scope-Based Access: Fine-grained permissions
  • Team Context: Automatic team resolution from tokens
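For the resource limits, the standard cgroup v2 interface files express the planned quotas directly. The sketch below is illustrative only (the base path and helper name are assumptions), not the satellite's implementation:
import { mkdir, writeFile } from 'node:fs/promises';

async function applyTeamQuota(teamId: string, pid: number): Promise<void> {
  const cg = `/sys/fs/cgroup/deploystack/${teamId}`; // base path is an assumption
  await mkdir(cg, { recursive: true });
  // cpu.max = "<quota_us> <period_us>": 10ms per 100ms period = 0.1 cores
  await writeFile(`${cg}/cpu.max`, '10000 100000');
  // memory.max in bytes: 100MB hard limit per the quotas above
  await writeFile(`${cg}/memory.max`, String(100 * 1024 * 1024));
  // Moving the PID into the cgroup applies both limits to the MCP server process
  await writeFile(`${cg}/cgroup.procs`, String(pid));
}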

MCP Server Management

Dual MCP Server Support

stdio Subprocess Servers:
  • Local Execution: MCP servers as Node.js child processes (sketched below)
  • JSON-RPC Communication: Full MCP protocol 2024-11-05 over stdin/stdout
  • Process Lifecycle: Spawn, monitor, auto-restart (max 3 attempts), terminate
  • Team Isolation: Processes tracked by team_id with environment-based security
  • Tool Discovery: Automatic tool caching with namespacing
  • Resource Limits: nsjail in production (100MB RAM, 60s CPU, 50 processes)
  • Development Mode: Plain spawn() on all platforms for easy debugging
HTTP Proxy Servers:
  • External Endpoints: Proxy to remote MCP servers
  • Load Balancing: Distribute requests across instances
  • Health Monitoring: Endpoint availability checks
  • Tool Discovery: Automatic at startup from remote endpoints
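A stdio-side sketch using the SDK's StdioClientTransport; the command, arguments, and env are placeholders, and in production the child would run under nsjail as described above:
import { Client } from '@modelcontextprotocol/sdk/client/index.js';
import { StdioClientTransport } from '@modelcontextprotocol/sdk/client/stdio.js';

const transport = new StdioClientTransport({
  command: 'npx',
  args: ['@modelcontextprotocol/server-filesystem', '/tmp'],
  env: { TEAM_ID: 'team-123' }, // environment-based team context (illustrative)
});

const client = new Client({ name: 'satellite-stdio', version: '0.1.0' });
await client.connect(transport); // spawns the child and runs the MCP handshake
const { tools } = await client.listTools(); // cached with team namespacing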

Process Management

Lifecycle Operations:
Configuration → Spawn → Monitor → Health Check → Restart/Terminate
      │           │        │          │              │
      │           │        │          │              │
   Backend     Child     Metrics   Failure      Cleanup
   Command    Process   Collection Detection   Resources
Health Monitoring:
  • Process Health: CPU, memory, responsiveness
  • MCP Protocol: Tool availability, response times
  • Automatic Recovery: Restart failed processes
  • Resource Limits: Enforce team quotas
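A sketch of the supervise-and-restart policy (maximum 3 attempts, per the stdio section above); the names are illustrative:
import { spawn, type ChildProcess } from 'node:child_process';

const MAX_RESTARTS = 3;

function superviseMcpServer(command: string, args: string[], attempt = 0): ChildProcess {
  const child = spawn(command, args, { stdio: ['pipe', 'pipe', 'pipe'] });

  child.stderr?.on('data', (chunk: Buffer) => {
    // stderr output feeds the server-log capture described in Log Capture
  });

  child.on('exit', (code) => {
    if (code !== 0 && attempt < MAX_RESTARTS) {
      superviseMcpServer(command, args, attempt + 1); // automatic recovery
    } else if (code !== 0) {
      // out of attempts: emit a status event (transition to a failure state)
    }
  });

  return child;
}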

Technical Implementation Details

Current Implementation Specifications

  • Session ID Length: 32 bytes base64url encoded
  • Session Timeout: 30 minutes of inactivity
  • JSON-RPC Version: 2.0 strict compliance
  • HTTP Framework: Fastify with JSON Schema validation
  • Logging: Pino structured logging with operation tracking
  • Error Handling: Complete HTTP status code mapping

Planned Resource Jailing Specifications

  • CPU Limit: 0.1 cores per MCP server process
  • Memory Limit: 100MB RAM per MCP server process
  • Process Timeout: 5-minute idle timeout for automatic cleanup
  • Isolation Method: Linux namespaces + cgroups v2

Technology Stack

  • HTTP Framework: Fastify with @fastify/http-proxy (planned)
  • Process Communication: stdio JSON-RPC for local MCP servers (planned)
  • Authentication: OAuth 2.1 Resource Server with token introspection (planned)
  • Logging: Pino structured logging
  • Build System: TypeScript + Webpack

Development Setup

Clone and Setup:
git clone https://github.com/deploystackio/deploystack.git
cd deploystack/services/satellite
npm install
cp .env.example .env
npm run dev
Test MCP Transport:
# Test MCP connection (the SDK expects a complete initialize request and an
# Accept header that allows both JSON and SSE responses)
curl -X POST "http://localhost:3001/mcp" \
  -H "Content-Type: application/json" \
  -H "Accept: application/json, text/event-stream" \
  -d '{"jsonrpc":"2.0","id":"1","method":"initialize","params":{"protocolVersion":"2025-03-26","capabilities":{},"clientInfo":{"name":"curl","version":"1.0"}}}'

# Test SSE streaming (pass the Mcp-Session-Id header returned by initialize)
curl -N -H "Accept: text/event-stream" \
  -H "Mcp-Session-Id: <session-id-from-initialize>" \
  "http://localhost:3001/mcp"
For testing the hierarchical router (tool discovery and execution), see Hierarchical Router Implementation.
MCP Client Configuration:
{
  "mcpServers": {
    "deploystack-satellite": {
      "command": "npx",
      "args": ["@modelcontextprotocol/server-fetch"],
      "env": {
        "MCP_SERVER_URL": "http://localhost:3001/mcp"
      }
    }
  }
}

Implementation Status

The satellite service has completed MCP Transport Implementation and Backend Integration. The current implementation provides:
MCP Transport Layer:
  • Complete MCP Transport Layer: SSE, SSE Messaging, Streamable HTTP
  • Session Management: Cryptographically secure with automatic cleanup
  • JSON-RPC 2.0 Compliance: Full protocol support with error handling
Backend Integration:
  • Command Polling Service: Priority-based adaptive polling with automatic mode transitions (see Communication Patterns above)
  • Dynamic Configuration Management: Replaces hardcoded MCP server configurations
  • Command Processing: HTTP MCP server management (spawn/kill/restart/health_check)
  • Heartbeat Service: Process status reporting and system metrics
  • Configuration Sync: Real-time MCP server configuration updates
  • Event System: Real-time event emission with automatic batching (13 event types including tool metadata)
Foundation Infrastructure:
  • HTTP Server: Fastify with Swagger documentation
  • Logging System: Pino with structured logging
  • Build Pipeline: TypeScript compilation and bundling
  • Development Workflow: Hot reload and code quality tools
  • Background Jobs System: Cron-like job management for recurring tasks
For details on the background jobs system, see Background Jobs System.