The DeployStack backend implements satellite management APIs that handle registration, command orchestration, and configuration distribution. The system supports both global satellites (serving all teams) and team satellites (serving specific teams) through a polling-based communication architecture.

Implementation Status

Current Status: Fully implemented and operational. The satellite communication system includes:
  • Satellite Registration: Working registration endpoint with API key generation
  • Command Orchestration: Complete command polling and result reporting endpoints
  • Configuration Management: Team-aware MCP server configuration distribution
  • Status Monitoring: Heartbeat collection with automatic satellite activation
  • Authentication: Argon2-based API key validation middleware

MCP Server Distribution Architecture

Global Satellite Model: The currently implemented approach, in which global satellites serve all teams with process isolation.
Team-Aware Configuration Distribution:
  • Global satellites receive ALL team MCP server installations
  • Each team installation becomes a separate process with unique identifier
  • Process ID format: {server_slug}-{team_slug}-{installation_id}
  • Team-specific configurations (args, environment, headers) merged per installation
Configuration Merging Process:
  1. Template-level configuration (from MCP server definition)
  2. Team-level configuration (from team installation)
  3. User-level configuration (from user preferences)
  4. Final merged configuration sent to satellite
Multi-Transport Support:
  • stdio transport: Command and arguments for subprocess execution
  • http transport: URL and headers for HTTP proxy
  • sse transport: URL and headers for Server-Sent Events
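As an illustration, the three transports can be modeled as a discriminated union. This is a sketch; the field names are assumptions, not the backend's actual type definitions:
// Illustrative shapes for the three transports; field names are assumptions.
type StdioTransport = {
  transport: 'stdio';
  command: string;                  // executable to spawn
  args: string[];                   // subprocess arguments
  env: Record<string, string>;      // merged environment variables
};
type HttpTransport = {
  transport: 'http';
  url: string;                      // upstream MCP server URL
  headers: Record<string, string>;  // headers forwarded by the HTTP proxy
};
type SseTransport = {
  transport: 'sse';
  url: string;                      // Server-Sent Events endpoint
  headers: Record<string, string>;
};
type McpTransportConfig = StdioTransport | HttpTransport | SseTransport;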

Satellite Lifecycle Management

Registration Process:
  • Satellites register with backend and receive API keys
  • Initial status set to ‘inactive’ for security
  • API keys stored as Argon2 hashes in database
  • Satellite URL sent during registration (optional - auto-detected if not provided)
Activation Process:
  • Satellites send heartbeat after registration
  • Backend automatically sets status to ‘active’ on first heartbeat
  • Active satellites begin receiving actual commands
Command Processing:
  • Inactive satellites receive empty command arrays (no 403 errors)
  • Active satellites receive pending commands based on priority
  • Command results reported back to backend for status tracking

Architecture Pattern

Polling-Based Communication

Satellites use outbound-only HTTPS polling to communicate with the backend, making them compatible with restrictive corporate firewalls:
┌─────────────────┐    Outbound HTTPS     ┌─────────────────┐
│   Satellite     │ ───────────────────► │  DeployStack    │
│   (Edge)        │                      │  Backend        │
│                 │ ◄─────────────────── │  (Cloud)        │
└─────────────────┘    Command Response   └─────────────────┘

Communication Channels

The system uses three distinct communication patterns.
Command Polling (Backend → Satellite):
  • Backend creates commands, satellites poll and execute
  • Adaptive intervals: 2-60 seconds based on priority
  • Used for: MCP server configuration, process management
Heartbeat (Satellite → Backend, Periodic):
  • Satellites report status every 30 seconds
  • Contains: System metrics, process counts, resource usage, satellite URL (first heartbeat only)
  • Used for: Health monitoring, capacity planning, satellite URL updates
Events (Satellite → Backend, Immediate):
  • Satellites emit events when actions occur, batched every 3 seconds
  • Contains: Point-in-time occurrences with precise timestamps
  • Used for: Real-time UI updates, audit trails, user notifications
  • See Satellite Events for detailed implementation
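To make the outbound-only pattern concrete, here is a minimal polling loop sketch. The endpoint path, response shape, and executeCommand helper are illustrative assumptions:
// Minimal outbound polling loop (sketch); endpoint path and response shape are assumptions.
declare function executeCommand(command: unknown): Promise<void>; // hypothetical executor

async function pollLoop(baseUrl: string, satelliteId: string, apiKey: string): Promise<never> {
  let intervalMs = 30_000; // normal mode
  while (true) {
    const res = await fetch(`${baseUrl}/satellites/${satelliteId}/commands`, {
      headers: { Authorization: `Bearer ${apiKey}` },
    });
    const { commands, nextPollMs } = await res.json();
    for (const command of commands ?? []) {
      await executeCommand(command);
    }
    // The backend can hint a faster interval when urgent commands are queued.
    intervalMs = nextPollMs ?? intervalMs;
    await new Promise((resolve) => setTimeout(resolve, intervalMs));
  }
}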

Dual Deployment Models

Global Satellites: Cloud-hosted by DeployStack team
  • Serve all teams with resource isolation
  • Managed through global satellite management endpoints
Team Satellites: Customer-deployed within corporate networks
  • Serve specific teams exclusively
  • Managed through team-scoped satellite management endpoints

Satellite Pairing Process

Security Architecture

The satellite pairing process implements a secure two-step JWT-based authentication system that prevents unauthorized satellite connections. For complete implementation details, see API Security - Registration Token Authentication.
Step 1: Token Generation
  • Administrators generate temporary registration tokens through admin APIs
  • Scope-specific tokens (global vs team) with cryptographic signatures
  • Token management endpoints for generation, listing, and revocation
Step 2: Satellite Registration
  • Satellites authenticate using Authorization: Bearer deploystack_satellite_* headers
  • Backend validates JWT tokens with single-use consumption
  • Permanent API keys issued after successful token validation
  • Token consumed to prevent replay attacks
Note: All new satellite registrations require valid registration tokens. The open registration system has been secured.
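For illustration, the registration exchange might look like the sketch below. Only the Authorization header format is documented above; the endpoint path and response fields are assumptions:
// Sketch of satellite registration; endpoint path and response fields are assumptions.
async function registerSatellite(baseUrl: string, registrationToken: string) {
  const res = await fetch(`${baseUrl}/satellites/register`, { // hypothetical path
    method: 'POST',
    headers: {
      Authorization: `Bearer ${registrationToken}`, // deploystack_satellite_* token
      'Content-Type': 'application/json',
    },
    body: JSON.stringify({ name: 'edge-satellite-01' }),
  });
  if (!res.ok) throw new Error(`Registration failed: ${res.status}`);
  // On success the backend issues a permanent API key and consumes the token.
  return res.json() as Promise<{ satelliteId: string; apiKey: string }>;
}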

Registration Middleware

The secure registration process is implemented through specialized middleware. For technical implementation details, see services/backend/src/middleware/registrationTokenMiddleware.ts.
Key Security Features:
  • JWT signature verification with HMAC-SHA256
  • Scope validation (global vs team tokens)
  • Security event logging for failed attempts
  • Structured error responses with actionable instructions

Command Orchestration

Command Queue Architecture

The backend maintains a priority-based command queue system.
Command Types:
  • spawn: Start new MCP server process
  • kill: Terminate MCP server process
  • restart: Restart existing MCP server
  • configure: Update MCP server configuration
  • health_check: Request process health status
Priority Levels:
  • immediate: High-priority commands requiring instant execution
  • high: Important commands processed within minutes
  • normal: Standard commands processed during regular polling
  • low: Background maintenance commands
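The command and priority vocabularies above map directly onto TypeScript unions. The unions mirror the documented values; the queue-entry fields beyond those are assumptions:
type CommandType = 'spawn' | 'kill' | 'restart' | 'configure' | 'health_check';
type CommandPriority = 'immediate' | 'high' | 'normal' | 'low';

// Illustrative queue entry; fields beyond commandType/priority are assumptions.
interface SatelliteCommand {
  id: string;
  commandType: CommandType;
  priority: CommandPriority;
  teamId: string;                   // team context for isolation
  payload: Record<string, unknown>; // command-specific payload
}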

Command Event Types

Configure commands include an event field in the payload for tracking purposes:
  • mcp_installation_created - New installation
  • mcp_installation_updated - Configuration change
  • mcp_installation_deleted - Installation removed
  • mcp_recovery - Server recovery
See Satellite Commands for complete event type documentation and usage patterns.

Adaptive Polling Strategy

Satellites adjust polling behavior based on backend signals.
Polling Modes:
  • Immediate Mode: 2-second intervals for urgent commands
  • Normal Mode: 30-second intervals for standard operations
  • Backoff Mode: Exponential backoff during errors or low activity
Optimization Features:
  • Conditional polling based on last poll timestamp
  • Command batching to reduce API calls
  • Cache headers for efficient bandwidth usage
  • Circuit breaker patterns for error recovery
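A minimal interval calculation consistent with the modes above might look like this; the backoff base and multiplier are assumptions, and only the 2-second, 30-second, and 60-second bounds come from the documentation:
// Sketch of adaptive interval selection; backoff base/multiplier are assumptions.
type PollingMode = 'immediate' | 'normal' | 'backoff';

function nextIntervalMs(mode: PollingMode, consecutiveErrors: number): number {
  switch (mode) {
    case 'immediate':
      return 2_000;  // urgent commands
    case 'normal':
      return 30_000; // standard operations
    case 'backoff':
      // Exponential backoff within the documented 2-60 second band.
      return Math.min(2_000 * 2 ** consecutiveErrors, 60_000);
  }
}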

Command Lifecycle

Command Flow:
  1. User action triggers command creation in backend
  2. Command added to priority queue with team context
  3. Satellite polls and retrieves pending commands
  4. Satellite executes command with team isolation
  5. Satellite reports execution results back to backend
  6. Backend updates command status and notifies user interface
Team Context Integration:
  • All commands include team scope information
  • Team satellites only receive commands for their team
  • Global satellites process commands with team isolation
  • Audit trail with team attribution
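Step 5 of the flow, result reporting, could be sketched as follows; the endpoint path and body shape are assumptions:
// Sketch of result reporting (step 5); path and body fields are assumptions.
async function reportResult(
  baseUrl: string,
  satelliteId: string,
  apiKey: string,
  commandId: string,
  success: boolean,
  detail?: string,
): Promise<void> {
  await fetch(`${baseUrl}/satellites/${satelliteId}/commands/${commandId}/result`, {
    method: 'POST',
    headers: { Authorization: `Bearer ${apiKey}`, 'Content-Type': 'application/json' },
    body: JSON.stringify({ success, detail }),
  });
}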

Status Monitoring

Heartbeat System

Satellites report health and performance metrics.
System Metrics:
  • CPU usage percentage and memory consumption
  • Disk usage and network connectivity status
  • Process count and resource utilization
  • Uptime and stability indicators
  • Satellite URL (first heartbeat after startup only) - Updates public URL for satellite discovery
Process Metrics:
  • Individual MCP server process status
  • Health indicators (healthy/unhealthy/unknown)
  • Performance metrics (request count, response times)
  • Resource consumption per process
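Taken together, these metrics suggest a heartbeat payload along the following lines; the field names are assumptions derived from the lists above, not the actual wire format:
// Illustrative heartbeat payload; field names are assumptions.
interface HeartbeatPayload {
  cpuPercent: number;
  memoryBytes: number;
  diskPercent: number;
  processCount: number;
  uptimeSeconds: number;
  satelliteUrl?: string; // only sent on the first heartbeat after startup
  processes: Array<{
    processId: string; // {server_slug}-{team_slug}-{installation_id}
    health: 'healthy' | 'unhealthy' | 'unknown';
    requestCount: number;
    avgResponseMs: number;
  }>;
}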

Real-Time Status Tracking

The backend provides real-time satellite status information.
Satellite Health Monitoring:
  • Connection status and last heartbeat timestamps
  • System resource usage trends
  • Process health aggregation
  • Alert generation for issues
Performance Analytics:
  • Historical performance data collection
  • Usage pattern analysis for capacity planning
  • Team-specific metrics and reporting
  • Audit trail generation

Configuration Management

Dynamic Configuration Updates

Satellites retrieve configuration updates without requiring restarts.
Configuration Categories:
  • Polling Settings: Interval configuration and optimization parameters
  • Resource Limits: CPU, memory, and process count restrictions
  • Team Settings: Team-specific policies and allowed MCP servers
  • Security Policies: Access control and compliance requirements
Configuration Distribution:
  • Push-based updates through command queue
  • Pull-based configuration refresh during polling
  • Version-controlled configuration management
  • Rollback capabilities for configuration errors

Team-Aware Configuration

Configuration respects team boundaries and isolation.
Global Satellite Configuration:
  • Platform-wide settings and resource allocation
  • Multi-tenant isolation policies
  • Global resource limits and quotas
  • Cross-team security boundaries
Team Satellite Configuration:
  • Team-specific MCP server configurations
  • Custom resource limits per team
  • Team-defined security policies
  • Internal resource access settings

Frontend API Endpoints

The backend provides REST and SSE endpoints for frontend access to installation status, logs, and requests.

Status & Monitoring Endpoints

GET /api/teams/{teamId}/mcp/installations/{installationId}/status
  • Returns current installation status, status message, and last update timestamp
  • Used by frontend for real-time status badges and progress indicators
GET /api/teams/{teamId}/mcp/installations/{installationId}/logs
  • Returns paginated server logs (stderr output, connection errors)
  • Query params: limit, offset for pagination
  • Limited to 100 lines per installation (enforced by cleanup cron job)
GET /api/teams/{teamId}/mcp/installations/{installationId}/requests
  • Returns paginated request logs (tool execution history)
  • Includes request params, duration, success status
  • Response data included if request_logging_enabled=true
GET /api/teams/{teamId}/mcp/installations/{installationId}/requests/{requestId}
  • Returns detailed request log for specific execution
  • Includes full request/response payloads when available
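For example, a frontend client could page through logs as sketched below. The path and limit/offset parameters are documented above; the response envelope and cookie-based auth option are assumptions:
// Paging through installation logs; response envelope is an assumption.
async function fetchLogs(teamId: string, installationId: string, offset = 0) {
  const res = await fetch(
    `/api/teams/${teamId}/mcp/installations/${installationId}/logs?limit=50&offset=${offset}`,
    { credentials: 'include' }, // session cookie auth (assumption)
  );
  return res.json() as Promise<{ logs: Array<{ message: string; timestamp: string }> }>;
}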

Settings Management

PATCH /api/teams/{teamId}/mcp/installations/{installationId}/settings
  • Updates installation settings (stored in mcpServerInstallations.settings jsonb column)
  • Settings distributed to satellites via config endpoint
  • Current settings:
    • request_logging_enabled (boolean) - Controls capture of tool responses
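Toggling the documented setting could look like this sketch, assuming teamId and installationId are in scope:
// Sketch: enabling tool-response capture via the settings endpoint.
await fetch(`/api/teams/${teamId}/mcp/installations/${installationId}/settings`, {
  method: 'PATCH',
  headers: { 'Content-Type': 'application/json' },
  body: JSON.stringify({ request_logging_enabled: true }),
});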

Real-Time Streaming (SSE)

GET /api/teams/{teamId}/mcp/installations/{installationId}/logs/stream
  • Server-Sent Events endpoint for real-time log streaming
  • Frontend subscribes for live stderr output
  • Auto-reconnects on connection loss
GET /api/teams/{teamId}/mcp/installations/{installationId}/requests/stream
  • Server-Sent Events endpoint for real-time request log streaming
  • Frontend subscribes for live tool execution updates
  • Includes duration, status, and optionally response data
SSE vs REST Comparison:
| Feature | REST Endpoints | SSE Endpoints |
|---|---|---|
| Use Case | Historical data, pagination | Real-time updates |
| Connection | Request/response | Persistent connection |
| Data Flow | Pull (client requests) | Push (server sends) |
| Frontend Usage | Initial load, manual refresh | Live monitoring |
SSE Controller Implementation: services/backend/src/controllers/mcp/sse.controller.ts
Routes Implementation: services/backend/src/routes/api/teams/mcp/installations.routes.ts
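A browser client can subscribe to the log stream with the standard EventSource API, which reconnects automatically, matching the behavior described above. The message payload shape is an assumption, and teamId/installationId are assumed to be in scope:
// Subscribing to live log streaming via the documented SSE endpoint.
const source = new EventSource(
  `/api/teams/${teamId}/mcp/installations/${installationId}/logs/stream`,
);
source.onmessage = (event) => {
  const log = JSON.parse(event.data); // payload shape is an assumption
  console.log(`[${log.log_level}] ${log.message}`);
};
source.onerror = () => console.warn('SSE connection interrupted, browser will retry...');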

Health Check & Recovery Systems

Cumulative Health Check System

Purpose: Template-level health aggregation across all installations of an MCP server.
McpHealthCheckService (services/backend/src/services/mcp-health-check.service.ts):
  • Aggregates health status from all installations of each MCP server template
  • Updates mcpServers.health_status based on installation health
  • Provides template-level health visibility in admin dashboard
Cron Job: mcp-health-check runs every 3 minutes
  • Implementation: services/backend/src/jobs/mcp-health-check.job.ts
  • Checks all MCP server templates
  • Updates template health status for admin visibility

Credential Validation System

Purpose: Per-installation OAuth token validation to detect expired/revoked credentials.
McpCredentialValidationWorker (services/backend/src/workers/mcp-credential-validation.worker.ts):
  • Validates OAuth tokens for each installation
  • Sends health_check command to satellite with check_type: 'credential_validation'
  • Satellite performs OAuth validation and reports status
Cron Job: mcp-credential-validation runs every 1 minute
  • Implementation: services/backend/src/jobs/mcp-credential-validation.job.ts
  • Validates installations on 15-minute rotation
  • Triggers requires_reauth status on validation failure
Health Check Command Payload:
{
  "commandType": "health_check",
  "priority": "immediate",
  "payload": {
    "check_type": "credential_validation",
    "installation_id": "inst_123",
    "team_id": "team_xyz"
  }
}
Satellite validates credentials and emits mcp.server.status_changed with status:
  • online - Credentials valid
  • requires_reauth - OAuth token expired/revoked
  • error - Validation failed with error

Auto-Recovery System

Recovery Trigger:
  • Health check system detects offline installations
  • Backend calls notifyMcpRecovery(installation_id, team_id)
  • Sends command to satellite: Set status=connecting, rediscover tools
  • Status progression: offline → connecting → discovering_tools → online
Tool Execution Recovery:
  • Satellite detects recovery during tool execution (offline server responds)
  • Emits immediate status change event (doesn’t wait for health check)
  • Triggers asynchronous re-discovery
For satellite-side recovery implementation, see Satellite Recovery System.

Background Cron Jobs

The backend runs three MCP-related cron jobs for maintenance and monitoring.
cleanup-mcp-server-logs:
  • Schedule: Every 10 minutes
  • Purpose: Enforce 100-line limit per installation in mcpServerLogs table
  • Action: Deletes oldest logs beyond 100-line limit
  • Implementation: services/backend/src/jobs/cleanup-mcp-server-logs.job.ts
mcp-health-check:
  • Schedule: Every 3 minutes
  • Purpose: Template-level health aggregation
  • Action: Updates mcpServers.health_status column
  • Implementation: services/backend/src/jobs/mcp-health-check.job.ts
mcp-credential-validation:
  • Schedule: Every 1 minute
  • Purpose: Detect expired/revoked OAuth tokens
  • Action: Sends health_check commands to satellites
  • Implementation: services/backend/src/jobs/mcp-credential-validation.job.ts

Database Schema Integration

Core Table Structure

The satellite system integrates with the existing DeployStack schema through 5 specialized tables. For detailed schema definitions, see services/backend/src/db/schema.ts.
Satellite Registry (satellites):
  • Central registration of all satellites
  • Type classification (global/team) and ownership
  • Capability tracking and status monitoring
  • API key management and authentication
  • Satellite URL tracking - Publicly accessible URL updated during registration and first heartbeat
Command Queue (satelliteCommands):
  • Priority-based command orchestration
  • Team context and correlation tracking
  • Expiration and retry management
  • Command lifecycle tracking
Process Tracking (satelliteProcesses):
  • Real-time MCP server process monitoring
  • Health status and performance metrics
  • Team isolation and resource usage
  • Integration with existing MCP configuration system
Usage Analytics (satelliteUsageLogs):
  • Audit trail for compliance
  • User attribution and team tracking
  • Performance analytics and billing data
  • Device tracking for enterprise security
Health Monitoring (satelliteHeartbeats):
  • System metrics and resource monitoring
  • Process health aggregation
  • Alert generation and notification triggers
  • Historical health trend analysis

New Columns Added (Status & Health Tracking System)

mcpServerInstallations table:
  • status (text) - Current installation status (11 possible values)
  • status_message (text, nullable) - Human-readable status context or error details
  • status_updated_at (timestamp) - Last status change timestamp
  • last_health_check_at (timestamp, nullable) - Last health check execution time
  • last_credential_check_at (timestamp, nullable) - Last credential validation time
  • settings (jsonb, nullable) - Generic settings object (e.g., request_logging_enabled)
mcpServers table:
  • health_status (text, nullable) - Template-level aggregated health status
  • last_health_check_at (timestamp, nullable) - Last template health check time
  • health_check_error (text, nullable) - Last health check error message
mcpServerLogs table:
  • Stores batched stderr logs from satellites
  • 100-line limit per installation (enforced by cleanup cron job)
  • Fields: installation_id, team_id, log_level, message, timestamp
mcpRequestLogs table:
  • Stores batched tool execution logs
  • tool_response (jsonb, nullable) - MCP server response data
  • Privacy control: Only captured when request_logging_enabled=true
  • Fields: installation_id, team_id, tool_name, request_params, tool_response, duration_ms, success, error_message, timestamp
mcpToolMetadata table:
  • Stores discovered tools with token counts
  • Used for hierarchical router token savings calculations
  • Fields: installation_id, server_slug, tool_name, description, input_schema, token_count, discovered_at

Team Isolation in Data Model

All satellite data respects team boundaries.
Team-Scoped Data:
  • Team satellites linked to specific teams
  • Process isolation per team context
  • Usage logs with team attribution
  • Configuration scoped to team access
Global Data with Team Context:
  • Global satellites serve all teams with isolation
  • Cross-team usage tracking and analytics
  • Team-aware resource allocation
  • Compliance reporting per team

Authentication & Security

Multi-Layer Security Model

Registration Security:
  • Temporary JWT tokens for initial pairing
  • Scope validation preventing privilege escalation
  • Single-use tokens with automatic expiration
  • Audit trail for security compliance
Operational Security:
  • Permanent API keys for ongoing communication
  • Request authentication and authorization
  • Rate limiting and abuse prevention
  • IP whitelisting support for team satellites
Team Isolation Security:
  • Team boundary enforcement
  • Resource isolation and access control
  • Cross-team data leakage prevention
  • Compliance with enterprise security policies

Role-Based Access Control Integration

The satellite system integrates with DeployStack’s existing role framework.
global_admin:
  • Satellite system oversight
  • Global satellite registration and management
  • Cross-team analytics and monitoring
  • System-wide configuration control
team_admin:
  • Team satellite registration and management
  • Team-scoped MCP server installation
  • Team resource monitoring and configuration
  • Team member access control
team_user:
  • Satellite-hosted MCP server usage
  • Team satellite status visibility
  • Personal usage analytics access
global_user:
  • Team satellite registration within memberships
  • Cross-team satellite usage through teams
  • Limited administrative capabilities

Integration Points

Existing DeployStack Systems

User Management Integration:
  • Leverages existing authentication and session management
  • Integrates with current permission and role systems
  • Uses established user and team membership APIs
  • Maintains consistency with platform security model
MCP Configuration Integration:
  • Builds on existing MCP server installation system
  • Extends current team-based configuration management
  • Integrates with established credential management
  • Maintains compatibility with existing MCP workflows
Monitoring Integration:
  • Uses existing structured logging infrastructure
  • Integrates with current metrics collection system
  • Leverages established alerting and notification systems
  • Maintains consistency with platform observability

Development Implementation

Route Structure

Satellite communication endpoints are organized in services/backend/src/routes/satellites/:
satellites/
├── index.ts              # Route registration
├── register.ts           # Satellite registration endpoint
├── commands.ts           # Command polling and result reporting
├── config.ts             # Configuration distribution
├── heartbeat.ts          # Health monitoring and status updates
└── manage/               # Management endpoints for frontend
    ├── list.ts          # Satellite listing
    └── status.ts        # Satellite status queries

Authentication Middleware

Satellite authentication uses dedicated middleware in services/backend/src/middleware/satelliteAuthMiddleware.ts.
Key Features:
  • Argon2 hash verification for API key validation
  • Satellite context injection for route handlers
  • Dual authentication support (user cookies + satellite API keys)
  • Comprehensive error handling and logging
Usage Pattern:
import { requireSatelliteAuth } from '../../middleware/satelliteAuthMiddleware';

server.get('/satellites/:satelliteId/commands', {
  preValidation: [requireSatelliteAuth()],
}, async (request, reply) => {
  // Route implementation
});

Database Integration

The satellite system extends the existing database schema with 5 specialized tables.
Schema Location: services/backend/src/db/schema.ts
Table Relationships:
  • satellites table links to existing teams and authUser tables
  • satelliteProcesses table references mcpServerInstallations for team context
  • satelliteCommands table includes team context for command execution
  • All tables use existing foreign key relationships for data integrity

Configuration Query Implementation

The configuration endpoint implements complex queries to merge team-specific MCP server configurations.
Query Strategy:
  • Join mcpServerInstallations, mcpServers, and teams tables
  • Global satellites: Query ALL team installations
  • Team satellites: Query only specific team installations
  • JSON field parsing with comprehensive error handling
Configuration Merging Logic:
// Parse template and team configurations
const templateArgs = JSON.parse(installation.template_args || '[]');
const teamArgs = JSON.parse(installation.team_args || '[]');
const templateEnv = JSON.parse(installation.template_env || '{}');
const teamEnv = JSON.parse(installation.team_env || '{}');

// Merge configurations with team overrides
const finalArgs = [...templateArgs, ...teamArgs];
const finalEnv = { ...templateEnv, ...teamEnv };
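Given that the query strategy above calls for comprehensive error handling around JSON parsing, a defensive variant might wrap each parse as sketched below. This helper is an illustrative assumption, not the actual implementation; invalid JSON falls back to a default and is logged, matching the skip-with-warning behavior described under Error Handling Patterns:
// Illustrative defensive parse; not the actual implementation.
function safeJsonParse<T>(raw: string | null | undefined, fallback: T): T {
  if (!raw) return fallback;
  try {
    return JSON.parse(raw) as T;
  } catch (err) {
    console.warn('Skipping invalid JSON configuration', err);
    return fallback;
  }
}

const templateArgsSafe = safeJsonParse<string[]>(installation.template_args, []);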

Error Handling Patterns

Graceful Degradation:
  • Inactive satellites receive empty command arrays instead of 403 errors
  • Invalid JSON configurations are skipped with warning logs
  • Failed satellite authentication returns 401 with structured error messages
Comprehensive Logging:
  • Structured logging with operation identifiers
  • Error context preservation for debugging
  • Performance metrics collection (response times, success rates)

Development Workflow

Local Development Setup:
# Backend setup
cd services/backend
npm install
npm run dev  # Starts on http://localhost:3000

# Satellite setup (separate terminal)
cd services/satellite  
npm install
npm run dev  # Starts on http://localhost:3001
Testing Satellite Communication:
  1. Start backend server
  2. Start satellite (automatically registers)
  3. Monitor logs for successful polling and configuration retrieval
  4. Use database tools to inspect satellite tables and command queue
Database Inspection:
# View registered satellites
psql deploystack
> SELECT id, name, satellite_type, status FROM satellites;

# View MCP server installations
> SELECT installation_name, team_id FROM "mcpServerInstallations";

API Documentation

For detailed API endpoints, request/response formats, and authentication patterns, see the API Specification generated from the backend OpenAPI schema.