Implementation Status
Current Status: Fully implemented and operational. The satellite communication system includes:
- Satellite Registration: Working registration endpoint with API key generation
- Command Orchestration: Complete command polling and result reporting endpoints
- Configuration Management: Team-aware MCP server configuration distribution
- Status Monitoring: Heartbeat collection with automatic satellite activation
- Authentication: Argon2-based API key validation middleware
MCP Server Distribution Architecture
Global Satellite Model: Currently implemented approach where global satellites serve all teams with process isolation.
Team-Aware Configuration Distribution:
- Global satellites receive ALL team MCP server installations
- Each team installation becomes a separate process with unique identifier
- Process ID format: `{server_slug}-{team_slug}-{installation_id}`
- Team-specific configurations (args, environment, headers) merged per installation
- Template-level configuration (from MCP server definition)
- Team-level configuration (from team installation)
- User-level configuration (from user preferences)
- Final merged configuration sent to satellite
Transport-Specific Configuration:
- `stdio` transport: Command and arguments for subprocess execution
- `http` transport: URL and headers for HTTP proxy
- `sse` transport: URL and headers for Server-Sent Events
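The layered merge described above (template, then team, then user configuration) can be sketched in TypeScript. This is an illustrative model, not the actual DeployStack schema; field names like `args`, `env`, and `headers` are assumptions based on the list above:

```typescript
// Illustrative sketch of the three-layer configuration merge.
// Field and type names are assumptions, not the actual schema.
type McpConfig = {
  args?: string[];
  env?: Record<string, string>;
  headers?: Record<string, string>;
};

// Later layers win: template < team < user.
function mergeConfigs(template: McpConfig, team: McpConfig, user: McpConfig): McpConfig {
  const layers = [template, team, user];
  return layers.reduce(
    (acc, layer) => ({
      args: layer.args ?? acc.args,
      env: { ...acc.env, ...layer.env },
      headers: { ...acc.headers, ...layer.headers },
    }),
    {} as McpConfig,
  );
}

const merged = mergeConfigs(
  { args: ["--port", "3000"], env: { LOG_LEVEL: "info" } },
  { env: { LOG_LEVEL: "debug", TEAM: "acme" } },
  { headers: { Authorization: "Bearer user-token" } },
);
// merged.env.LOG_LEVEL === "debug" (team layer overrides template)
```

Scalar fields are replaced wholesale while map-like fields (`env`, `headers`) are merged key-by-key, which matches the "merged per installation" wording above.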
Satellite Lifecycle Management
Registration Process:
- Satellites register with backend and receive API keys
- Initial status set to `inactive` for security
- API keys stored as Argon2 hashes in database
- Satellite URL sent during registration (optional - auto-detected if not provided)
Activation Process:
- Satellites send heartbeat after registration
- Backend automatically sets status to `active` on first heartbeat
- Active satellites begin receiving actual commands
Command Distribution:
- Inactive satellites receive empty command arrays (no 403 errors)
- Active satellites receive pending commands based on priority
- Command results reported back to backend for status tracking
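The activation gating above (inactive satellites poll successfully but receive no work) can be sketched as follows; the type and function names are hypothetical:

```typescript
// Hypothetical sketch: inactive satellites poll successfully but get no work.
type SatelliteStatus = "inactive" | "active";
type Command = { id: string; type: string; priority: string };

function commandsForPoll(status: SatelliteStatus, pending: Command[]): Command[] {
  // Returning 200 with an empty array (instead of 403) lets freshly
  // registered satellites keep polling until their first heartbeat
  // flips them to "active".
  return status === "active" ? pending : [];
}
```

The design choice matters for resilience: a 403 would force satellites to treat pre-activation polling as an error, while an empty array keeps the poll loop uniform.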
Architecture Pattern
Polling-Based Communication
Satellites use outbound-only HTTPS polling to communicate with the backend, making them compatible with restrictive corporate firewalls.
Communication Channels
The system uses three distinct communication patterns:
Command Polling (Backend → Satellite):
- Backend creates commands, satellites poll and execute
- Adaptive intervals: 2-60 seconds based on priority
- Used for: MCP server configuration, process management
Heartbeat Reporting (Satellite → Backend):
- Satellites report status every 30 seconds
- Contains: System metrics, process counts, resource usage, satellite URL (first heartbeat only)
- Used for: Health monitoring, capacity planning, satellite URL updates
Event Streaming (Satellite → Backend):
- Satellites emit events when actions occur, batched every 3 seconds
- Contains: Point-in-time occurrences with precise timestamps
- Used for: Real-time UI updates, audit trails, user notifications
- See Satellite Events for detailed implementation
Dual Deployment Models
Global Satellites: Cloud-hosted by DeployStack team
- Serve all teams with resource isolation
- Managed through global satellite management endpoints
Team Satellites: Dedicated satellites for individual teams
- Serve specific teams exclusively
- Managed through team-scoped satellite management endpoints
Satellite Pairing Process
Security Architecture
The satellite pairing process implements a secure two-step JWT-based authentication system that prevents unauthorized satellite connections. For complete implementation details, see API Security - Registration Token Authentication.
Step 1: Token Generation
- Administrators generate temporary registration tokens through admin APIs
- Scope-specific tokens (global vs team) with cryptographic signatures
- Token management endpoints for generation, listing, and revocation
Step 2: Token Validation
- Satellites authenticate using `Authorization: Bearer deploystack_satellite_*` headers
- Backend validates JWT tokens with single-use consumption
- Permanent API keys issued after successful token validation
- Token consumed to prevent replay attacks
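Since the document states the tokens are JWTs signed with HMAC-SHA256, the backend's signature check reduces to recomputing the HMAC over the token's first two segments. A minimal sketch using Node's built-in crypto (a real implementation should use a vetted JWT library, and the claim names shown are assumptions):

```typescript
import { createHmac, timingSafeEqual } from "node:crypto";

// Minimal HS256 JWT verification sketch for registration tokens.
// Claim names (scope, jti) are assumptions, not the actual payload.
function b64url(buf: Buffer): string {
  return buf.toString("base64url");
}

function verifyRegistrationToken(token: string, secret: string): Record<string, unknown> | null {
  const [header, payload, sig] = token.split(".");
  if (!header || !payload || !sig) return null;
  // Recompute the HMAC-SHA256 over "header.payload".
  const expected = b64url(createHmac("sha256", secret).update(`${header}.${payload}`).digest());
  const a = Buffer.from(sig);
  const b = Buffer.from(expected);
  // Constant-time comparison; timingSafeEqual requires equal lengths.
  if (a.length !== b.length || !timingSafeEqual(a, b)) return null;
  return JSON.parse(Buffer.from(payload, "base64url").toString("utf8"));
}
```

Single-use consumption and scope validation would happen after this signature check, against the token store.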
Registration Middleware
The secure registration process is implemented through specialized middleware. For technical implementation details, see `services/backend/src/middleware/registrationTokenMiddleware.ts`.
Key Security Features:
- JWT signature verification with HMAC-SHA256
- Scope validation (global vs team tokens)
- Security event logging for failed attempts
- Structured error responses with actionable instructions
Command Orchestration
Command Queue Architecture
The backend maintains a priority-based command queue system.
Command Types:
- `spawn`: Start new MCP server process
- `kill`: Terminate MCP server process
- `restart`: Restart existing MCP server
- `configure`: Update MCP server configuration
- `health_check`: Request process health status
Priority Levels:
- `immediate`: High-priority commands requiring instant execution
- `high`: Important commands processed within minutes
- `normal`: Standard commands processed during regular polling
- `low`: Background maintenance commands
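A sketch of how a priority-based queue might order these commands; the numeric ranks and the FIFO tie-break are illustrative, not the actual backend implementation:

```typescript
// Illustrative priority ranking for command queue ordering.
type Priority = "immediate" | "high" | "normal" | "low";
type QueuedCommand = { id: string; priority: Priority; createdAt: number };

const RANK: Record<Priority, number> = { immediate: 0, high: 1, normal: 2, low: 3 };

// Higher priority first; FIFO (oldest first) within the same priority.
function sortQueue(queue: QueuedCommand[]): QueuedCommand[] {
  return [...queue].sort(
    (a, b) => RANK[a.priority] - RANK[b.priority] || a.createdAt - b.createdAt,
  );
}
```

Satellites would then drain the sorted queue front-to-back on each poll.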
Command Event Types
Configure commands include an `event` field in the payload for tracking purposes:
- `mcp_installation_created` - New installation
- `mcp_installation_updated` - Configuration change
- `mcp_installation_deleted` - Installation removed
- `mcp_recovery` - Server recovery
Adaptive Polling Strategy
Satellites adjust polling behavior based on backend signals.
Polling Modes:
- Immediate Mode: 2-second intervals for urgent commands
- Normal Mode: 30-second intervals for standard operations
- Backoff Mode: Exponential backoff during errors or low activity
Optimization Features:
- Conditional polling based on last poll timestamp
- Command batching to reduce API calls
- Cache headers for efficient bandwidth usage
- Circuit breaker patterns for error recovery
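The three polling modes above could be computed along these lines. The 2-second and 30-second figures come from this document; the exponential-backoff base and 60-second cap are assumptions (the earlier "2-60 seconds" range suggests an upper bound around 60s):

```typescript
// Sketch of adaptive polling interval selection with exponential backoff.
// Backoff base and 60s cap are assumptions, not confirmed values.
type PollMode = "immediate" | "normal" | "backoff";

function nextPollIntervalMs(mode: PollMode, consecutiveErrors: number): number {
  switch (mode) {
    case "immediate":
      return 2_000; // urgent commands pending
    case "normal":
      return 30_000; // standard operation
    case "backoff":
      // Exponential backoff during errors, capped at 60 seconds.
      return Math.min(30_000 * 2 ** consecutiveErrors, 60_000);
  }
}
```

A circuit breaker would sit on top of this: after enough consecutive errors it stops polling entirely for a cool-down period instead of merely backing off.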
Command Lifecycle
Command Flow:
- User action triggers command creation in backend
- Command added to priority queue with team context
- Satellite polls and retrieves pending commands
- Satellite executes command with team isolation
- Satellite reports execution results back to backend
- Backend updates command status and notifies user interface
Team Isolation:
- All commands include team scope information
- Team satellites only receive commands for their team
- Global satellites process commands with team isolation
- Audit trail with team attribution
Status Monitoring
Heartbeat System
Satellites report health and performance metrics.
System Metrics:
- CPU usage percentage and memory consumption
- Disk usage and network connectivity status
- Process count and resource utilization
- Uptime and stability indicators
- Satellite URL (first heartbeat after startup only) - Updates public URL for satellite discovery
Process Metrics:
- Individual MCP server process status
- Health indicators (healthy/unhealthy/unknown)
- Performance metrics (request count, response times)
- Resource consumption per process
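A hypothetical shape for the heartbeat payload, showing the first-heartbeat-only URL field described above (field names are assumptions):

```typescript
// Hypothetical heartbeat payload shape; field names are assumptions.
type Heartbeat = {
  cpuPercent: number;
  memoryMb: number;
  processCount: number;
  uptimeSeconds: number;
  satelliteUrl?: string; // only included on the first heartbeat after startup
};

function buildHeartbeat(
  metrics: Omit<Heartbeat, "satelliteUrl">,
  isFirstHeartbeat: boolean,
  publicUrl: string,
): Heartbeat {
  // The satellite URL is sent once so the backend can record the publicly
  // reachable address without repeating it every 30 seconds.
  return isFirstHeartbeat ? { ...metrics, satelliteUrl: publicUrl } : { ...metrics };
}
```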
Real-Time Status Tracking
The backend provides real-time satellite status information.
Satellite Health Monitoring:
- Connection status and last heartbeat timestamps
- System resource usage trends
- Process health aggregation
- Alert generation for issues
Performance Analytics:
- Historical performance data collection
- Usage pattern analysis for capacity planning
- Team-specific metrics and reporting
- Audit trail generation
Configuration Management
Dynamic Configuration Updates
Satellites retrieve configuration updates without requiring restarts.
Configuration Categories:
- Polling Settings: Interval configuration and optimization parameters
- Resource Limits: CPU, memory, and process count restrictions
- Team Settings: Team-specific policies and allowed MCP servers
- Security Policies: Access control and compliance requirements
Update Mechanisms:
- Push-based updates through command queue
- Pull-based configuration refresh during polling
- Version-controlled configuration management
- Rollback capabilities for configuration errors
Team-Aware Configuration
Configuration respects team boundaries and isolation.
Global Satellite Configuration:
- Platform-wide settings and resource allocation
- Multi-tenant isolation policies
- Global resource limits and quotas
- Cross-team security boundaries
Team Satellite Configuration:
- Team-specific MCP server configurations
- Custom resource limits per team
- Team-defined security policies
- Internal resource access settings
Frontend API Endpoints
The backend provides REST and SSE endpoints for frontend access to installation status, logs, and requests.
Status & Monitoring Endpoints
GET `/api/teams/{teamId}/mcp/installations/{installationId}/status`
- Returns current installation status, status message, and last update timestamp
- Used by frontend for real-time status badges and progress indicators
GET `/api/teams/{teamId}/mcp/installations/{installationId}/logs`
- Returns paginated server logs (stderr output, connection errors)
- Query params: `limit`, `offset` for pagination
- Limited to 100 lines per installation (enforced by cleanup cron job)
GET `/api/teams/{teamId}/mcp/installations/{installationId}/requests`
- Returns paginated request logs (tool execution history)
- Includes request params, duration, success status
- Response data included if `request_logging_enabled=true`
GET `/api/teams/{teamId}/mcp/installations/{installationId}/requests/{requestId}`
- Returns detailed request log for specific execution
- Includes full request/response payloads when available
Settings Management
PATCH `/api/teams/{teamId}/mcp/installations/{installationId}/settings`
- Updates installation settings (stored in `mcpServerInstallations.settings` jsonb column)
- Settings distributed to satellites via config endpoint
- Current settings: `request_logging_enabled` (boolean) - Controls capture of tool responses
Real-Time Streaming (SSE)
GET `/api/teams/{teamId}/mcp/installations/{installationId}/logs/stream`
- Server-Sent Events endpoint for real-time log streaming
- Frontend subscribes for live stderr output
- Auto-reconnects on connection loss
GET `/api/teams/{teamId}/mcp/installations/{installationId}/requests/stream`
- Server-Sent Events endpoint for real-time request log streaming
- Frontend subscribes for live tool execution updates
- Includes duration, status, and optionally response data
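On the wire, SSE events arrive as `data:`-prefixed lines separated by blank lines. Browsers would normally consume these streams through the `EventSource` API; the sketch below instead parses raw SSE frames so the logic is testable, and the JSON payload shape is an assumption:

```typescript
// Minimal SSE frame parser sketch. Real frontends would use the browser's
// EventSource API; the payload shape here is hypothetical.
type LogEvent = { level: string; message: string };

function parseSseChunk(chunk: string): LogEvent[] {
  return chunk
    .split("\n\n") // events are separated by a blank line
    .map(frame =>
      frame
        .split("\n")
        .filter(line => line.startsWith("data:"))
        .map(line => line.slice(5).trim())
        .join("\n"),
    )
    .filter(data => data.length > 0)
    .map(data => JSON.parse(data) as LogEvent);
}
```

With `EventSource`, the auto-reconnect behavior noted above comes for free; a hand-rolled consumer would need to re-open the connection itself.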
| Feature | REST Endpoints | SSE Endpoints |
|---|---|---|
| Use Case | Historical data, pagination | Real-time updates |
| Connection | Request/response | Persistent connection |
| Data Flow | Pull (client requests) | Push (server sends) |
| Frontend Usage | Initial load, manual refresh | Live monitoring |
Controller Implementation: `services/backend/src/controllers/mcp/sse.controller.ts`
Routes Implementation: `services/backend/src/routes/api/teams/mcp/installations.routes.ts`
Health Check & Recovery Systems
Cumulative Health Check System
Purpose: Template-level health aggregation across all installations of an MCP server.
McpHealthCheckService (`services/backend/src/services/mcp-health-check.service.ts`):
- Aggregates health status from all installations of each MCP server template
- Updates `mcpServers.health_status` based on installation health
- Provides template-level health visibility in admin dashboard
Cron Job: `mcp-health-check` runs every 3 minutes
- Implementation: `services/backend/src/jobs/mcp-health-check.job.ts`
- Checks all MCP server templates
- Updates template health status for admin visibility
Credential Validation System
Purpose: Per-installation OAuth token validation to detect expired/revoked credentials.
McpCredentialValidationWorker (`services/backend/src/workers/mcp-credential-validation.worker.ts`):
- Validates OAuth tokens for each installation
- Sends `health_check` command to satellite with `check_type: 'credential_validation'`
- Satellite performs OAuth validation and reports status
Cron Job: `mcp-credential-validation` runs every 1 minute
- Implementation: `services/backend/src/jobs/mcp-credential-validation.job.ts`
- Validates installations on 15-minute rotation
- Triggers `requires_reauth` status on validation failure
Status Events: Satellite emits `mcp.server.status_changed` with status:
- `online` - Credentials valid
- `requires_reauth` - OAuth token expired/revoked
- `error` - Validation failed with error
Auto-Recovery System
Recovery Trigger:- Health check system detects offline installations
- Backend calls `notifyMcpRecovery(installation_id, team_id)`
- Sends command to satellite: set status = `connecting`, rediscover tools
- Status progression: offline → connecting → discovering_tools → online
Satellite-Side Recovery:
- Satellite detects recovery during tool execution (offline server responds)
- Emits immediate status change event (doesn't wait for health check)
- Triggers asynchronous re-discovery
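The recovery status progression above can be enforced with a simple transition map. This sketch covers only the recovery path; the full installation status machine has 11 states (see the schema section), so the map here is deliberately partial:

```typescript
// Partial transition map for the recovery path only; the full installation
// status machine has 11 states and more transitions than shown here.
const RECOVERY_TRANSITIONS: Record<string, string[]> = {
  offline: ["connecting"],
  connecting: ["discovering_tools", "offline"],
  discovering_tools: ["online", "error"],
};

function canTransition(from: string, to: string): boolean {
  return RECOVERY_TRANSITIONS[from]?.includes(to) ?? false;
}
```

Guarding transitions this way prevents a stale satellite report (e.g. a late `online` for an installation the backend already marked `offline`) from skipping the rediscovery step.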
Background Cron Jobs
The backend runs three MCP-related cron jobs for maintenance and monitoring.
cleanup-mcp-server-logs:
- Schedule: Every 10 minutes
- Purpose: Enforce 100-line limit per installation in `mcpServerLogs` table
- Action: Deletes oldest logs beyond 100-line limit
- Implementation: `services/backend/src/jobs/cleanup-mcp-server-logs.job.ts`
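The 100-line cap amounts to keeping the newest rows per installation and deleting the rest. The real job runs as SQL against the `mcpServerLogs` table; this sketch models the same rule in memory with illustrative field names:

```typescript
// Sketch of the per-installation log cap; the real job is a SQL cron task.
type LogRow = { installationId: string; timestamp: number; message: string };

function trimLogs(rows: LogRow[], maxPerInstallation = 100): LogRow[] {
  // Group rows by installation.
  const byInstall = new Map<string, LogRow[]>();
  for (const row of rows) {
    const list = byInstall.get(row.installationId) ?? [];
    list.push(row);
    byInstall.set(row.installationId, list);
  }
  // Keep only the newest `maxPerInstallation` rows in each group.
  return [...byInstall.values()].flatMap(list =>
    [...list].sort((a, b) => b.timestamp - a.timestamp).slice(0, maxPerInstallation),
  );
}
```

In SQL this is typically a delete over a `ROW_NUMBER() OVER (PARTITION BY installation_id ORDER BY timestamp DESC)` window, but the retained set is the same.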
mcp-health-check:
- Schedule: Every 3 minutes
- Purpose: Template-level health aggregation
- Action: Updates `mcpServers.health_status` column
- Implementation: `services/backend/src/jobs/mcp-health-check.job.ts`
mcp-credential-validation:
- Schedule: Every 1 minute
- Purpose: Detect expired/revoked OAuth tokens
- Action: Sends `health_check` commands to satellites
- Implementation: `services/backend/src/jobs/mcp-credential-validation.job.ts`
Database Schema Integration
Core Table Structure
The satellite system integrates with existing DeployStack schema through 5 specialized tables. For detailed schema definitions, see `services/backend/src/db/schema.ts`.
Satellite Registry (satellites):
- Central registration of all satellites
- Type classification (global/team) and ownership
- Capability tracking and status monitoring
- API key management and authentication
- Satellite URL tracking - Publicly accessible URL updated during registration and first heartbeat
Command Queue (`satelliteCommands`):
- Priority-based command orchestration
- Team context and correlation tracking
- Expiration and retry management
- Command lifecycle tracking
Process Tracking (`satelliteProcesses`):
- Real-time MCP server process monitoring
- Health status and performance metrics
- Team isolation and resource usage
- Integration with existing MCP configuration system
Usage Logging (`satelliteUsageLogs`):
- Audit trail for compliance
- User attribution and team tracking
- Performance analytics and billing data
- Device tracking for enterprise security
Heartbeat Monitoring (`satelliteHeartbeats`):
- System metrics and resource monitoring
- Process health aggregation
- Alert generation and notification triggers
- Historical health trend analysis
New Columns Added (Status & Health Tracking System)
mcpServerInstallations table:
- `status` (text) - Current installation status (11 possible values)
- `status_message` (text, nullable) - Human-readable status context or error details
- `status_updated_at` (timestamp) - Last status change timestamp
- `last_health_check_at` (timestamp, nullable) - Last health check execution time
- `last_credential_check_at` (timestamp, nullable) - Last credential validation time
- `settings` (jsonb, nullable) - Generic settings object (e.g., `request_logging_enabled`)
mcpServers table:
- `health_status` (text, nullable) - Template-level aggregated health status
- `last_health_check_at` (timestamp, nullable) - Last template health check time
- `health_check_error` (text, nullable) - Last health check error message
mcpServerLogs table:
- Stores batched stderr logs from satellites
- 100-line limit per installation (enforced by cleanup cron job)
- Fields: `installation_id`, `team_id`, `log_level`, `message`, `timestamp`
Request log table:
- Stores batched tool execution logs
- `tool_response` (jsonb, nullable) - MCP server response data
- Privacy control: Only captured when `request_logging_enabled=true`
- Fields: `installation_id`, `team_id`, `tool_name`, `request_params`, `tool_response`, `duration_ms`, `success`, `error_message`, `timestamp`
Discovered tools table:
- Stores discovered tools with token counts
- Used for hierarchical router token savings calculations
- Fields: `installation_id`, `server_slug`, `tool_name`, `description`, `input_schema`, `token_count`, `discovered_at`
Team Isolation in Data Model
All satellite data respects team boundaries.
Team-Scoped Data:
- Team satellites linked to specific teams
- Process isolation per team context
- Usage logs with team attribution
- Configuration scoped to team access
Global Satellite Data:
- Global satellites serve all teams with isolation
- Cross-team usage tracking and analytics
- Team-aware resource allocation
- Compliance reporting per team
Authentication & Security
Multi-Layer Security Model
Registration Security:
- Temporary JWT tokens for initial pairing
- Scope validation preventing privilege escalation
- Single-use tokens with automatic expiration
- Audit trail for security compliance
Operational Security:
- Permanent API keys for ongoing communication
- Request authentication and authorization
- Rate limiting and abuse prevention
- IP whitelisting support for team satellites
Team Isolation:
- Team boundary enforcement
- Resource isolation and access control
- Cross-team data leakage prevention
- Compliance with enterprise security policies
Role-Based Access Control Integration
The satellite system integrates with DeployStack's existing role framework.
global_admin:
- Satellite system oversight
- Global satellite registration and management
- Cross-team analytics and monitoring
- System-wide configuration control
Team Administrators:
- Team satellite registration and management
- Team-scoped MCP server installation
- Team resource monitoring and configuration
- Team member access control
Team Members:
- Satellite-hosted MCP server usage
- Team satellite status visibility
- Personal usage analytics access
Multi-Team Users:
- Team satellite registration within memberships
- Cross-team satellite usage through teams
- Limited administrative capabilities
Integration Points
Existing DeployStack Systems
User Management Integration:
- Leverages existing authentication and session management
- Integrates with current permission and role systems
- Uses established user and team membership APIs
- Maintains consistency with platform security model
MCP Server Management Integration:
- Builds on existing MCP server installation system
- Extends current team-based configuration management
- Integrates with established credential management
- Maintains compatibility with existing MCP workflows
Monitoring Integration:
- Uses existing structured logging infrastructure
- Integrates with current metrics collection system
- Leverages established alerting and notification systems
- Maintains consistency with platform observability
Development Implementation
Route Structure
Satellite communication endpoints are organized in `services/backend/src/routes/satellites/`.
Authentication Middleware
Satellite authentication uses dedicated middleware in `services/backend/src/middleware/satelliteAuthMiddleware.ts`.
Key Features:
- Argon2 hash verification for API key validation
- Satellite context injection for route handlers
- Dual authentication support (user cookies + satellite API keys)
- Comprehensive error handling and logging
Database Integration
The satellite system extends the existing database schema with 5 specialized tables.
Schema Location: `services/backend/src/db/schema.ts`
Table Relationships:
- `satellites` table links to existing `teams` and `authUser` tables
- `satelliteProcesses` table references `mcpServerInstallations` for team context
- `satelliteCommands` table includes team context for command execution
- All tables use existing foreign key relationships for data integrity
Configuration Query Implementation
The configuration endpoint implements complex queries to merge team-specific MCP server configurations.
Query Strategy:
- Join `mcpServerInstallations`, `mcpServers`, and `teams` tables
- Global satellites: Query ALL team installations
- Team satellites: Query only specific team installations
- JSON field parsing with comprehensive error handling
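The tolerant JSON parsing mentioned above (bad rows are skipped with a warning rather than failing the whole configuration response) can be sketched as follows; the row shape and logger signature are illustrative:

```typescript
// Sketch of tolerant JSON config parsing: invalid rows are skipped and
// logged rather than failing the whole configuration response.
function parseConfigs<T>(
  rows: { id: string; configJson: string }[],
  warn: (msg: string) => void,
): T[] {
  const parsed: T[] = [];
  for (const row of rows) {
    try {
      parsed.push(JSON.parse(row.configJson) as T);
    } catch {
      // One corrupt installation config must not break the others.
      warn(`skipping installation ${row.id}: invalid JSON configuration`);
    }
  }
  return parsed;
}
```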
Error Handling Patterns
Graceful Degradation:
- Inactive satellites receive empty command arrays instead of 403 errors
- Invalid JSON configurations are skipped with warning logs
- Failed satellite authentication returns 401 with structured error messages
Monitoring & Debugging:
- Structured logging with operation identifiers
- Error context preservation for debugging
- Performance metrics collection (response times, success rates)
Development Workflow
Local Development Setup:
- Start backend server
- Start satellite (automatically registers)
- Monitor logs for successful polling and configuration retrieval
- Use database tools to inspect satellite tables and command queue
API Documentation
For detailed API endpoints, request/response formats, and authentication patterns, see the API Specification generated from the backend OpenAPI schema.
Related Documentation
For detailed satellite architecture and implementation:- Satellite Events - Real-time event processing system
- API Security - Security patterns and authorization
- Database Management - Schema and data management
- OAuth2 Server - OAuth2 implementation details

