Market Maker Service

The Market Maker Service is a production-grade Go microservice responsible for providing liquidity to the TradeX exchange through automated market making strategies.

The service handles:

  • Automated Quoting: Computing and placing bid/ask orders based on real-time market data
  • Inventory Management: Tracking positions and adjusting quotes based on inventory skew
  • Risk Management: Enforcing strict position and loss limits with an automatic kill switch
  • Order Lifecycle: Placing, managing, and canceling orders via the Order Service
  • Shadow Mode: Testing strategies without placing real orders
  • Real-time Monitoring: Exposing metrics and status via REST API

```mermaid
graph TB
    Kafka[Kafka Events] --> MM[Market Maker Service]
    MM --> OrderSvc[Order Service HTTP API]
    OrderSvc --> MatchingEngine[Matching Engine]
    MatchingEngine --> Kafka
    MM --> MetadataSvc[Metadata Service gRPC]
    MM --> PG[(PostgreSQL)]
    MM --> Prom[Prometheus Metrics]
```

The service uses an actor model where each trading symbol has its own dedicated worker:

  • Single-threaded per symbol: No shared mutable state across symbols
  • Event-driven: Workers process messages from a dedicated event queue
  • Isolated failure domains: If one symbol crashes, others continue operating
  • 1000-message queue per worker: Provides buffering for high-volume events

This architecture ensures thread safety without locks and enables independent lifecycle management per symbol.
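A minimal sketch of the per-symbol actor, assuming illustrative type and field names (the real worker's event types and loop body are more involved):

```go
package main

import (
	"fmt"
	"sync"
)

// Event is a simplified stand-in for the market-data and fill events a
// worker consumes (the names here are illustrative, not the service's types).
type Event struct {
	Symbol string
	Kind   string // "orderbook", "trade", "fill", ...
}

// Worker owns all mutable state for one symbol; only its own goroutine
// touches that state, so no locks are needed.
type Worker struct {
	symbol string
	inbox  chan Event // buffered to 1000, mirroring the per-worker queue
	done   chan struct{}
}

func NewWorker(symbol string) *Worker {
	return &Worker{
		symbol: symbol,
		inbox:  make(chan Event, 1000),
		done:   make(chan struct{}),
	}
}

// Run is the single-threaded event loop: one goroutine per symbol.
func (w *Worker) Run(wg *sync.WaitGroup) {
	defer wg.Done()
	for {
		select {
		case ev := <-w.inbox:
			// Update order book / inventory, recompute quotes, etc.
			_ = ev
		case <-w.done:
			return
		}
	}
}

func main() {
	var wg sync.WaitGroup
	w := NewWorker("BTC-USDT")
	wg.Add(1)
	go w.Run(&wg)
	w.inbox <- Event{Symbol: "BTC-USDT", Kind: "trade"}
	close(w.done)
	wg.Wait()
	fmt.Println("worker stopped")
}
```

Because a crashed goroutine takes down only its own loop, each symbol is its own failure domain.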

Key components:

  • Actor Worker: Core market making logic, one per symbol (event loop, quote generation, order management)
  • Quoting Engine: Calculates bid/ask prices with inventory-adjusted spread pricing
  • Risk Guards: Pre-trade checks (position limits, loss limits) and kill switch management
  • Inventory Tracker: Maintains position and P&L tracking from fill events
  • Config Repository: PostgreSQL storage for per-symbol MM configurations (uses SQLC)
  • State Repository: PostgreSQL storage for current symbol states (ACTIVE/PAUSED/HALTED)
  • Position Repository: PostgreSQL storage for positions and P&L (uses SQLC)
  • Order Repository: PostgreSQL storage for open MM orders (uses SQLC)
  • Fill Repository: PostgreSQL storage for fill history (uses SQLC)
  • Outbox Repository: Transactional outbox pattern for Kafka events
  • Kafka Router: Routes incoming Kafka messages to appropriate symbol workers
  • Order Book Processor: Processes order book snapshots and deltas
  • Trade Processor: Processes trade events and updates P&L
  • Instrument Processor: Handles instrument lifecycle events (create, update, halt, resume)

Quote lifecycle (happy path):

  1. Market data events (order book, trades, mark price) arrive via Kafka
  2. Kafka router dispatches events to appropriate symbol worker
  3. Worker updates internal order book and pricing state
  4. Quoting engine computes new bid/ask prices with inventory adjustment
  5. Pre-trade risk checks validate proposed orders
  6. Worker places orders via Order Service HTTP API
  7. Matching engine processes orders and emits fill events
  8. Worker consumes fill events, updates inventory, and adjusts quotes
  9. Post-trade risk checks monitor exposure and P&L

Kill switch flow:

  1. Risk guard detects a violation (exposure, loss, lag, reject storm)
  2. Kill switch triggered, state set to HALTED
  3. All open orders canceled immediately
  4. Risk event published to Kafka (mm.risk.triggered.v1)
  5. Worker stops placing new orders
  6. Manual intervention required to reset via /reset endpoint
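The dispatch step of the flow above (Kafka router to per-symbol worker queue) can be sketched as follows; `Msg` and the drop-on-full policy are assumptions made for illustration:

```go
package main

import "fmt"

// Msg is a simplified stand-in for a decoded Kafka message (the real
// messages are Avro records; the fields here are illustrative).
type Msg struct {
	Symbol  string
	Payload []byte
}

// Router holds one buffered queue per symbol worker.
type Router struct {
	workers map[string]chan Msg
}

// Dispatch routes a message to its symbol's worker without blocking.
// Returning false on a full queue lets the caller surface lag instead
// of stalling the consumer (a policy assumed here for illustration).
func (r *Router) Dispatch(m Msg) bool {
	ch, ok := r.workers[m.Symbol]
	if !ok {
		return false // no worker configured for this symbol
	}
	select {
	case ch <- m:
		return true
	default:
		return false // queue full: worker is falling behind
	}
}

func main() {
	r := &Router{workers: map[string]chan Msg{"BTC-USDT": make(chan Msg, 1000)}}
	fmt.Println(r.Dispatch(Msg{Symbol: "BTC-USDT"}))  // true
	fmt.Println(r.Dispatch(Msg{Symbol: "DOGE-USDT"})) // false: no worker
}
```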

The service uses inventory-adjusted spread pricing to manage risk:

1. spread = mid × (base_spread_bps / 10000)
2. inv_norm = clamp(position / max_position, -1.0, +1.0)
3. bias = inv_norm × inventory_skew_factor × (spread / 2)
4. bid = mid - (spread / 2) - bias - (level × level_spacing)
5. ask = mid + (spread / 2) - bias + (level × level_spacing)
  • When long inventory (positive position):

    • bias is positive
    • Bids are lowered (discourages buying)
    • Asks are lowered (encourages selling)
  • When short inventory (negative position):

    • bias is negative
    • Bids are raised (encourages buying)
    • Asks are raised (discourages selling)
  • Multi-level quoting:

    • Each level adds level_spacing_bps to the spread
    • Provides depth at progressively worse prices

All quotes are automatically rounded to:

  • Tick size: Minimum price increment (from instrument metadata)
  • Lot size: Minimum quantity increment (from instrument metadata)
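Putting the five steps and the tick rounding together, a minimal sketch (the parameter values, the 0.5 tick size, and the reading of `level × level_spacing` as basis points of mid are all illustrative assumptions):

```go
package main

import (
	"fmt"
	"math"
)

// QuoteParams mirrors the relevant mm_configs fields; values used below
// are illustrative, not real instrument metadata.
type QuoteParams struct {
	BaseSpreadBps       float64
	InventorySkewFactor float64
	LevelSpacingBps     float64
	MaxPosition         float64
	TickSize            float64
}

func clamp(x, lo, hi float64) float64 { return math.Max(lo, math.Min(hi, x)) }

// Quote applies the five steps above for one level: spread from bps,
// normalized inventory, bias, then biased bid/ask rounded to tick size
// (bid floored, ask ceiled, so rounding never tightens the quote).
func Quote(mid, position float64, level int, p QuoteParams) (bid, ask float64) {
	spread := mid * p.BaseSpreadBps / 10000
	invNorm := clamp(position/p.MaxPosition, -1, 1)
	bias := invNorm * p.InventorySkewFactor * (spread / 2)
	offset := float64(level) * mid * p.LevelSpacingBps / 10000
	bid = math.Floor((mid-spread/2-bias-offset)/p.TickSize) * p.TickSize
	ask = math.Ceil((mid+spread/2-bias+offset)/p.TickSize) * p.TickSize
	return bid, ask
}

func main() {
	p := QuoteParams{BaseSpreadBps: 10, InventorySkewFactor: 0.5, LevelSpacingBps: 5, MaxPosition: 10000, TickSize: 0.5}
	bid, ask := Quote(100000, 0, 0, p)
	fmt.Println(bid, ask) // 99950 100050 — symmetric when flat
	bid, ask = Quote(100000, 5000, 0, p)
	fmt.Println(bid, ask) // 99937.5 100037.5 — long inventory shifts both sides down
}
```

With a 10 bps base spread on a 100000 mid, the half-spread is 50; at half the maximum position the bias is 0.5 × 0.5 × 50 = 12.5, which lowers both sides as described above.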

Before placing any order, the service validates:

  • Position Limit: |post_fill_position| ≤ max_position_usdt (applies to both long and short exposure)
  • Loss Limit: total_pnl ≥ -max_loss_usdt
  • Symbol State: Must be in ACTIVE state
  • Instrument Status: Not halted by exchange
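A sketch of the four checks above, where the error values and field names are illustrative rather than the service's actual types:

```go
package main

import (
	"errors"
	"fmt"
	"math"
)

// PreTradeCheck bundles the limits and state consulted before each order.
type PreTradeCheck struct {
	MaxPositionUSDT  float64
	MaxLossUSDT      float64
	State            string // ACTIVE / PAUSED / HALTED
	InstrumentHalted bool
}

var (
	ErrPositionLimit = errors.New("post-fill position would exceed max_position_usdt")
	ErrLossLimit     = errors.New("total P&L below -max_loss_usdt")
	ErrNotActive     = errors.New("symbol not in ACTIVE state")
	ErrHalted        = errors.New("instrument halted by exchange")
)

// Validate runs the pre-trade checks; any error vetoes the order.
func (c PreTradeCheck) Validate(postFillPosition, totalPnL float64) error {
	if math.Abs(postFillPosition) > c.MaxPositionUSDT {
		return ErrPositionLimit
	}
	if totalPnL < -c.MaxLossUSDT {
		return ErrLossLimit
	}
	if c.State != "ACTIVE" {
		return ErrNotActive
	}
	if c.InstrumentHalted {
		return ErrHalted
	}
	return nil
}

func main() {
	c := PreTradeCheck{MaxPositionUSDT: 10000, MaxLossUSDT: 500, State: "ACTIVE"}
	fmt.Println(c.Validate(2500, -100)) // passes: within both limits
	fmt.Println(c.Validate(12000, 0))   // rejected: position limit
}
```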

The kill switch activates automatically on:

| Trigger | Condition | Default Threshold |
| --- | --- | --- |
| Exposure Breach | Position exceeds max_position | Configurable per symbol |
| Loss Breach | Total P&L exceeds max_loss | Configurable per symbol |
| Market Halt | Exchange halts the symbol | Immediate |
| Kafka Lag | Consumer lag exceeds threshold | 10 seconds |
| Reject Storm | Too many rejections in window | 5 rejections in 1 minute |
| Response Timeout | Order Service timeout | 50ms per request |

When the kill switch is triggered:

  1. All open MM orders are canceled immediately
  2. Symbol state set to HALTED
  3. Event emitted to Kafka: mm.risk.triggered.v1
  4. Worker stops processing new market data
  5. Manual intervention required to reset via /v1/mm/reset endpoint
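The latch-once behavior implied by steps 1–5 might look like the following sketch, where the hook functions are placeholders for the Order Service cancel call and the Kafka event publish:

```go
package main

import (
	"fmt"
	"sync/atomic"
)

// KillSwitch latches on the first trigger and stays tripped until an
// operator calls Reset (mirroring the /v1/mm/reset endpoint).
type KillSwitch struct {
	tripped   atomic.Bool
	cancelAll func() error       // stand-in for canceling all open MM orders
	publish   func(reason string) // stand-in for emitting mm.risk.triggered.v1
}

// Trip executes the shutdown sequence exactly once per halt; repeated
// triggers while halted are no-ops.
func (k *KillSwitch) Trip(reason string) {
	if !k.tripped.CompareAndSwap(false, true) {
		return // already halted; manual reset required
	}
	_ = k.cancelAll() // 1. cancel all open MM orders
	k.publish(reason) // 2. emit the risk event
	// 3. the caller sets symbol state to HALTED and stops quoting
}

func (k *KillSwitch) Active() bool { return k.tripped.Load() }

// Reset re-arms the switch; only ever called from the manual reset path.
func (k *KillSwitch) Reset() { k.tripped.Store(false) }

func main() {
	ks := &KillSwitch{
		cancelAll: func() error { fmt.Println("canceling all orders"); return nil },
		publish:   func(r string) { fmt.Println("risk event:", r) },
	}
	ks.Trip("loss_breach")
	ks.Trip("loss_breach") // no-op: already tripped
	fmt.Println("halted:", ks.Active())
}
```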

The service exposes a comprehensive REST API for configuration and control. See the Market Maker API Reference for complete documentation.

  • GET /v1/mm-configs - List all configurations
  • POST /v1/mm-configs - Create new configuration
  • GET /v1/mm-configs/{symbol} - Get configuration
  • PUT /v1/mm-configs/{symbol} - Update configuration
  • DELETE /v1/mm-configs/{symbol} - Delete configuration (only if not active)
  • POST /v1/mm-configs/{symbol}/enable - Start market making (PAUSED → ACTIVE)
  • POST /v1/mm-configs/{symbol}/pause - Pause market making (ACTIVE → PAUSED)
  • POST /v1/mm-configs/{symbol}/resume - Resume from pause (PAUSED → ACTIVE)
  • GET /v1/mm/status - Overall system status (all symbols, totals, kill switch state)
  • GET /v1/mm-configs/{symbol}/status - Detailed symbol status (config, position, orders, P&L)
  • POST /v1/mm/emergency/kill - Manually trigger kill switch
  • POST /v1/mm/emergency/flatten - Cancel all orders across all symbols
  • POST /v1/mm/reset - Reset kill switch (requires manual intervention)

Environment variables:

  • POSTGRES_URL: PostgreSQL connection URL (required)
  • REDIS_URL: Redis connection URL (required)
  • KAFKA_BROKERS: Kafka broker addresses (required)
  • SCHEMA_REGISTRY_URL: Confluent Schema Registry URL (required)
  • ORDER_SERVICE_URL: Order service HTTP endpoint (default: http://localhost:3000)
  • METADATA_SERVICE_GRPC_URL: Metadata service gRPC endpoint (default: localhost:50051)
  • AUTH_SERVICE_URL: Authentication service URL (required unless DISABLE_AUTH=true)
  • HTTP_PORT: REST API port (default: 8080)
  • METRICS_PORT: Prometheus metrics port (default: 9090)
  • MM_SHADOW_MODE: Compute quotes without placing orders (default: false)
  • QUOTE_UPDATE_INTERVAL: Quote refresh interval (default: 100ms)
  • MAX_ORDERS_PER_SECOND: Rate limiting (default: 10)
  • MARKET_DATA_STALENESS_THRESHOLD: Max data age (default: 5s)
  • KILL_SWITCH_KAFKA_LAG_THRESHOLD: Max consumer lag (default: 10s)
  • KILL_SWITCH_REJECT_STORM_COUNT: Reject count threshold (default: 5)
  • KILL_SWITCH_REJECT_STORM_WINDOW: Time window (default: 1m)
  • KILL_SWITCH_RESPONSE_TIMEOUT_MS: Response timeout (default: 50)
  • SYSTEM_ACCOUNT_ID: Market maker account UUID (default: 00000000-0000-0000-0000-000000000001)
  • SYSTEM_USER_ID: Market maker user UUID (default: 00000000-0000-0000-0000-000000000001)
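Loading these variables with their documented defaults can be sketched as follows (only a few variables shown):

```go
package main

import (
	"fmt"
	"os"
	"time"
)

// getenv returns an environment variable's value, or the documented
// default when it is unset.
func getenv(key, def string) string {
	if v, ok := os.LookupEnv(key); ok {
		return v
	}
	return def
}

func main() {
	orderSvc := getenv("ORDER_SERVICE_URL", "http://localhost:3000")
	shadow := getenv("MM_SHADOW_MODE", "false") == "true"
	quoteInterval, err := time.ParseDuration(getenv("QUOTE_UPDATE_INTERVAL", "100ms"))
	if err != nil {
		panic(fmt.Sprintf("invalid QUOTE_UPDATE_INTERVAL: %v", err))
	}
	fmt.Println(orderSvc, shadow, quoteInterval)
}
```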

Stored in the mm_configs table:

| Field | Type | Description | Default |
| --- | --- | --- | --- |
| symbol | VARCHAR(32) | Trading symbol (primary key) | Required |
| enabled | BOOLEAN | Enable/disable market making | false |
| base_spread_bps | NUMERIC | Base spread in basis points | 10.0 |
| order_size_usdt | NUMERIC | Order size in USDT | 100.0 |
| inventory_skew_factor | NUMERIC | Inventory bias multiplier | 0.5 |
| max_position_usdt | NUMERIC | Maximum position size | 10000.0 |
| max_loss_usdt | NUMERIC | Maximum loss threshold | 500.0 |
| volatility_multiplier | BOOLEAN | Apply volatility adjustment | true |
| num_levels | INT | Number of price levels | 1 |
| level_spacing_bps | NUMERIC | Spacing between levels | 5.0 |
| check_staleness | BOOLEAN | Enable staleness checks | false |
| staleness_threshold_ms | NUMERIC | Max data age (ms) | 1000.0 |
| quote_interval_ms | NUMERIC | Quote refresh interval (ms) | 100.0 |
| price_threshold_bps | NUMERIC | Requote threshold | 5.0 |

When MM_SHADOW_MODE=true:

  • Quotes are computed normally using real market data
  • NO orders are placed to the Order Service
  • Simulated quotes are emitted to Kafka (mm.quote.simulated.v1)
  • All risk checks are performed as if real
  • Useful for testing strategies, validation, and backtesting
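The shadow-mode branch can be sketched as follows; the hook functions are placeholders for the Order Service call and the Kafka publish:

```go
package main

import "fmt"

// MMQuote holds one computed bid/ask pair (an illustrative type).
type MMQuote struct {
	Symbol   string
	Bid, Ask float64
}

// submit routes a computed quote: in shadow mode it is emitted as a
// simulated-quote event instead of being sent to the Order Service.
func submit(q MMQuote, shadow bool, placeOrder func(MMQuote) error, emitSimulated func(MMQuote)) error {
	if shadow {
		emitSimulated(q) // → mm.quote.simulated.v1
		return nil
	}
	return placeOrder(q) // → Order Service HTTP API
}

func main() {
	q := MMQuote{Symbol: "BTC-USDT", Bid: 99950, Ask: 100050}
	_ = submit(q, true,
		func(MMQuote) error { fmt.Println("real order placed"); return nil },
		func(MMQuote) { fmt.Println("simulated quote emitted") },
	)
}
```

Keeping the quoting and risk paths identical in both modes is what makes shadow-mode results representative of live behavior.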

Performance characteristics:

  • Quote Computation: Sub-millisecond latency
  • Order Placement: < 50ms (via Order Service HTTP)
  • Market Data Processing: Real-time with lag monitoring
  • Actor Queue: 1000-message buffer per symbol
  • Rate Limiting: Configurable orders per second per symbol

Exposed at http://localhost:9090/metrics:

  • mm_inventory_usdt{symbol} - Current inventory value
  • mm_open_orders{symbol} - Open order count
  • mm_spread_bps{symbol} - Quoted spread
  • mm_pnl_usdt{symbol,type} - Realized and unrealized P&L
  • mm_kill_switch_total{symbol,reason} - Kill switch trigger count
  • mm_quote_compute_latency_seconds - Quote computation latency histogram
  • mm_order_placement_latency_seconds - Order placement latency histogram
  • mm_kafka_lag_seconds{topic} - Kafka consumer lag
  • mm_state_gauge{symbol,state} - Current state per symbol
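For reference, one gauge sample from the list above looks like this in the Prometheus text exposition format (the service itself uses a metrics library; this only illustrates the label shape):

```go
package main

import "fmt"

// metricLine renders a single labeled gauge sample in the Prometheus
// text exposition format, e.g. mm_inventory_usdt{symbol="BTC-USDT"} 1234.5
func metricLine(name, symbol string, value float64) string {
	return fmt.Sprintf("%s{symbol=%q} %g", name, symbol, value)
}

func main() {
	fmt.Println(metricLine("mm_inventory_usdt", "BTC-USDT", 1234.5))
	fmt.Println(metricLine("mm_open_orders", "BTC-USDT", 4))
}
```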

OpenTelemetry distributed tracing configured via OPENOBSERVE_OTLP_ENDPOINT.

Structured logging via zap with:

  • Service name
  • Timestamp (ISO8601)
  • Log level (DEBUG, INFO, WARN, ERROR)
  • Symbol identifier
  • Trace ID (for correlation)
  • Stack traces for errors

Consumed Kafka topics:

  • md.orderbook.snap.v1 - Order book snapshots (initial state)
  • md.orderbook.delta.v1 - Order book incremental updates
  • md.trades.v1 - Individual trade ticks
  • md.mark.v1 - Mark price updates (for perpetuals)
  • engine.event.v1 - Trade executions from matching engine
  • md.instrument.created.v1 - New instrument events
  • md.instrument.updated.v1 - Instrument updates (metadata changes)
  • md.instrument.halt.v1 - Trading halt notifications
  • md.instrument.resume.v1 - Trading resume notifications

Published Kafka topics:

  • mm.quote.simulated.v1 - Simulated quotes (shadow mode only)
  • mm.risk.triggered.v1 - Risk event notifications (kill switch, limit breaches)
  • mm.state.changed.v1 - State transition events (ACTIVE ↔ PAUSED ↔ HALTED)

All events use Avro serialization with schemas stored in shared/kafka-schema/market-maker-service/.

SQLC: Type-safe SQL code generation from SQL queries. All database queries are defined in SQL files at internal/repository/sqlc/queries/*.sql and generated to Go code.

Atlas: Declarative database schema management. The schema is defined in internal/infra/db/schema.sql and migrations are generated from it.

Tables:

  • mm_configs: Per-symbol configuration
  • mm_states: Current state (ACTIVE/PAUSED/HALTED)
  • mm_positions: Position tracking with P&L
  • mm_orders: Open orders tracking
  • mm_fills: Fill history
  • outbox: Transactional outbox pattern for Kafka events

On service startup:

  1. Loads all MM configs from database
  2. Loads open MM orders from database
  3. Rebuilds inventory from fill history
  4. Sets all symbols to PAUSED state (safety default)
  5. Waits for explicit enable/resume commands via API
  6. Subscribes to Kafka topics and begins consuming market data

Note: The service NEVER auto-starts market making. Manual intervention is required.

The service must never:

  • ❌ Mutate order books directly
  • ❌ Bypass Order Service
  • ❌ Access user wallets or positions
  • ❌ Use privileged matching logic
  • ❌ Place orders unless state = ACTIVE
  • ❌ Auto-restart after kill switch (requires manual reset)
The service always:

  • ✅ Cancels all orders on ANY uncertainty
  • ✅ Routes orders through Order Service REST API
  • ✅ Tags orders with SYSTEM_MARKET_MAKER account type
  • ✅ Requires manual intervention after kill switch
  • ✅ Starts in PAUSED state on startup
  • ✅ Validates tick size and lot size from metadata