
Data Flow

This document describes the data flows for key operations in the TradeX platform.

Order Placement Flow

```mermaid
sequenceDiagram
    participant Client
    participant Backend
    participant OrderService
    participant WalletService
    participant MetadataService
    participant MatchingEngine
    participant MarketData
    participant Kafka
    Client->>Backend: POST /orders
    Backend->>OrderService: Create Order
    OrderService->>MetadataService: Validate Instrument (gRPC)
    OrderService->>WalletService: Check Balance (gRPC)
    OrderService->>WalletService: Hold Balance (gRPC)
    OrderService->>Kafka: Publish Order Event
    Kafka->>MatchingEngine: Order Event
    MatchingEngine->>MatchingEngine: Match Orders
    MatchingEngine->>Kafka: Publish Trade Event
    Kafka->>OrderService: Trade Event
    OrderService->>WalletService: Release/Update Hold (gRPC)
    Kafka->>MarketData: Trade Event
    MarketData->>MarketData: Update Order Book
    MarketData->>Client: WebSocket Trade Update
```
  1. Client submits order via Backend Service
  2. Order Service validates order against instrument rules (Metadata Service)
  3. Balance check performed via Wallet Service gRPC
  4. Balance held for order execution
  5. Order published to Kafka
  6. Matching Engine consumes order and matches
  7. Trade executed and published to Kafka
  8. Order Service updates order status and releases/updates holds
  9. Market Data Service updates order book and broadcasts via WebSocket
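The balance check and hold in steps 3–4 can be sketched as follows. This is a minimal in-memory illustration of the hold/release semantics, not the Wallet Service's actual API; the `Wallet` class and its fields are assumptions.

```python
# Illustrative sketch of the wallet hold/release semantics used during
# order placement. All names here are hypothetical, not the real gRPC API.
from dataclasses import dataclass


@dataclass
class Wallet:
    available: float
    held: float = 0.0

    def hold(self, amount: float) -> bool:
        """Move funds from available to held if the balance covers the order."""
        if amount > self.available:
            return False
        self.available -= amount
        self.held += amount
        return True

    def release(self, amount: float) -> None:
        """Return held funds after a cancel or an over-held fill."""
        self.held -= amount
        self.available += amount


wallet = Wallet(available=100.0)
assert wallet.hold(40.0)        # balance check passes, funds held
assert not wallet.hold(70.0)    # insufficient available balance
wallet.release(40.0)            # order cancelled, hold released
```

Holding funds up front ensures the Matching Engine never executes an order the user cannot settle.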
Market Data Flow

```mermaid
sequenceDiagram
    participant MatchingEngine
    participant Kafka
    participant MarketData
    participant Redis
    participant PostgreSQL
    participant Client
    MatchingEngine->>Kafka: Trade Event
    MatchingEngine->>Kafka: Order Book Delta
    Kafka->>MarketData: Consume Events
    MarketData->>MarketData: Process Trade
    MarketData->>MarketData: Update Order Book
    MarketData->>Redis: Cache Order Book
    MarketData->>PostgreSQL: Persist Trade
    MarketData->>Client: WebSocket Update
    MarketData->>Kafka: Publish Normalized Trade
```
  1. Matching Engine publishes trade and order book events to Kafka
  2. Market Data Service consumes events
  3. Order book updated in memory and cached in Redis
  4. Trades persisted to PostgreSQL (TimescaleDB)
  5. Updates broadcast to WebSocket subscribers
  6. Normalized trades published to Kafka for downstream consumers
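Applying an order book delta (step 3) typically means setting the size at a price level, with a size of zero removing the level. A minimal sketch, assuming a delta format of (side, price, size) — the actual event schema is not specified here:

```python
# Sketch of in-memory order book maintenance from deltas, as the Market
# Data Service might do before caching to Redis. The delta shape is assumed.
book = {"bids": {}, "asks": {}}


def apply_delta(book: dict, side: str, price: float, size: float) -> None:
    """Set the resting size at a price level; size 0 removes the level."""
    levels = book[side]
    if size == 0:
        levels.pop(price, None)
    else:
        levels[price] = size


apply_delta(book, "bids", 100.0, 2.5)
apply_delta(book, "bids", 99.5, 1.0)
apply_delta(book, "bids", 100.0, 0)   # level fully consumed by a trade
best_bid = max(book["bids"])
```

Deltas keep the Kafka payloads small; consumers that miss events would need a full snapshot to resynchronize.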
Authentication Flow

```mermaid
sequenceDiagram
    participant Client
    participant Backend
    participant AuthService
    participant PostgreSQL
    participant Kafka
    Client->>Backend: POST /auth/login
    Backend->>AuthService: Authenticate User
    AuthService->>PostgreSQL: Validate Credentials
    AuthService->>AuthService: Generate JWT Tokens
    AuthService->>PostgreSQL: Store Refresh Token
    AuthService->>Kafka: Publish Login Event
    AuthService->>Backend: Return Tokens
    Backend->>Client: Return Tokens
```
  1. Client submits credentials
  2. Auth Service validates credentials against database
  3. JWT tokens generated (access and refresh)
  4. Refresh token stored in database
  5. Login event published to Kafka
  6. Tokens returned to client
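The access/refresh token pair in steps 3–4 can be sketched with a signed payload. This uses a plain HMAC signature rather than a JWT library, purely to show the structure; the secret, claims, and TTLs are assumptions, not the Auth Service's configuration.

```python
# Illustrative access/refresh token issuance. A real deployment would use
# a JWT library; the secret and claim names here are hypothetical.
import base64
import hashlib
import hmac
import json
import time

SECRET = b"dev-secret"  # assumption: injected from configuration in production


def make_token(user_id: str, ttl_seconds: int) -> str:
    """Build a base64 payload with subject and expiry, signed with HMAC-SHA256."""
    payload = json.dumps({"sub": user_id, "exp": time.time() + ttl_seconds})
    body = base64.urlsafe_b64encode(payload.encode()).decode()
    sig = hmac.new(SECRET, body.encode(), hashlib.sha256).hexdigest()
    return f"{body}.{sig}"


access = make_token("user-1", ttl_seconds=900)        # short-lived access token
refresh = make_token("user-1", ttl_seconds=86_400)    # persisted server-side
```

Only the refresh token is stored in PostgreSQL: the access token is validated statelessly from its signature, while the refresh token can be revoked by deleting its row.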
Deposit Flow

```mermaid
sequenceDiagram
    participant Client
    participant Backend
    participant WalletService
    participant PostgreSQL
    participant Kafka
    participant External
    Client->>Backend: POST /wallet/deposit
    Backend->>WalletService: Create Deposit
    WalletService->>PostgreSQL: Record Deposit
    WalletService->>External: Generate Deposit Address
    External->>External: Confirm Deposit
    External->>WalletService: Webhook/Callback
    WalletService->>PostgreSQL: Credit Balance
    WalletService->>Kafka: Publish Deposit Confirmed
    WalletService->>Backend: Return Status
    Backend->>Client: Return Status
```
  1. Client initiates deposit request
  2. Wallet Service creates deposit record
  3. Deposit address generated (for crypto deposits)
  4. External system confirms deposit
  5. Balance credited to user account
  6. Deposit confirmed event published to Kafka
  7. Status returned to client
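Because external systems may retry webhooks, the confirmation handler in steps 4–5 should be idempotent. A minimal sketch, with in-memory stand-ins for the database tables — the function and field names are assumptions:

```python
# Sketch of an idempotent deposit-confirmation handler. The deposit ID acts
# as the idempotency key; names are illustrative, not the real schema.
balances: dict[str, float] = {}
processed: set[str] = set()  # deposit IDs already credited


def on_deposit_confirmed(deposit_id: str, user: str, amount: float) -> bool:
    """Credit the user's balance exactly once per deposit confirmation."""
    if deposit_id in processed:
        return False  # duplicate webhook delivery, already credited
    processed.add(deposit_id)
    balances[user] = balances.get(user, 0.0) + amount
    # at this point the service would publish "deposit confirmed" to Kafka
    return True


assert on_deposit_confirmed("dep-1", "user-1", 50.0)
assert not on_deposit_confirmed("dep-1", "user-1", 50.0)  # retry is a no-op
```

In the real service the idempotency check and balance credit would share one database transaction, so a crash between them cannot double-credit.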
Instrument Creation Flow (Maker-Checker)

```mermaid
sequenceDiagram
    participant Admin
    participant MetadataService
    participant PostgreSQL
    participant Outbox
    participant Kafka
    participant Consumer
    Admin->>MetadataService: POST /instruments (Maker)
    MetadataService->>PostgreSQL: Save Instrument (Pending)
    MetadataService->>PostgreSQL: Write to Outbox
    Admin->>MetadataService: POST /instruments/:symbol/approve (Checker)
    MetadataService->>PostgreSQL: Update Status (Active)
    MetadataService->>PostgreSQL: Write to Outbox
    MetadataService->>Outbox: Process Outbox
    Outbox->>Kafka: Publish Instrument Created
    Kafka->>Consumer: Instrument Event
```
  1. Admin creates instrument (maker mode, status: pending)
  2. Instrument saved to database
  3. Event written to outbox table
  4. Admin approves instrument (checker mode, status: active)
  5. Status updated in database
  6. Event written to outbox
  7. Outbox worker processes events
  8. Events published to Kafka
  9. Downstream consumers receive events
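The transactional-outbox pattern in steps 2–8 can be sketched as below: each state change and its event row are written together, and a separate worker drains the outbox to Kafka. The data shapes are illustrative, not the Metadata Service's schema.

```python
# Minimal transactional-outbox sketch. In the real service, the instrument
# write and the outbox row share one PostgreSQL transaction, so an event is
# published if and only if the state change committed. Names are assumed.
instruments: dict[str, str] = {}
outbox: list[dict] = []     # rows awaiting publication
published: list[dict] = []  # stand-in for the Kafka topic


def create_instrument(symbol: str) -> None:
    """Maker step: save the instrument as pending and queue its event."""
    instruments[symbol] = "pending"
    outbox.append({"type": "instrument.created", "symbol": symbol})


def approve_instrument(symbol: str) -> None:
    """Checker step: activate the instrument and queue its event."""
    instruments[symbol] = "active"
    outbox.append({"type": "instrument.approved", "symbol": symbol})


def drain_outbox() -> None:
    """Outbox worker: publish queued events to Kafka in order."""
    while outbox:
        published.append(outbox.pop(0))


create_instrument("BTC-USD")
approve_instrument("BTC-USD")
drain_outbox()
```

The outbox decouples the database commit from Kafka availability: if the broker is down, events simply wait in the table until the worker's next pass.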
Index Price Flow

```mermaid
sequenceDiagram
    participant ExternalExchanges
    participant MarketData
    participant PostgreSQL
    participant Redis
    participant Client
    loop Every Interval
        MarketData->>ExternalExchanges: Fetch Prices (ccxt)
        ExternalExchanges->>MarketData: Return Prices
        MarketData->>MarketData: Calculate Weighted Average
        MarketData->>PostgreSQL: Store Index Price
        MarketData->>Redis: Cache Index Price
        MarketData->>Client: WebSocket Update
    end
```
  1. Index Price Worker fetches prices from external exchanges
  2. Each exchange's price is weighted by its configured weight
  3. Index price calculated as the weighted average across exchanges
  4. Price stored in PostgreSQL and cached in Redis
  5. Update broadcast to WebSocket subscribers
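The weighted-average calculation in steps 2–3 reduces to a few lines. The exchange names and weights below are examples, not the platform's actual configuration:

```python
# Weighted-average index price across external exchanges, as described
# above. Exchange names and weights are illustrative assumptions.
prices = {"exchangeA": 100.0, "exchangeB": 102.0, "exchangeC": 101.0}
weights = {"exchangeA": 0.5, "exchangeB": 0.3, "exchangeC": 0.2}

# Dividing by the weight sum keeps the result correct even if an exchange
# is temporarily excluded and the configured weights no longer sum to 1.
index_price = (
    sum(prices[ex] * weights[ex] for ex in prices) / sum(weights.values())
)
# 100.0*0.5 + 102.0*0.3 + 101.0*0.2 = 100.8
```

Normalizing by the weight sum matters operationally: when one exchange fails the fetch, the worker can drop it from both dictionaries without skewing the index.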