Turqoa Docs

Architecture

Turqoa follows a seven-layer architecture that separates concerns from physical sensor input through to operator interaction and compliance recording. Each layer communicates through well-defined interfaces, enabling independent scaling, testing, and replacement of components.

The Seven Layers

Layer | Name             | Responsibility
------+------------------+------------------------------------------------------------------------------
1     | Sensors          | Physical devices — cameras, RFID readers, LiDAR, GPS, barriers
2     | AI Perception    | Computer vision models — OCR, damage detection, object classification
3     | Validation       | Data enrichment and cross-referencing against external systems (TOS, customs)
4     | Decision Engine  | Rule evaluation, confidence aggregation, and decision production
5     | Orchestration    | Workflow coordination, barrier control, notification routing
6     | Operator Command | Dashboards, manual review queues, override interfaces
7     | Audit            | Immutable logging, evidence packaging, compliance reporting

Layer Descriptions

Layer 1 — Sensors

The sensor layer abstracts physical hardware behind a unified ingestion API. Turqoa supports:

  • IP cameras (ONVIF-compliant) for OCR and damage capture
  • RFID readers for chassis and container tag identification
  • Barrier controllers for gate arm actuation
  • GPS/AIS receivers for vessel and vehicle positioning
  • Environmental sensors for lighting and weather compensation

# Example: Camera configuration
sensors:
  - id: gate-01-front
    type: ip_camera
    protocol: onvif
    endpoint: rtsp://192.168.1.100:554/stream1
    resolution: 2560x1920
    fps: 15
    role: container_front
    zone: gate-01
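
The config above binds a physical device to a logical role. As a sketch of the kind of normalized envelope a unified ingestion API might emit for every sensor type (the field names here are illustrative assumptions, not Turqoa's actual schema):

```python
from dataclasses import dataclass, field
from datetime import datetime, timezone
from typing import Any


@dataclass
class SensorEvent:
    """Normalized event a sensor driver could emit into the ingestion API."""
    sensor_id: str           # matches the `id` field in the sensor config
    sensor_type: str         # ip_camera, rfid_reader, barrier, gps, environmental
    zone: str                # physical grouping, e.g. "gate-01"
    payload: dict[str, Any]  # type-specific data (frame reference, tag ID, position)
    captured_at: datetime = field(
        default_factory=lambda: datetime.now(timezone.utc)
    )


# Hypothetical capture from the camera configured above
event = SensorEvent(
    sensor_id="gate-01-front",
    sensor_type="ip_camera",
    zone="gate-01",
    payload={"frame_ref": "frame-000123"},  # illustrative reference only
)
```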

Layer 2 — AI Perception

Perception models run on edge GPU nodes co-located with camera clusters. Each model produces structured output with confidence scores:

  • Plate OCR — license plate recognition with regional format support
  • Container OCR — ISO 6346 container code and check-digit validation
  • Seal OCR — seal number extraction from high-security seals
  • Damage Detection — 14-category damage classification on container surfaces
  • Object Detection — vehicle type, chassis presence, personnel detection
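
The check-digit rule referenced under Container OCR comes from ISO 6346 itself: letters map to values 10–38 (skipping multiples of 11), each of the first ten characters is weighted by a power of two, and the sum modulo 11 modulo 10 must equal the eleventh character. A minimal validator for that rule:

```python
import string

# Letter values per ISO 6346: A=10 upward, skipping multiples of 11 (11, 22, 33).
_LETTER_VALUES = {}
_v = 10
for _ch in string.ascii_uppercase:
    while _v % 11 == 0:
        _v += 1
    _LETTER_VALUES[_ch] = _v
    _v += 1


def iso6346_check_digit(code: str) -> int:
    """Check digit for the first 10 characters of a container code."""
    total = sum(
        (_LETTER_VALUES[ch] if ch.isalpha() else int(ch)) * 2**i
        for i, ch in enumerate(code[:10].upper())
    )
    return total % 11 % 10


def is_valid_container_code(code: str) -> bool:
    """Validate a full 11-character code, e.g. an OCR read of 'CSQU3054383'."""
    return len(code) == 11 and iso6346_check_digit(code) == int(code[-1])


print(iso6346_check_digit("CSQU305438"))       # → 3
print(is_valid_container_code("CSQU3054383"))  # → True
```

A read that passes OCR but fails this arithmetic can be rejected before any external lookup.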

Layer 3 — Validation

The validation layer enriches perception outputs by cross-referencing external systems:

  • Terminal Operating System (TOS) for booking and appointment verification
  • Customs single-window for clearance status
  • Carrier databases for container ownership
  • Watchlists for flagged vehicles or containers
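
Conceptually, enrichment merges each external lookup into the transaction under a `validation` key. The client interfaces and field names below are illustrative stand-ins, not Turqoa's actual API:

```python
def enrich(transaction: dict, tos, customs, watchlist) -> dict:
    """Attach external-system lookups to a transaction (hypothetical shape)."""
    container = transaction["ocr"]["container"]["value"]
    plate = transaction["ocr"]["plate"]["value"]
    transaction["validation"] = {
        # Booking / appointment status from the Terminal Operating System
        "tos": {"appointment": tos.appointment_status(container)},
        # Clearance status from the customs single-window
        "customs": {"clearance": customs.clearance_status(container)},
        # Flag if either the vehicle or the container is on a watchlist
        "watchlist": {
            "flagged": watchlist.is_flagged(plate) or watchlist.is_flagged(container)
        },
    }
    return transaction
```

The decision engine then reads these fields by path (e.g. `validation.tos.appointment`) without knowing which external system produced them.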

Layer 4 — Decision Engine

The decision engine evaluates a rule set against the enriched transaction data and produces a structured decision. Rules are expressed in a declarative policy DSL:

rules:
  - name: auto_approve_known_carrier
    when:
      - ocr.container.confidence >= 0.95
      - validation.tos.appointment == "confirmed"
      - validation.customs.clearance == "granted"
      - damage.severity == "none"
    then:
      decision: approve
      auto: true
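
The DSL's real evaluation semantics are internal to the engine, but conceptually each `when` clause is a path–operator–literal triple checked against the enriched transaction. A toy evaluator for just the subset shown above:

```python
import operator

_OPS = {">=": operator.ge, "<=": operator.le, "==": operator.eq}


def _lookup(data: dict, dotted: str):
    """Resolve a dotted path like 'ocr.container.confidence' in nested dicts."""
    for key in dotted.split("."):
        data = data[key]
    return data


def evaluate(rule: dict, transaction: dict) -> bool:
    """True when every `when` condition of the form 'path op literal' holds."""
    for cond in rule["when"]:
        path, op, raw = cond.split(maxsplit=2)
        expected = raw.strip('"')
        actual = _lookup(transaction, path)
        if isinstance(actual, (int, float)):
            expected = float(expected)
        if not _OPS[op](actual, expected):
            return False
    return True
```

Against a transaction with a 0.97-confidence container read, a confirmed appointment, granted clearance, and no damage, `evaluate` returns `True` and the `then` branch would fire.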

Layer 5 — Orchestration

The orchestration layer translates decisions into physical and digital actions:

  • Barrier open/close commands
  • Notification dispatch (SMS, email, push)
  • TOS transaction updates
  • Queue management and lane assignment
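
A dispatcher for these actions might look like the sketch below; the handler interfaces and decision fields are illustrative assumptions, not Turqoa's real orchestration API:

```python
def orchestrate(decision: dict, barrier, notifier, tos) -> list[str]:
    """Translate a decision into physical and digital actions (hypothetical)."""
    actions = []
    if decision["decision"] == "approve":
        barrier.open(decision["zone"])                     # raise the gate arm
        tos.update(decision["transaction_id"], status="gate_in")
        actions += ["barrier_open", "tos_update"]
    elif decision["decision"] == "deny":
        notifier.send(
            decision["driver_contact"],
            "Entry denied - please report to the gate office",
        )
        actions.append("notify_driver")
    return actions  # "review" decisions fall through to the operator queue
```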

Layer 6 — Operator Command

The Command Center provides operators with real-time visibility into all transactions, alerts, and system health. Key capabilities include:

  • Live transaction feed with AI-annotated images
  • Manual review queue with side-by-side evidence panels
  • Override controls with mandatory justification fields
  • System health dashboards and camera status monitoring

Layer 7 — Audit

Every event flowing through layers 1–6 is captured in the audit layer. Records are stored in an append-only ledger with cryptographic chaining to detect tampering.
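
Turqoa's exact chaining scheme isn't specified here, but a common construction, sketched below with SHA-256, stores each record alongside the hash of its predecessor, so that modifying any record invalidates every hash after it:

```python
import hashlib
import json

GENESIS = "0" * 64  # sentinel "previous hash" for the first record


def _digest(event: dict, prev_hash: str) -> str:
    """Deterministic hash of an event together with its predecessor's hash."""
    body = json.dumps({"event": event, "prev_hash": prev_hash}, sort_keys=True)
    return hashlib.sha256(body.encode()).hexdigest()


def append_record(ledger: list, event: dict) -> dict:
    """Append an event, chaining it to the hash of the previous record."""
    prev_hash = ledger[-1]["hash"] if ledger else GENESIS
    record = {"event": event, "prev_hash": prev_hash,
              "hash": _digest(event, prev_hash)}
    ledger.append(record)
    return record


def verify_chain(ledger: list) -> bool:
    """Recompute every hash; any edit to an earlier record breaks the chain."""
    prev_hash = GENESIS
    for record in ledger:
        if (record["prev_hash"] != prev_hash
                or record["hash"] != _digest(record["event"], prev_hash)):
            return False
        prev_hash = record["hash"]
    return True
```

Because each hash covers the previous one, tampering cannot be hidden by rewriting a single record; the attacker would have to recompute every subsequent hash, which periodic anchoring of the latest hash to external storage defeats.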

Data Flow

A typical gate transaction flows through the architecture as follows:

  1. A truck arrives at the gate and triggers the sensor layer (camera capture, RFID read)
  2. The perception layer processes images and produces OCR reads and damage assessments
  3. The validation layer checks reads against TOS appointments and customs clearance
  4. The decision engine evaluates rules and produces an approve/review/deny decision
  5. The orchestration layer actuates the barrier and updates the TOS
  6. If the decision requires review, the operator command layer presents it to an operator
  7. The audit layer records every step with timestamps and evidence artifacts
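
Stitched together, the seven steps read as a linear pipeline. Every call below is a hypothetical stand-in for the corresponding layer, not Turqoa's actual interfaces:

```python
def process_gate_transaction(raw_capture, layers):
    """Drive one transaction through the layers (illustrative names only)."""
    perception = layers.perceive(raw_capture)   # step 2: OCR + damage assessment
    enriched = layers.validate(perception)      # step 3: TOS / customs lookups
    decision = layers.decide(enriched)          # step 4: rule evaluation
    layers.orchestrate(decision)                # step 5: barrier + TOS update
    if decision["decision"] == "review":
        layers.queue_for_operator(decision)     # step 6: manual review queue
    layers.audit(decision)                      # step 7: append to the ledger
    return decision
```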

Deployment Topology

Turqoa supports three deployment models:

Model     | Description                                                  | Use Case
----------+--------------------------------------------------------------+--------------------------------------------------
Edge-Only | All processing on local hardware at the terminal             | Air-gapped environments, low-latency requirements
Hybrid    | Edge perception with cloud decision engine and audit         | Standard deployment for most terminals
Cloud     | Full cloud deployment with camera streams via secure tunnel  | Remote or low-volume facilities

Note: Edge nodes require NVIDIA GPU hardware (T4 minimum) for real-time perception model inference. See the Quickstart for detailed hardware requirements.