Architecture
Turqoa follows a seven-layer architecture that separates concerns from physical sensor input through to operator interaction and compliance recording. Each layer communicates through well-defined interfaces, enabling independent scaling, testing, and replacement of components.
The Seven Layers
| Layer | Name | Responsibility |
|---|---|---|
| 1 | Sensors | Physical devices — cameras, RFID readers, LiDAR, GPS, barriers |
| 2 | AI Perception | Computer vision models — OCR, damage detection, object classification |
| 3 | Validation | Data enrichment and cross-referencing against external systems (TOS, customs) |
| 4 | Decision Engine | Rule evaluation, confidence aggregation, and decision production |
| 5 | Orchestration | Workflow coordination, barrier control, notification routing |
| 6 | Operator Command | Dashboards, manual review queues, override interfaces |
| 7 | Audit | Immutable logging, evidence packaging, compliance reporting |
Layer Descriptions
Layer 1 — Sensors
The sensor layer abstracts physical hardware behind a unified ingestion API. Turqoa supports:
- IP cameras (ONVIF-compliant) for OCR and damage capture
- RFID readers for chassis and container tag identification
- Barrier controllers for gate arm actuation
- GPS/AIS receivers for vessel and vehicle positioning
- Environmental sensors for lighting and weather compensation
```yaml
# Example: Camera configuration
sensors:
  - id: gate-01-front
    type: ip_camera
    protocol: onvif
    endpoint: rtsp://192.168.1.100:554/stream1
    resolution: 2560x1920
    fps: 15
    role: container_front
    zone: gate-01
```
Layer 2 — AI Perception
Perception models run on edge GPU nodes co-located with camera clusters. Each model produces structured output with confidence scores:
- Plate OCR — license plate recognition with regional format support
- Container OCR — ISO 6346 container code and check-digit validation
- Seal OCR — seal number extraction from high-security seals
- Damage Detection — 14-category damage classification on container surfaces
- Object Detection — vehicle type, chassis presence, personnel detection
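Each perception result is a small structured record rather than raw pixels. A container OCR read might look like the following; the field names and schema here are illustrative assumptions, not Turqoa's actual output format:

```yaml
# Hypothetical output of a single container OCR read.
# Field names are illustrative, not the actual schema.
model: container_ocr
source: gate-01-front
timestamp: 2024-05-14T09:32:07Z
result:
  code: CSQU3054383
  check_digit_valid: true
  confidence: 0.97
```

Downstream layers consume only these structured records; the captured frames themselves are retained as evidence artifacts for the audit layer.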
Layer 3 — Validation
The validation layer enriches perception outputs by cross-referencing external systems:
- Terminal Operating System (TOS) for booking and appointment verification
- Customs single-window for clearance status
- Carrier databases for container ownership
- Watchlists for flagged vehicles or containers
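After cross-referencing, the transaction record carries both the raw perception reads and the enrichment results. A sketch of what an enriched transaction could look like (the structure and field names are assumptions for illustration):

```yaml
# Hypothetical enriched transaction after validation.
# Schema is illustrative only.
container: CSQU3054383
validation:
  tos:
    appointment: confirmed
  customs:
    clearance: granted
  carrier:
    owner: known
  watchlist:
    hit: false
```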
Layer 4 — Decision Engine
The decision engine evaluates a rule set against the enriched transaction data and produces a structured decision. Rules are expressed in a declarative policy DSL:
```yaml
rules:
  - name: auto_approve_known_carrier
    when:
      - ocr.container.confidence >= 0.95
      - validation.tos.appointment == "confirmed"
      - validation.customs.clearance == "granted"
      - damage.severity == "none"
    then:
      decision: approve
      auto: true
```
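The mechanics of evaluating such a rule can be sketched in a few lines: resolve each dotted path against the enriched transaction, check every condition, and emit the rule's decision only if all conditions hold. This is a minimal illustration, not Turqoa's actual engine; the transaction shape and rule encoding are assumptions.

```python
# Minimal sketch of declarative rule evaluation.
# Transaction shape and rule encoding are illustrative assumptions.

def get_path(tx, dotted):
    """Resolve a dotted path like 'ocr.container.confidence' in a nested dict."""
    value = tx
    for key in dotted.split("."):
        value = value[key]
    return value

def evaluate(rule, tx):
    """Return the rule's decision if every condition holds, else None."""
    for path, op, expected in rule["when"]:
        actual = get_path(tx, path)
        ok = actual >= expected if op == ">=" else actual == expected
        if not ok:
            return None
    return rule["then"]

rule = {
    "name": "auto_approve_known_carrier",
    "when": [
        ("ocr.container.confidence", ">=", 0.95),
        ("validation.tos.appointment", "==", "confirmed"),
        ("validation.customs.clearance", "==", "granted"),
        ("damage.severity", "==", "none"),
    ],
    "then": {"decision": "approve", "auto": True},
}

tx = {
    "ocr": {"container": {"confidence": 0.97}},
    "validation": {"tos": {"appointment": "confirmed"},
                   "customs": {"clearance": "granted"}},
    "damage": {"severity": "none"},
}

print(evaluate(rule, tx))  # {'decision': 'approve', 'auto': True}
```

A real engine would also aggregate confidence across multiple rules and fall back to manual review when no rule matches, but the core loop is the same: all conditions must pass before a decision fires.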
Layer 5 — Orchestration
The orchestration layer translates decisions into physical and digital actions:
- Barrier open/close commands
- Notification dispatch (SMS, email, push)
- TOS transaction updates
- Queue management and lane assignment
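A single approved transaction typically fans out into several of these actions. The message format below is an assumption for illustration, not the actual orchestration protocol:

```yaml
# Hypothetical action batch for an approved gate-in transaction.
# Message format is illustrative only.
actions:
  - type: barrier
    target: gate-01
    command: open
  - type: notification
    channel: sms
    template: gate_pass_granted
  - type: tos_update
    transaction: gate-in
    status: completed
```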
Layer 6 — Operator Command
The Command Center provides operators with real-time visibility into all transactions, alerts, and system health. Key capabilities include:
- Live transaction feed with AI-annotated images
- Manual review queue with side-by-side evidence panels
- Override controls with mandatory justification fields
- System health dashboards and camera status monitoring
Layer 7 — Audit
Every event flowing through layers 1–6 is captured in the audit layer. Records are stored in an append-only ledger with cryptographic chaining to detect tampering.
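Cryptographic chaining means each record's hash covers both its own contents and the previous record's hash, so altering any historical record invalidates every hash after it. A minimal sketch of the idea, with assumed field names and SHA-256 as the digest (the actual ledger implementation may differ):

```python
import hashlib
import json

# Sketch of an append-only ledger with hash chaining.
# Field names and hashing choices are illustrative assumptions.

GENESIS = "0" * 64  # placeholder hash for the first record

def append_record(ledger, event):
    """Append an event, chaining its hash to the previous record."""
    prev_hash = ledger[-1]["hash"] if ledger else GENESIS
    body = json.dumps(event, sort_keys=True)
    digest = hashlib.sha256((prev_hash + body).encode()).hexdigest()
    ledger.append({"event": event, "prev_hash": prev_hash, "hash": digest})

def verify(ledger):
    """Recompute the chain; any tampered record breaks every later hash."""
    prev = GENESIS
    for rec in ledger:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev_hash"] != prev:
            return False
        if hashlib.sha256((prev + body).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True

ledger = []
append_record(ledger, {"layer": 2, "type": "ocr_read", "code": "CSQU3054383"})
append_record(ledger, {"layer": 4, "type": "decision", "decision": "approve"})
print(verify(ledger))  # True
ledger[0]["event"]["code"] = "TAMPERED"
print(verify(ledger))  # False
```

In practice the chain head would be periodically anchored to external storage (or signed) so that wholesale re-writing of the ledger is also detectable.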
Data Flow
A typical gate transaction flows through the architecture as follows:

1. A truck arrives at the gate and triggers the sensor layer (camera capture, RFID read)
2. The perception layer processes images and produces OCR reads and damage assessments
3. The validation layer checks reads against TOS appointments and customs clearance
4. The decision engine evaluates rules and produces an approve/review/deny decision
5. The orchestration layer actuates the barrier and updates the TOS
6. If the decision requires review, the operator command layer presents it to an operator
7. The audit layer records every step with timestamps and evidence artifacts
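The flow above can be sketched as a chain of layer functions, each consuming the previous layer's output and adding its own fields. All payload shapes and function names here are illustrative assumptions:

```python
# Hedged sketch of the end-to-end data flow as chained layer functions.
# Payload shapes are illustrative assumptions, not actual interfaces.

def perceive(capture):
    """Layer 2: turn a raw capture into structured reads."""
    return {**capture,
            "ocr": {"container": "CSQU3054383", "confidence": 0.97},
            "damage": "none"}

def validate(reading):
    """Layer 3: enrich reads against external systems."""
    return {**reading, "appointment": "confirmed", "clearance": "granted"}

def decide(enriched):
    """Layer 4: approve only when every auto-approval condition holds."""
    auto_ok = (enriched["ocr"]["confidence"] >= 0.95
               and enriched["appointment"] == "confirmed"
               and enriched["clearance"] == "granted"
               and enriched["damage"] == "none")
    return {**enriched, "decision": "approve" if auto_ok else "review"}

def orchestrate(decision):
    """Layer 5: translate the decision into physical/digital actions."""
    if decision["decision"] == "approve":
        actions = ["open_barrier", "update_tos"]
    else:
        actions = ["queue_review"]
    return {**decision, "actions": actions}

tx = orchestrate(decide(validate(perceive({"lane": "gate-01", "rfid": "A1B2"}))))
print(tx["decision"], tx["actions"])  # approve ['open_barrier', 'update_tos']
```

The point of the chained shape is that each layer can be tested, scaled, or replaced independently, as long as the payload contract between adjacent layers holds.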
Deployment Topology
Turqoa supports three deployment models:
| Model | Description | Use Case |
|---|---|---|
| Edge-Only | All processing on local hardware at the terminal | Air-gapped environments, low-latency requirements |
| Hybrid | Edge perception with cloud decision engine and audit | Standard deployment for most terminals |
| Cloud | Full cloud deployment with camera streams via secure tunnel | Remote or low-volume facilities |
Note: Edge nodes require NVIDIA GPU hardware (T4 minimum) for real-time perception model inference. See the Quickstart for detailed hardware requirements.