Domain Deep-Dive · v1.0

Autonomous Systems

How the Adaptive Utility Agent framework gives autonomous systems — industrial robots, cobots, drones, AMRs, inspection robots — dynamic weight shifting, principled abstention, independently updatable specialists, and the runtime evidence trail that living safety cases require.

Industrial robot engineers · Cobot integrators · Drone & AMR teams · Safety-case engineers · Autonomy architects

1. The autonomous systems spectrum

The self-driving domain doc covers one application — road vehicles — in depth. This document covers the broader class: any autonomous system that makes real-time decisions under uncertainty, subject to hard safety constraints, across a range of platforms that share the same structural problems even though their operating environments differ substantially.

🦾

Industrial robots & cobots

Fixed-arm robots on production lines; collaborative robots (cobots) sharing workspace with humans. Key tension: throughput in free zones, hard safety constraints when humans are present. ISO 10218:2025 governs both.

🤖

Autonomous mobile robots (AMRs)

Warehouse logistics, hospital delivery, factory floor transport. Navigate dynamic environments alongside humans. Must handle unexpected obstacles, changing routes, and safe-stop without damaging payloads.

🚁

Drones — delivery & inspection

Delivery drones balancing speed, energy, and airspace safety. Inspection drones operating in hazardous environments (nuclear, offshore, confined spaces) where sensor degradation and power budgets are hard constraints.

🔬

Surgical & medical robots

Highest-stakes autonomous systems. Confidence minimums are near-absolute. Any AI layer must produce an auditable decision trail, never proceed on low confidence, and support post-procedure review.

🏗️

Inspection & infrastructure robots

Nuclear, subsea, mining, and civil infrastructure inspection. Often operating in environments too hazardous for humans, with no real-time human oversight available. Formal abstention and safe-state behavior are non-negotiable.

📋

Safety-case engineering teams

Engineers who write assurance cases, conduct STPA/FMEA analyses, and maintain living safety arguments. Need runtime evidence — not just design-time proofs — that the system's behavior still satisfies its safety argument after updates.

What all of these share: real-time decisions under uncertainty, hard safety constraints that must not be violated, objectives that conflict across operating contexts, and the same structural problem with static AI — there is no feedback loop between detected errors and system behavior between releases.

2. The structural problem with static AI in autonomous systems

Current approaches to AI in autonomous systems treat the AI component as a static artifact: trained offline, validated at design time, deployed into the field. When the AI makes an error — a perception failure at a specific junction type, a planning bias in narrow corridors, a wrong torque estimate under load — there is no mechanism to correct that behavior between training cycles. The system will make the same error tomorrow, and in every unit in the fleet, until a new model version ships.

For autonomous systems, this failure mode compounds in three ways that make it worse than it is for general AI:

  1. Fleet-wide repetition: a single uncorrected error is replicated by every unit in the fleet until the next release, multiplying its exposure.
  2. Environmental drift: the physical operating environment changes (lighting, layouts, payloads, sensor wear) while the model stays frozen, so design-time validation results decay over time.
  3. Evidence obligations: safety cases increasingly require runtime evidence of continued safe behavior, which a static, design-time-validated artifact cannot produce.

The framework addresses all three: the correction loop reduces repeated errors between releases, the confidence gate handles environmental drift by abstaining rather than proceeding on low confidence, and the utility log + assertions store produce the runtime evidence that living safety cases require.

3. Context-adaptive weight shifting across operating modes

The same autonomous system operates under fundamentally different risk profiles across its operating modes. A cobot moving freely in a cage-free zone has very different safety requirements from the same cobot executing a collaborative task 30 cm from a human operator. A delivery drone in clear conditions is operating very differently from the same drone with its battery at 20% capacity and a storm front approaching.

The framework's field-weighted utility function handles this with a single formula — the weights shift with context, producing different behavior without separate rule sets for each scenario:

U = w_s(f) · Safety  +  w_e(f) · Efficacy  +  w_c(f) · Comfort/Throughput

f — operating context (human-present, hazard zone, normal, degraded sensors...)

Decision rule:
  act    if C ≥ C_min(f)  AND  E ≥ E_min(f)
  safe-state / abstain     otherwise

Weight profiles across autonomous system operating contexts

Industrial Robot · Scenario A
Free-operation zone (no humans)
Safety (w_s)
0.40
Efficacy / throughput
0.50
Comfort / smoothness
0.10

Throughput is a significant priority in cage-free zones. Safety constraints still apply but do not dominate at the expense of production rate. Typical of high-speed pick-and-place in human-absent areas.

Industrial Robot · Scenario B
Human-present / collaborative mode (cobot)
Safety (w_s)
0.85
Efficacy / throughput
0.12
Comfort / smoothness
0.03

ISO 10218:2025 collaborative mode. When a human enters the collaborative workspace, safety dominates at 0.85. The same robot that was optimizing throughput now accepts lower speed and wider clearances without a separate control program.

Drone · Scenario C
Standard delivery / inspection (clear conditions)
Safety (w_s)
0.50
Efficiency / speed
0.40
Energy / endurance
0.10

Normal delivery or inspection conditions. Efficiency is a meaningful weight — on-time delivery and battery management matter. From §2.2 of the full whitepaper.

Drone · Scenario D
Elevated hazard (storm, low battery, sensor degradation)
Safety (w_s)
0.80
Efficiency / speed
0.15
Energy / endurance
0.05

Storm front detected, battery below 25%, or sensor uncertainty above threshold. Safety rises to 0.80 — longer route, lower altitude, or abort-and-return selected automatically. The same formula, different weights. From §2.2.

No separate rule sets. The cobot does not need a "collaborative mode program" and a "free-operation program." The drone does not need a "storm rule." The operating context updates the weight vector — detected by the field classifier from sensor inputs and environmental signals — and the utility function produces the appropriate behavior automatically. Adding a new operating context means adding a new weight profile and a classifier signal, not a new decision program.
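The context-to-weights mechanism can be sketched in a few lines of Python. This is an illustrative sketch, not the framework's API: the profile values mirror Scenarios A and B above, and all function and field names are assumptions.

```python
from dataclasses import dataclass

# Illustrative weight profiles keyed by operating context ("field").
# Values mirror Scenarios A and B above; real profiles come from the
# application's risk assessment, not from code constants.
WEIGHT_PROFILES = {
    "free_zone":     {"safety": 0.40, "efficacy": 0.50, "comfort": 0.10},
    "human_present": {"safety": 0.85, "efficacy": 0.12, "comfort": 0.03},
}

# Field-specific confidence floors (see the C_min table in section 4).
C_MIN = {"free_zone": 0.70, "human_present": 0.90}

@dataclass
class Scores:
    safety: float
    efficacy: float
    comfort: float

def utility(field: str, s: Scores) -> float:
    """U = w_s(f)*Safety + w_e(f)*Efficacy + w_c(f)*Comfort."""
    w = WEIGHT_PROFILES[field]
    return (w["safety"] * s.safety
            + w["efficacy"] * s.efficacy
            + w["comfort"] * s.comfort)

def decide(field: str, s: Scores, confidence: float) -> str:
    """Hard gate: a human entering the workspace changes the field,
    which changes both the weights and the confidence floor."""
    return "act" if confidence >= C_MIN[field] else "safe_state"
```

With identical scores, switching the field from "free_zone" to "human_present" both reweights U toward safety and raises the confidence floor from 0.70 to 0.90; no second control program is involved.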

Sensor degradation and confidence-driven weight tightening

When sensor fusion confidence falls — lidar degraded in dust, camera occluded, GPS denied in a tunnel — the confidence term C falls independently of the weight profile shift. The interaction between falling C and C_min produces the appropriate response: C_min for safety-critical autonomous systems is typically 0.85–0.95, meaning that sensor degradation drives abstention before it drives incorrect action. This is the designed behavior, not a failure mode.

4. Confidence gates, safe states, and formal abstention

Every safety standard governing autonomous systems requires the system to transition to a safe state when it reaches the boundary of its competence. ISO 10218:2025 requires application-specific Performance Level (PL) determination with defined fail-safe behaviors. IEC 61508 requires safe failure fractions and fail-operational / fail-safe mode definitions. The framework's confidence gate produces exactly this behavior formally and testably.

act        if  C ≥ C_min(f)  AND  E ≥ E_min(f)
safe-state if  C < C_min(f)  OR   E < E_min(f)

The gate is a hard decision boundary, not a soft preference. Below C_min, the system does not produce a lower-quality output — it produces no output and initiates the safe-state transition. This is what makes the abstention behavior formally testable: a test suite can verify that for any input that drives C below C_min, the system always enters the safe state.

Platform type | C_min | Safe-state behavior | Standard reference
Surgical / medical robot | 0.95 | Hold position; alert surgeon; await explicit override | IEC 62304, ISO 14971
Industrial robot (human-present) | 0.90 | Reduce speed to collaborative limit; stop if collision predicted | ISO 10218:2025 PL e
Inspection robot (hazardous zone) | 0.90 | Halt; return to safe zone; alert remote operator | IEC 61508 SIL 2/3
Delivery drone (elevated hazard) | 0.85 | Abort delivery; return to base; do not attempt route | FAA Part 107, EASA UAS
AMR (warehouse, humans present) | 0.80 | Reduce speed; sound alert; stop at safe distance | ISO 3691-4, ANSI B56.5
Industrial robot (free zone) | 0.70 | Stop cycle; request human review before resuming | ISO 10218:2025 PL d

Illustrative thresholds. Must be validated against the specific ODD and risk assessment for each application. See §5.1 of the full whitepaper for the C_min derivation methodology from professional licensing standards.

Why this matters for functional safety certification. ISO 10218:2025's shift to explicit, application-specific Performance Level determination means that "the robot stops when it should" must be provable, not assumed. The confidence gate is a directly testable safety function: given any test case that drives measured confidence below C_min, the system must produce a safe-state transition. This maps cleanly to the functional safety requirement structure — it is a defined input-output safety function with a verifiable boundary condition.
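One way such a test suite might look, sketched in Python with hypothetical names and illustrative thresholds: an exhaustive sweep over a discretized (C, E) grid, verifying that every point below either floor maps to the safe state.

```python
import itertools

def gate(confidence: float, efficacy: float, c_min: float, e_min: float) -> str:
    # Hard decision boundary from section 4: act only if BOTH floors are met;
    # otherwise initiate the safe-state transition.
    return "act" if (confidence >= c_min and efficacy >= e_min) else "safe_state"

def verify_fail_safe(c_min: float, e_min: float, steps: int = 101) -> bool:
    # Sweep a discretized (C, E) grid; every point below either floor
    # must map to "safe_state". This is the verifiable boundary condition
    # a functional safety reviewer can run as a regression test.
    grid = [i / (steps - 1) for i in range(steps)]
    return all(
        gate(c, e, c_min, e_min) == "safe_state"
        for c, e in itertools.product(grid, grid)
        if c < c_min or e < e_min
    )
```

Because the gate is a hard boundary rather than a soft preference, the property holds for every grid point, not merely on average.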

5. Specialist decomposition for the autonomy stack

Autonomous systems already reason in terms of functional decomposition — perception, planning, and control are distinct pipelines in every production autonomy stack. The Micro-Expert model formalizes this at the model-graph level: each functional domain becomes an independently deployable specialist with its own weights, calibration cycle, confidence signal, and blue-green deployment lifecycle.

👁️

Perception specialist

Object detection, tracking, pose estimation, environment mapping. Produces confidence-annotated scene representations. Most update-sensitive — edge cases from field encounters improve this specialist first. Validation via labeled sensor recordings.

🗺️

Motion planning specialist

Trajectory generation, path selection, collision-free motion. Updated independently when planning improvements don't require perception retraining. Confidence signal for trajectory feasibility drives abstention before a manoeuvre executes.

⚠️

Safety constraint specialist

Encodes hard safety rules: speed limits in human-present zones, force thresholds for collaborative contact, exclusion zones, emergency stop conditions. Narrowest update scope — changing a force threshold updates only this specialist.

📋

Domain rules specialist

Application-specific policy: warehouse routing rules, nuclear inspection protocols, airspace regulations. Most updateable when entering new geographies or application domains. ISO 10218:2025 collaborative mode parameters live here.

⚖️

Arbiter Agent

Resolves contradictions between specialists — e.g., planning specialist generates a trajectory that the safety constraint specialist flags. Applies conservation-first policy under unresolved contradiction. Logs all resolution events for post-hoc review.

The inter-specialist interface is a structured protocol that all specialists share:

Request:  { query, context, field, confidence_floor, session_id }
Response: { answer, confidence, assertions[], uncertainty_flags, U_score }

Contradiction between specialists — planning specialist generates a proceed trajectory while the safety constraint specialist flags a threshold violation — is the key case the Arbiter is designed for. The Arbiter's conservation-first policy means: under any unresolved contradiction involving the safety constraint specialist, the most conservative safe output wins. This is not heuristic; it is the defined arbitration policy enforced by the confidence threshold gate on the Arbiter's own output.
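A minimal sketch of the conservation-first policy, assuming a hypothetical discrete action set ordered by conservatism (the real action space and arbitration machinery are domain- and framework-specific):

```python
from dataclasses import dataclass

# Conservatism ordering over a hypothetical action set; higher = more
# conservative. The real action space is domain-specific.
CONSERVATISM = {"proceed": 0, "slow": 1, "stop": 2, "safe_state": 3}

@dataclass
class Verdict:
    specialist: str
    action: str
    confidence: float

def arbitrate(verdicts: list[Verdict], c_min: float = 0.90) -> str:
    actions = {v.action for v in verdicts}
    safety_flagged = any(v.specialist == "safety_constraints" for v in verdicts)
    if len(actions) > 1 and safety_flagged:
        # Unresolved contradiction involving the safety constraint
        # specialist: the most conservative proposed action wins.
        return max(actions, key=CONSERVATISM.__getitem__)
    best = max(verdicts, key=lambda v: v.confidence)
    # The Arbiter's own output passes through the confidence gate too.
    return best.action if best.confidence >= c_min else "safe_state"
```

Note that the gate on the Arbiter's own output means unanimous-but-uncertain specialists still produce a safe-state transition.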

6. The dynamic safety case argument

This section has no equivalent in the self-driving domain doc because it is aimed at safety-case engineering teams rather than stack engineers. It addresses a problem the autonomous systems safety research community has identified as critical: static safety cases become obsolete as systems evolve.

The problem with static safety cases

A traditional safety case is a structured argument — built at design time — that a system meets its safety requirements given the evidence available at that point. For autonomous systems, this is increasingly untenable: the system's AI components learn, its operating environment changes, its software is updated. A safety argument that was valid at initial certification may not be valid after a software update, a new deployment geography, or 10,000 hours of operational learning.

The current research frontier in autonomous systems safety (2024–2025) is the concept of dynamic safety cases — living assurance documents that co-evolve with the operational system, drawing on runtime evidence rather than relying solely on design-time proofs. Regulatory bodies across sectors (CAA, HSE, ONR, RSSB) are converging on the requirement that autonomous systems produce runtime evidence of continued safety compliance, not just a point-in-time certification.

What the framework contributes to the dynamic safety case

The framework's architecture produces exactly the runtime evidence that dynamic safety cases require, as a byproduct of its normal operation:

Framework outputs mapped to dynamic safety case evidence requirements

Output | Cadence
Utility log (U, E, C per decision) | Continuous
Contradiction events (type, severity, resolution) | Per-event
Abstention events (C < C_min, trigger, context) | Per-event
Assertions store (verified corrections with timestamps) | Cumulative
Blue-green deployment log (trigger, T window, outcome) | Per update
Arbiter evidence chains (internal, per contradiction) | Per verdict

Each row is a structured, typed log record produced by the framework's normal operation. Together they constitute runtime evidence that the safety argument's key claims — "the system abstains when uncertain," "errors are detected and corrected," "updates are validated before deployment" — hold in production, not just in the design-time test suite.

The three safety argument claims the framework directly supports

A safety case for an autonomous system typically needs to argue three things about its AI component. The framework supports all three with runtime evidence:

  1. "The system does not proceed when its competence is insufficient." — Supported by the confidence gate producing a logged abstention event every time C falls below C_min. The log records the field, the confidence value at the time of abstention, and the sensor context. Safety reviewers can query this log to verify the gate is functioning and measure its activation rate over the operational period.
  2. "Errors are detected and the system does not repeat them." — Supported by the contradiction detection and assertions store pipeline. When an error is detected, a structured correction is stored with a timestamp and confidence level. The correction is injected into future sessions immediately and feeds the calibration pipeline for weight-level correction. The log records both the detection event and the resolution event, providing evidence that the system's correction mechanism is functioning.
  3. "Updates to the AI component are validated before deployment and do not regress safety properties." — Supported by the blue-green deployment protocol. Every specialist update has a logged trigger (which utility deviation, over which statistical window), a logged promotion trajectory (traffic split over time), a logged benchmark gate result, and a clear rollback path. For safety-critical fields, the T ≥ 246 interaction window ensures the update has demonstrated sustained improvement before promotion. This is directly auditable by a safety reviewer.

What this means for the safety case engineer. The framework does not replace the safety case — it feeds it. The structured logs it produces are the runtime evidence that transforms a static design-time argument into a living document. The safety case engineer's job shifts from "prove at design time that the system will behave safely" to "define what evidence the runtime system must produce to demonstrate continued safety compliance, and configure the framework to produce it." That is a tractable engineering problem rather than an intractable certification bottleneck.
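An abstention event of the kind claim 1 relies on might be logged like this. The schema and hash-chain layout here are assumptions inspired by the audit configuration in the quickstart (hash_chain: true), not the framework's actual record format.

```python
import hashlib
import json
import time

def abstention_event(field, confidence, c_min, sensor_context, prev_hash="0" * 64):
    """Illustrative structured abstention record (hypothetical schema).
    Chaining each record to the previous record's hash makes post-hoc
    tampering with the evidence trail detectable."""
    record = {
        "type": "abstention",
        "timestamp": time.time(),
        "field": field,
        "confidence": confidence,
        "c_min": c_min,
        "sensor_context": sensor_context,
        "prev_hash": prev_hash,  # links this record into the hash chain
    }
    payload = json.dumps(record, sort_keys=True).encode()
    record["hash"] = hashlib.sha256(payload).hexdigest()
    return record
```

A safety reviewer querying such a log can measure the gate's activation rate over an operational period without trusting the operator not to have edited it.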

7. Standards mapping: ISO 10218:2025, IEC 61508, ISO 13482

The 2025 revision of ISO 10218 (the first major update since 2011) makes functional safety requirements explicit rather than implied, and integrates the former ISO/TS 15066 for collaborative robot applications. The key shift: instead of a blanket Performance Level d (PLd) requirement for all safety functions, ISO 10218:2025 requires application-specific PL determination based on actual risk parameters. This creates a direct mapping opportunity to the framework's field-specific C_min values.

ISO 10218:2025 · Industrial & cobot safety
Key requirement: Application-specific Performance Level (PL) with defined fail-safe behavior per safety function; explicit functional safety requirements; cybersecurity scope.
Framework property: Confidence gate is a testable safety function with defined PL-equivalent behavior (abstain at C < C_min). Field-specific C_min maps to application-specific PL. Abstention log is the functional safety evidence.

IEC 61508 · Functional safety (general)
Key requirement: Safety Integrity Level (SIL) determination; runtime safety monitoring; fail-safe / fail-operational mode; systematic capability.
Framework property: Utility score as runtime safety monitor (systematic deviation from baseline triggers a response). Contradiction detector catches systematic failures. Assertions store prevents repeated failures — the SIL architecture requires this.

ISO 13482 · Service robots
Key requirement: Safety validation for robots operating in proximity to people outside industrial settings; risk assessment for novel applications.
Framework property: Shadow-mode validation with T ≥ 246 window before any operational role. Blue-green protocol produces the validation evidence base. Confidence gate handles proximity-to-person risk dynamically.

ISO 26262 · Automotive / AV
Key requirement: ASIL decomposition; component-level validation scope on update; software update validation.
Framework property: Each specialist has independent weights and an update cycle. Updating the domain rules specialist does not force revalidation of perception. ASIL-equivalent decomposition is natural in the specialist architecture.

EU AI Act (2024) · High-risk AI systems
Key requirement: Audit trail, explainability, human oversight capability, post-market monitoring.
Framework property: Utility log provides the audit trail. Arbiter evidence chain provides the explanation for each contested decision. Confidence gate enables human oversight by triggering escalation. Post-market monitoring = ongoing utility trend tracking.

This mapping is not a compliance claim — it identifies where the framework's properties align with standard requirements. Full compliance requires a complete safety case with appropriate evidence, risk assessments, and validation testing. The framework provides the runtime evidence layer, not the complete certification package.

What the 2025 ISO 10218 update means practically. The old standard's blanket PLd requirement is replaced by application-specific PL determination. This means a cobot in a low-risk application (slow-speed, force-limited, no sharp tooling) may only need PLc, while one in a higher-risk application needs PLe. The framework's field-specific C_min values and penalty multipliers are designed to carry exactly this differentiation — the configuration table maps to the application-specific PL tier, and the confidence gate implements the required fail-safe behavior at that tier's threshold.
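The tier-to-configuration mapping might look like the following sketch. The c_min values for the PLd and PLe rows follow the table in section 4; the PLc row and all penalty multipliers are illustrative assumptions, not values from the standard.

```python
# Hypothetical mapping from an application-specific PL tier to the
# framework's field configuration. Real values must come from the
# application's risk assessment per ISO 10218:2025.
PL_FIELD_CONFIG = {
    "PLc": {"c_min": 0.65, "penalty_multiplier": 2.0},  # low-risk cobot app (assumed)
    "PLd": {"c_min": 0.70, "penalty_multiplier": 4.0},  # free-operation zone
    "PLe": {"c_min": 0.90, "penalty_multiplier": 8.0},  # human-present / collaborative
}

def field_config_for(pl_tier: str) -> dict:
    """Look up the confidence floor and penalty multiplier for a PL tier."""
    return PL_FIELD_CONFIG[pl_tier]
```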

8. Edge hardware: Jetson-class specialists and pipeline parallelism

For autonomous systems deployed on physical platforms, the hardware argument is not about cost — it is about physical feasibility. A monolithic frontier model consuming 700W cannot run on a robot, a drone, or a mobile inspection platform. The Micro-Expert model running on Jetson-class hardware is the only path to AI-quality reasoning on battery-constrained edge platforms.

H100 SXM5
700 W
Cannot run on any mobile platform
RTX 4090
450 W
Drone power budget: 200–500W total
3× Jetson Orin NX
~75 W
3 specialists: perception + planning + rules
Jetson AGX Orin
15–60 W
32 GB unified memory, ~$900
Hardware | VRAM | TDP | Cost (2025) | Deployment context
Jetson AGX Orin | 32 GB unified | 15–60 W | ~$900 | Industrial robot, AMR, inspection platform hub
Jetson Orin NX | 16 GB unified | 10–25 W | ~$500 | Drone, small mobile platform, specialist node
Jetson Orin Nano | 8 GB unified | 7–15 W | ~$250 | Lightweight specialist, sensor fusion assistant
H100 SXM5 | 80 GB | 700 W | ~$30,000–35,000 | Datacenter only — physically impossible on any mobile platform

Hardware specs from NVIDIA (2025). See §10.9.6 of the full whitepaper for the full edge deployment analysis and worked power budget examples.

Pipeline parallelism: specialists run concurrently, not sequentially

In the Micro-Expert deployment on a physical platform, specialists do not run sequentially — they run in a pipeline. This is the same architecture that production autonomy stacks (Tesla FSD, Waymo) use for their neural network pipelines. The Micro-Expert model applies it at the model-graph level:

Sequential assumption (naive):
    Perception  →  Planning  →  Safety constraints  →  Decision
    [50ms]          [40ms]           [15ms]              [5ms]   Total: ~110ms

Actual pipeline architecture (parallel):
    Perception        [50ms] ─────────────────────────────┐
    Planning      ←───────────── [40ms from perception] ──┤→ Arbiter → Output
    Safety/Rules      [15ms running in parallel] ──────────┘
    Total: ~55ms + arbitration overhead ≈ 60–65ms

For a cobot running a tight control loop (typically 1kHz for low-level control, 10–100Hz for the AI decision layer), a 65ms AI decision latency — running on embedded Jetson hardware well within the platform's power envelope — is feasible where a 700W datacenter GPU is not.
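One concrete reading of the diagram is frame-level pipelining: stage k works on frame N while stage k+1 works on frame N-1. A discrete-event sketch under a simplified linear-pipeline assumption (the diagram's parallel safety branch would shorten the critical path further) illustrates the throughput-versus-latency distinction: the first decision costs the sum of the stages, but steady-state decisions arrive at the slowest stage's cadence.

```python
def pipeline_completions(stage_ms, n_frames):
    """Completion time (ms) of each frame through a linear pipeline where
    every stage holds one frame at a time and frames arrive back-to-back."""
    n_stages = len(stage_ms)
    done = [[0.0] * n_stages for _ in range(n_frames)]
    for f in range(n_frames):
        for s in range(n_stages):
            # A stage starts when this frame leaves the previous stage AND
            # the stage has finished processing the previous frame.
            start = max(done[f][s - 1] if s > 0 else 0.0,
                        done[f - 1][s] if f > 0 else 0.0)
            done[f][s] = start + stage_ms[s]
    return [row[-1] for row in done]

# Perception, planning, safety/rules, arbitration (ms, from the diagram).
completions = pipeline_completions([50, 40, 15, 5], n_frames=4)
# First frame finishes at 110 ms; thereafter one decision completes every
# 50 ms (the slowest stage), which is what pipelining buys.
```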

9. Independent updateability and narrower certification scope

Every update to an autonomous system's AI component is a certification event in safety-critical domains. If the entire AI is a monolithic model, every update — even updating a traffic rule for a new city, adjusting a force threshold for a new material, improving object detection for a new object class — forces revalidation of the entire system. This is the primary reason AI update cadences in certified autonomous systems are so slow.

Revalidation scope: monolithic vs. specialist decomposition — illustrative
Scenario | Revalidation scope
Monolith: update domain rules (new site) | 100%
Specialist: update domain rules (new site) | ~15%
Monolith: improve perception (new object class) | 100%
Specialist: improve perception (new object class) | ~35%
Monolith: adjust safety thresholds (new material) | 100%
Specialist: adjust safety thresholds (new material) | ~12%

Illustrative. Actual scope depends on interface stability. The domain rules and safety constraint specialists have the narrowest interfaces and the most contained update surfaces — they are the highest-value starting points for adopting the specialist architecture.

The principle that enables this is identical to what ASIL decomposition achieves in ISO 26262: safety goals are allocated to components, and component-level validation can satisfy the overall safety case if the decomposition is sound and the interfaces between components are stable. The specialist architecture is not a replacement for ASIL decomposition — it is an AI architecture that is compatible with that reasoning pattern in a way that a monolithic model is not.

The blue-green deployment trigger for safety-critical autonomous systems

Trigger a deployment cycle when ALL of:
    |U_current − U_baseline| > δ(field)     [significant deviation]
    deviation sustained for ≥ T interactions  [not transient noise]
    held-out scenario library available        [can evaluate candidate]

For safety-critical fields (surgery, aviation, high-stakes robotics):
    δ ≈ 0.005–0.010   T ≥ 246 interactions

T = 246 comes from a power analysis on observed utility variance (σ ≈ 0.04)
with α = 0.05. High-stakes fields demand a large confirmation window
before any traffic shifts.
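The stated window is reproducible from the standard one-sample z-test sample-size formula n = ((z_{alpha/2} + z_beta) * sigma / delta)^2, if one assumes delta = 0.01 (the tight end of the stated range) and a power target of roughly 97.5%. The power target is an assumption; the source states only sigma and alpha.

```python
from math import ceil
from statistics import NormalDist

def confirmation_window(delta, sigma=0.04, alpha=0.05, power=0.975):
    """One-sample z-test sample size: n = ((z_{a/2} + z_b) * sigma / delta)^2.
    power=0.975 is an assumed target that reproduces T = 246; the whitepaper
    states sigma = 0.04 and alpha = 0.05 but not its power target."""
    z = NormalDist()
    z_alpha = z.inv_cdf(1 - alpha / 2)   # two-sided test
    z_beta = z.inv_cdf(power)
    return ceil(((z_alpha + z_beta) * sigma / delta) ** 2)

confirmation_window(0.01)   # 246 under these assumptions
```

At the loose end of the stated range (delta = 0.005) the same formula roughly quadruples the window, which is why smaller deviations demand longer confirmation.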

For a fleet of 50 cobots in a manufacturing plant each running 8 hours/day, 246 interactions per specialist can accumulate in hours to days — meaning corrections to the safety constraint specialist can be validated and deployed on the order of a workweek, not a quarter. That is the practical value of the combination of statistical rigor (T derived from power analysis) and independent specialist deployment (T applies only to the affected specialist, not the whole system).

10. Fleet-level correction propagation

A fleet of 100 AMRs in a warehouse, 50 cobots in a factory, or 30 inspection drones across multiple sites constitutes a collective learning resource that no single unit exploits in a static AI architecture. An edge case encountered by one unit — a new pallet configuration that causes a perception failure, a cobot operating mode the planning specialist had not seen, a tunnel with unusual acoustic reflections — takes months to propagate a correction across the fleet under a traditional retrain-and-redeploy cycle.

The framework compresses this cycle for scoped corrections through the assertions store and DPO calibration pipeline:

1
Edge case encountered and logged The specialist on unit X flags a low-confidence output (C < C_min) or the Arbiter detects a contradiction. The full decision context — sensor state, confidence values, specialist outputs, resolution outcome — is logged as a structured event.
2
Human review generates a verified correction The escalation packet is reviewed. The reviewed correct behavior is stored in the assertions store as a typed, confidence-annotated assertion with the scenario context. Decay class is assigned based on the nature of the correction (algorithm-level = Class A, site-specific rule = Class C or D).
3
Correction routed to the relevant specialist only A new object class → perception specialist only. A site-specific routing rule → domain rules specialist only. Planning specialist and safety constraint specialist are unaffected. The correction's scope is bounded by the specialist architecture.
4
DPO calibration cycle updates the specialist The correction becomes a DPO training pair weighted by the field's penalty multiplier. The next calibration run incorporates it. The replay buffer ensures prior corrections are not erased.
5
Blue-green validation across the fleet The updated specialist is promoted canary-first across fleet units. Statistical window T must be satisfied before full promotion. The log records which units have the updated specialist and which are still on the prior version — mixed-fleet operation is safe because each unit's blue-green state is independent.
6
Immediate correction via assertions injection Before the calibration cycle completes, every unit that queries the assertions store for this scenario class receives the correction as a session-level injection. The model weights are not yet updated, but the behavioral correction is immediate and fleet-wide.
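Steps 3 and 6 can be sketched together: route the verified correction to exactly one specialist, and make it immediately visible fleet-wide through an assertions store. All names here are hypothetical, not the framework's API.

```python
# Correction-kind to specialist routing (step 3): the correction's scope
# is bounded by the specialist architecture. Kinds are illustrative.
SPECIALIST_FOR = {
    "new_object_class": "perception",
    "site_routing_rule": "domain_rules",
    "force_threshold": "safety_constraints",
}

class AssertionsStore:
    def __init__(self):
        self._by_scenario = {}

    def add(self, scenario_class, correction_kind, correction, confidence, decay_class):
        entry = {
            "specialist": SPECIALIST_FOR[correction_kind],  # step 3: scoping
            "correction": correction,
            "confidence": confidence,
            "decay_class": decay_class,  # Class A..D per step 2 above
        }
        self._by_scenario.setdefault(scenario_class, []).append(entry)
        return entry

    def inject(self, scenario_class):
        # Step 6: any unit querying this scenario class receives the
        # correction before the DPO calibration cycle has updated weights.
        return self._by_scenario.get(scenario_class, [])
```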

11. What the framework does not replace

This section is included because the industrial robotics and safety-case audience will ask this question directly, and the honest answer matters.

The framework sits above the hardware safety layer, not below it. It is an AI decision-orchestration and continual learning layer. It does not replace — and is not designed to replace — hardware safety interlocks and emergency stops, safety-rated drives, physical barriers, or certified PLC safety logic.

The intended architecture is layered: hardware safety (interlocks, drives, physical barriers) → PLC safety logic → framework decision layer (confidence gate, abstention, correction loop) → operator / remote assist escalation. The framework is the AI decision layer in that stack, with well-defined interfaces to the layers above and below it.

12. MVP shape: shadow mode before any live control

The right entry point for any autonomous system team is shadow mode — the framework runs alongside the production system, logs utility values and contradiction events, and builds an evidence base without influencing any physical actuator. This is the same validation logic as AV shadow mode testing, applied to the broader autonomous systems context.

1
Deploy in shadow mode — zero influence on actuators The framework receives the same sensor and decision inputs as the production system. It generates utility values, confidence scores, and contradiction events silently. No output influences any physical actuator. Duration: enough operating cycles to cover the full scenario space of the ODD.
2
Identify the highest-value, most-validatable specialist domain Review shadow logs to find which scenario classes produce the most frequent low-confidence outputs or contradiction events. For industrial robots, this is often the safety constraint specialist (force thresholds, speed limits). For AMRs, often the domain rules specialist (routing policies). This is the highest-value starting point.
3
Validate the Arbiter policy against labeled scenario recordings Run the Arbiter against a library of labeled scenario recordings where the correct outcome is known. Tune the arbitration policy before any live role. Measure the rate at which the Arbiter selects the correct outcome and the rate at which it correctly escalates when the correct outcome is not available.
4
Set conservative C_min — expect high shadow escalation rates early Start with C_min at or above the platform's worst-case safety requirement (e.g., 0.92 for human-present industrial robot). Log the shadow escalation rate. This is the calibration baseline: how often would the framework have escalated under real operating conditions?
5
Run first calibration cycle on shadow-collected DPO pairs Use the shadow logs to build (preferred, rejected) pairs for the chosen specialist. Run the calibration cycle. Measure whether confidence calibration improves on the held-out scenario library. This is the validation that the correction loop functions on real data from the target platform.
6
Promote to advisory role first — no actuator authority The framework's recommendations are surfaced to the human operator as advisory signals (e.g., on a monitoring dashboard) before any authority over actuators is granted. Validate that the recommendations are coherent and that the confidence values are well-calibrated against actual outcomes.
7
Promote to limited actuator authority only after blue-green validation The shadow specialist (BLUE) runs alongside the new candidate (GREEN) until GREEN's utility profile matches or exceeds BLUE across all decision classes above the safety threshold, over the T ≥ 246 interaction window. Only after this window closes, and only for the lowest-risk decision class, does any actuator authority begin.
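Step 1 and step 4 reduce to a small harness: score every production decision, touch no actuator, and log what would have happened. A sketch with illustrative names:

```python
# Minimal shadow-mode harness: the shadow policy sees the same inputs as
# the production system but its output never reaches an actuator.
class ShadowRecorder:
    def __init__(self, c_min):
        self.c_min = c_min
        self.events = []

    def observe(self, sensor_frame, production_action, shadow_policy):
        """shadow_policy: callable -> (recommended_action, confidence)."""
        action, confidence = shadow_policy(sensor_frame)
        self.events.append({
            "production": production_action,
            "shadow": action,
            "confidence": confidence,
            "would_escalate": confidence < self.c_min,  # calibration baseline
            "agrees": action == production_action,
        })

    def escalation_rate(self):
        # Step 4: how often WOULD the framework have escalated under real
        # operating conditions?
        if not self.events:
            return 0.0
        return sum(e["would_escalate"] for e in self.events) / len(self.events)
```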

Build it with AUA v1.0

Configure this domain today

AUA v1.0 handles routing, utility scoring, abstention policy, and audit logging. Safety-critical preset with c_min=0.95 halts automated action when confidence is insufficient. Physical actuation integration is via the AUA REST API.

Integration boundary: AUA handles the arbitration, routing, correction, and audit layer. Physical integration (vehicle control, SCADA, robot actuators, …) connects to AUA via the REST API — AUA does not control hardware directly.

1. Install

pip install adaptive-utility-agent

2. Scaffold for this domain

aua init my-autonomous-systems-agent --preset medical-safe --tier macbook
cd my-autonomous-systems-agent
aua doctor

3. Key config for this domain

# aua_config.yaml
specialists:
  - name: aviation
    model: qwen-coder-7b-awq
    port: 11434
    field: aviation

safety:
  abstention_enabled: true
  require_arbiter_for_high_risk: true
  min_confidence_for_direct_answer: 0.95

security:
  encryption: {enabled: true, key_secret: AUA_ENCRYPTION_KEY}

audit:
  enabled: true
  hash_chain: true

Generate your encryption key: python3 -c "import os; print(os.urandom(32).hex())" or openssl rand -hex 32 — 64-char hex string. See Tutorial §12.4 for key management.

4. Start and query

aua serve

curl -X POST http://localhost:8000/query \
  -H "Authorization: Bearer $AUA_TOKEN" \
  -H "Content-Type: application/json" \
  -d '{"prompt": "...", "session_id": "demo"}'
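The same call from Python, with a low-confidence guard on the caller's side. The response fields (answer, confidence) are assumed from the specialist protocol in section 5; verify them against the actual API before wiring anything to hardware.

```python
import json
import urllib.request

AUA_URL = "http://localhost:8000/query"  # same endpoint as the curl call above

def query_aua(prompt: str, session_id: str, token: str) -> dict:
    """POST a query to a running AUA server (requires `aua serve`)."""
    req = urllib.request.Request(
        AUA_URL,
        data=json.dumps({"prompt": prompt, "session_id": session_id}).encode(),
        headers={"Authorization": f"Bearer {token}",
                 "Content-Type": "application/json"},
    )
    with urllib.request.urlopen(req) as resp:
        return json.load(resp)

def actuator_command(response: dict, c_min: float = 0.95):
    """Caller-side guard: response fields are assumed from the inter-specialist
    protocol in section 5, not a documented client API."""
    if response.get("confidence", 0.0) < c_min:
        return "SAFE_STATE"  # never forward a low-confidence answer downstream
    return response["answer"]
```

Keeping the guard on the caller's side mirrors the integration boundary above: AUA arbitrates; the hardware-facing client decides whether anything moves.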

5. What AUA handles vs. what you bring

AUA v1.0 provides | You bring
Multi-specialist routing + utility scoring | Domain-specific specialist models
Arbiter + contradiction detection | Domain-specific quality criteria
Correction loop + DPO pair export | Fine-tuning infrastructure (TRL, Axolotl, …)
Blue-green deployment + rollback | Evaluation datasets for your domain
Append-only audit log with hash chain | Robot control layer (ROS2, PLC, …)
Prometheus + Grafana + OTEL | Your monitoring infrastructure

Full instructions: AUA Tutorial · Framework v1.0 · GitHub ↗