Canonical public overview of how Nora’s control plane is wired together — frontends, API, queues, workers, backend adapters, and the runtime contract package.
Nora is the self-hosted AI agent ops platform — an operator-facing control plane for OpenClaw, Hermes, and supported sandboxed runtimes. Three browser surfaces sit behind one nginx ingress, platform state lives in PostgreSQL, background work is coordinated through Redis and BullMQ, and runtimes are provisioned (or proxied) through backend adapters that share the runtime contract package in agent-runtime/.

The public repo centers on a single control-plane host. Agent workloads can stay on the local Docker host or be placed onto supported external execution targets — Kubernetes (k3s/k8s), Proxmox, NemoClaw — without changing the operator workflow.
WebSocket upgrades are handled at the nginx layer; the Express app attaches WS handlers via attachGatewayWS. SSE chat endpoints (/api/agents/*/gateway/chat) have chunked transfer encoding disabled for real-time streaming. OAuth callbacks land at the marketing app before the user is redirected onward with a backend-issued HttpOnly session cookie.
Four queues: deployments, clawhub-installs, backups, alert-deliveries. Each has its own retry, backoff, and DLQ retention configured in backend-api/redisQueue.ts.
The API persists desired state first, then hands long-running work to a queue-backed worker. That keeps provisioning failures, retries, and delayed readiness out of the synchronous browser request path.
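A minimal sketch of that persist-then-enqueue pattern. The DeploymentStore and JobQueue interfaces are hypothetical stand-ins for the real PostgreSQL layer and BullMQ queue, and createDeployment is an illustrative name, not Nora's actual API:

```typescript
// `store` and `queue` are hypothetical stand-ins for the PostgreSQL
// repository and the BullMQ "deployments" queue used by backend-api/.
interface DeploymentStore {
  insert(spec: { agentId: string; target: string }): Promise<{ id: string; status: string }>;
}
interface JobQueue {
  add(name: string, payload: unknown): Promise<void>;
}

async function createDeployment(
  store: DeploymentStore,
  queue: JobQueue,
  spec: { agentId: string; target: string },
): Promise<{ id: string; status: string }> {
  // 1. Desired state is durable before any worker can see the job.
  const row = await store.insert(spec);
  // 2. The worker picks this up asynchronously; the browser request
  //    returns immediately with a pending record instead of blocking
  //    on provisioning, retries, or delayed readiness.
  await queue.add("deploy", { deploymentId: row.id });
  return row;
}
```

If the enqueue fails after the insert, the record still exists in a pending state, so the operator (or a reconciler) can retry without losing intent.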
Every meaningful resource — agents, alert rules, API keys, Agent Hub listings, backups — is scoped to a workspace. A workspace has a creator (workspaces.user_id) plus a row per member in workspace_members with one of four roles:
| Role | Can read | Can edit | Can manage members | Can delete workspace |
|---|---|---|---|---|
| viewer | ✅ | ❌ | ❌ | ❌ |
| editor | ✅ | ✅ | ❌ | ❌ |
| admin | ✅ | ✅ | ✅ | ❌ |
| owner | ✅ | ✅ | ✅ | ✅ |
Permission checks read from workspace_members.role, never from workspaces.user_id directly. Invitations land in workspace_invitations with a hashed token signed by NORA_WORKSPACE_INVITE_SECRET (falls back to JWT_SECRET).
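The matrix above can be pictured as a static capability lookup. roleAllows and the action names are illustrative, not Nora's actual identifiers:

```typescript
// Role capability matrix mirroring the table above. Checks key off
// workspace_members.role, never workspaces.user_id.
type Role = "viewer" | "editor" | "admin" | "owner";
type Action = "read" | "edit" | "manage_members" | "delete_workspace";

const GRANTS: Record<Role, ReadonlySet<Action>> = {
  viewer: new Set<Action>(["read"]),
  editor: new Set<Action>(["read", "edit"]),
  admin: new Set<Action>(["read", "edit", "manage_members"]),
  owner: new Set<Action>(["read", "edit", "manage_members", "delete_workspace"]),
};

function roleAllows(role: Role, action: Action): boolean {
  return GRANTS[role].has(action);
}
```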
Each workspace can mint scoped API keys (/app/workspaces/:id/api-keys). Tokens are bearer-only, prefixed nora_, hashed at rest with HMAC-SHA256, and carry a fixed scope set: agents:read, agents:write, workspaces:read, monitoring:read, integrations:read, integrations:write. Workspace mutation, member management, and key issuance stay on session auth — an API key cannot mint another key.
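A hedged sketch of that issuance shape using Node's crypto module: the plaintext token is shown once, only the HMAC-SHA256 digest is stored, and verification is constant-time. The function names and the server-side HMAC secret parameter are hypothetical, not Nora's actual code:

```typescript
import { createHmac, randomBytes, timingSafeEqual } from "node:crypto";

// Mint a bearer token with the nora_ prefix; persist only the digest.
// `secret` is a hypothetical server-side HMAC key.
function mintApiKey(secret: string): { token: string; digest: string } {
  const token = "nora_" + randomBytes(24).toString("base64url");
  const digest = createHmac("sha256", secret).update(token).digest("hex");
  return { token, digest };
}

// Compare in constant time; both digests are 32-byte SHA-256 outputs,
// so timingSafeEqual's equal-length requirement always holds.
function verifyApiKey(secret: string, presented: string, storedDigest: string): boolean {
  const digest = createHmac("sha256", secret).update(presented).digest("hex");
  return timingSafeEqual(Buffer.from(digest, "hex"), Buffer.from(storedDigest, "hex"));
}
```

Hashing (rather than encrypting) the key at rest means a database leak exposes no usable credentials, at the cost of never being able to re-display a token.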
The current public import contract is intentionally scoped:
- openclaw: agent files, workspace content, session memory, and provider material Nora can extract from supported source files
- hermes: workspace content, model config, supported Hermes channel config, and provider environment material
- Both families: supported Nora-managed state such as imported provider records, channel/integration wiring where available, and per-agent secret overrides
Unsupported runtime-specific state is surfaced as draft warnings instead of being silently invented or applied.
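That behavior can be pictured as a classification step over the imported source: supported keys land in the draft, everything else becomes a warning. The shape below is purely illustrative, not the real import code:

```typescript
// Hypothetical draft shape: supported keys are applied, unsupported
// runtime-specific state is surfaced as a warning instead of being
// silently invented or applied.
interface ImportDraft {
  applied: Record<string, unknown>;
  warnings: string[];
}

function buildDraft(
  source: Record<string, unknown>,
  supported: ReadonlySet<string>,
): ImportDraft {
  const draft: ImportDraft = { applied: {}, warnings: [] };
  for (const [key, value] of Object.entries(source)) {
    if (supported.has(key)) {
      draft.applied[key] = value;
    } else {
      draft.warnings.push(`unsupported runtime state: ${key} (left for operator review)`);
    }
  }
  return draft;
}
```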
Nora chooses a concrete backend through three layers of intent:
| Layer | Current values | Meaning |
|---|---|---|
| Runtime family | openclaw, hermes | Which operator contract the runtime satisfies. |
| Deploy target | docker, k8s, proxmox | Where the runtime should be scheduled. k3s is accepted as a user-facing id and normalizes to k8s internally. |
| Sandbox profile | standard, nemoclaw | Which isolation profile should wrap the runtime. |
The worker resolves the final backend through shared metadata in agent-runtime/lib/backendCatalog.ts. See Provisioner backends for the supported combinations and their maturity.
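A minimal sketch of how those three layers might compose into a catalog lookup. The real logic lives in agent-runtime/lib/backendCatalog.ts and may differ; normalizeDeployTarget and backendKey are illustrative names:

```typescript
type RuntimeFamily = "openclaw" | "hermes";
type DeployTarget = "docker" | "k8s" | "proxmox";
type SandboxProfile = "standard" | "nemoclaw";

// k3s is accepted as a user-facing id and normalized to k8s internally.
function normalizeDeployTarget(id: string): DeployTarget {
  if (id === "k3s") return "k8s";
  if (id === "docker" || id === "k8s" || id === "proxmox") return id;
  throw new Error(`unknown deploy target: ${id}`);
}

// Illustrative composite key a backend-catalog lookup might use.
function backendKey(
  family: RuntimeFamily,
  target: string,
  sandbox: SandboxProfile,
): string {
  return `${family}/${normalizeDeployTarget(target)}/${sandbox}`;
}
```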
Compose mounts ./workers/provisioner/backends into backend-api at /app/backends. Adapter code is physically shared between the API and the provisioner worker. When editing adapters, verify both consumers still work.
| Queue | Job | Concurrency | Retry policy | Worker | Purpose |
|---|---|---|---|---|---|
| deployments | | | | | Provisions or redeploys an agent runtime through the chosen backend adapter. |
| clawhub-installs | install-skill | 1 | 1 attempt (operator drives retries) | workers/provisioner/worker.ts | Runs clawhub install inside an OpenClaw agent and persists the installed-skill state. |
| backups | run-backup | BACKUP_WORKER_CONCURRENCY (2) | 2 attempts, exponential 5 s base | workers/backup/worker.ts | Captures, encrypts, and uploads an agent backup archive, then prunes expired backups. |
| alert-deliveries | deliver-webhook | ALERT_DELIVERY_WORKER_CONCURRENCY (5) | ALERT_DELIVERY_ATTEMPTS (5), exponential 1 s base | workers/provisioner/worker.ts | Posts a webhook delivery for an alert rule. Each job is one (rule, channel) pair. |
Failed jobs are retained on each queue for inspection. The DLQ surface for deployments is exposed via getDLQJobs / retryDLQJob in backend-api/redisQueue.ts.
Browsers never talk directly to PostgreSQL or Redis. All browser traffic enters through nginx and reaches stateful services through the frontends or backend-api/.
- backend-api/ owns auth, persistence, queue orchestration, release metadata, and runtime-facing proxy routes. Frontends do not provision runtimes directly.
- backend-api/ also owns migration draft inspection/storage and all runtime file access mediation. Browser users do not receive direct host or container filesystem access.
- workers/provisioner/ and workers/backup/ handle long-running infrastructure work outside the request path. They consume queued jobs and write the result back into control-plane state.
- agent-runtime/ defines the runtime-side contract used after launch. Control-plane code depends on that contract rather than embedding backend-specific assumptions everywhere.
- External execution systems such as Docker, Kubernetes, Proxmox, and NVIDIA secure sandboxes are reached through backend adapters instead of directly from browser surfaces.
Stored secrets — provider keys, integration credentials, OAuth tokens, backup encryption material, SMTP password — are AES-256-GCM encrypted with ENCRYPTION_KEY. API keys and Agent Hub installation keys are HMAC-hashed at rest, not encrypted.
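A minimal sketch of AES-256-GCM envelope encryption as described above, using Node's crypto module. The iv:tag:ciphertext layout and function names are illustrative, not Nora's actual storage format:

```typescript
import { createCipheriv, createDecipheriv, randomBytes } from "node:crypto";

// Encrypt a secret with a 32-byte key (e.g. derived from ENCRYPTION_KEY).
function encryptSecret(key: Buffer, plaintext: string): string {
  const iv = randomBytes(12); // 96-bit nonce, the GCM recommendation
  const cipher = createCipheriv("aes-256-gcm", key, iv);
  const ct = Buffer.concat([cipher.update(plaintext, "utf8"), cipher.final()]);
  // Store nonce and auth tag alongside the ciphertext; layout is illustrative.
  return [iv, cipher.getAuthTag(), ct].map((b) => b.toString("base64")).join(":");
}

function decryptSecret(key: Buffer, blob: string): string {
  const [iv, tag, ct] = blob.split(":").map((s) => Buffer.from(s, "base64"));
  const decipher = createDecipheriv("aes-256-gcm", key, iv);
  decipher.setAuthTag(tag); // GCM authenticates; tampering makes final() throw
  return Buffer.concat([decipher.update(ct), decipher.final()]).toString("utf8");
}
```

GCM's auth tag is why tampered ciphertext fails loudly on decrypt rather than yielding garbage, which matters for stored provider keys and OAuth tokens.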
| Topology | Ingress / TLS | Control plane | Runtime placement | Best for |
|---|---|---|---|---|
| | | | | Evaluation, local proof, small self-hosted installs |
| Public domain, Nora-managed ingress | Nora nginx on public ports | One Docker Compose host | Local Docker or supported external targets | Straightforward public self-hosting |
| Public domain behind external proxy | Host or upstream proxy terminates and forwards | One Docker Compose host | Local Docker or supported external targets | Existing nginx, Cloudflare, or host-managed TLS setups |
| External runtime targets | Same ingress as above | One Docker Compose host | Kubernetes (k3s/k8s), Proxmox, NemoClaw sandboxes | Teams that need different runtime placement without changing operator workflow |
The clearest public path today is one host running the control plane, with agent runtimes launched locally through Docker by default. Public-domain setups can either let Nora own public ingress directly or put an external reverse proxy in front of Nora’s internal nginx.
The public OSS path is primarily a single-host control plane. The repo does not currently claim a first-class HA or distributed control-plane deployment story.
OpenClaw is the default runtime family. Hermes is a narrower, deployment-first runtime path with a different operator contract.
Migration recreates runtimes under Nora control; it does not adopt a legacy runtime in place.
Hermes is a runtime family, not a backend id. Docker and Proxmox are the current Hermes execution targets, and Hermes import applies only the supported Nora-managed/runtime state described above.
Kubernetes (k3s/k8s), Proxmox, and NemoClaw are execution-target options for agents, not separate control-plane products.