
Nora supports three provisioner backends that determine how agent runtime environments are created and isolated. You select a backend by setting PROVISIONER_BACKEND in your .env file. Docker is the default and requires no additional configuration, making it the fastest path to a running deployment. Proxmox and Kubernetes unlock stronger isolation and horizontal scale for production environments.

Docker

Single-host deployments, local development, and quick evaluation. Zero extra configuration required.

Proxmox

Stronger VM-level isolation, private fleet management, and on-premises infrastructure control.

Kubernetes

Cloud-native deployments, horizontal scaling, and container-native agent workloads.

Comparing backends

|  | Docker | Proxmox | Kubernetes |
| --- | --- | --- | --- |
| Deployment scale | Single host | Private VM fleet | Multi-node cluster |
| Agent isolation | Container | Full VM | Container (namespace-scoped) |
| Setup complexity | Low — no extra vars required | Medium — Proxmox API credentials and a VM template | Medium to high — existing cluster required |
| Best for | Local dev, evaluation, lean self-hosted | On-prem, security-conscious operators, private infrastructure | Cloud-native teams, AWS/Azure/GCP rollouts |
| Horizontal scaling | No | Limited (manual fleet) | Yes |

Docker

Docker is the default backend. When PROVISIONER_BACKEND is unset or set to docker, Nora provisions each agent runtime as an isolated Docker container on the same host running the Nora stack.
PROVISIONER_BACKEND=docker
No additional variables are required. Resource limits applied to each agent are controlled by the self-hosted resource limit variables (MAX_VCPU, MAX_RAM_MB, MAX_DISK_GB, MAX_AGENTS).
Docker is the recommended backend for first-time deployments, local development, and low-volume self-hosted installations. You can migrate to Proxmox or Kubernetes later without changing your data.
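Putting it together, a minimal Docker-backend `.env` could look like the following. The limit values shown are illustrative examples, not project defaults:

```shell
# Select the Docker provisioner (also the behavior when unset)
PROVISIONER_BACKEND=docker

# Per-agent resource limits (example values; tune for your host)
MAX_VCPU=2
MAX_RAM_MB=4096
MAX_DISK_GB=20

# Cap on concurrently provisioned agents (example value)
MAX_AGENTS=5
```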

Proxmox

The Proxmox backend provisions each agent runtime as a full virtual machine on a Proxmox VE node. This gives you stronger isolation than containers and full control over the underlying VM configuration. Set the following variables in your .env:
PROVISIONER_BACKEND=proxmox

PROXMOX_API_URL=https://proxmox.local:8006/api2/json
PROXMOX_TOKEN_ID=user@pam!tokenname
PROXMOX_TOKEN_SECRET=<your-token-secret>
PROXMOX_NODE=pve
PROXMOX_TEMPLATE=ubuntu-22.04-standard
| Variable | Description |
| --- | --- |
| PROXMOX_API_URL | Full URL to your Proxmox API endpoint. |
| PROXMOX_TOKEN_ID | API token identifier in user@pam!tokenname format. |
| PROXMOX_TOKEN_SECRET | Secret for the API token above. |
| PROXMOX_NODE | Name of the Proxmox node where VMs are created. Defaults to pve. |
| PROXMOX_TEMPLATE | VM template cloned as the base image for each new agent. Defaults to ubuntu-22.04-standard. |
Create an API token in the Proxmox web UI under Datacenter → Permissions → API Tokens. The token user needs sufficient privileges to clone VMs from the configured template and manage VM lifecycle on the target node.
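Before starting Nora, you can sanity-check the credentials against the Proxmox API directly. This is a sketch using Proxmox's standard PVEAPIToken authorization header; substitute your own host and token values. Note that -k skips TLS verification, which is common with Proxmox's default self-signed certificate:

```shell
# A 200 response listing nodes confirms the token works
curl -k \
  -H "Authorization: PVEAPIToken=user@pam!tokenname=<your-token-secret>" \
  https://proxmox.local:8006/api2/json/nodes
```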

When to choose Proxmox

Choose Proxmox when you need:
  • VM-level isolation between agent workloads
  • An existing Proxmox cluster you want to leverage
  • On-premises infrastructure where you control the hardware
  • Stronger security boundaries than container namespaces provide

Kubernetes

The Kubernetes backend schedules agent workloads as pods inside a Kubernetes cluster. It is best suited for cloud-native operators already running Kubernetes on AWS, Azure, GCP, or on-premises. Set the following variables in your .env:
PROVISIONER_BACKEND=k8s
K8S_NAMESPACE=openclaw-agents
K8S_EXPOSURE_MODE=cluster-ip
| Variable | Required | Default | Description |
| --- | --- | --- | --- |
| K8S_NAMESPACE | No | openclaw-agents | Namespace where agent pods and services are created. |
| K8S_EXPOSURE_MODE | No | cluster-ip | How agent services are exposed. Use cluster-ip for standard in-cluster access and node-port for local kind-based verification. |
| K8S_RUNTIME_NODE_PORT | No | — | Node port for the agent runtime service. Only applicable in node-port mode. |
| K8S_GATEWAY_NODE_PORT | No | — | Node port for the agent gateway service. Only applicable in node-port mode. |
| K8S_RUNTIME_HOST | No | — | Hostname or IP that the Nora control plane uses to reach agent runtimes in node-port mode. |
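If the configured namespace does not already exist in your cluster, you can create it up front with standard kubectl commands. This is a sketch assuming a configured kubectl context; whether Nora also creates the namespace automatically is not covered here:

```shell
# Create the namespace Nora will schedule agent pods into
kubectl create namespace openclaw-agents

# After startup, agent pods should appear here
kubectl get pods --namespace openclaw-agents
```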

Exposure modes

In the default cluster-ip mode, agent services use ClusterIP and are reachable only from within the cluster. Use this mode for all standard cloud and on-premises Kubernetes deployments where the Nora control plane runs inside the same cluster.
K8S_EXPOSURE_MODE=cluster-ip
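The variable table above also lists a node-port mode for local verification with kind. A sketch of that configuration follows; the port and host values are illustrative assumptions, not defaults (node ports must fall in Kubernetes' standard 30000-32767 range):

```shell
PROVISIONER_BACKEND=k8s
K8S_EXPOSURE_MODE=node-port

# Example ports within the default NodePort range (30000-32767)
K8S_RUNTIME_NODE_PORT=30080
K8S_GATEWAY_NODE_PORT=30081

# Address the control plane uses to reach runtimes; for a local
# kind cluster this is typically the host's loopback address
K8S_RUNTIME_HOST=127.0.0.1
```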

When to choose Kubernetes

Choose Kubernetes when you need:
  • Horizontal scaling of agent workloads across multiple nodes
  • Native integration with cloud provider managed services (EKS, AKS, GKE)
  • Container-native deployment on infrastructure already managed by Kubernetes
  • Standard Kubernetes tooling for observability, autoscaling, and policy enforcement