Nora supports three provisioner backends that determine how agent runtime environments are created and isolated. You select a backend by setting PROVISIONER_BACKEND in your .env file. Docker is the default and requires no additional configuration, making it the fastest path to a running deployment. Proxmox and Kubernetes unlock stronger isolation and horizontal scale for production environments.
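As a minimal sketch, backend selection in .env might look like this (the variable name and the docker default come from this page; the file layout is illustrative):

```shell
# .env — selects how agent runtime environments are provisioned.
# Valid values per this page: docker (default), proxmox, kubernetes.
PROVISIONER_BACKEND=docker
```

Leaving the variable unset is equivalent to choosing docker.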
- **Docker** — Single-host deployments, local development, and quick evaluation. Zero extra configuration required.
- **Proxmox** — Stronger VM-level isolation, private fleet management, and on-premises infrastructure control.
- **Kubernetes** — Cloud-native deployments, horizontal scaling, and container-native agent workloads.
Comparing backends
| | Docker | Proxmox | Kubernetes |
|---|---|---|---|
| Deployment scale | Single host | Private VM fleet | Multi-node cluster |
| Agent isolation | Container | Full VM | Container (namespace-scoped) |
| Setup complexity | Low — no extra vars required | Medium — Proxmox API credentials and a VM template | Medium to high — existing cluster required |
| Best for | Local dev, evaluation, lean self-hosted | On-prem, security-conscious operators, private infrastructure | Cloud-native teams, AWS/Azure/GCP rollouts |
| Horizontal scaling | No | Limited (manual fleet) | Yes |
Docker
Docker is the default backend. When PROVISIONER_BACKEND is unset or set to docker, Nora provisions each agent runtime as an isolated Docker container on the same host running the Nora stack.
Agent containers are governed by the resource-limit variables (MAX_VCPU, MAX_RAM_MB, MAX_DISK_GB, MAX_AGENTS).
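A Docker-backend .env with the resource-limit variables named above might look like the following; the specific numbers are illustrative assumptions, not recommendations:

```shell
# .env — Docker backend with resource limits (values are examples only).
PROVISIONER_BACKEND=docker
MAX_VCPU=2        # vCPU limit for agent containers
MAX_RAM_MB=2048   # memory limit, in megabytes
MAX_DISK_GB=10    # disk allowance, in gigabytes
MAX_AGENTS=5      # cap on concurrently provisioned agents
```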
Proxmox
The Proxmox backend provisions each agent runtime as a full virtual machine on a Proxmox VE node. This gives you stronger isolation than containers and full control over the underlying VM configuration. Set the following variables in your .env:
| Variable | Description |
|---|---|
| PROXMOX_API_URL | Full URL to your Proxmox API endpoint. |
| PROXMOX_TOKEN_ID | API token identifier in user@pam!tokenname format. |
| PROXMOX_TOKEN_SECRET | Secret for the API token above. |
| PROXMOX_NODE | Name of the Proxmox node where VMs are created. Defaults to pve. |
| PROXMOX_TEMPLATE | VM template cloned as the base image for each new agent. Defaults to ubuntu-22.04-standard. |
Create an API token in the Proxmox web UI under Datacenter → Permissions → API Tokens. The token user needs sufficient privileges to clone VMs from the configured template and manage VM lifecycle on the target node.
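Putting the table together, a Proxmox-backend .env could be sketched as below. The hostname, token name, and secret are placeholders; whether PROXMOX_API_URL should include an API path segment depends on your deployment, so the bare host-and-port form here is an assumption (8006 is the standard Proxmox VE web/API port):

```shell
# .env — Proxmox backend (all values are placeholders, not real credentials).
PROVISIONER_BACKEND=proxmox
PROXMOX_API_URL=https://proxmox.example.com:8006
PROXMOX_TOKEN_ID=nora@pam!provisioner     # user@realm!tokenname format
PROXMOX_TOKEN_SECRET=xxxxxxxx-xxxx-xxxx-xxxx-xxxxxxxxxxxx
PROXMOX_NODE=pve                          # default shown for clarity
PROXMOX_TEMPLATE=ubuntu-22.04-standard    # default shown for clarity
```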
When to choose Proxmox
Choose Proxmox when you need:
- VM-level isolation between agent workloads
- An existing Proxmox cluster you want to leverage
- On-premises infrastructure where you control the hardware
- Stronger security boundaries than container namespaces provide
Kubernetes
The Kubernetes backend schedules agent workloads as pods inside a Kubernetes cluster. It is best suited for cloud-native operators already running Kubernetes on AWS, Azure, GCP, or on-premises. Set the following variables in your .env:
| Variable | Required | Default | Description |
|---|---|---|---|
| K8S_NAMESPACE | No | openclaw-agents | Namespace where agent pods and services are created. |
| K8S_EXPOSURE_MODE | No | cluster-ip | How agent services are exposed. Use cluster-ip for standard in-cluster access and node-port for local kind-based verification. |
| K8S_RUNTIME_NODE_PORT | No | — | Node port for the agent runtime service. Only applicable in node-port mode. |
| K8S_GATEWAY_NODE_PORT | No | — | Node port for the agent gateway service. Only applicable in node-port mode. |
| K8S_RUNTIME_HOST | No | — | Hostname or IP that the Nora control plane uses to reach agent runtimes in node-port mode. |
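For instance, a Kubernetes-backend .env aimed at local kind-based verification might look like this; the port numbers and host are illustrative assumptions (NodePort values must fall in the cluster's node-port range, 30000–32767 by default):

```shell
# .env — Kubernetes backend with node-port exposure for a local kind cluster.
# Ports and host below are example values, not required settings.
PROVISIONER_BACKEND=kubernetes
K8S_NAMESPACE=openclaw-agents    # default shown for clarity
K8S_EXPOSURE_MODE=node-port
K8S_RUNTIME_NODE_PORT=30080      # agent runtime service
K8S_GATEWAY_NODE_PORT=30081      # agent gateway service
K8S_RUNTIME_HOST=127.0.0.1       # where the control plane reaches agent runtimes
```

For standard in-cluster deployments, omitting these and keeping the cluster-ip default is sufficient.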
Exposure modes
- cluster-ip (default) — Agent services use ClusterIP and are reachable only from within the cluster. Use this for all standard cloud and on-premises Kubernetes deployments where the Nora control plane runs inside the same cluster.
- node-port (local kind) — Agent services are exposed on node ports for local kind-based verification; configure K8S_RUNTIME_NODE_PORT, K8S_GATEWAY_NODE_PORT, and K8S_RUNTIME_HOST accordingly.
When to choose Kubernetes
Choose Kubernetes when you need:
- Horizontal scaling of agent workloads across multiple nodes
- Native integration with cloud provider managed services (EKS, AKS, GKE)
- Container-native deployment on infrastructure already managed by Kubernetes
- Standard Kubernetes tooling for observability, autoscaling, and policy enforcement