
Creating Custom Templates

Convert your Docker containers, Dockerfiles, and docker-compose files into PodWarden stacks

Overview

PodWarden's catalog includes 2,500+ curated templates, but the catalog is a starting point — not a limit. Any software that runs in Docker can become a PodWarden stack.

This guide shows how to convert existing Docker containers into PodWarden templates, with best practices for environment variables, storage, ports, resource requirements, and local testing.

What Is a Stack?

A stack is a reusable definition that describes how to run a containerized workload. It includes:

  • Container image — what to run
  • Resource requirements — CPU, memory, GPU
  • Environment variables — configuration (static and configurable)
  • Config files — mountable configuration files with per-deployment editing
  • Ports — which ports to expose
  • Volume mounts — persistent storage
  • Scheduling — node selectors, tolerations, network requirements

See Apps & Stacks for the full field reference.

From Docker to PodWarden

If you already have a working docker run command, Dockerfile, or docker-compose.yml, mapping to PodWarden is straightforward:

| Docker Concept | PodWarden Field | Notes |
|---|---|---|
| Image name | image_name | e.g. postgres, ollama/ollama |
| Image tag | image_tag | e.g. 16-alpine, latest |
| -p 8080:80 | ports | [{"containerPort": 80, "protocol": "TCP"}] |
| -e KEY=value | env (static) | Fixed values baked into the template |
| -e KEY (configurable) | env_schema | Documented variables operators can override |
| -v /data:/app/data | volume_mounts | PVC, NFS, hostPath, etc. |
| -v config.yml:/app/config.yml | config_schema | Editable per deployment (see Config Files) |
| --gpus all | gpu_count | Number of NVIDIA GPUs |
| --memory 2g | memory_request | e.g. 2Gi |
| --cpus 2 | cpu_request | e.g. 2 or 2000m |
| --network host | host_network | Binds directly to the node's network stack |
| --restart always | Kind: deployment | Long-running workloads auto-restart |
| --privileged | security_context | {"privileged": true} |
| --cap-add NET_ADMIN | security_context | {"capabilities": {"add": ["NET_ADMIN"]}} |
| healthcheck | probes | Liveness/readiness probes (see Health Probes) |

Converting a docker run Command

Given:

docker run -d \
  --name myapp \
  -p 8080:80 \
  -e DATABASE_URL=postgres://... \
  -e LOG_LEVEL=info \
  -v /data/myapp:/app/data \
  --memory 1g --cpus 1 \
  myregistry/myapp:v2.1

The PodWarden template would be:

| Field | Value |
|---|---|
| Name | My App |
| Kind | deployment |
| Image | myregistry/myapp |
| Tag | v2.1 |
| CPU Request | 1 |
| Memory Request | 1Gi |
| Ports | [{"containerPort": 80, "protocol": "TCP"}] |
| Env (static) | [{"name": "LOG_LEVEL", "value": "info"}] |
| Env Schema | [{"name": "DATABASE_URL", "required": true, "description": "PostgreSQL connection string"}] |
| Volume Mounts | PVC or NFS mount at /app/data |

Note that DATABASE_URL goes into env_schema (configurable) rather than env (static) because it differs per deployment. LOG_LEVEL is static because it's the same everywhere.
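When converting a longer docker run command, it can help to list its -e flags first and then sort each one into static or configurable. A minimal shell sketch, using the example command from this guide:

```shell
# Extract the -e variable names from an existing `docker run` command so
# you can sort them into env (static) vs env_schema (configurable).
cmd='docker run -d -p 8080:80 -e DATABASE_URL=postgres://... -e LOG_LEVEL=info myregistry/myapp:v2.1'
echo "$cmd" | grep -oE -- '-e [A-Z_]+' | awk '{print $2}'
# DATABASE_URL
# LOG_LEVEL
```

This only matches -e KEY=value flags with uppercase names; adjust the pattern if your variables use other characters.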

Converting docker-compose Services

There are two ways to use docker-compose files with PodWarden:

Option 1: Import as Compose Stack (recommended)

PodWarden can import a docker-compose.yml directly as a compose stack — a multi-service template where each service is deployed together. Click Import from Compose on the Apps & Stacks page and paste the compose file.

PodWarden automatically translates compose directives to Kubernetes equivalents: ports, volumes, environment variables, health checks, capabilities, and resource limits. Add an x-podwarden extension block for PodWarden-specific metadata like configurable form fields and pre-deploy scripts.

See Compose Stacks for the full translation table and examples.

Option 2: Manual conversion

Each service in a docker-compose file can also be converted manually into a separate PodWarden template. Shared networks and depends_on don't map directly — in PodWarden, services communicate via Kubernetes service DNS names within the same cluster.

# docker-compose.yml
services:
  app:
    image: myapp:latest
    ports:
      - "3000:3000"
    environment:
      - DATABASE_URL=postgres://db:5432/mydb
    depends_on:
      - db

  db:
    image: postgres:16
    environment:
      - POSTGRES_DB=mydb
      - POSTGRES_USER=app
      - POSTGRES_PASSWORD=secret
    volumes:
      - pgdata:/var/lib/postgresql/data

This becomes two PodWarden templates:

  1. PostgreSQL — image postgres, tag 16, env schema for credentials, PVC volume at /var/lib/postgresql/data
  2. My App — image myapp, env schema with DATABASE_URL pointing to the PostgreSQL service name in the cluster
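For the app template, the env_schema entry for DATABASE_URL might default to the in-cluster service DNS name. A sketch, assuming the PostgreSQL template's Kubernetes Service is named postgresql in the default namespace (both names are placeholders — use whatever your deployment is actually called):

```json
[
  {
    "name": "DATABASE_URL",
    "required": true,
    "default_value": "postgres://app:CHANGE_ME@postgresql.default.svc.cluster.local:5432/mydb",
    "description": "PostgreSQL connection string (points at the in-cluster service)"
  }
]
```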

Best Practices

Environment Variables: Static vs. Configurable

Static (env) — values that are the same for every deployment:

[
  { "name": "OLLAMA_HOST", "value": "0.0.0.0" },
  { "name": "TZ", "value": "UTC" }
]

Configurable (env_schema) — values that operators set per deployment:

[
  {
    "name": "POSTGRES_PASSWORD",
    "required": true,
    "description": "Database superuser password",
    "generate": "password"
  },
  {
    "name": "POSTGRES_DB",
    "required": false,
    "default_value": "app",
    "description": "Default database name"
  }
]

Rule of thumb: if the value changes between environments (staging vs production, different customers), put it in env_schema. If it's always the same, put it in env.

Auto-generated secrets (generate)

Add a generate hint to env_schema entries for passwords, secret keys, and other values that should be randomly generated on first install. When the Hub's one-click installer runs, it uses openssl rand to fill these in automatically.

| Strategy | Output | Use for |
|---|---|---|
| password | 18-byte alphanumeric | User-facing passwords |
| hex16 | 16-byte hex string | Short tokens |
| hex32 | 32-byte hex string | API keys |
| hex64 | 64-byte hex string | Encryption keys, Rails secret keys |
| base64 | 32-byte base64 string | Generic secrets |
| uuid | UUID v4 | Unique identifiers |

The generate field is optional. If omitted, the installer leaves the field empty with a # REQUIRED comment for the user to fill in manually. Always add generate to password and secret key fields — it's the difference between a smooth one-click install and the user having to manually edit .env files.
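When filling in a template by hand, the strategies can be approximated locally with openssl rand. A sketch — the installer's exact invocations aren't documented here, so treat the flags below as approximations; byte counts follow the table above:

```shell
# Approximate PodWarden's generate strategies with openssl rand.
hex16=$(openssl rand -hex 16)              # 16 bytes -> 32 hex chars; short tokens
hex32=$(openssl rand -hex 32)              # 32 bytes -> 64 hex chars; API keys
hex64=$(openssl rand -hex 64)              # 64 bytes -> 128 hex chars; encryption keys
b64=$(openssl rand -base64 32)             # 32 bytes of base64; generic secrets
pw=$(openssl rand -base64 18 | tr -d '+/=')  # rough alphanumeric password
echo "hex32 example: $hex32"
```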

For secrets (passwords, API keys, tokens), use PodWarden's secret references (secret_refs) instead of hardcoding values. See Apps & Stacks → Secret References.

Storage

Choose the right volume type based on your data:

| Use Case | Volume Type | Example |
|---|---|---|
| Database files | PVC | PostgreSQL data dir |
| Shared media | NFS | Video files across multiple services |
| Temporary cache | emptyDir | Build artifacts, temp files |
| Config files | configMap | nginx.conf, app config |
| Large ML models | PVC (large) or NFS | Ollama model storage |

See the Storage guide for details on configuring each type.

Key points:

  • Always use persistent storage for stateful workloads (databases, file stores)
  • Set appropriate size requests — PVCs are allocated from the cluster's storage class
  • Use NFS when multiple workloads need to access the same data
  • Tag storage connections with network types so PodWarden can validate connectivity before deployment

Ports

Only expose ports that external clients or other services need to reach:

[
  { "containerPort": 8080, "protocol": "TCP" },
  { "containerPort": 9090, "protocol": "TCP" }
]

  • Don't expose internal-only ports (metrics, health checks) unless you need them
  • PodWarden creates a Kubernetes Service for each exposed port
  • For workloads that need host networking (e.g. DLNA discovery, mDNS), set required_network_types to include lan

Resource Requests

Set realistic resource requests so PodWarden can place workloads on appropriate servers:

| Workload Type | CPU | Memory | GPU |
|---|---|---|---|
| Small web app | 250m | 256Mi | 0 |
| Database | 1–2 | 1Gi–4Gi | 0 |
| Media server | 2–4 | 2Gi–8Gi | 0 |
| AI inference | 4–8 | 8Gi–32Gi | 1–2 |
| LLM serving | 4–8 | 16Gi–64Gi | 1+ |

Under-requesting causes OOM kills. Over-requesting wastes cluster capacity. Start with what the application actually needs and adjust based on monitoring.

For GPU workloads, set both gpu_count and vram_request so PodWarden can match the workload to a server with sufficient GPU memory.
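Putting those fields together for a GPU inference workload might look like the fragment below. The field names come from this guide; the values are illustrative, and the surrounding template structure may differ:

```json
{
  "cpu_request": "4",
  "memory_request": "8Gi",
  "gpu_count": 1,
  "vram_request": "8Gi"
}
```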

Local Testing with .env Files

Before deploying to PodWarden, test your configuration locally with Docker:

# .env file (gitignored)
POSTGRES_PASSWORD=devpassword
POSTGRES_DB=myapp_dev
DATABASE_URL=postgres://app:devpassword@localhost:5432/myapp_dev

docker run --env-file .env -p 5432:5432 postgres:16

This validates that your environment variables and port mappings work before creating the PodWarden template. The same variables map directly to env_schema entries.

Important: Never commit .env files to version control. Add .env to your .gitignore.
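A small pre-flight check can catch missing variables before docker run fails at startup. A sketch — check_env is a hypothetical helper for local testing, not part of PodWarden:

```shell
# Hypothetical helper: verify a .env file defines every required variable
# before running `docker run --env-file`.
check_env() {
  file=$1; shift
  for var in "$@"; do
    grep -q "^${var}=" "$file" || { echo "missing: $var"; return 1; }
  done
  echo "all required variables present"
}

# Usage:
#   check_env .env POSTGRES_PASSWORD POSTGRES_DB DATABASE_URL \
#     && docker run --env-file .env -p 5432:5432 postgres:16
```

The required names you pass to it should match the required entries in your template's env_schema.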

Examples

Example 1: Uptime Kuma (Simple Monitoring)

A minimal workload — one port, one volume, no environment variables.

Docker command:

docker run -d -p 3001:3001 -v uptime-kuma:/app/data louislam/uptime-kuma:1

PodWarden template:

| Field | Value |
|---|---|
| Name | Uptime Kuma |
| Kind | deployment |
| Image | louislam/uptime-kuma |
| Tag | 1 |
| CPU Request | 250m |
| Memory Request | 256Mi |
| Ports | [{"containerPort": 3001, "protocol": "TCP"}] |
| Volume Mounts | PVC at /app/data, 1Gi |

No environment variables needed. Uptime Kuma stores all configuration in its SQLite database at /app/data.


Example 2: PostgreSQL (Database with Credentials)

A stateful workload with configurable credentials and persistent storage.

Docker command:

docker run -d \
  -e POSTGRES_USER=app \
  -e POSTGRES_PASSWORD=secretpassword \
  -e POSTGRES_DB=production \
  -p 5432:5432 \
  -v pgdata:/var/lib/postgresql/data \
  postgres:16-alpine

PodWarden template:

| Field | Value |
|---|---|
| Name | PostgreSQL |
| Kind | deployment |
| Image | postgres |
| Tag | 16-alpine |
| CPU Request | 1 |
| Memory Request | 1Gi |
| Ports | [{"containerPort": 5432, "protocol": "TCP"}] |

Static env:

[
  { "name": "PGDATA", "value": "/var/lib/postgresql/data/pgdata" }
]

Env schema (configurable):

[
  {
    "name": "POSTGRES_USER",
    "required": true,
    "default_value": "app",
    "description": "Database superuser name"
  },
  {
    "name": "POSTGRES_PASSWORD",
    "required": true,
    "description": "Database superuser password"
  },
  {
    "name": "POSTGRES_DB",
    "required": false,
    "default_value": "app",
    "description": "Default database name created on first run"
  }
]

Volume: PVC at /var/lib/postgresql/data, 10Gi minimum.

In production, use secret_refs for POSTGRES_PASSWORD instead of env_schema to store it in PodWarden's encrypted secret store.


Example 3: Ollama (GPU Workload)

An AI inference server that requires GPU access and large model storage.

Docker command:

docker run -d \
  --gpus=1 \
  -p 11434:11434 \
  -e OLLAMA_HOST=0.0.0.0 \
  -v ollama:/root/.ollama \
  ollama/ollama

PodWarden template:

| Field | Value |
|---|---|
| Name | Ollama |
| Kind | deployment |
| Image | ollama/ollama |
| Tag | latest |
| CPU Request | 4 |
| Memory Request | 8Gi |
| GPU Count | 1 |
| VRAM Request | 8Gi |
| Ports | [{"containerPort": 11434, "protocol": "TCP"}] |

Static env:

[
  { "name": "OLLAMA_HOST", "value": "0.0.0.0" }
]

Env schema:

[
  {
    "name": "OLLAMA_MODELS",
    "required": false,
    "default_value": "/root/.ollama/models",
    "description": "Path where Ollama stores downloaded models"
  },
  {
    "name": "OLLAMA_NUM_PARALLEL",
    "required": false,
    "default_value": "1",
    "description": "Number of parallel inference requests"
  }
]

Volume: PVC at /root/.ollama, 50Gi+ (models can be 4–70GB each).

PodWarden will automatically place this workload on a server with at least 1 NVIDIA GPU and 8GB VRAM. Set vram_request based on the largest model you plan to run (7B models need ~8GB, 70B models need ~48GB).


Example 4: Nextcloud (Complex — Multiple Volumes, External DB, Network)

A full-featured workload with multiple storage backends, external service dependencies, and network requirements.

Docker command:

docker run -d \
  -p 80:80 \
  -e POSTGRES_HOST=db.lan.local \
  -e POSTGRES_DB=nextcloud \
  -e POSTGRES_USER=nextcloud \
  -e POSTGRES_PASSWORD=secret \
  -e REDIS_HOST=redis.lan.local \
  -e NEXTCLOUD_ADMIN_USER=admin \
  -e NEXTCLOUD_ADMIN_PASSWORD=adminpass \
  -e NEXTCLOUD_TRUSTED_DOMAINS=cloud.example.com \
  -v nextcloud:/var/www/html \
  -v /mnt/nfs/media:/var/www/html/data \
  nextcloud:29-apache

PodWarden template:

| Field | Value |
|---|---|
| Name | Nextcloud |
| Kind | deployment |
| Image | nextcloud |
| Tag | 29-apache |
| CPU Request | 2 |
| Memory Request | 2Gi |
| Ports | [{"containerPort": 80, "protocol": "TCP"}] |
| Required Network Types | ["lan"] |

Static env:

[
  { "name": "APACHE_DISABLE_REWRITE_IP", "value": "1" },
  { "name": "OVERWRITEPROTOCOL", "value": "https" }
]

Env schema (configurable):

[
  {
    "name": "POSTGRES_HOST",
    "required": true,
    "description": "PostgreSQL hostname (must be reachable from the cluster)"
  },
  {
    "name": "POSTGRES_DB",
    "required": true,
    "default_value": "nextcloud",
    "description": "PostgreSQL database name"
  },
  {
    "name": "POSTGRES_USER",
    "required": true,
    "default_value": "nextcloud",
    "description": "PostgreSQL username"
  },
  {
    "name": "POSTGRES_PASSWORD",
    "required": true,
    "description": "PostgreSQL password"
  },
  {
    "name": "REDIS_HOST",
    "required": false,
    "description": "Redis hostname for session/file caching"
  },
  {
    "name": "NEXTCLOUD_ADMIN_USER",
    "required": true,
    "default_value": "admin",
    "description": "Admin account username (first run only)"
  },
  {
    "name": "NEXTCLOUD_ADMIN_PASSWORD",
    "required": true,
    "description": "Admin account password (first run only)"
  },
  {
    "name": "NEXTCLOUD_TRUSTED_DOMAINS",
    "required": true,
    "description": "Space-separated list of trusted domains"
  }
]

Volumes:

  1. PVC at /var/www/html — Nextcloud application files, 10Gi
  2. NFS at /var/www/html/data — user files on shared NFS storage

Why required_network_types: ["lan"]? Nextcloud connects to PostgreSQL and Redis on the LAN, and serves user files from an NFS share that's only reachable on the local network. Setting this ensures PodWarden only deploys to clusters with LAN connectivity, and warns if the NFS storage connection isn't reachable.

Use secret_refs for POSTGRES_PASSWORD and NEXTCLOUD_ADMIN_PASSWORD in production.

Next Steps