Self-Hosting with Docker
Deploy apps from the PodWarden Hub catalog on any server using Docker Compose — no PodWarden instance required
Every app in the PodWarden Hub catalog can be installed on any Linux server with Docker — just Docker, a server, and a one-line command. This guide covers everything you need to go from zero to running self-hosted apps.
This approach works great for a single server with a handful of apps. As your setup grows — more servers, dozens of apps, backups to manage, domains to configure, updates to coordinate — things get harder to keep track of manually. That's where PodWarden comes in: it gives you a single dashboard to deploy, monitor, update, and back up applications across all your servers, with built-in ingress, DDNS, and secret management. You can start with plain Docker today and add PodWarden later without changing anything already running.
What you need
- A Linux server (physical, VM, or VPS) with SSH access
- Docker Engine 20.10+ and Docker Compose v2
- curl and openssl (pre-installed on most Linux distributions)
That's it. No Kubernetes, no orchestration platform, no cloud account.
Hardware
You don't need much. Self-hosted apps are lightweight — a single server can comfortably run 20-30 containers.
| Hardware | Cost | Power draw | Good for |
|---|---|---|---|
| Intel N100 mini PC (Beelink, Minisforum) | $120-160 | 6-10W | Most people — runs 20-30+ containers, silent, cheap to operate |
| Raspberry Pi 4/5 | $35-80 | 3-5W | Light workloads, learning, Pi-hole |
| Used office PC (Dell Optiplex, HP EliteDesk) | $100-200 | 30-50W | More RAM and expandable storage |
| Old laptop | Free | 15-40W | Built-in UPS (battery), built-in screen for emergencies |
| VPS (Hetzner, Oracle Free Tier) | $0-5/mo | N/A | Remote access, no home network exposure |
Storage matters more than CPU. Most apps use minimal CPU, but media libraries (Jellyfin), photo backups (Immich), and databases grow fast. Start with at least 256 GB; 1 TB is better if you plan to host media.
Installing Docker
If Docker isn't installed yet:
curl -fsSL https://get.docker.com | sh
sudo usermod -aG docker $USER
# Log out and back in for the group change to take effect
Verify:
docker --version
docker compose version
Installing an app from the catalog
One-line installer
Browse the catalog, find an app, click Self-host with Docker, and copy the install command:
curl -fsSL https://www.podwarden.com/api/v1/catalog/install/<app>/script | bash
The script does the following:
- Downloads a .tar.gz bundle containing docker-compose.yml, .env.example, and any config files
- Extracts to /opt/<app>/
- Copies .env.example to .env (only if .env doesn't already exist)
- Generates random passwords and secret keys for fields marked as auto-generatable
- Runs docker compose up -d to start the app
Options:
# Install to a custom directory
curl -fsSL .../script | bash -s -- --dir /srv/myapp
# Skip automatic secret generation (fill in .env manually)
curl -fsSL .../script | bash -s -- --no-generate-secrets
Manual install (download bundle)
If you prefer to inspect everything before running it:
# Download the bundle
curl -fsSL "https://www.podwarden.com/api/v1/catalog/install/<app>/bundle" -o app.tar.gz
# Extract
tar xzf app.tar.gz
cd <app>/
# Review the files
cat docker-compose.yml
cat .env.example
# Create your .env and fill in required values
cp .env.example .env
nano .env
# Start
docker compose up -d
Understanding what you just installed
Every bundle contains the same structure:
myapp/
  docker-compose.yml   # Service definitions, ports, volumes
  .env.example         # Environment variables with defaults and descriptions
  config/              # (optional) Configuration files mounted into containers
docker-compose.yml
This file defines which containers to run, what ports they listen on, what volumes they use, and how they connect to each other. You generally don't need to edit it — all customization happens through the .env file.
The .env file
The .env file is where you configure the app. Each line is a KEY=value pair. Comments explain what each variable does:
# Database password (auto-generated on install)
DB_PASSWORD=a7f2k9x3m1 # REQUIRED | generate:password
# Application port
APP_PORT=8080
# SMTP server for email notifications
SMTP_HOST= # optional
Important things to know about .env files:
- Variables in .env are substituted into docker-compose.yml wherever ${VARIABLE} appears
- Lines starting with # are comments
- Don't use spaces around the = sign
- Don't wrap values in quotes unless the value itself contains spaces
- The .env file should never be committed to version control
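A few of these rules in one illustrative fragment (all values are made up):

```shell
# Good: no spaces around '=', no quotes needed
DB_PASSWORD=s3cret

# Quotes only because the value itself contains a space
APP_TITLE="My Home Server"

# Wrong: spaces around '=' break the assignment
# DB_PASSWORD = s3cret
```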
Auto-generated secrets
Variables with a # generate: comment were filled in automatically by the installer. The supported strategies:
| Marker | What it generates |
|---|---|
| generate:password | 18-character alphanumeric password |
| generate:hex32 | 32-byte hex string (64 characters) |
| generate:hex64 | 64-byte hex string (128 characters) |
| generate:base64 | 32-byte base64 string |
| generate:uuid | UUID v4 |
If you used --no-generate-secrets or downloaded the bundle manually, these fields will be empty with a # REQUIRED comment. Fill them in before starting:
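For the non-password strategies, standard tools can produce the same formats. A sketch (the mapping to each marker is an assumption based on the table above; your output will be random):

```shell
# hex strings: N random bytes printed as 2N hex characters
openssl rand -hex 32        # matches generate:hex32 (64 characters)
openssl rand -hex 64        # matches generate:hex64 (128 characters)

# 32 random bytes, base64-encoded (matches generate:base64)
openssl rand -base64 32

# UUID v4 (on Linux; matches generate:uuid)
cat /proc/sys/kernel/random/uuid
```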
# Generate a random password yourself
openssl rand -base64 18 | tr -d '/+='
Managing your app
Check status
cd /opt/myapp
docker compose ps
View logs
# Follow all logs
docker compose logs -f
# Follow logs for one service
docker compose logs -f app
# Last 100 lines
docker compose logs --tail=100
Stop and start
docker compose stop # Stop containers (keep data)
docker compose start # Start stopped containers
docker compose restart # Restart
docker compose down # Stop and remove containers (data in volumes is preserved)
Open a shell inside a container
docker compose exec app /bin/sh
# or for containers with bash:
docker compose exec app /bin/bash
Updating an app
cd /opt/myapp
# Pull the latest image
docker compose pull
# Recreate containers with the new image
docker compose up -d
# Clean up old images
docker image prune -f
Before updating: always check the app's changelog for breaking changes. Major version upgrades (e.g., v5 to v6) may require database migrations or config changes.
Pin your image versions
The docker-compose.yml from the catalog specifies an image tag. If it uses latest, consider pinning to a specific version for stability:
# Risky — "latest" can jump to a new major version
image: redmine:latest
# Safer — stays within the 6.x series
image: redmine:6
# Safest — exact version
image: redmine:6.0.8
Pinning means you choose when to upgrade, rather than getting surprised by a breaking change after docker compose pull.
Networking basics
Accessing your app
After docker compose up -d, the app is available at:
http://<your-server-ip>:<port>
The port is defined in docker-compose.yml under ports:. For example, 8080:3000 means the app listens on the server's port 8080 and forwards to port 3000 inside the container.
How containers talk to each other
Docker Compose creates a private network for each compose file. Containers in the same compose file can reach each other by service name:
services:
  app:
    image: myapp
    environment:
      # Use the service name "db", not "localhost"
      DATABASE_URL: postgres://user:pass@db:5432/mydb
  db:
    image: postgres:16
The localhost trap: Inside a container, localhost means the container itself — not the host machine, not other containers. This is the single most common networking mistake. Always use the service name.
Use the container port, not the host port. If your database maps 5433:5432, other containers still connect on port 5432 (the container port). The 5433 is only for accessing it from outside Docker.
Accessing services on the host
If a container needs to reach something running on the host machine (not in Docker), add this to the service:
extra_hosts:
  - "host.docker.internal:host-gateway"
Then use host.docker.internal as the hostname inside the container.
Persistent data
Containers are ephemeral — when you remove a container, any data written inside it is lost. Volumes are how data survives container restarts and updates.
Named volumes vs bind mounts
services:
  db:
    image: postgres:16
    volumes:
      # Named volume (Docker manages the storage)
      - pgdata:/var/lib/postgresql/data
      # Bind mount (maps a host directory into the container)
      - ./config/pg.conf:/etc/postgresql/postgresql.conf
volumes:
  pgdata: # Must be declared here for named volumes
| Type | Syntax | Where data lives | Best for |
|---|---|---|---|
| Named volume | pgdata:/path | /var/lib/docker/volumes/pgdata/ | Database files, application data |
| Bind mount | ./data:/path | Wherever you point it on the host | Config files, data you want to access directly |
Rule of thumb: Use named volumes for data Docker manages (databases, internal state). Use bind mounts for files you want to see, edit, or back up easily from the host.
Don't forget the ./ prefix
This is a classic mistake:
# Creates a NAMED VOLUME called "data" (Docker manages it)
- data:/app/data
# Creates a BIND MOUNT from the ./data directory (what you probably wanted)
- ./data:/app/data
Without ./, Docker silently creates a named volume instead of mapping your directory.
Backups
Self-hosting means you are responsible for your data. No one else has a copy.
What to back up
- .env file — Contains your passwords and configuration
- Docker volumes — Contains application data (databases, uploads, configs)
- Config files — Any bind-mounted configuration files
Simple backup strategy
#!/bin/bash
# backup.sh — run from the app directory
APP_DIR="/opt/myapp"
BACKUP_DIR="/backups/myapp"
DATE=$(date +%Y%m%d-%H%M%S)
mkdir -p "$BACKUP_DIR"
# Stop the app to ensure consistent data
cd "$APP_DIR"
docker compose stop
# Back up the entire app directory (includes .env, config, bind mounts)
tar czf "$BACKUP_DIR/myapp-$DATE.tar.gz" -C /opt myapp
# Back up named volumes
for vol in $(docker volume ls -q | grep myapp); do
  docker run --rm -v "$vol:/data" -v "$BACKUP_DIR:/backup" \
    alpine tar czf "/backup/vol-$vol-$DATE.tar.gz" -C /data .
done
# Restart
docker compose start
echo "Backup complete: $BACKUP_DIR"
Database backups
For databases, don't just copy volume files — use the database's own dump tool:
# PostgreSQL
docker compose exec db pg_dump -U postgres mydb > backup.sql
# MySQL/MariaDB
docker compose exec db mysqldump -u root -p mydb > backup.sql
# SQLite (just copy the file while the app is stopped)
docker compose stop
cp ./data/app.db ./backups/app-$(date +%Y%m%d).db
docker compose start
The 3-2-1 rule
- 3 copies of your data
- 2 different storage types (e.g., local disk + cloud)
- 1 offsite copy
Use rclone to sync backups to cloud storage (S3, Backblaze B2, Google Drive) on a cron schedule.
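For example, a crontab sketch. This assumes the backup.sh script above lives in /opt/myapp and an rclone remote named b2 that you have already configured with rclone config; all names are illustrative:

```shell
# Run the backup at 03:00, then push the results offsite at 03:30
0 3 * * *  /opt/myapp/backup.sh
30 3 * * * rclone sync /backups b2:my-backup-bucket
```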
Test your backups
A backup that you've never restored from is not a backup — it's a hope. Periodically test by restoring to a temporary directory and verifying the data is intact.
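A minimal smoke test you can automate. The verify_backup helper below is hypothetical: it only confirms the archive is intact and readable; a real test should also restore the data and inspect it:

```shell
#!/bin/sh
# verify_backup: exit 0 if the .tar.gz archive can be fully read, nonzero otherwise
verify_backup() {
  tar tzf "$1" > /dev/null 2>&1
}

# Example: check the newest backup (path is illustrative, requires backups to exist)
# verify_backup "$(ls -t /backups/myapp/*.tar.gz | head -n1)" && echo "archive OK"
```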
Making apps accessible from the internet
By default, your app is only reachable from your local network. To access it from outside:
Option 1: PodWarden (recommended)
If you're using PodWarden, ingress and HTTPS are handled for you. PodWarden includes a built-in reverse proxy, automatic TLS certificates, and free DDNS subdomains through PodWarden Hub — so you can expose any app with a public URL without buying a domain, configuring DNS, or managing certificates yourself.
Point-and-click in the dashboard: pick an app, assign a domain or DDNS subdomain, and PodWarden configures the ingress rule, provisions the TLS certificate, and keeps it renewed. See the Ingress & DDNS guide for details.
Option 2: VPN (simple, private)
Tailscale creates an encrypted mesh network between your devices. Install it on your server and your phone/laptop — all your self-hosted apps become accessible from anywhere, with no ports opened to the internet.
# Install Tailscale on your server
curl -fsSL https://tailscale.com/install.sh | sh
sudo tailscale up
Your server gets a Tailscale IP (e.g., 100.x.y.z). Access apps at http://100.x.y.z:8080 from any device on your Tailscale network. Great for personal access when you don't need a public URL.
Option 3: Cloudflare Tunnel (no public IP needed)
If you're behind CGNAT or don't have a static IP, Cloudflare Tunnel routes traffic through Cloudflare's network — no ports to open, no IP to expose. Free for personal use.
Option 4: Manual reverse proxy
If you're not using PodWarden and need public access, you'll need to set up a reverse proxy yourself:
Caddy (simplest):
myapp.example.com {
  reverse_proxy localhost:8080
}
Caddy automatically obtains and renews Let's Encrypt certificates. Two lines of config for production-ready HTTPS.
Nginx Proxy Manager: GUI-based if you prefer clicking over config files.
Traefik: Auto-discovers Docker containers via labels. Most powerful, steepest learning curve.
All of these require you to own a domain, configure DNS records, and manage the proxy configuration yourself — which is exactly the kind of operational overhead PodWarden eliminates.
What not to expose
Never expose admin panels, database ports, or unauthenticated APIs directly to the internet. Always put a reverse proxy with HTTPS in front, and consider adding authentication (Authelia, Authentik) for apps that don't have built-in login.
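One concrete safeguard: when a port only needs to be reachable by a reverse proxy running on the same host, bind it to the loopback interface in docker-compose.yml so it is never exposed to the network. An illustrative fragment:

```yaml
services:
  db:
    image: postgres:16
    ports:
      # Reachable only from the host itself, not from other machines
      - "127.0.0.1:5432:5432"
```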
Common problems and fixes
Port already in use
Error: bind: address already in use
Something else is using the same port. Find it:
sudo ss -tlnp | grep :8080
Fix: either stop the conflicting service or change the port in your .env file.
Permission denied on volumes
Containers often run as a non-root user. If volume files are owned by root, the app can't read them.
Fix: Many images support PUID and PGID environment variables (LinuxServer.io images especially). Set them to your user's ID:
# Find your UID and GID
id
# uid=1000(user) gid=1000(user)
PUID=1000
PGID=1000
Container can't resolve DNS
If containers can't reach the internet:
# Test from inside the container
docker compose exec app ping -c1 google.com
Fix: Add DNS servers to the service in docker-compose.yml:
dns:
  - 8.8.8.8
  - 1.1.1.1
Disk filling up
Docker accumulates old images, stopped containers, and build cache silently.
# See what's using space
docker system df
# Clean everything unused
docker system prune -a
# Set up automatic log rotation in /etc/docker/daemon.json
{
  "log-driver": "json-file",
  "log-opts": {
    "max-size": "10m",
    "max-file": "3"
  }
}
# Then restart Docker: sudo systemctl restart docker
YAML syntax errors
docker-compose.yml uses YAML, which is whitespace-sensitive:
- Use spaces, never tabs. YAML forbids tabs entirely.
- Consistent indentation. Use 2 spaces per level.
- Validate before running: docker compose config checks syntax and shows the resolved config.
App crashes on startup
# Check why it crashed
docker compose logs app --tail=50
# Common causes:
# - Required .env variable is empty
# - Database not ready yet (add healthcheck + depends_on condition)
# - Port conflict
# - Volume permissions
Organizing multiple apps
If you self-host more than a few apps, keep each one in its own directory with its own compose file:
/opt/
  uptime-kuma/
    docker-compose.yml
    .env
  nextcloud/
    docker-compose.yml
    .env
    config/
  postgres/
    docker-compose.yml
    .env
Don't put everything in one giant compose file. Separate stacks mean:
- Restarting one app doesn't affect the others
- Each app has its own .env with only the variables it needs
- Updates and rollbacks are isolated
- Easier to back up and restore individual apps
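With one directory per stack, bulk operations become a short loop. A sketch, where the list_stacks helper name is hypothetical and the /opt layout matches the tree above:

```shell
#!/bin/sh
# list_stacks: print every directory under the given root that holds a
# docker-compose.yml (one stack per directory, as laid out above)
list_stacks() {
  find "$1" -mindepth 2 -maxdepth 2 -name docker-compose.yml -exec dirname {} \;
}

# Pull new images and restart every stack in turn (requires Docker):
# for dir in $(list_stacks /opt); do
#   (cd "$dir" && docker compose pull && docker compose up -d)
# done
```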
When to consider PodWarden
Everything above works well for one server with a few apps. But if you find yourself:
- SSH-ing into multiple servers to check on different apps
- Writing cron jobs for backups and forgetting which server has which schedule
- Manually editing Caddyfiles or nginx configs every time you add a domain
- Losing track of which app versions are running where
- Dreading updates because you're not sure if something will break
Then you've outgrown manual Docker management. PodWarden handles all of this from a single dashboard — deploy apps, manage ingress and TLS, schedule backups, and monitor every server in one place. It runs on K3s (lightweight Kubernetes) but you don't need to learn Kubernetes — PodWarden abstracts it away.
Your existing Docker apps keep running. PodWarden manages new deployments alongside them.
Next steps
- Browse the PodWarden Hub catalog for apps to deploy
- Read about creating custom templates to package your own Docker Compose stacks
- Get started with PodWarden to manage apps across multiple servers from one dashboard