Personal Deployment Server
A self-hosted Linux server running Kubernetes (k3s), NGINX, and containerized services. Built to understand the full stack of production infrastructure — from bare metal to live deployments — on my own hardware.
Overview
This project is a self-hosted production-grade server running on physical hardware. The goal was to build and operate a real infrastructure environment — not a managed cloud platform — to develop genuine understanding of how production systems work beneath the abstractions.
The server runs k3s (lightweight Kubernetes), NGINX for ingress and routing, and multiple containerized services deployed via Docker. All configuration is declarative and version-controlled, so the entire environment can be rebuilt from scratch.
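As a sketch of what "declarative and version-controlled" looks like in practice, a workload can be described in a single manifest checked into the repo. The names, namespace, and image tag below are illustrative placeholders, not the project's actual values:

```yaml
# deploy/web.yaml — illustrative manifest; names, namespace, and image are placeholders
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
  namespace: apps
spec:
  replicas: 1
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: registry.local/web:1.0.0   # pinned version tag, not :latest
          ports:
            - containerPort: 8080
          livenessProbe:                    # k3s restarts the pod if this check fails
            httpGet:
              path: /healthz
              port: 8080
```

Because every workload is captured this way, rebuilding the environment is a matter of re-applying the manifests rather than replaying manual steps.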
This serves as the deployment target for other projects including the Dracula AI Agent and the Splunk monitoring platform — making it a real platform, not just a demo.
Architecture
- Kubernetes (k3s)
  - Lightweight k3s distribution on Ubuntu
  - Declarative manifests for all workloads
  - Pod restarts and health checks
  - Namespaced service isolation
- NGINX
  - Reverse proxy for all services
  - Host-based routing to containers
  - Static file serving
  - TLS termination
- Containers
  - Docker for image builds
  - containerd runtime in k3s
  - Multi-service deployments
  - Image versioning by tag
- Operating system
  - Ubuntu Server, hardened config
  - Systemd service management
  - SSH key auth, UFW firewall
  - Automated system updates
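The NGINX layer above — host-based routing plus TLS termination — can be sketched as a server block. The hostname, certificate paths, and upstream port are assumptions for illustration:

```nginx
# Illustrative server block; hostname, cert paths, and upstream port are placeholders
server {
    listen 443 ssl;
    server_name app.example.com;

    ssl_certificate     /etc/nginx/tls/app.crt;   # TLS terminates at the proxy
    ssl_certificate_key /etc/nginx/tls/app.key;

    location / {
        proxy_pass http://127.0.0.1:30080;        # e.g. NodePort exposed by the k3s service
        proxy_set_header Host $host;              # preserve the requested hostname
        proxy_set_header X-Real-IP $remote_addr;  # pass the client IP to the container
    }
}
```

One server block per hostname keeps routing decisions in a single place while the services behind it remain interchangeable containers.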
Features
Deployment Workflow
Deploying a service is a single declarative step: `kubectl apply -f deployment.yaml` rolls out the new pod, and k3s handles scheduling and health checks. The rollout is then verified with `kubectl get pods` and confirmed live via the NGINX access logs.
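A minimal sketch of that workflow as a shell session — file, deployment, and namespace names are placeholders, and the commands assume a reachable k3s cluster:

```shell
# Illustrative rollout sequence; names and paths are placeholders.
kubectl apply -f deployment.yaml        # declare the desired state
kubectl rollout status deployment/web   # wait until the new pod is Ready
kubectl get pods -n apps                # confirm the pod is Running
tail -f /var/log/nginx/access.log       # watch live requests reach the service
```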