INFRASTRUCTURE
Why your 10-person dev team doesn't need Kubernetes (and what to use instead)
April 2, 2026 · 10 min read
CTO, Keni Engineering
Kubernetes dominates the infrastructure conversation. It is on every job listing, every conference agenda, every cloud provider's landing page. If you are a CTO or engineering lead at a growing company, you have probably wondered whether your team should adopt it. The short answer for most teams under 30 developers: no, not yet, and possibly not ever.
This is not an anti-Kubernetes post. Kubernetes is a remarkable piece of engineering that solves real problems at scale. But "at scale" is doing a lot of heavy lifting in that sentence. For a 10-person dev team running a handful of services, Kubernetes introduces complexity that far outweighs its benefits. The goal of this post is right-sizing: matching your infrastructure to your actual needs, not your aspirational ones.
The Kubernetes tax
Every technology choice comes with a cost, and Kubernetes has one of the highest total costs of ownership in the infrastructure world. We call it the Kubernetes tax: the ongoing price you pay in money, time, and cognitive load just to keep the platform running before you deploy a single application.
Financial cost
A managed Kubernetes cluster on EKS or GKE costs about $73/month ($0.10 per hour) for the control plane alone, and AKS's standard tier is priced similarly. That is before you add worker nodes, load balancers, persistent volumes, or data transfer. A realistic small production cluster with 3 worker nodes runs $300 to $600/month in compute. Add monitoring, logging, and backup tooling, and a basic Kubernetes setup costs $500 to $1,000/month. For a team that could run the same workload on a $40 to $80/month VPS, that is a 10x cost increase with no corresponding benefit.
Engineering time
Self-hosted Kubernetes (k3s, kubeadm, or a bare-metal install) eliminates the managed service fee but adds operational burden: certificate rotation, etcd backups, version upgrades, node drains, CNI plugin issues. These tasks consume 20 to 30% of an engineer's time on an ongoing basis. For a 10-person team, that is effectively losing one full engineer to infrastructure maintenance. That engineer could be shipping features, fixing bugs, or improving the product.
Learning curve
Kubernetes has a notoriously steep learning curve. Pods, Deployments, Services, Ingresses, ConfigMaps, Secrets, PersistentVolumeClaims, NetworkPolicies, RBAC, Operators, CRDs, Helm charts, Kustomize. A developer who is productive with Docker in a week will need months to become comfortable with Kubernetes. Multiply that across your team, and the onboarding cost is substantial. Every new hire needs to learn not just your application, but your Kubernetes configuration, your custom operators, and your deployment pipeline.
Debugging complexity
When something goes wrong in a Docker Compose stack, the debugging surface is small. Check container logs, check resource usage, restart the container. When something goes wrong in Kubernetes, the debugging surface is enormous. Is it a scheduling issue? A networking issue? A resource quota? An RBAC permission? A DNS resolution failure? A kubelet problem on a specific node? We routinely see teams spend hours debugging Kubernetes-specific issues that would not exist in a simpler setup.
What Kubernetes actually solves
To be fair, Kubernetes solves real problems. The issue is not that Kubernetes is bad. The issue is that most small teams do not have the problems it solves. Here is where Kubernetes genuinely earns its complexity:
- Auto-scaling under variable load: if your traffic spikes 10x during peak hours and drops to baseline overnight, Kubernetes Horizontal Pod Autoscaler dynamically adjusts your replica count based on CPU, memory, or custom metrics. This saves real money at scale by not over-provisioning.
- Multi-region deployments: if compliance, latency, or disaster recovery requires running your application across multiple geographic regions, Kubernetes federation and multi-cluster tools provide the primitives to make this work.
- 50+ microservices: when you have dozens of independently deployed services with complex dependency graphs, Kubernetes's service discovery, load balancing, and rolling update mechanisms genuinely help manage the chaos.
- Zero-downtime rolling updates at scale: Kubernetes handles rolling deployments across hundreds of pods with health checks, readiness probes, and automatic rollback. At scale, this is hard to replicate with simpler tools.
- A dedicated platform team: if you have 2 to 3 engineers whose full-time job is maintaining the platform, the investment in Kubernetes expertise makes sense. Their knowledge compounds over time and serves the whole engineering organization.
If you read that list and thought "none of that applies to us," you are in the majority. Most teams with 2 to 30 developers have predictable traffic, fewer than 20 services, a single region, and no dedicated platform team. That is not a criticism. It is a description of a healthy, growing company that should be focused on product, not infrastructure.
What most small teams actually need
Instead of Kubernetes, here is what actually moves the needle for small teams. These are the building blocks of a production-ready infrastructure that any developer can understand and maintain.
Containerized applications with Docker
Containers are the foundation. Every application should run in a Docker container with a well-written Dockerfile, pinned base images, multi-stage builds, and a .dockerignore file. This gives you reproducible builds, environment parity between development and production, and easy rollbacks by redeploying a previous image tag. Docker Compose ties multiple containers together into a coherent stack. One YAML file defines your entire application topology: web server, API, database, cache, worker queues. Any developer can read it, understand it, and modify it.
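As a concrete sketch, a Compose file for that kind of topology might look like this. Service names, image tags, and ports here are illustrative, not a prescribed layout:

```yaml
# docker-compose.yml -- a minimal sketch; names, images, and ports are illustrative
services:
  web:
    build: .                      # built from the Dockerfile in this repo
    restart: unless-stopped
    ports:
      - "8080:8080"
    depends_on:
      - db
      - cache
  worker:
    build: .
    command: ["./worker"]         # same image, different entry command
    restart: unless-stopped
    depends_on:
      - db
      - cache
  db:
    image: postgres:16.4          # pinned version, never :latest
    restart: unless-stopped
    volumes:
      - pgdata:/var/lib/postgresql/data
  cache:
    image: redis:7.4
    restart: unless-stopped
volumes:
  pgdata:
```

Rolling back is a matter of pointing a service at the previous image tag and running `docker compose up -d` again.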
Automated CI/CD pipelines
Every push to main should trigger an automated pipeline that builds your container, runs tests, scans for vulnerabilities, and deploys to production. GitHub Actions is the obvious choice if your code lives on GitHub. GitLab CI works well for GitLab users. For teams that want self-hosted CI without vendor lock-in, Woodpecker CI is an excellent lightweight option that runs alongside your application stack. The key insight: you do not need Kubernetes to have automated deployments. A CI pipeline that builds an image, pushes it to a registry, and tells your server to pull and restart is all you need. Tools like Watchtower can even automate the pull-and-restart step.
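A minimal GitHub Actions sketch of that build-push-restart flow could look like the following. The registry path, hostname, script name, and secret names are assumptions for this example, and SSH key setup is omitted:

```yaml
# .github/workflows/deploy.yml -- sketch only; registry, host, and secret names are assumptions
name: deploy
on:
  push:
    branches: [main]
jobs:
  deploy:
    runs-on: ubuntu-latest
    steps:
      - uses: actions/checkout@v4
      - name: Log in to registry
        run: echo "${{ secrets.GHCR_TOKEN }}" | docker login ghcr.io -u ${{ github.actor }} --password-stdin
      - name: Build, test, and push
        run: |
          docker build -t ghcr.io/acme/app:${{ github.sha }} .
          docker run --rm ghcr.io/acme/app:${{ github.sha }} ./run-tests.sh
          docker push ghcr.io/acme/app:${{ github.sha }}
      - name: Deploy   # assumes an SSH deploy key configured as a repository secret
        run: |
          ssh deploy@prod.example.com \
            "cd /srv/app && docker compose pull && docker compose up -d"
```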
A reverse proxy with automatic TLS
Traefik is our recommendation here. It automatically discovers your Docker containers, routes traffic to them based on labels, and provisions TLS certificates from Let's Encrypt. Zero manual certificate management, zero nginx config files to maintain. A single Traefik instance can route traffic to dozens of services on one server. It handles HTTPS termination, automatic HTTP-to-HTTPS redirects, and health checking. Nginx and HAProxy are solid alternatives, but Traefik's Docker integration makes it particularly well-suited for small teams.
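In Compose terms, the whole arrangement is one Traefik service plus labels on each application container. This is a sketch with an illustrative domain, email, and image; adapt the names to your stack:

```yaml
# Traefik + one routed service -- domain, email, and images are illustrative
services:
  traefik:
    image: traefik:v3.1
    command:
      - --providers.docker=true
      - --providers.docker.exposedbydefault=false
      - --entrypoints.web.address=:80
      - --entrypoints.web.http.redirections.entrypoint.to=websecure
      - --entrypoints.websecure.address=:443
      - --certificatesresolvers.le.acme.email=ops@example.com
      - --certificatesresolvers.le.acme.storage=/letsencrypt/acme.json
      - --certificatesresolvers.le.acme.tlschallenge=true
    ports:
      - "80:80"
      - "443:443"
    volumes:
      - /var/run/docker.sock:/var/run/docker.sock:ro   # container discovery
      - letsencrypt:/letsencrypt                       # persisted certificates
  web:
    image: ghcr.io/acme/app:1.4.2
    labels:
      - traefik.enable=true
      - traefik.http.routers.web.rule=Host(`app.example.com`)
      - traefik.http.routers.web.entrypoints=websecure
      - traefik.http.routers.web.tls.certresolver=le
volumes:
  letsencrypt:
```

Adding a new service to the router is just another set of labels; no proxy restart, no config file edit.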
Monitoring and alerting
Prometheus for metrics collection and Grafana for visualization is the industry standard, and it runs perfectly well as Docker containers alongside your application. Set up dashboards for the things that matter: request latency, error rates, CPU and memory usage, disk space, container health. Configure alerts for the things that require immediate attention: service down, error rate spike, disk filling up. This stack gives you the same observability that Kubernetes users get, without the Kubernetes overhead. You can set it up in an afternoon and it will run reliably for years.
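The Prometheus side of that setup is only a few lines of scrape config. Job names and targets below are illustrative, and the `app` job assumes your service exposes a /metrics endpoint:

```yaml
# prometheus.yml -- job names and targets are illustrative
global:
  scrape_interval: 15s
scrape_configs:
  - job_name: app
    static_configs:
      - targets: ["web:8080"]            # your application's /metrics endpoint
  - job_name: node
    static_configs:
      - targets: ["node-exporter:9100"]  # host CPU, memory, and disk metrics
  - job_name: traefik
    static_configs:
      - targets: ["traefik:8080"]        # Traefik's built-in metrics, if enabled
```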
Secrets management
Stop storing secrets in .env files committed to git. Use a proper secrets manager: 1Password for teams that want simplicity and a great developer experience, or HashiCorp Vault for teams that need fine-grained access control and audit logs. Both integrate with CI/CD pipelines, so your deployment process can pull secrets at build or deploy time without them ever touching version control.
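For example, in a GitHub Actions deploy job, secrets live in the provider's encrypted store and are injected as environment variables only at deploy time. The secret names and deploy script here are illustrative:

```yaml
# Fragment of a CI deploy job -- secret and script names are illustrative
      - name: Deploy with injected secrets
        env:
          DATABASE_URL: ${{ secrets.DATABASE_URL }}
          STRIPE_API_KEY: ${{ secrets.STRIPE_API_KEY }}
        run: ./deploy.sh   # reads config from the environment, not a committed .env
```

The same pattern applies when the values come from the 1Password or Vault CLIs at run time instead of the CI provider's store.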
Automated backups
Restic or BorgBackup for encrypted, deduplicated backups to S3 or any object storage. Run backups on a cron schedule, verify them regularly, and test restores quarterly. This is not glamorous, but it is the single most important piece of infrastructure you will ever set up. A database backup that runs every hour and ships to offsite storage has saved more companies than Kubernetes ever has.
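One way to wire this up, as a sketch: a Compose service that the host's cron invokes hourly with `docker compose run --rm backup`. The repository, bucket, and credential handling below are illustrative assumptions:

```yaml
# Backup service -- repository, bucket, and credentials are illustrative
services:
  backup:
    image: restic/restic:0.17.0
    volumes:
      - ./dumps:/data/dumps:ro   # pg_dump output, not the live data directory
    environment:
      RESTIC_REPOSITORY: s3:s3.amazonaws.com/acme-backups
      RESTIC_PASSWORD: ${RESTIC_PASSWORD}   # supplied by the host environment
      AWS_ACCESS_KEY_ID: ${AWS_ACCESS_KEY_ID}
      AWS_SECRET_ACCESS_KEY: ${AWS_SECRET_ACCESS_KEY}
    command: ["backup", "/data", "--tag", "hourly"]
```

Pair it with a periodic `restic forget --keep-hourly 24 --keep-daily 14 --prune` for retention, and put the quarterly restore test on the calendar.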
The Docker Compose production stack
When you combine the building blocks above, you get a production stack that covers 90% of what small teams need. Here is what it looks like in practice:
- Traefik: sits at the edge, handles all incoming traffic, terminates TLS, routes requests to the right container based on domain or path. Configuration lives in Docker labels on your service containers.
- Your application containers: API, web frontend, worker processes, whatever your app needs. Each defined as a service in docker-compose.yml with resource limits, health checks, and restart policies.
- Database: PostgreSQL, MySQL, or MongoDB running as a container with data stored on a Docker volume. For production, consider a managed database service if your budget allows.
- Prometheus + Grafana: running as containers, scraping metrics from your application and infrastructure. Grafana dashboards give your team visibility into system health.
- Woodpecker CI: self-hosted CI/CD running alongside your stack. Builds images on push, runs tests, deploys automatically. No external CI service needed.
- Restic backup: scheduled container that backs up database dumps and critical volumes to S3-compatible storage every hour.
This entire stack runs on a single server. A $40 to $80/month VPS from Hetzner, DigitalOcean, or Hostinger handles a surprising amount of traffic. We have seen this setup serve 10,000+ requests per minute without breaking a sweat. For redundancy, add a second server with the same stack behind a DNS failover. Total cost: $80 to $160/month. Total Kubernetes manifests: zero.
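Condensed to a skeleton, the whole thing is one Compose file. Image versions and names here are illustrative, and per-service configuration is elided:

```yaml
# Skeleton of the single-server stack -- versions illustrative, config elided
services:
  traefik:
    image: traefik:v3.1                            # edge routing + TLS
  web:
    image: ghcr.io/acme/app:1.4.2                  # your application
  db:
    image: postgres:16.4
  prometheus:
    image: prom/prometheus:v2.54.0
  grafana:
    image: grafana/grafana:11.2.0
  ci:
    image: woodpeckerci/woodpecker-server:v2.7.0
  backup:
    image: restic/restic:0.17.0                    # run hourly from host cron
```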
The key advantage of this approach is transparency. Any developer on your team can read the docker-compose.yml file and understand exactly what is running, how it is connected, and how to modify it. There are no abstractions hiding complexity. No operators running reconciliation loops in the background. No custom resource definitions to decode. Just containers, networks, and volumes.
When to actually migrate to Kubernetes
We are not saying never use Kubernetes. We are saying wait until you have concrete reasons, not hypothetical ones. Here are the signals that indicate you have genuinely outgrown a simpler setup:
- Traffic spikes 10x or more: if your load varies dramatically and you are paying for peak capacity 24/7, auto-scaling will save real money. A steady 2x growth in traffic is not a reason. Just get a bigger server.
- 30+ independently deployed services: when your docker-compose.yml file is 500+ lines and deploys take 20 minutes because everything restarts, the coordination overhead justifies an orchestrator.
- Multi-region compliance requirements: GDPR data residency, SOC 2 geographic redundancy, or regulatory requirements that mandate multi-region deployments. This is a legitimate forcing function.
- You can afford a dedicated platform team: at least 2 to 3 engineers whose primary job is maintaining the platform. If you cannot dedicate this headcount, Kubernetes will be a burden, not an accelerator.
- Your deployment frequency demands it: if you are deploying 50+ times per day across multiple services and need sophisticated canary deployments, blue-green switching, and automatic rollback based on metrics, Kubernetes service mesh integrations (Istio, Linkerd) provide this natively.
Notice that "our investors expect us to use Kubernetes" and "we want to attract engineers who know K8s" are not on this list. Those are organizational decisions, not technical ones. Make infrastructure decisions based on what your application needs, not what looks good on a pitch deck.
The migration path is easier than you think
One common objection: "If we start with Docker Compose, won't we have to rewrite everything when we migrate to Kubernetes?" No. If your applications are properly containerized with Docker, moving to Kubernetes is a translation exercise, not a rewrite. Your Dockerfiles stay the same. Your application code stays the same. You are converting docker-compose.yml definitions into Kubernetes manifests (Deployments, Services, Ingresses). Tools like Kompose automate most of this translation.
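To make the translation concrete: a typical Compose web service becomes roughly the Deployment below (plus a matching Service object). The image, port, and replica count are illustrative; `kompose convert` emits something similar:

```yaml
# Rough Kubernetes equivalent of a single Compose service -- values illustrative
apiVersion: apps/v1
kind: Deployment
metadata:
  name: web
spec:
  replicas: 2
  selector:
    matchLabels:
      app: web
  template:
    metadata:
      labels:
        app: web
    spec:
      containers:
        - name: web
          image: ghcr.io/acme/app:1.4.2   # the same image your CI already builds
          ports:
            - containerPort: 8080
```

Same image, same Dockerfile, same application; only the deployment manifest changes.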
The reverse is not true. If you start with Kubernetes and later realize it is too complex for your needs, simplifying is painful. You have Helm charts, operators, custom CRDs, and team knowledge deeply embedded in Kubernetes patterns. Downgrading infrastructure is always harder than upgrading it. Start simple. Upgrade when the pain of simplicity exceeds the cost of complexity.
Our recommendation
For teams with 2 to 30 developers, our recommendation is clear: start with Docker Compose, Traefik, and a solid monitoring stack. Get your CI/CD pipeline automated. Get your backups running and tested. Get your secrets out of git. These fundamentals matter far more than which orchestrator you use.
The companies we work with that have the most reliable infrastructure are not the ones using the most sophisticated tools. They are the ones using tools they fully understand, configured correctly, with good monitoring and tested backup procedures. A Docker Compose stack that every developer on your team can debug at 2 AM is worth more than a Kubernetes cluster that only one person understands.
Focus your engineering time on what makes your product unique. Let the infrastructure be boring, reliable, and simple. When you genuinely outgrow that simplicity, you will know, and the migration path will be straightforward.
Want to see what a right-sized infrastructure looks like for your team? Explore our platform engineering approach, start with an infrastructure audit, or talk to us about DevOps consulting to get hands-on help building the right stack for your team.