
MpegFlow on Kubernetes: production deployment topology

How MpegFlow runs on Kubernetes — Helm chart deployment, KEDA autoscaling, the operator pattern for multi-tenant pools, and the network policy enforcement that makes strict-broker security work.


Kubernetes is where MpegFlow runs at production scale. The workload shape (variable-throughput, queue-driven, periodic-spike) maps naturally onto K8s + KEDA, and multi-tenant pool isolation needs the operator pattern that K8s makes idiomatic. The full reference architecture is documented separately; this page covers the integration shape from the operator-platform perspective.

How the integration works

MpegFlow runs as a set of Deployments + StatefulSets + Services + KEDA ScaledObjects + a custom Operator that manages WorkerPool CRDs. The Helm chart ships these as a unit; you install once per cluster. KEDA scales worker pools based on Redis queue depth. NetworkPolicy enforces that workers can only reach the coordinator + their assigned storage endpoint — the network-level enforcement that makes strict-broker security architectural rather than convention-based.
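
The KEDA piece is the easiest to picture. A ScaledObject roughly like the one below scales a worker Deployment on Redis list length; the resource names, namespace, queue key, and thresholds are illustrative placeholders, not the chart's actual defaults.

    # Sketch of a queue-depth ScaledObject. Names and values are assumptions;
    # the shipped Helm chart templates its own equivalents.
    apiVersion: keda.sh/v1alpha1
    kind: ScaledObject
    metadata:
      name: worker-cpu-scaler
      namespace: mpegflow
    spec:
      scaleTargetRef:
        name: mpegflow-worker-cpu        # the worker Deployment to scale
      minReplicaCount: 0                 # scale to zero when the queue is empty
      maxReplicaCount: 64
      pollingInterval: 30                # seconds; KEDA's default
      cooldownPeriod: 300                # wait before scaling back to zero
      triggers:
        - type: redis
          metadata:
            address: redis-master.mpegflow.svc.cluster.local:6379
            listName: mpegflow:jobs      # hypothetical queue key
            listLength: "10"             # target jobs per replica
          authenticationRef:
            name: redis-credentials      # TriggerAuthentication holding the password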

Common patterns

  • Single-cluster deployment

    Standard pattern: one cluster (managed EKS/GKE/AKS or self-hosted), MpegFlow Helm chart installed once. KEDA scales worker pools 0→N based on queue depth. Suitable for single-region operators up to ~5M minutes/month.

  • Multi-cluster federation for dedicated tenants

    For Enterprise tier with dedicated-cluster-per-customer deployments, customers run their own K8s cluster + MpegFlow Helm chart. The control plane runs in the customer's cluster; data plane never crosses customer boundaries. Operationally complex; only justified at enterprise contract scale.

  • GitOps deployment via ArgoCD/Flux

    Most production K8s shops manage deployments via GitOps. The MpegFlow Helm chart reference is committed to a cluster manifest repo, and ArgoCD/Flux applies it. An upgrade is a chart version bump in the manifest repo plus a sync; a rollback is a git revert. (See the Application manifest sketch after this list.)

  • Hybrid GPU + CPU pools

    Different K8s node groups host different worker types. CPU node groups run libx264 encoding; GPU node groups (NVIDIA T4 / A10) run NVENC for high-throughput H.264/HEVC. WorkerPool CRDs target specific node groups via nodeSelector + tolerations (see the WorkerPool sketch after this list).
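
For the GitOps pattern, the unit that lands in the manifest repo is typically an ArgoCD Application (or a Flux HelmRelease) pointing at the chart. The repository URL, chart version, and values below are placeholders; the real values schema belongs to the chart.

    # Sketch of an ArgoCD Application tracking the MpegFlow chart.
    apiVersion: argoproj.io/v1alpha1
    kind: Application
    metadata:
      name: mpegflow
      namespace: argocd
    spec:
      project: default
      source:
        repoURL: https://charts.example.com/mpegflow   # hypothetical chart repository
        chart: mpegflow
        targetRevision: 1.8.2            # upgrade = bump this in git, then sync
        helm:
          values: |
            # keys here depend on the chart's actual schema
            coordinator:
              replicas: 3
      destination:
        server: https://kubernetes.default.svc
        namespace: mpegflow
      syncPolicy:
        automated:
          prune: true
          selfHeal: true

Rollback is the same motion in reverse: git revert the version bump and let the controller reconcile.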
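
The hybrid-pool pattern is where the WorkerPool CRD earns its keep. The field names below are assumptions for illustration (the operator defines the real schema); the nodeSelector, tolerations, and GPU resource request are the standard Kubernetes mechanics the pattern relies on.

    # Hypothetical WorkerPool resource for an NVENC pool on a GPU node group.
    apiVersion: mpegflow.io/v1alpha1     # assumed API group and version
    kind: WorkerPool
    metadata:
      name: gpu-nvenc
    spec:
      codec: h264_nvenc                  # illustrative field
      minReplicas: 0
      maxReplicas: 24
      template:
        nodeSelector:
          node-group: gpu-t4             # label on the GPU node group
        tolerations:
          - key: nvidia.com/gpu
            operator: Exists
            effect: NoSchedule
        resources:
          limits:
            nvidia.com/gpu: "1"          # one encode worker per GPU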

Pitfalls

  1. PostgreSQL HA on K8s is itself a multi-week project. Pair with managed Postgres (RDS, Cloud SQL, Aiven) instead of running stateful Postgres in-cluster.
  2. NetworkPolicy enforcement requires a CNI that implements it (Calico, Cilium, AWS VPC CNI in NetworkPolicy mode). Default cluster networking accepts the policy objects but doesn't enforce them, so strict-broker security silently falls back to convention. See the NetworkPolicy sketch after this list.
  3. KEDA polling frequency is a tradeoff: too fast wastes API calls; too slow lags scale-up during spike events. Default 30-second polling is good for most production workloads.
  4. Operator leader election relies on lease records stored in the API server, not on pod-local state. ConfigMap-based leader election works but is deprecated and generates noisy API traffic; use coordination.k8s.io Lease objects on any current cluster.
  5. Drain semantics for GPU pods are different: the worker must release the GPU cleanly before the node drains. Set terminationGracePeriodSeconds high enough for in-flight encodes to complete (see the pod-spec fragment after this list).
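
Pitfall 2 is worth making concrete. The strict-broker egress rule described above reduces to a NetworkPolicy shaped roughly like the one below; labels, namespace, ports, and the storage CIDR are placeholders. Without a CNI that enforces NetworkPolicy, the API server accepts this object and nothing changes.

    # Sketch: workers may reach the coordinator, the assigned storage
    # endpoint, and DNS; nothing else.
    apiVersion: networking.k8s.io/v1
    kind: NetworkPolicy
    metadata:
      name: worker-egress
      namespace: mpegflow
    spec:
      podSelector:
        matchLabels:
          app: mpegflow-worker           # assumed worker label
      policyTypes:
        - Egress
      egress:
        - to:
            - podSelector:
                matchLabels:
                  app: mpegflow-coordinator
          ports:
            - protocol: TCP
              port: 8443                 # assumed coordinator port
        - to:
            - ipBlock:
                cidr: 10.40.0.0/16       # placeholder range for the storage endpoint
          ports:
            - protocol: TCP
              port: 443
        - to:
            - namespaceSelector: {}      # cluster DNS
          ports:
            - protocol: UDP
              port: 53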
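
For pitfall 5, the relevant knobs live on the worker pod spec. The numbers, image reference, and preStop command are illustrative; the grace period should exceed the longest encode a worker is allowed to accept.

    # Fragment of a worker pod spec (illustrative values).
    spec:
      terminationGracePeriodSeconds: 1800     # longest in-flight encode you will wait for
      containers:
        - name: worker
          image: registry.example.com/mpegflow/worker:1.8.2   # placeholder image reference
          lifecycle:
            preStop:
              exec:
                # hypothetical drain signal: stop pulling new jobs, finish the current one
                command: ["/bin/sh", "-c", "touch /var/run/mpegflow/draining && sleep 10"]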

At production scale

Single-cluster MpegFlow deployments scale comfortably to 5-10M minutes/month with proper node-group sizing. Above that, the bottleneck shifts from compute to either Redis queue throughput (mitigated by Redis Cluster) or PostgreSQL write throughput (mitigated by partitioning). For 50M+ minutes/month, multi-cluster federation becomes the architecture — and that's where the operator-pattern abstraction pays back, because each cluster runs the same Helm chart with different WorkerPool CRDs.

Topics
  • Kubernetes
  • k8s
  • deployment
  • integration
  • compute
Building this stack?

Talk to us about your specific shape.

The integration patterns above cover most production deployments. If your shape is different — sovereign-cloud, regulated workloads, or scale that needs custom routing — beta cohort design partners get founder-direct help with the integration.
