MpegFlow Blog

Kubernetes

Running video transcoding on Kubernetes means mapping the workload's natural shape (variable throughput, queue-driven, periodic spikes) onto K8s and KEDA primitives. These articles and reference architectures cover the four-pattern climb (Job per encode → worker Deployment + queue → KEDA queue-depth autoscaling → multi-tenant operator), pool-isolation strategies, leader election, drain semantics for upgrades without dropped encodes, and where each pattern stops scaling. Written for SREs and platform engineers running video on K8s.
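As a taste of the queue-depth autoscaling step in that climb, a KEDA `ScaledObject` ties a worker Deployment's replica count to broker queue length. A minimal sketch, assuming a RabbitMQ job queue; the names (`mpegflow-workers`, `transcode-jobs`, `rabbitmq-auth`) are illustrative, not taken from the articles:

```yaml
# Hypothetical sketch: scale transcode workers on queue depth with KEDA.
# Deployment, queue, and auth names are placeholders.
apiVersion: keda.sh/v1alpha1
kind: ScaledObject
metadata:
  name: transcode-workers
spec:
  scaleTargetRef:
    name: mpegflow-workers      # the worker Deployment to scale
  minReplicaCount: 0            # scale to zero when the queue is empty
  maxReplicaCount: 50
  triggers:
    - type: rabbitmq
      metadata:
        queueName: transcode-jobs
        mode: QueueLength
        value: "5"              # target roughly 5 queued encodes per replica
      authenticationRef:
        name: rabbitmq-auth     # TriggerAuthentication holding the broker URL
```

With `minReplicaCount: 0`, KEDA deactivates the Deployment entirely between spikes, which is where the pool-pause cost savings mentioned below come from.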

Reference architectures · 2
  • Cost-aware spot-instance encoder pool
    Production architecture for running video transcoding on AWS Spot, GCP Preemptible, and Azure Spot instances. Interruption-tolerant queue topology, fleet diversification, atomic upload semantics, and the cost math that makes self-hosted video pipelines beat per-minute pricing at scale.
    May 9, 2026
  • Kubernetes deployment with KEDA autoscaling
    Production K8s topology for MpegFlow — API tier, shared workers via Helm, dedicated workers via Operator, KEDA queue-depth autoscaling, leader election, pool pause for cost savings.
    May 5, 2026
Engineering blog · 1
  • FFmpeg in Kubernetes: the pod, queue, and operator pattern
    How to run FFmpeg in Kubernetes at production scale — the four patterns we've watched teams climb, where each breaks, and why a video transcoder Kubernetes operator earns its keep above ~50K jobs/day.
    May 8, 2026
Adjacent topics
  • Autoscaling· 2
  • Operator· 2
  • Spot instances· 1
  • Cost optimization· 2
  • FFmpeg· 3
© 2026 MpegFlow, Inc. · Trust & compliance · Status · Privacy