
MpegFlow with Helm: chart structure and deployment

The MpegFlow Helm chart — what it deploys, how to configure it, and the values.yaml shape that production deployments override. From hello-world to multi-tenant.

Stack integration · Helm

Helm packages MpegFlow's Kubernetes manifests into a single installable unit. The chart deploys:

  • the coordinator API tier (Deployment + Service + Ingress)
  • the worker pools (one Deployment + KEDA ScaledObject per pool)
  • the operator (Deployment + ServiceAccount + RBAC + CRDs)
  • ConfigMaps and Secrets for configuration

One Helm install = a working MpegFlow control plane.
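As a sketch, the chart layout implied above might look like this (file and directory names are illustrative, not the published chart's actual contents):

```
mpegflow/
  Chart.yaml
  values.yaml          # deployment-wide defaults
  crds/                # operator CRDs (installed on first install, never upgraded by Helm)
  charts/
    coordinator/       # API tier: Deployment + Service + Ingress
    workers/           # one Deployment + KEDA ScaledObject per pool
    operator/          # Deployment + ServiceAccount + RBAC
  templates/
    configmap.yaml
    secret.yaml
```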

How the integration works

The chart follows kube-prometheus-stack conventions: a parent chart with embedded subcharts for each component. The top-level values.yaml defines deployment-wide settings (image versions, license JWT, database connection); per-subchart values override component-specific settings. helm install accepts a custom values.yaml, which production deployments commit to a manifest repo.
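A minimal sketch of that values.yaml shape — the keys below are illustrative, not the chart's actual schema:

```yaml
global:
  image:
    tag: "1.8.2"
  licenseJwtSecretRef: mpegflow-license   # reference a Secret; never inline the JWT
  database:
    host: postgres.mpegflow.svc
    port: 5432

coordinator:
  replicas: 3
  ingress:
    host: mpegflow.example.com

workers:
  pools:
    - name: transcode-h264
      minReplicas: 2
      maxReplicas: 50
```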

Common patterns

  • GitOps deployment via ArgoCD/Flux

    The standard pattern: Helm chart referenced in an ArgoCD Application or Flux HelmRelease. Custom values.yaml in the deployment manifest repo. Changes commit to git → Argo/Flux auto-applies. Rollback is a git revert.

  • Per-environment values overlays

    production-values.yaml, staging-values.yaml, dev-values.yaml — different sizing, different image tags, different feature flags. Stack them with repeated --values flags (helm install -f base.yaml -f production.yaml; later files override earlier ones). The same chart serves all environments.

  • Chart hooks for migrations

    Pre-upgrade Helm hooks run database migrations before the new image rolls out. Post-upgrade hooks run smoke tests. Failed hooks block the rollout, preventing schema-mismatch incidents.

  • OCI registry for chart distribution

    Helm 3+ supports OCI registries (Docker Hub, GitHub Container Registry, ECR) for chart distribution. We publish MpegFlow charts to ghcr.io for design partners; Enterprise tier customers get private registries.
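A sketch of the GitOps pattern: an ArgoCD Application that points at the chart in a manifest repo and applies a per-environment values file. Repo URL, paths, and namespaces here are illustrative:

```yaml
apiVersion: argoproj.io/v1alpha1
kind: Application
metadata:
  name: mpegflow
  namespace: argocd
spec:
  project: default
  source:
    repoURL: https://github.com/example/deploy-manifests
    path: mpegflow
    targetRevision: main
    helm:
      valueFiles:
        - production-values.yaml
  destination:
    server: https://kubernetes.default.svc
    namespace: mpegflow
  syncPolicy:
    automated:
      prune: true      # delete resources removed from git
      selfHeal: true   # revert manual cluster drift
```

With this in place, rollback really is a git revert: Argo reconciles the cluster back to the previous chart version and values.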
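The migration-hook pattern can be sketched as a pre-upgrade Job; the helm.sh/hook annotations are standard Helm, but the image and migration command below are assumptions:

```yaml
apiVersion: batch/v1
kind: Job
metadata:
  name: mpegflow-db-migrate
  annotations:
    "helm.sh/hook": pre-upgrade
    "helm.sh/hook-weight": "0"
    "helm.sh/hook-delete-policy": before-hook-creation,hook-succeeded
spec:
  backoffLimit: 0          # a failed migration fails the hook and blocks the rollout
  template:
    spec:
      restartPolicy: Never
      containers:
        - name: migrate
          image: ghcr.io/example/mpegflow-migrate:1.8.2
          command: ["mpegflow-migrate", "up"]
```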
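Pulling and installing a chart from an OCI registry uses standard Helm 3.8+ commands; the ghcr.io path shown is illustrative:

```shell
helm registry login ghcr.io
helm pull oci://ghcr.io/mpegflow/charts/mpegflow --version 1.8.2
helm install mpegflow oci://ghcr.io/mpegflow/charts/mpegflow --version 1.8.2
```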

Pitfalls

  1. CRDs in Helm charts are tricky: Helm installs CRDs from the crds/ directory on first install but never upgrades or deletes them. Install CRDs separately or use helm.sh/hook annotations for an explicit lifecycle.
  2. Secrets in values.yaml: never commit secrets to the manifest repo. Use external-secrets-operator or sealed-secrets to inject secrets at deploy time.
  3. Chart version drift: pin chart versions in your manifest repo. Auto-update via Renovate or Dependabot, but never auto-merge — review every chart upgrade.
  4. Helm rollback can leave orphaned resources. For complex stateful upgrades, helm rollback isn't always sufficient; sometimes the answer is "deploy a new release alongside, switch traffic, delete the old one".
  5. Helm values.yaml inheritance through subchart imports gets complex. Document the override paths explicitly; production teams shouldn't have to read the chart source to know what to set.
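For pitfall 2, a sketch of injecting the license Secret with external-secrets-operator rather than committing it to the manifest repo — the SecretStore name and remote key path are assumptions:

```yaml
apiVersion: external-secrets.io/v1beta1
kind: ExternalSecret
metadata:
  name: mpegflow-license
spec:
  refreshInterval: 1h
  secretStoreRef:
    name: aws-secrets-manager   # assumed ClusterSecretStore
    kind: ClusterSecretStore
  target:
    name: mpegflow-license      # the Secret name the chart's values reference
  data:
    - secretKey: license.jwt
      remoteRef:
        key: prod/mpegflow/license
```

The manifest repo then contains only this reference; the secret material itself lives in the external store and is materialized at deploy time.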

At production scale

Helm-managed MpegFlow deployments scale from single-cluster installs to multi-cluster federated Enterprise tiers. The chart accommodates hundreds of WorkerPool custom resources without Helm-side performance issues; for very large deployments (1000+ pools), the operator's reconcile loop becomes the bottleneck before Helm does.

Topics
  • Helm
  • Kubernetes
  • deployment
  • tooling
  • integration
Building this stack?

Talk to us about your specific shape.

The integration patterns above cover most production deployments. If your shape is different — sovereign-cloud, regulated workloads, or scale that needs custom routing — beta cohort design partners get founder-direct help with the integration.
