Frequently asked · MpegFlow
Honest answers.
Questions teams actually ask before deploying MpegFlow. Product scope, codec coverage, security, deployment, pricing, comparison to other vendors, and how to start. If something you need isn't here, the contact form goes to a real human.
01 Product.
01 What is MpegFlow?
MpegFlow is a video pipeline engine that models transcoding and streaming workflows as a directed acyclic graph (DAG). Each stage — ingest, encode, QC, package, deliver — runs on a fleet of FFmpeg workers with retries, webhooks, and a per-job audit trail. The same binary runs as managed SaaS or self-hosted in your own datacenter.
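As a sketch of the DAG model described above (the stage names and schema here are illustrative, not MpegFlow's actual API), each stage declares its dependencies and the runtime derives a valid execution order:

```python
# Minimal sketch of a pipeline DAG in the spirit of the model above.
# Stage names and the dict-based schema are illustrative only.
from graphlib import TopologicalSorter

stages = {
    "ingest":  [],            # each stage lists its dependencies
    "encode":  ["ingest"],
    "qc":      ["encode"],
    "package": ["qc"],
    "deliver": ["package"],
}

def execution_order(dag):
    """Return a valid stage order; raises CycleError if the graph has a cycle."""
    return list(TopologicalSorter(dag).static_order())

order = execution_order(stages)   # ingest runs first, deliver last
```

The acyclicity guarantee is what makes retries and partial success tractable: a failed stage can re-run without re-running anything downstream of a stage that already succeeded.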
02 How is MpegFlow different from running FFmpeg directly?
FFmpeg is the encoding workhorse — MpegFlow is the operational layer around it. We provide the queue, retry semantics by failure class, partial-success handling on ABR ladders, encoder-version pinning, multi-tenant security, and per-job audit trail. You bring the FFmpeg knowledge (codec choices, presets, QC rules); MpegFlow handles the orchestration that every video team otherwise rebuilds from scratch. The full reasoning is in our post on running FFmpeg at scale.
03 Who is MpegFlow for?
Engineering teams running video infrastructure at broadcast, OTT, archive, or premium VOD scale. Specifically: teams transcoding 10K+ minutes per month, dealing with ABR ladder generation, multi-rendition outputs, audit and compliance requirements, or self-hosted deployment needs. If your video volume is under 1K minutes/month or your workload is consumer UGC, managed services like Mux or Cloudflare Stream are usually a better fit.
04 Is MpegFlow open source?
MpegFlow is closed-source today, with an open-core path on the roadmap. The control plane (orchestration, audit, queues) is proprietary; we are evaluating open-sourcing select components — particularly the Kubernetes operator and a few worker-side tools — once the core product is stable. Note: there is a separate, unrelated MIT-licensed tool also called "mpegflow" on GitHub by Vadim Kantorov for motion-vector extraction — that is not us. If open source is a hard procurement requirement today, talk to us about scope and timing.
05 Where is my video data stored when using MpegFlow?
Your data stays in your storage. MpegFlow workers receive coordinator-issued presigned URLs to read mezzanine assets and write outputs directly to your S3-compatible bucket (AWS S3, Cloudflare R2, GCS via interop, MinIO, on-prem object storage). Our control plane never relays your bytes; it stores only job metadata, audit logs, and pipeline definitions. EU data residency is supported via region-pinned coordinator deployment; self-hosted runs entirely in your own datacenter or VPC, with zero data leaving your perimeter. The full pattern is documented in the strict-broker security architecture.
02 Technical.
01 What codecs does MpegFlow support today?
H.264 and HEVC are fully supported in production. VP9 is partial. AV1 is on the roadmap for 2026 Q4. For HDR pipelines, HDR10 and HLG are supported through HEVC; Dolby Vision is on the roadmap. If your specific codec or container requirement isn't named here, ask us during onboarding — we either support it via FFmpeg directly or can scope adding it.
02 Does MpegFlow handle ABR ladder generation and per-rendition retry?
Yes. ABR ladder generation is first-class — define the rendition matrix in your workflow DAG, MpegFlow runs each rendition as a separate stage. If rendition 4 of 6 fails (typically OOM on a difficult input), only that rendition retries on a higher-memory pool. The other 5 stay encoded. The package stage waits for all to complete or fails deterministically with a clear "rendition X failed" event you can route to QC.
03 What about live streaming, DRM packaging, and captions?
These are on the roadmap. Live ingest (SRT, RTMP) ships in 2026 Q3 with DAG runtime extensions for streaming workloads. DRM packaging (Widevine, FairPlay, PlayReady via SPEKE) and EBU-TT captions ship in 2026 Q4. For DRM today, design partners pair MpegFlow with established providers like Vualto or EZDRM until native packaging arrives.
04 How does MpegFlow handle multi-tenant security?
Through a strict-broker pattern: workers have zero credentials. Every file transfer goes through coordinator-issued presigned URLs with one-hour TTLs. A successful exploit of a worker process cannot reach other tenants' data because the worker has no path to it. Outbound webhooks are HMAC-SHA256 signed with replay-prevention timestamps. Every job records full provenance (encoder version, container hash, parameters, input/output hashes) in append-only audit logs. Full architecture: strict-broker reference.
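On the receiving side, verifying a signed webhook looks roughly like this (a sketch using Python's standard library; the exact header layout and message format MpegFlow uses are assumptions here, not documented API):

```python
# Sketch: HMAC-SHA256 webhook verification with timestamp replay
# prevention. The timestamp-binding scheme is illustrative.
import hmac, hashlib, time

def verify_webhook(secret: bytes, body: bytes, signature_hex: str,
                   timestamp: int, max_skew: int = 300) -> bool:
    if abs(time.time() - timestamp) > max_skew:   # reject replayed events
        return False
    msg = str(timestamp).encode() + b"." + body   # bind timestamp to payload
    expected = hmac.new(secret, msg, hashlib.sha256).hexdigest()
    return hmac.compare_digest(expected, signature_hex)  # constant-time compare
```

Signing the timestamp together with the body is what makes replay prevention work: an attacker cannot re-send a captured event later with a fresh timestamp, because the signature would no longer match.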
05 Do you support GPU acceleration?
Yes. NVIDIA T4 and A10 are production-tested for NVENC encoding. GPU pools run as a separate worker pool from CPU pools — you choose which workloads dispatch to which based on quality vs throughput trade-offs. For premium VOD where quality matters most, libx264/libx265 on CPU is typically the right call. For high-throughput live or archive workloads, GPU is 5-20× faster per dollar.
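The dispatch rule above reduces to a small decision function. This is only a toy encoding of the trade-off (the pool names and priority labels are made up for illustration):

```python
# Toy pool-selection rule mirroring the quality-vs-throughput trade-off:
# CPU (libx264/libx265) for quality-first VOD, GPU (NVENC) for throughput.
def choose_pool(workload: str, priority: str) -> str:
    if priority == "quality":
        return "cpu-x265"        # premium VOD: best quality per bit
    if workload in ("live", "archive"):
        return "gpu-nvenc"       # throughput-first: 5-20x faster per dollar
    return "cpu-x264"            # default: broad compatibility on CPU
```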
06 How does MpegFlow handle 4K and HDR transcoding?
4K transcoding works today with both H.264 and HEVC encoders, on CPU pools or NVIDIA GPU pools. HDR10 and HLG metadata are preserved through the HEVC pipeline; HDR-aware ABR ladders generate SDR fallback renditions automatically when configured. Dolby Vision (profile 5 and profile 8.4 backward-compatible) is on the roadmap for 2026 Q4. For premium VOD pipelines that require HDR output validation, an extra QC stage in the workflow DAG runs HDR metadata checks before package. The end-to-end shape is documented in the broadcast-grade VOD architecture.
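An HDR QC stage of the kind mentioned above amounts to checking that the required metadata survived the transcode. The metadata dict shape and field names below are illustrative, not MpegFlow's actual schema:

```python
# Hypothetical pre-package QC check: did HDR10 metadata survive transcode?
# Field names follow common HEVC/HDR10 terminology but are illustrative.
REQUIRED_HDR10 = {
    "color_primaries": "bt2020",
    "transfer": "smpte2084",     # PQ transfer function
    "master_display": None,      # None = any value accepted, must be present
    "max_cll": None,
}

def hdr10_qc(meta: dict) -> list[str]:
    """Return a list of problems; an empty list means the rendition passes."""
    problems = []
    for key, want in REQUIRED_HDR10.items():
        if key not in meta:
            problems.append(f"missing {key}")
        elif want is not None and meta[key] != want:
            problems.append(f"{key}={meta[key]!r}, expected {want!r}")
    return problems
```

Routing a non-empty problem list to a failure event is what lets the package stage refuse to ship a rendition that silently lost its HDR signaling.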
07 Does MpegFlow expose a REST API or CLI for CI/CD integration?
Yes. The same DAG workflow that powers the runtime is REST-callable today, with gRPC for typed integrations and a CLI shipping alongside the encoder MVP. Webhooks (HMAC-SHA256 signed with timestamp replay-prevention) deliver job lifecycle events to your CI/CD pipeline. The MpegFlow CLI follows kubectl conventions: declarative YAML or JSON workflow definitions, idempotent apply, per-environment overrides, and dry-run support. Most teams integrate MpegFlow into existing GitOps workflows by checking workflow definitions into the same repo as their application code.
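The idempotent-apply shape can be sketched as follows. The endpoint path, header name, and payload schema here are assumptions for illustration, not the documented API; the point is that the same workflow definition always maps to the same idempotency key, so re-applying from CI is a no-op:

```python
# Sketch of building an idempotent "apply" request for a REST workflow API.
# URL path, header name, and schema are hypothetical.
import json, hashlib

def build_apply_request(workflow: dict, base_url: str):
    body = json.dumps(workflow, sort_keys=True).encode()  # canonical encoding
    # Deterministic key: identical definitions -> identical key.
    key = hashlib.sha256(body).hexdigest()
    headers = {"Content-Type": "application/json", "Idempotency-Key": key}
    return f"{base_url}/v1/workflows", headers, body
```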
08 How do I monitor MpegFlow jobs in Prometheus, Datadog, or Grafana?
MpegFlow exports metrics in OpenMetrics format that Prometheus scrapes natively; Datadog ingests via its OpenMetrics check; Grafana dashboards consume either source. Job-level metrics include per-stage duration, retry counts by failure class, queue depth per pool, encoder utilization, and per-tenant cost breakdown. Distributed traces export via OpenTelemetry to any OTLP-compatible backend (Jaeger, Tempo, Honeycomb, Datadog APM). The append-only audit log streams as structured JSON events to any HTTP-receiving sink — your SIEM, log aggregator, or webhook.
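For a concrete sense of what a scrape target emits, a single OpenMetrics-style sample line looks like this (the metric and label names below are illustrative, not MpegFlow's actual metric set):

```python
# Render one OpenMetrics/Prometheus-style sample line.
# Metric and label names are illustrative.
def render_counter(name: str, labels: dict, value: float) -> str:
    lbl = ",".join(f'{k}="{v}"' for k, v in sorted(labels.items()))
    return f"{name}{{{lbl}}} {value}"

line = render_counter("mpegflow_stage_retries_total",
                      {"stage": "encode", "failure_class": "oom"}, 3)
# mpegflow_stage_retries_total{failure_class="oom",stage="encode"} 3
```

The "retry counts by failure class" metric mentioned above would surface exactly this way: one labeled series per (stage, failure class) pair, ready for a Grafana rate() panel.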
03 Deployment.
01 Can I self-host MpegFlow?
Yes — and the same binary runs as SaaS or self-hosted, so validating on managed and graduating to self-hosted is a config change rather than a rewrite. The self-hosted distribution ships as a packaged product in 2026 Q4 with a Helm chart and signed bundle. Air-gapped deployments are supported for compliance-restricted workflows. Self-hosted licensing is flat-fee per cluster, so your costs converge to your hardware bill at scale instead of growing linearly per-minute. The full economics: self-hosted vs managed.
02 Can I migrate from AWS Elemental MediaConvert to MpegFlow?
Yes — and we map your existing job templates to MpegFlow workflow DAGs as part of design-partner onboarding. The bulk of MediaConvert deployments translate cleanly: ABR ladder definitions, output groups, and rendition matrices map to DAG stages with the same parameter shape. Audio normalization (loudness, bitrate, channel layout) and caption handling map to dedicated stages. Two MediaConvert patterns need attention during migration: queue priority flags translate to MpegFlow pool-based isolation (more flexible but conceptually different), and S3 trigger wiring translates to webhook events (your existing Lambda receivers can stay in place with minor signature-verification changes). The full migration matrix is on the AWS MediaConvert comparison page.
03 Is MpegFlow Kubernetes-native? What about multi-region?
MpegFlow runs Kubernetes-native: API tier scales via HPA, worker pools autoscale via KEDA based on Redis queue depth, dedicated pools are managed by the MpegFlow Operator with leader election. Multi-region failover is supported with active-active or active-passive patterns, async storage replication, and multi-CDN routing. References: K8s + KEDA deployment and multi-region failover.
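Queue-depth-based autoscaling of the kind KEDA performs reduces to simple math: one replica per N queued jobs, clamped to a floor and ceiling. The numbers below are illustrative defaults, not MpegFlow's shipped configuration:

```python
# Back-of-envelope KEDA-style scaling from Redis queue depth.
# jobs_per_replica, lo, and hi are illustrative placeholder values.
import math

def desired_replicas(queue_depth: int, jobs_per_replica: int = 5,
                     lo: int = 1, hi: int = 50) -> int:
    return max(lo, min(hi, math.ceil(queue_depth / jobs_per_replica)))
```

At a queue depth of 23 with 5 jobs per replica this asks for 5 workers; an empty queue still keeps the floor of 1 warm, and a spike can never exceed the ceiling.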
04 Commercial.
01 What's the pricing for MpegFlow?
During beta, the cohort runs without billing while we tune the encoder MVP. At GA we ship a public rate card. Self-hosted licensing will be flat-fee per cluster — not per-minute, so cost converges to your hardware bill at scale instead of growing linearly with volume, which is the math that makes managed services uncomfortable for high-volume workloads.
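The "converges to your hardware bill" claim is a simple crossover calculation. All dollar figures below are made-up placeholders, not MpegFlow pricing:

```python
# Crossover point between a flat cluster fee and per-minute billing.
# Dollar figures are illustrative placeholders only.
def breakeven_minutes(flat_fee_per_month: float, per_minute_rate: float) -> float:
    """Monthly minutes above which a flat fee beats per-minute billing."""
    return flat_fee_per_month / per_minute_rate

# e.g. a $4,000/mo flat fee vs $0.02/min breaks even at 200,000 min/month;
# above that, the flat fee wins and the marginal minute costs only hardware.
```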
02 When will MpegFlow be SOC 2 Type II compliant?
The SOC 2 Type II audit window opens in 2026 Q4. We are operating to SOC 2 controls today; the formal Type II report will be issued after the standard 6-month observation period. Until that report is in hand, design partner deployments operate under a bilateral NDA and DPA. If your procurement requires a Type II report before vendor onboarding, we are honest about where we are: the design partner program is the right path forward. Full compliance posture lives at /trust.
03 Is MpegFlow GDPR compliant? What about HIPAA?
GDPR: yes — EU-region deployment is available, sub-processors are publicly disclosed at /trust, the DPA template is available on request, and Article 28 commitments are matched. HIPAA: not by default, but our self-hosted pattern with a dedicated cluster plus BAA-signed sub-processors is HIPAA-eligible for healthcare-adjacent video (medical training, telehealth recordings, clinical archives). Talk to us during onboarding for the specific deployment shape that satisfies your compliance officer; the controls we operate to today and our SOC 2 Type II audit timeline are documented on the trust page.
04 What happens to my data if I cancel MpegFlow?
Because your video data lives in your storage, cancellation does not affect it — we never had it. Your control-plane metadata (jobs, workflows, audit logs) is exported as JSON via the API or the dashboard, and the export format is documented and stable. Self-hosted is even cleaner: cancelling means stopping the license renewal; the binary continues running locally until the license JWT expires (typically a 30-day grace period). There is no proprietary lock-in for your media: the audit trail format is documented, the workflow DAG is JSON, and the FFmpeg invocations are reproducible standalone if you ever need to walk away from MpegFlow entirely.
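An append-only audit trail is typically made tamper-evident by chaining record hashes, so an exported log can be verified offline. The sketch below shows the general technique; it is not MpegFlow's exact on-disk or export format:

```python
# Illustrative hash-chained audit log: each record commits to the previous
# record's hash, so any tampering is detectable on export. Not MpegFlow's
# actual format.
import hashlib, json

GENESIS = "0" * 64

def append_record(chain: list, event: dict) -> list:
    prev = chain[-1]["hash"] if chain else GENESIS
    body = json.dumps(event, sort_keys=True)          # canonical encoding
    digest = hashlib.sha256((prev + body).encode()).hexdigest()
    chain.append({"event": event, "prev": prev, "hash": digest})
    return chain

def verify_chain(chain: list) -> bool:
    prev = GENESIS
    for rec in chain:
        body = json.dumps(rec["event"], sort_keys=True)
        if rec["prev"] != prev or \
           hashlib.sha256((prev + body).encode()).hexdigest() != rec["hash"]:
            return False
        prev = rec["hash"]
    return True
```

Because verification needs only the exported JSON and a hash function, the audit trail remains checkable even after you stop being a customer.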
05 How does MpegFlow compare to Bitmovin, AWS Elemental, or Mux?
Each tool has different strengths. AWS Elemental MediaConvert wins for AWS-native ecosystems and "submit job, get output" workflows. Bitmovin wins for production AV1, deep DRM coverage, and enterprise procurement maturity. Mux wins for developer ergonomics and bundled streaming + analytics. MpegFlow wins for declarative pipelines (DAG runtime), multi-cloud or self-hosted parity, and an audit trail that's the primary data structure rather than a feature on top. We also have an honest Bitmovin vs Mux head-to-head for teams choosing between those two.
05 Get started.
01 What's included in the beta cohort?
Beta members are first in line for the encoder MVP shipping this quarter — the FFmpeg pipeline, ABR ladder generation, S3 push, public REST/gRPC API, and webhook integration. No card required and no console queue: you get an email when your slot opens. A public roadmap at /architectures and across the blog describes what is built versus what is planned, so you can plan against reality.
02 What's the design partner program?
A small cohort (3-5 teams) of broadcast and OTT engineering teams deploying MpegFlow ahead of GA. Design partners get free access during beta, white-glove onboarding from the founder, real influence on roadmap, and a direct line to engineering. In return: a real production workload, biweekly feedback check-ins, and permission for an eventual reference once you are happily deployed. Apply via the design partner program page or the contact form with topic "Beta access".
03 How do I get started?
Three paths depending on team shape. Engineers exploring the space: read the architectures and blog. Teams ready to evaluate: book a demo via the contact form. Engineering teams at broadcasters or OTT operators wanting to deploy ahead of GA: apply to the design partner program. All three paths are linked from the landing page.
Question not answered?
Ask us directly.
A real human reads the contact form and replies within 24 hours on weekdays. Most questions get a faster answer in a 20-minute call; we include a scheduling link in the reply.