Per-title encoding is the technique Netflix pioneered for matching encoding decisions to content characteristics rather than using one-size-fits-all bitrate ladders. The basic insight: a high-motion sports clip needs different encoding parameters than a static talking-head clip to achieve the same perceptual quality. Encoding both at the same bitrate wastes bandwidth on the easy content and starves the hard content. Per-title encoding analyzes each content asset and tunes the encoding decisions per asset, often saving 20-40% bandwidth at equivalent perceptual quality. This page is the engineering reference.
What per-title encoding is
Traditional ABR ladder design picks a fixed set of (resolution, bitrate) pairs that everyone uses for all content. Per-title encoding picks the (resolution, bitrate) ladder dynamically based on the specific content being encoded.
The basic per-title workflow:
- Analyze — encode the source content at multiple bitrates, measure quality (typically VMAF) at each.
- Choose — select bitrates that hit target quality thresholds (e.g., the bitrate at which VMAF first crosses 93 = top tier; the bitrate that crosses 85 = mid tier; etc.).
- Encode — produce final ABR ladder using the chosen bitrates.
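The choose step above reduces to a small amount of arithmetic over the measured rate-quality points. A minimal sketch, with synthetic numbers and a hypothetical helper name:

```python
def pick_rungs(measurements, tier_thresholds):
    """Pick the cheapest measured bitrate whose VMAF meets each tier's target.

    measurements: list of (bitrate_kbps, vmaf) pairs, any order.
    tier_thresholds: dict of tier name -> minimum acceptable VMAF.
    Returns dict of tier name -> chosen bitrate_kbps (None if unreachable).
    """
    by_rate = sorted(measurements)  # ascending bitrate
    ladder = {}
    for tier, target in tier_thresholds.items():
        ladder[tier] = next(
            (rate for rate, vmaf in by_rate if vmaf >= target), None
        )
    return ladder

# Synthetic rate-quality curve for an "easy" talking-head asset.
curve = [(400, 78.0), (800, 86.5), (1500, 91.2), (2500, 93.8), (4000, 95.1)]
print(pick_rungs(curve, {"top": 93, "mid": 85, "floor": 75}))
# {'top': 2500, 'mid': 800, 'floor': 400}
```

For hard content the same thresholds land on higher bitrates, which is the whole point: the thresholds stay fixed, the bitrates move per asset.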
For easy content (talking heads, animation, low motion), the chosen bitrates are lower than the universal-ladder defaults — savings without quality loss. For hard content (sports, high-motion action, lots of grain), the chosen bitrates are higher — preserved quality on content where the universal ladder would have under-budgeted.
The aggregate benefit: same perceptual quality as universal ladder at meaningfully lower bandwidth (Netflix's reported number is ~20% on average; specific content can see 40%+).
Netflix's per-title methodology
Netflix published their original per-title methodology in a 2015 blog post. The core procedure:
- Encode at multiple bitrates spanning the expected quality range (e.g., 200 kbps to 5 Mbps).
- Compute VMAF at each bitrate.
- Plot bitrate vs VMAF as a per-title rate-quality curve.
- Identify the "convex hull" — pooling the rate-quality points across resolutions, the frontier of points that deliver the most quality for the least bandwidth; everything below the hull is dominated by some other (resolution, bitrate) choice.
- Pick ladder rungs from the convex hull at target VMAF values.
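The hull step can be sketched with a standard monotone-chain upper hull over the pooled points. The sample numbers below are illustrative, not Netflix's:

```python
def upper_hull(points):
    """Upper convex hull of (bitrate_kbps, vmaf) points pooled across resolutions.

    Hull points are the efficient frontier: no other point gives more
    quality for the same or less bitrate.
    """
    pts = sorted(set(points))
    hull = []
    for p in pts:
        # Pop the last hull point while it is on or below the chord from
        # hull[-2] to p (non-right turn => dominated).
        while len(hull) >= 2:
            (x1, y1), (x2, y2) = hull[-2], hull[-1]
            if (x2 - x1) * (p[1] - y1) - (y2 - y1) * (p[0] - x1) >= 0:
                hull.pop()
            else:
                break
        hull.append(p)
    return hull

# Pooled samples from two resolutions of one title:
# 720p:  (800, 85), (1500, 91), (2500, 93.5)
# 1080p: (1500, 88), (2500, 94), (4000, 96)
pooled = [(800, 85), (1500, 91), (2500, 93.5), (1500, 88), (2500, 94), (4000, 96)]
print(upper_hull(pooled))
# [(800, 85), (1500, 91), (2500, 94), (4000, 96)]
```

Note how 720p wins at 1500 kbps and 1080p takes over at 2500 kbps: the hull encodes the per-title resolution switch points as well as the rungs.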
Netflix evolved this approach over time:
- Per-title (2015) — one analysis pass per asset.
- Per-shot (2017) — analyze each shot separately, allowing different encoding decisions for shots within an asset. Maximum granularity but operationally expensive.
- Dynamic Optimizer (2018) — sophisticated optimization that combines per-shot analysis with global bandwidth/quality optimization across the asset.
The progression is increasing operational complexity for diminishing additional savings. Per-title gets most of the benefit; per-shot gets a bit more; Dynamic Optimizer gets a bit more again. Most production deployments stop at per-title.
The encoding cost tradeoff
Per-title encoding requires multiple encoding passes per asset:
- Analysis encodes — 5-10 different bitrates to characterize the rate-quality curve.
- Final ladder encodes — the production ladder rungs.
Total compute: 2-3x what universal-ladder encoding requires. The bandwidth savings (20-40%) usually justify this for streaming services with significant bandwidth bills, but the ROI math depends on:
- Streaming bandwidth cost — at $0.05/GB CDN cost, 30% bandwidth savings on 100 PB/year of streaming = $1.5M/year saved. Encoding compute at $0.10/CPU-hour for the additional analysis = $50K/year extra. Strong ROI.
- Streaming bandwidth cost (lower) — at $0.005/GB (negotiated CDN tier), the savings shrink and the math gets tighter.
- Content reuse — content encoded once and streamed many times gets full ROI; content streamed rarely doesn't.
- Catalog turnover — content that ages out quickly limits the window for amortizing encoding cost.
For VOD services with high streaming volume per asset, per-title is strongly justified. For low-volume archives or transient content, the ROI math can favor universal ladders.
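The ROI arithmetic in the bullets above is simple enough to write down directly (all figures illustrative; plug in your own CDN rates and traffic):

```python
def per_title_roi(streamed_pb_per_year, cdn_cost_per_gb,
                  bandwidth_savings, extra_compute_cost_per_year):
    """Annual net savings from per-title encoding. 1 PB = 1e6 GB (decimal)."""
    gross = streamed_pb_per_year * 1e6 * cdn_cost_per_gb * bandwidth_savings
    return gross - extra_compute_cost_per_year

# Worked example from the text: $0.05/GB, 100 PB/yr, 30% savings, $50K compute.
print(per_title_roi(100, 0.05, 0.30, 50_000))   # ~ $1.45M net per year
# Same traffic at a negotiated $0.005/GB tier: the math gets much tighter.
print(per_title_roi(100, 0.005, 0.30, 50_000))  # ~ $100K net per year
```

The crossover to "not worth it" arrives quickly as per-GB cost or per-asset streaming volume drops, which is why the content-reuse and catalog-turnover bullets matter as much as the headline savings percentage.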
Implementation patterns
The actual implementation of per-title varies in sophistication:
Two-pass per-title
Simplest approach: analyze at a few bitrates, pick ladder bitrates, encode the production ladder. Total compute roughly 1.5-2x universal-ladder.
Convex-hull per-title
Netflix's published approach: analyze at many bitrates, fit the rate-quality curve, identify the convex hull, pick ladder rungs from the hull. More compute (2-3x), more thorough optimization.
Per-shot
Analyze and optimize per-shot rather than per-asset. Requires shot detection (typically scene-change detection), per-shot rate-quality analysis, per-shot ladder decisions, and packaging that can stitch the per-shot encodes back into a continuous stream. Significantly more complex; ~5x universal-ladder compute.
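The shot-detection prerequisite is usually scene-change detection on inter-frame difference scores. A toy sketch over precomputed per-frame scores (real pipelines use the encoder's or FFmpeg's scene detection; the threshold is illustrative):

```python
def split_into_shots(frame_diffs, threshold=0.4):
    """Split frame indices into shots wherever inter-frame difference spikes.

    frame_diffs[i] is a normalized difference between frame i and i-1
    (e.g., a scene-change score in [0, 1]). A value above threshold
    starts a new shot. Returns (first_frame, last_frame) index pairs.
    """
    shots, start = [], 0
    for i, d in enumerate(frame_diffs[1:], start=1):
        if d > threshold:
            shots.append((start, i - 1))
            start = i
    shots.append((start, len(frame_diffs) - 1))
    return shots

diffs = [0.0, 0.02, 0.05, 0.6, 0.03, 0.01, 0.7, 0.02]
print(split_into_shots(diffs))  # [(0, 2), (3, 5), (6, 7)]
```

Each resulting shot then gets its own rate-quality analysis and ladder decision, which is where the ~5x compute multiplier comes from.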
Per-shot with dynamic optimization
The frontier of complexity. Considers cross-shot quality consistency, viewer experience over time, and bandwidth budget allocation across the full asset. Most of the published work here comes from Netflix's Dynamic Optimizer publications, with commercial implementations in products like Bitmovin, Mux, and Cloudflare Stream.
For most pipelines starting per-title, the convex-hull per-title approach is the sweet spot. It gets ~80% of the benefit at ~50% of the operational complexity of per-shot.
VMAF as the perceptual signal
Per-title encoding's whole logic depends on a quality metric that correlates with perception — otherwise the bitrate selection is optimizing for the wrong thing. PSNR-based per-title saves bandwidth but can produce bitrate selections that look worse than universal ladders to actual viewers.
VMAF is the standard signal for per-title encoding in 2026. The threshold-based selection (e.g., "find the bitrate at which VMAF crosses 93") gives perceptually-aligned ladder rungs.
For higher precision, some implementations use:
- VMAF-NEG — the "no enhancement gain" variant; prevents pre-processing tricks (sharpening, contrast boosts) from inflating scores, which matters when comparing encoders or tuned pipelines.
- VMAF-4K — for 4K content, accounts for viewing distance differences.
- Multi-metric — VMAF for perceptual, PSNR/SSIM as cross-validation.
The threshold values matter and vary by content tier:
- 4K top tier — VMAF 95+ target.
- HD top tier — VMAF 93+ target.
- HD mid tier — VMAF 88+ target.
- HD floor tier — VMAF 80+ target.
- Mobile-only floor — VMAF 70+ target.
These thresholds are content-aware service-level decisions; they should be calibrated against subjective quality testing on your actual content.
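Because the analysis encodes sample the curve at discrete bitrates, many implementations interpolate to find where VMAF actually crosses a tier's target rather than snapping to a measured point. A linear-interpolation sketch (threshold and curve values illustrative; calibrate against your own content):

```python
def crossing_bitrate(curve, target_vmaf):
    """Linearly interpolate the bitrate at which VMAF first reaches target.

    curve: list of (bitrate_kbps, vmaf) pairs, ascending bitrate.
    Returns None if the target is never reached on the measured range.
    """
    prev_rate, prev_vmaf = curve[0]
    if prev_vmaf >= target_vmaf:
        return prev_rate
    for rate, vmaf in curve[1:]:
        if vmaf >= target_vmaf:
            # Interpolate between the two bracketing samples.
            frac = (target_vmaf - prev_vmaf) / (vmaf - prev_vmaf)
            return prev_rate + frac * (rate - prev_rate)
        prev_rate, prev_vmaf = rate, vmaf
    return None

curve = [(800, 80.0), (1500, 88.0), (2500, 92.0), (4000, 95.0)]
print(crossing_bitrate(curve, 93))  # 3000.0 — between the 2500 and 4000 samples
```

Linear interpolation is an approximation (rate-quality curves are concave), so interpolated rungs are usually rounded up slightly or verified with one confirmation encode.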
Operational considerations
The things that matter for per-title in production:
- Compute budget — analysis encodes are expensive. For a service encoding 1000 hours of new content per day, per-title adds significant compute cost. Plan capacity accordingly.
- Storage of analysis encodes — analysis encodes are typically discarded after the production ladder is determined. Storage is transient but non-zero.
- Latency to publish — per-title takes longer than universal ladder encoding. For VOD with patient publication windows (24+ hours), this is fine. For live content publishing within minutes of completion, per-title may not fit.
- Catalog reprocessing — when you change your VMAF target thresholds, you need to re-process the catalog with the new targets. Old analysis data may or may not be reusable depending on what changed.
- Cross-encoder consistency — if you change encoders (x265 to SVT-AV1, etc.), per-title decisions made with the old encoder are no longer valid. New analysis required.
- Quality validation — per-title decisions should be periodically validated against actual subjective quality (golden-eyes review on a sample). VMAF-driven decisions trusted blindly for years can drift out of alignment as content patterns evolve.
Per-title for live content
Per-title doesn't apply directly to live encoding — there's no analysis window before the content needs to ship. However, related techniques exist:
- Per-content-class — categorize content (sports, news, talk show, etc.) and apply class-specific ladder decisions. Less granular than per-title but applicable to live.
- Per-shot for live — shot-level analysis with a few-second analysis window. Adds latency but possible for tolerant live use cases.
- Dynamic rate allocation — within an established ABR ladder, allocate bitrate per-segment based on content complexity. Keeps the ladder structure but tunes within it.
Most live workflows use universal ladders calibrated for the typical content of the workflow rather than per-title. The per-title approach is primarily VOD territory.
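The dynamic-rate-allocation idea above can be sketched as redistributing a rung's fixed budget across segments by relative complexity (complexity scores and clamping bounds are illustrative):

```python
def allocate_segment_bitrates(complexities, avg_bitrate_kbps,
                              floor_frac=0.5, ceil_frac=2.0):
    """Split a rung's average bitrate across segments by relative complexity.

    Each segment gets bitrate proportional to its complexity score, clamped
    to [floor_frac, ceil_frac] x the rung average so the player's buffer
    model stays sane. Clamping means the total can drift from the exact
    budget; real implementations re-normalize or carry the residual forward.
    """
    mean_c = sum(complexities) / len(complexities)
    lo = floor_frac * avg_bitrate_kbps
    hi = ceil_frac * avg_bitrate_kbps
    return [min(hi, max(lo, avg_bitrate_kbps * c / mean_c))
            for c in complexities]

# A quiet dialogue segment, two mid segments, one high-motion segment.
print(allocate_segment_bitrates([0.2, 1.0, 1.0, 2.6], 3000))
# roughly [1500, 2500, 2500, 6000] kbps
```

The ladder the player sees is unchanged; only the per-segment spend inside each rung moves with content complexity, which is what makes this variant live-compatible.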
Per-title vs adaptive bitrate ladder design
Per-title and ABR ladder design interact:
- Universal ladder design — what bitrates do you offer when not doing per-title? This determines the baseline against which per-title savings are measured.
- Per-title within ladder — per-title chooses bitrates from a candidate pool defined by ladder design.
- Per-title overrides ladder — more sophisticated per-title can pick arbitrary bitrates rather than from a fixed ladder.
The decision: start with a thoughtful universal ladder; layer per-title on top to optimize within that structure; consider per-shot only when per-title savings are tapping out.
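The "per-title within ladder" variant reduces to snapping each per-title choice to the nearest pool candidate. A sketch, with a pool loosely echoing the classic fixed-ladder values often cited from Netflix's pre-per-title days (illustrative, not a recommendation):

```python
# Illustrative candidate pool of bitrates (kbps) from universal ladder design.
CANDIDATE_POOL = [235, 375, 560, 750, 1050, 1750, 2350, 3000, 4300, 5800]

def snap_to_pool(target_kbps, pool=CANDIDATE_POOL):
    """Per-title within ladder: snap an ideal bitrate to the nearest candidate."""
    return min(pool, key=lambda b: abs(b - target_kbps))

print(snap_to_pool(2600))  # 2350
```

The override variant simply skips the snap and encodes `target_kbps` directly, trading packaging/CDN uniformity for a few extra points of efficiency.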
What MpegFlow does with per-title encoding
Per-title encoding runs as a multi-stage workflow in MpegFlow's DAG runtime. The partitioner splits the analyze-then-encode flow into discrete stages with cross-stage data flow that wires upstream output (probe metadata, analysis encode results, VMAF scores) into downstream encode parameters. Each stage runs on the appropriate StageExecutor (FfprobeExecutor for the probe, FfmpegExecutor for analysis and production encodes, the quality-analysis node for VMAF), with per-stage retry handling transient failures without restarting the upstream work.
The workflow stages today:
- Probe — FfprobeExecutor extracts source metadata; output flows into downstream stages via cross-stage data flow.
- Analysis encode (parallel) — multiple low-fidelity encodes across the analysis bitrate range; the partitioner runs these in parallel as sibling stages.
- VMAF compute — the libvmaf filter, via the quality-analysis node, measures each analysis encode against the source.
- Ladder selection — operator-configured VMAF thresholds select the ladder rungs from the analysis output. Today this lands as analysis output an operator reviews; fully automatic enforcement (a decision node that selects without review) is on the roadmap.
- Production encode — encodes the production ladder at the selected rungs, again as parallel rendition stages.
- Package and deliver — standard packaging stage downstream of all renditions.
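The stage wiring above can be pictured as a small dependency graph. The structure below is purely an illustration of that flow — the stage names and the dict format are hypothetical, not MpegFlow's actual configuration API:

```python
# Hypothetical per-title stage graph: stage -> list of upstream dependencies.
STAGES = {
    "probe":             [],                    # ffprobe metadata
    "analysis_240k":     ["probe"],             # parallel analysis encodes
    "analysis_1500k":    ["probe"],
    "analysis_5000k":    ["probe"],
    "vmaf":              ["analysis_240k", "analysis_1500k", "analysis_5000k"],
    "ladder_selection":  ["vmaf"],              # operator-reviewed today
    "encode_renditions": ["ladder_selection"],  # parallel in practice
    "package":           ["encode_renditions"],
}

def topo_order(stages):
    """Kahn-style topological order: a stage runs once all its inputs are done."""
    done, order = set(), []
    while len(order) < len(stages):
        ready = [s for s, deps in stages.items()
                 if s not in done and all(d in done for d in deps)]
        assert ready, "cycle in stage graph"
        for s in sorted(ready):  # deterministic ordering for sibling stages
            done.add(s)
            order.append(s)
    return order

print(topo_order(STAGES))
```

The useful property of the graph form is the one the text describes: per-stage retry can re-run a failed analysis encode or VMAF computation without invalidating the upstream probe output.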
Customers with high-volume content where per-title pays off configure this as the default; customers with lower-volume or transient content stay on universal ladders.
For customers evaluating per-title for the first time, we typically run a side-by-side analysis on representative content (encode the same asset with universal ladder and per-title; compute VMAF and bandwidth costs; project annual savings vs additional compute cost). The ROI math is service-specific; the analysis takes a few days and informs whether to roll out per-title broadly.
The strict-broker security model handles per-title workflows the same as standard encoding workflows — workers receive content via short-lived presigned URLs, the analysis encodes are produced and consumed within the workflow boundary, and final ladder encodes follow the standard packaging path. Per-title doesn't change the security posture; it changes the encoding decisions.
If you're considering per-title encoding and want a sounding board on whether it's worth the operational complexity for your use case, that's a conversation we run regularly. The answer depends on your bandwidth costs, content characteristics, and operational tolerance for the additional pipeline complexity. For high-volume premium streaming, per-title is almost always worth it; for everything else, the answer is "depends."