Interlaced video is a legacy of analog television — alternating fields of odd and even lines, captured 60 times per second to deliver smooth 60 Hz motion at the bandwidth of 30 full frames. Streaming pipelines deliver progressive video, which means broadcast-source content arriving as 1080i59.94 or 480i29.97 has to be deinterlaced before encode. The conversion is mechanical when you know what you're doing and produces silent quality bugs when you don't. This page is the engineering reference for deinterlacing in modern streaming pipelines.
What interlaced video is
In interlaced video, each "frame" is composed of two fields captured at different points in time:
- Top field (odd-numbered lines: 1, 3, 5, ...).
- Bottom field (even-numbered lines: 2, 4, 6, ...).
A 1080i59.94 stream is 59.94 fields per second, or ~29.97 frames per second when fields are paired into frames. Each field is a snapshot of a different moment, so combining them into a single frame produces visible "combing" on motion (alternating-line offset between the two captured moments).
Interlaced video originated as a bandwidth optimization for 1930s-1990s analog TV: transmit 60 fields per second (the data of only 30 full frames), and viewers perceive smooth 60 Hz motion at half the bandwidth of true 60 fps. Digital broadcast inherited the format; ATSC 1.0 and DVB-T both transmit interlaced content widely. The streaming era is uniformly progressive — every modern player, codec, and CDN convention assumes progressive frames — so any interlaced source has to be converted.
Common interlaced source formats
In 2026 streaming pipelines, interlaced sources arrive from:
- 1080i59.94 — most common: HD broadcast contribution feeds, US sports, news.
- 1080i50 — European HD broadcast (PAL countries).
- 576i25 — SD PAL broadcast (legacy archive content from European sources).
- 480i29.97 — SD NTSC broadcast (legacy archive content from US sources).
- MXF and IMX content — often interlaced from older broadcast post-production workflows.
Interlaced flag in the codec / container:
- MPEG-2 — picture_structure field signals frame vs field coding.
- H.264 — pic_struct SEI message and field_pic_flag in the slice header.
- HEVC — no dedicated field-coding tools; interlaced content is carried as separately coded fields, signaled via VUI flags and the pic_timing SEI.
For ingest, ffprobe reveals interlacing:
ffprobe -v error -select_streams v:0 -show_entries stream=field_order input.mxf
field_order=tt is top-field-first; bb is bottom-field-first (ffprobe can also report tb and bt, where coded and display order differ); progressive means the stream needs no deinterlacing.
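At ingest, that probe result can drive routing directly. A minimal shell sketch (filenames illustrative; the -of flags strip the output wrapper so the value comes back bare):
field_order=$(ffprobe -v error -select_streams v:0 \
  -show_entries stream=field_order \
  -of default=noprint_wrappers=1:nokey=1 input.mxf)
if [ "$field_order" = "progressive" ]; then
  echo "skip deinterlace"
else
  echo "deinterlace required (field_order=$field_order)"
fi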
Single-rate vs double-rate deinterlacing
Two families of deinterlacing produce fundamentally different output:
Single-rate (frame-rate-preserving):
- Input: 1080i59.94 (59.94 fields/s).
- Output: 1080p29.97 (29.97 frames/s).
- Approach: combine the two fields of each pair into one frame; recover lost vertical detail via interpolation.
- Result: looks like 30 fps progressive video.
Double-rate (field-rate-preserving):
- Input: 1080i59.94 (59.94 fields/s).
- Output: 1080p59.94 (59.94 frames/s).
- Approach: produce one frame per field; interpolate the missing vertical detail per field.
- Result: looks like 60 fps progressive video — smoother motion, more "video-like" appearance.
For sports and live broadcast where 60 fps motion is the editorial intent, double-rate is the right choice; single-rate halves the apparent motion smoothness. For drama and news where 30 fps "film-like" motion is fine, single-rate is acceptable and saves bandwidth. The decision is editorial, not technical — the technical question is whether your pipeline supports both and lets you pick per asset.
Deinterlacing algorithms
Real-world deinterlacing algorithms range from cheap-and-fast to expensive-and-good:
Bob (line doubling):
- For each field, double each line vertically to create a full frame.
- Cheap, fast, ugly. Visible "bobbing" on motion (alternating frames look slightly different).
- Used historically as a cheap fallback. Avoid for any quality-relevant work.
Weave (field combine):
- Combine top and bottom fields directly into a frame.
- Looks fine on still content, produces severe combing on motion.
- Useful only as part of motion-adaptive: weave on stills, do something better on motion.
Linear blend:
- Vertically interpolate missing lines as the average of neighbors.
- Reduces vertical detail (visible blur) but eliminates combing.
- Cheap; quality is mediocre.
Motion-adaptive (yadif and similar):
- Detect motion per pixel; use weave on stills, interpolation on motion.
- Yadif (Yet Another Deinterlacing Filter) is the FFmpeg standard.
- Good quality at modest compute cost. The pragmatic default for most pipelines.
Refined motion-adaptive (bwdif):
- bwdif (Bob Weaver Deinterlacing Filter) builds on yadif, adding w3fdif-style filtering and cubic interpolation.
- Better quality on complex motion and fine detail than yadif, at modest extra compute.
- Note that bwdif is motion-adaptive, not motion-compensated; true motion compensation estimates motion vectors and interpolates along motion paths, and lives mostly in commercial tools.
Spatio-temporal (commercial / niche):
- Multi-frame analysis with edge-aware interpolation.
- Found in commercial broadcast tools (Snell Alchemist, Cinnafilm Tachyon).
- Best quality; high compute cost; expensive licensing.
For streaming pipelines, the practical choice is yadif or bwdif at the FFmpeg layer — both are open-source, both produce production-acceptable output, both run at reasonable speeds. Step up to commercial deinterlacers only when content quality demands justify the cost.
ffmpeg deinterlacing examples
Yadif single-rate (most common):
ffmpeg -i input_1080i.mxf -vf "yadif=mode=0" -c:v libx264 -crf 22 output_1080p.mp4
mode=0 is single-rate (one output frame per input frame, dropping half the temporal information).
Yadif double-rate (sports, live):
ffmpeg -i input_1080i.mxf -vf "yadif=mode=1" -c:v libx264 -crf 22 output_1080p60.mp4
mode=1 is double-rate (one output frame per input field).
Bwdif (better quality):
ffmpeg -i input_1080i.mxf -vf "bwdif=mode=0" -c:v libx265 -crf 23 output_1080p.mp4
Same mode=0 / mode=1 semantics as yadif; bwdif is the upgraded version.
Hardware-accelerated (NVIDIA):
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input_1080i.mxf \
-vf "yadif_cuda=mode=0" -c:v h264_nvenc output_1080p.mp4
For pipelines with GPU workers, GPU deinterlacing is meaningfully faster than CPU — useful when throughput matters.
Telecined content (3:2 pulldown)
A specific case worth handling correctly: telecined content. This is film material (24 fps, or more precisely 23.976) mapped to 29.97 fps interlaced video using a 3:2 pulldown pattern: each group of four film frames is spread across five video frames, two of which combine fields from adjacent film frames.
For telecined content, naive deinterlacing produces inferior output. The right approach is inverse telecine (IVTC) — detect the 3:2 pattern, undo it, recover the original 24 fps progressive frames. Then optionally re-encode at 24 fps for streaming.
ffmpeg IVTC:
ffmpeg -i telecined_29.97.mxf -vf "fieldmatch,decimate" -c:v libx264 -crf 22 output_24p.mp4
fieldmatch detects the 3:2 pattern and rematches fields into the original progressive frames; decimate then drops one duplicate frame per cycle of five (its default), taking 29.97 fps down to 23.976 (29.97 × 4/5).
If you don't IVTC telecined content, you encode at 30 fps with the 3:2 pulldown pattern still in the stream — wasteful (more frames than needed) and visually inferior (pull-down judder is visible). For film content that's been telecined for broadcast, always IVTC before streaming encode.
Detecting telecine vs native interlaced
Telecined and natively-interlaced content require different processing. Telling them apart:
ffmpeg -i input.mxf -vf "idet" -frames:v 200 -f null -
idet analyzes the first 200 frames and reports field-order statistics. Output looks like:
[Parsed_idet_0 @ 0x...] Repeated Fields: Neither: 100 Top: 50 Bottom: 50
[Parsed_idet_0 @ 0x...] Single frame detection: TFF: 50 BFF: 50 Progressive: 0 Undetermined: 100
[Parsed_idet_0 @ 0x...] Multi frame detection: TFF: 50 BFF: 50 Progressive: 0 Undetermined: 100
High counts of "Repeated Fields" suggest telecined content (the 3:2 pattern repeats fields). Even split between TFF and BFF on a clip that should be one consistent field order suggests telecine or mixed content.
For pipelines processing heterogeneous source, run idet analysis at ingest and route content appropriately: native-interlaced → yadif/bwdif; telecined → IVTC.
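A hedged sketch of that routing in shell. The 20% threshold is illustrative rather than calibrated, and the sed parsing assumes the summary line format shown above:
# Run idet over a 200-frame sample; the summary prints to stderr
log=$(ffmpeg -i input.mxf -vf idet -frames:v 200 -an -f null - 2>&1)
line=$(echo "$log" | grep 'Repeated Fields' | tail -1)
top=$(echo "$line" | sed -n 's/.*Top: *\([0-9]*\).*/\1/p')
bottom=$(echo "$line" | sed -n 's/.*Bottom: *\([0-9]*\).*/\1/p')
if [ $(( ${top:-0} + ${bottom:-0} )) -gt 40 ]; then  # ~20% of 200 sampled frames repeat a field
  echo "route: IVTC (fieldmatch,decimate)"
else
  echo "route: yadif/bwdif"
fi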
Interlaced HDR (a real concern)
A specific 2026 case: HDR content from broadcast sources is sometimes still interlaced. Interlaced HDR is technically valid but operationally problematic — most HDR-capable streaming infrastructure assumes progressive content.
If you receive interlaced HDR source, deinterlace before HDR encoding. Yadif or bwdif handle HDR pixel formats (10-bit Rec.2020) correctly when the FFmpeg build is configured properly. Verify the deinterlaced output preserves HDR signaling (colr, mdcv, clli boxes) end-to-end through the pipeline.
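A quick spot-check on the deinterlaced output: HDR10 content should still report bt2020 primaries, smpte2084 transfer, and a 10-bit pixel format (output filename illustrative):
ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt,color_primaries,color_transfer,color_space \
  deinterlaced_output.mp4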
Field order errors
A common source of bugs: the source claims one field order (e.g., tt top-field-first) but the actual content is the other. Deinterlacing with the wrong field order produces obvious motion artifacts (objects move in jerky alternating directions).
Symptoms in deinterlaced output:
- Motion appears to "jitter" on a 1-frame cycle.
- Edges look broken on horizontal motion.
- Fast motion looks choppy and offset.
Fix: try the other field order. ffmpeg setfield filter:
ffmpeg -i input.mxf -vf "setfield=tff,yadif=mode=1" output.mp4 # Force top-field-first
ffmpeg -i input.mxf -vf "setfield=bff,yadif=mode=1" output.mp4 # Force bottom-first
For pipelines processing heterogeneous source, automatic field-order detection (yadif=parity=auto) is the default but produces wrong results on a meaningful fraction of content. For high-stakes content, verify field order during ingest QC; for volume content, accept some fraction of field-order failures and have an operational process for re-running with corrected configuration.
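One lightweight QC approach: render a short, motion-heavy excerpt with each parity forced and compare by eye. The correct parity plays smoothly; the wrong one jitters on every frame (timestamps and filenames illustrative):
ffmpeg -ss 00:01:00 -t 5 -i input.mxf -vf "setfield=tff,yadif=mode=1" tff_sample.mp4
ffmpeg -ss 00:01:00 -t 5 -i input.mxf -vf "setfield=bff,yadif=mode=1" bff_sample.mp4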
Pipeline integration
A typical broadcast-to-streaming pipeline with deinterlacing:
- Ingest broadcast contribution feed (SRT/RIST → MPEG-TS).
- Demux TS to extract video stream.
- Probe with ffprobe to detect interlacing flags, plus idet analysis for telecine.
- Deinterlace or IVTC based on probe results:
  - Progressive → no-op.
  - Native interlaced → yadif/bwdif (single-rate or double-rate per editorial spec).
  - Telecined → IVTC (fieldmatch,decimate).
- Encode progressive output for streaming.
- Package into HLS/DASH.
The deinterlace step is fast for yadif (real-time on commodity hardware); bwdif and IVTC are slightly slower; commercial deinterlacers can be much slower. For live workflows, the deinterlace step has to run in real-time within the latency budget; for VOD, it can take whatever time produces the best quality.
Performance considerations
CPU deinterlacing speeds (ballpark for 1080i59.94 → 1080p, single-rate, on a modern CPU core):
- yadif: ~3-5x real-time per core.
- bwdif: ~2-3x real-time per core.
- Commercial spatio-temporal: 0.3-1x real-time (depends on tool).
For live workflows, single-core deinterlacing is usually sufficient. For high-throughput VOD, parallelize across cores via -threads or run multiple jobs concurrently.
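One way to run jobs concurrently, assuming GNU xargs (the -P level and paths are illustrative):
ls sources/*.mxf | xargs -P 8 -I {} \
  ffmpeg -i {} -vf "yadif=mode=0" -c:v libx264 -crf 22 {}.mp4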
GPU deinterlacing is faster but introduces an upload/download cost (CPU memory → GPU memory and back). For pipelines where the encode also runs on GPU (NVENC), end-to-end GPU deinterlace + encode keeps content on the GPU and amortizes the transfer cost. For pipelines where encode is CPU (x265, SVT-AV1), CPU deinterlace is usually the better choice.
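A sketch of the all-GPU path: deinterlace and scale stay in GPU memory via yadif_cuda and scale_cuda, and NVENC encodes without a round-trip (bitrate and resolution illustrative):
ffmpeg -hwaccel cuda -hwaccel_output_format cuda -i input_1080i.mxf \
  -vf "yadif_cuda=mode=1,scale_cuda=1280:720" \
  -c:v h264_nvenc -b:v 4M output_720p60.mp4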
Common deinterlacing bugs
Bug 1: Deinterlacing progressive content.
Running yadif on already-progressive content produces softer output (the algorithm averages neighboring lines unnecessarily). Always check whether deinterlacing is needed; skip the filter when source is already progressive.
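When frame-level interlace flags are trustworthy, yadif's deint option restricts processing to flagged frames and passes progressive frames through untouched. Useful for mixed content, though broadcast flags are often wrong:
ffmpeg -i mixed_source.mxf -vf "yadif=mode=0:deint=interlaced" -c:v libx264 -crf 22 output.mp4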
Bug 2: Wrong field order.
Discussed above. Manifests as 1-frame jitter on motion.
Bug 3: Telecined content treated as native interlaced.
Yadif on telecined content produces 30 fps output with mushed motion; IVTC produces 24 fps output that looks correct. Tell them apart; route correctly.
Bug 4: Single-rate when double-rate is needed.
Single-rate sports content looks slow and choppy compared to the broadcast 60 fps. Pipelines that flatten everything to single-rate produce inferior sports streaming. Run double-rate for content where field-rate motion matters.
Bug 5: HDR signaling lost in deinterlace.
Some FFmpeg filter chains drop HDR metadata. Verify colr/mdcv/clli boxes survive the deinterlace stage; if not, explicit metadata preservation is needed.
Bug 6: Deinterlace + scale ordering.
Deinterlace before scale, not after. Scaling interlaced content vertically corrupts the field structure; scaling progressive content is fine. Filter order in ffmpeg: -vf "yadif,scale=1920:1080" is right; reversing the order is wrong.
Deinterlacing for archive content
For archive content (decades-old broadcast masters), additional considerations:
- Variable field order across program — some old tape masters have inconsistent field order. Per-segment processing may be needed.
- Source artifacts — analog noise, dropouts, drop-frame errors. Deinterlacing won't fix these; pre-process if needed.
- Aspect ratio handling — older 4:3 content may have anamorphic flags. Verify aspect ratio is correctly preserved.
- Metadata richness — broadcast masters often have rich metadata (closed captions, time code, AS-11 production metadata). Verify each metadata stream survives the deinterlace + transcode stages.
For premium archive workflows, commercial deinterlacers and spatio-temporal algorithms produce meaningfully better output than yadif/bwdif. Whether the cost is justified depends on the content and audience.
Operational considerations
Things that matter for deinterlacing in production:
- Source detection at ingest — automate detection (interlaced vs progressive vs telecined) so the pipeline routes correctly without manual configuration per asset.
- Editorial routing — sports/live get double-rate; drama/news typically get single-rate; document the policy.
- Field-order QC — sample output, verify motion looks correct, escalate field-order failures.
- HDR-aware processing — for HDR-interlaced source, verify metadata preservation through deinterlace.
- Performance budgeting — deinterlace is a step in the pipeline; size compute appropriately.
- Cross-format output — when the pipeline produces multiple resolutions/bitrates, deinterlace once and use the result for all renditions (don't deinterlace per rendition).
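That single-deinterlace fan-out fits in one ffmpeg invocation. A video-only sketch where yadif runs once and split feeds three scaled renditions (bitrates and resolutions illustrative):
ffmpeg -i input_1080i.mxf -filter_complex \
  "[0:v]yadif=mode=1,split=3[a][b][c];[a]scale=1920:1080[hd];[b]scale=1280:720[md];[c]scale=854:480[sd]" \
  -map "[hd]" -c:v libx264 -b:v 6M out_1080p.mp4 \
  -map "[md]" -c:v libx264 -b:v 3M out_720p.mp4 \
  -map "[sd]" -c:v libx264 -b:v 1M out_480p.mp4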
What MpegFlow does with deinterlacing
MpegFlow's DAG runtime expresses deinterlacing as part of the FfmpegExecutor stage's filter parameters. FilterParams::Deinterlace is a typed first-class parameter on the encode stage with options for the algorithm (yadif / bwdif) and mode (single-rate / double-rate); the partitioner persists each stage to job_stages with explicit dependency tracking and per-stage retry; sibling cancellation propagates fatal failures across rendition stages.
An FfprobeExecutor stage upstream characterizes the source — field_order, plus telecine detection via idet analysis when the workflow asks for it — and cross-stage data flow wires the probe output into the encode stage's parameter assembly so deinterlacing routing derives from real source characteristics rather than fixed defaults. For ABR ladders, deinterlacing happens once on the upstream source; the resulting progressive frames feed the parallel rendition stages without re-deinterlacing per rung.
For live workflows, deinterlacing runs inline in the live encode stage. For VOD, the deinterlace step is its own logical stage with retry and sibling cancellation, separable for re-execution if downstream encodes need to be re-run.
The strict-broker security model handles deinterlacing like any pipeline payload — workers carry no ambient credentials; content access flows through short-lived presigned URLs scoped per stage; access is disposed on completion.
For customers building broadcast-to-streaming pipelines with interlaced source, the conversation typically focuses on editorial decisions (single-rate vs double-rate per content type), source detection (does your ingest reliably distinguish telecined from native interlaced?), and quality expectations (is yadif quality sufficient, or does archive content need commercial deinterlacers?). The pipeline mechanics are well-understood; the editorial policy is where customer-specific work happens.
The general guidance: deinterlace at ingest before encoding for streaming. Detect telecine and IVTC where applicable. Use double-rate for sports/live, single-rate for drama/news. Yadif or bwdif handles most production content; commercial deinterlacers are for premium archive where quality justifies cost. Never ship interlaced video to streaming audiences in 2026 — the players don't expect it, and the failure modes are user-facing.