Topics · MpegFlow

The technical territory video infrastructure has to cover.

Codecs, streaming protocols, DRM, quality measurement, encoding strategy, color and HDR, captions and timed metadata — the topics every video team ends up reasoning about. Deep technical reference, written by engineers for engineers. 61 topics today, growing as we ship more. See also the reference architectures for the deployment side, and the FFmpeg recipes for working invocations.

01 · Codecs

Video and audio codec deep-dives — AV1, VVC, HEVC, H.264, AAC, Opus.

  • Codecs
    Mezzanine codecs — ProRes, DNxHD, DNxHR, and the editorial workflow tier
    Practical reference on mezzanine codecs — Apple ProRes profiles, Avid DNxHD/DNxHR, when to use mezzanine vs delivery codecs, and pipeline integration for editorial-to-streaming workflows.
  • Codecs
    HEVC HDR10 encoding configuration — x265 parameters that get HDR right
    Practical guide to HEVC HDR10 encoding with x265 — Main 10 profile, color signaling, master display SEI, MaxCLL/MaxFALL, the complete CLI configuration that produces conformant HDR10 output.
  • Codecs
    FDK-AAC vs ffmpeg native AAC — encoder selection for streaming pipelines
    Practical comparison of FDK-AAC and ffmpeg's native AAC encoder — quality at common bitrates, licensing constraints, build configuration, VBR vs CBR, when each is the right choice.
  • Codecs
    AV1 encoding economics — when AV1 actually saves money vs HEVC
    Practical analysis of AV1 vs HEVC streaming economics — bandwidth savings vs encode compute cost, hardware AV1 break-even, the ROI threshold where AV1 pays off.
  • Codecs
    SVT-AV1 preset tuning — preset 4 vs 6 vs 8 vs 10 in production
    Practical guide to SVT-AV1 preset selection — quality vs encoding-time tradeoff at each preset, VMAF measurements, when to use preset 4 vs 6 vs 8 vs 10 vs 12 in production.
  • Codecs
    Opus — the audio codec for live, voice, and any pipeline that values latency
    Practical reference on the Opus audio codec — architecture, libopus production tuning, low-latency vs music-quality tradeoff, container support, and when Opus beats AAC.
  • Codecs
    AAC — the audio codec every video pipeline ships, and what to know about it
    Practical reference on AAC for video pipelines — AAC-LC vs HE-AAC vs xHE-AAC, FDK-AAC vs ffmpeg native encoder, bitrate tuning, multi-channel encoding, and container compatibility.
  • Codecs
    VP9 — Google's pre-AV1 codec, and why most teams skip it now
    Practical reference on VP9 — codec mechanics, libvpx-vp9 production realities, the YouTube-driven hardware install base, and the awkward middle position between H.264, HEVC, and AV1.
  • Codecs
    H.264 (AVC) — the codec that won, and is still winning two decades later
    Practical reference on H.264/AVC in 2026 — why it's still everywhere, x264 production tuning, the codec everyone targets first, and when H.264 is the only right answer.
  • Codecs
    HEVC (H.265) — the codec everyone uses and nobody loves
    Practical reference on HEVC/H.265 — codec mechanics, x265 in production, hardware encode/decode maturity, the patent licensing mess, and why it's still the right 2026 answer.
  • Codecs
    VVC (H.266) — the codec MPEG built and the industry hasn't deployed
    Practical reference on Versatile Video Coding (H.266) — compression efficiency, encoder ecosystem (VVenC), the patent-pool situation, hardware decode timeline, and where VVC stands against AV1.
  • Codecs
    AV1 codec — what it is, where it wins, what it costs
    Practical reference for engineers shipping AV1 in production — what AOMedia got right, encoder economics, hardware decode timeline, when AV1 beats HEVC and when it doesn't.
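
The AV1 economics entry above turns on one piece of arithmetic: how many plays it takes for CDN savings to repay the extra encode spend. A minimal sketch, where every number (encode cost, GB per view, 30% savings, CDN rate) is an invented assumption, not a measurement:

```python
def av1_breakeven_views(extra_encode_cost_usd: float,
                        hevc_gb_per_view: float,
                        av1_bitrate_savings: float,
                        cdn_usd_per_gb: float) -> float:
    """Views needed before AV1's extra encode cost is repaid by
    bandwidth savings. All inputs are hypothetical knobs."""
    saved_gb_per_view = hevc_gb_per_view * av1_bitrate_savings
    saved_usd_per_view = saved_gb_per_view * cdn_usd_per_gb
    return extra_encode_cost_usd / saved_usd_per_view

# e.g. $40 extra encode, 1.5 GB/view on HEVC, 30% smaller AV1, $0.02/GB CDN
views = av1_breakeven_views(40.0, 1.5, 0.30, 0.02)
```

Titles with a long tail of views clear this threshold easily; a catalog of rarely watched clips never does, which is the whole ROI argument in one division.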

02 · Containers

Container formats — MP4/fMP4, MPEG-TS, MKV/WebM, MOV.

  • Containers
    Fragmented MP4 (fMP4) segment construction — moof, mdat, tfdt, and styp
    Practical reference on fragmented MP4 segment internal structure — moof + mdat box pairs, init segment vs media segments, tfdt absolute timing, styp self-identification, sidx for byte-range fetching.
  • Containers
    MP4 faststart — moov at front, why it matters, and how to set it
    Practical reference on MP4 faststart — moov at end vs front, ffmpeg movflags=+faststart, why progressive streaming requires it, the one-pass vs two-pass tradeoff.
  • Containers
    MXF — the broadcast and post-production container nobody outside the industry uses
    Practical reference on MXF (Material Exchange Format) — SMPTE 377, operational patterns (OP1a, OP1b, OPAtom), codec support, broadcast and post-production use, vs MP4 / MOV.
  • Containers
    MOV (QuickTime) — Apple's container format and the editorial-workflow standard
    Practical reference on Apple QuickTime MOV — atom-based structure, the relationship to MP4, ProRes/DNxHR delivery, edit lists and timecode, when to use MOV vs MP4.
  • Containers
    MKV and WebM — the open container format and its browser-friendly profile
    Practical reference on Matroska and WebM containers — EBML format, segment/cluster/block structure, codec flexibility, browser support, when MKV/WebM is the right choice vs MP4.
  • Containers
    MP4 / fMP4 — the universal video container and the streaming-friendly fragmented variant
    Practical reference on MP4 — ISO Base Media File Format, box structure (ftyp/moov/mdat), fragmented MP4 for streaming, codec compatibility, CMAF derivation, HLS and DASH integration.
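
The box structure the MP4 entries describe (size-prefixed ftyp/moov/mdat atoms, moov first for faststart) can be walked with a few lines of parsing. A minimal sketch that reads only top-level box headers; it ignores 64-bit largesize boxes and real payloads:

```python
import struct

def iter_boxes(data: bytes, offset: int = 0):
    """Walk top-level MP4 boxes: each starts with a 4-byte big-endian
    size followed by a 4-byte type (ftyp, moov, mdat, ...)."""
    while offset + 8 <= len(data):
        size, = struct.unpack_from(">I", data, offset)
        box_type = data[offset + 4:offset + 8].decode("ascii")
        if size < 8:  # size == 1 would mean a 64-bit largesize; not handled here
            break
        yield box_type, size
        offset += size

# a tiny hand-built buffer: an 8-byte ftyp stub followed by an 8-byte moov stub
fake = struct.pack(">I4s", 8, b"ftyp") + struct.pack(">I4s", 8, b"moov")
order = [t for t, _ in iter_boxes(fake)]
```

Seeing moov before mdat in this walk is exactly the faststart check: a progressive player can start decoding without waiting for the whole file.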

03 · Protocols

Streaming protocols — HLS, DASH, CMAF.

  • Protocols
    HLS segment duration — picking the right TARGETDURATION for your use case
    Practical guide to HLS segment duration selection — Apple's 6-second recommendation, GOP alignment, latency vs request overhead tradeoff, LL-HLS partial segment relationship.
  • Protocols
    HLS X-TIMESTAMP-MAP — WebVTT subtitle timing alignment for HLS
    Practical reference on the HLS X-TIMESTAMP-MAP header — the MPEGTS:LOCAL syntax, why WebVTT subtitles need it, common timing bugs, generation in pipelines.
  • Protocols
    CMAF — the segment format that ended the HLS-vs-DASH duplicate-encoding problem
    Practical reference on Common Media Application Format (CMAF) — fragmented MP4 structure, common encryption (CENC), unified HLS+DASH delivery, chunked CMAF for low-latency, and operational benefits.
  • Protocols
    DASH — the standardized streaming protocol that won the spec war and lost the install base
    Practical reference on MPEG-DASH — MPD manifest structure, segment formats, dash.js and Shaka Player, DASH-IF profiles, multi-period streaming, and DASH vs HLS in 2026.
  • Protocols
    HLS — the protocol that won the streaming wars and what to know about it
    Practical reference on HTTP Live Streaming — m3u8 manifests, segment formats, ABR adaptation, low-latency HLS, FairPlay DRM integration, and when HLS is the right protocol.
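
The m3u8 mechanics the HLS entries cover, one EXTINF per segment and a TARGETDURATION at least the ceiling of the longest segment duration, are easy to see in a toy generator. A hedged sketch for a simple VOD media playlist; the seg{i}.ts naming is illustrative, not a convention:

```python
import math

def media_playlist(segment_durations, target: int = 6) -> str:
    """Render a minimal VOD media playlist. EXT-X-TARGETDURATION must
    be >= the ceiling of the longest segment's duration."""
    target = max(target, math.ceil(max(segment_durations)))
    lines = ["#EXTM3U",
             "#EXT-X-VERSION:3",
             f"#EXT-X-TARGETDURATION:{target}",
             "#EXT-X-MEDIA-SEQUENCE:0"]
    for i, duration in enumerate(segment_durations):
        lines.append(f"#EXTINF:{duration:.3f},")
        lines.append(f"seg{i}.ts")
    lines.append("#EXT-X-ENDLIST")  # VOD: the playlist will not grow
    return "\n".join(lines)

playlist = media_playlist([6.0, 6.0, 4.2])
```

Live playlists drop the ENDLIST tag and advance MEDIA-SEQUENCE as segments roll off, which is where most of the protocol's subtlety lives.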

04 · Quality

Video quality measurement — VMAF, PSNR, SSIM, per-title encoding.

  • Quality
    Golden-eyes video review — when automated metrics aren't enough
    Practical guide to golden-eyes panel review for video quality — content selection, review environment standards, methodology, reviewer training, when to use it vs automated metrics.
  • Quality
    ABR ladder VMAF calibration — finding the right bitrate per rung for your content
    Practical guide to ABR ladder calibration via VMAF — representative content selection, per-rung VMAF target setting, bitrate sweep procedure, identifying ladder gaps, iteration.
  • Quality
    VMAF cross-validation with MOS — calibrating quality metrics against viewers
    Practical guide to validating VMAF against subjective MOS testing — ITU-R BT.500 methodology, golden viewer panel selection, when VMAF disagrees with MOS, calibration procedures.
  • Quality
    BD-rate calculation — how to compare codecs and encoder configurations
    Practical guide to Bjontegaard delta-rate calculation — the math, multi-bitrate encoding procedure, BD-rate VMAF vs PSNR, libbjontegaard tooling, interpreting results.
  • Quality
    ABR ladder design — the engineering decisions in adaptive bitrate streaming
    Practical reference on ABR ladder design — rung count, resolution choices, codec choices per rung, bitrate calibration, audience-mix-driven design, and the tradeoffs that matter in production.
  • Quality
    Per-title encoding — Netflix's content-aware encoding model and how to run it
    Practical reference on per-title encoding — Netflix's approach, the analyze-then-encode workflow, content-aware bitrate selection via VMAF, operational tradeoffs, and ROI math.
  • Quality
    SSIM — the structural similarity metric and its multi-scale variants
    Practical reference on the Structural Similarity Index — luminance, contrast, and structure components, MS-SSIM and SSIMplus variants, ITU-T standardization, and SSIM vs PSNR vs VMAF.
  • Quality
    PSNR — the classic quality metric, why it persists, and where it fails
    Practical reference on Peak Signal-to-Noise Ratio — mathematical definition, why PSNR persists as a metric despite perceptual limitations, where it still serves, and where it misleads.
  • Quality
    VMAF — Netflix's quality metric and the modern reference for video quality measurement
    Practical reference on Video Multi-Method Assessment Fusion — Netflix's perceptual quality metric, training methodology, libvmaf usage, BD-rate calculation, and the limits of automated metrics.
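
Of the metrics above, PSNR is the only one that fits in a single line of math: 10·log10(MAX²/MSE), with MAX = 255 for 8-bit video. A minimal sketch:

```python
import math

def psnr(mse: float, max_val: int = 255) -> float:
    """Peak signal-to-noise ratio in dB. Higher is better; identical
    frames (mse == 0) are conventionally reported as infinite."""
    if mse == 0:
        return math.inf
    return 10.0 * math.log10((max_val ** 2) / mse)
```

The simplicity is both why PSNR persists (cheap, deterministic, differentiable) and why it misleads: equal MSE spread over flat sky versus over a face reads identically to PSNR and very differently to a viewer, which is the gap VMAF was trained to close.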

05 · Encoding

Encoding strategy — hardware acceleration, two-pass, CRF, presets.

  • Encoding
    Watermarking and overlays — burning logos, tags, and identifiers into video for streaming
    Practical reference on video watermarking and overlay rendering — visible vs forensic watermarking, ffmpeg overlay filter, positioning conventions, multi-resolution scaling, and pipeline integration.
  • Encoding
    Deinterlacing — converting interlaced source to progressive output for streaming
    Practical reference on deinterlacing — yadif vs bwdif, motion-adaptive algorithms, telecined content (IVTC), and broadcast-to-streaming pipeline integration.
  • Encoding
    Encoder version pinning and regression testing — production reproducibility
    Practical guide to pinning encoder versions and detecting regressions — version selection, building from source, regression testing procedures, A/B test framework, rollback.
  • Encoding
    FFmpeg filter_complex patterns — branching, merging, and multi-output graphs
    Practical guide to FFmpeg filter_complex graphs — split for branching, concat/hstack/overlay for merging, multi-input multi-output patterns, common pipeline use cases.
  • Encoding
    ffprobe stream inspection — extracting media info for pipeline automation
    Practical guide to ffprobe — common inspection patterns, JSON output for scripting, stream-specific inspection, frame and packet analysis, production usage in pipelines.
  • Encoding
    FFmpeg CRF tuning by content type — picking the right CRF for your content
    Practical guide to CRF tuning for different content types — talking heads, sports, animation, screen content, mobile-only. CRF defaults per encoder and content-aware adjustments.
  • Encoding
    Loudness normalization — LUFS, EBU R128, ATSC A/85, and getting audio levels right
    Practical reference on loudness normalization — LUFS/LKFS units, EBU R128 (-23 LUFS), ATSC A/85 (-24 LKFS), streaming targets (-16 LUFS), measurement, true peak limiting, FFmpeg loudnorm.
  • Encoding
    Resolution and aspect ratio — what 1080p, 4K, and 16:9 actually mean
    Practical reference on video resolution and aspect ratio — 480p/720p/1080p/1440p/4K UHD/DCI 4K/8K standards, pixel aspect ratio vs display aspect ratio, anamorphic, and ABR ladder design.
  • Encoding
    Frame rate — 24, 30, 60 fps and the fractional rates that broke video
    Practical reference on video frame rates — 24/25/30/50/60/120 fps standards, the NTSC fractional rates (23.976/29.97/59.94), conversion, VFR vs CFR, and encoding implications.
  • Encoding
    FFmpeg — the multimedia framework that runs nearly all video infrastructure
    Practical reference on FFmpeg — project structure, libav* libraries, codec/format support, filter graphs, hardware acceleration, ffprobe and ffplay, the universal video tool.
  • Encoding
    Frame types — I-frames, P-frames, B-frames, and how they shape encoding
    Practical reference on video frame types — intra-coded I-frames, predicted P-frames, bi-directional B-frames, B-pyramid, decode vs display order, and the latency implications of B-frames.
  • Encoding
    GOP and keyframe interval — what they are, why they matter, and how to set them
    Practical reference on Group of Pictures and keyframe interval — closed vs open GOP, IDR keyframes, segment alignment for ABR streaming, the latency vs compression tradeoff.
  • Encoding
    Encoder presets — what x264 / x265 / SVT-AV1 presets actually mean
    Practical reference on encoder presets — preset philosophy across encoders, what each preset enables, picking presets for VOD vs live, the speed-quality curve, and benchmark calibration.
  • Encoding
    Two-pass encoding — when it matters and when single-pass is fine
    Practical reference on two-pass video encoding — how it works, when target-bitrate VBR benefits, why CRF doesn't need it, the wall-time tradeoff, and the production decision.
  • Encoding
    Rate control — CRF, CBR, VBR, capped CRF, and which to choose
    Practical reference on video encoder rate control — CRF (constant rate factor), CBR (constant bitrate), VBR (variable bitrate), capped CRF for streaming, and which mode fits VOD vs live.
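
The GOP entry's segment-alignment rule, that every ABR segment boundary must land on an IDR keyframe, reduces to making the keyframe interval divide the segment's frame count evenly. A sketch that forces exactly one GOP per segment (shorter GOPs that divide evenly also align):

```python
def keyframe_interval(fps: float, segment_seconds: float) -> int:
    """Frames per GOP so that every segment boundary lands on an IDR
    keyframe, assuming constant frame rate and one GOP per segment."""
    frames = fps * segment_seconds
    if abs(frames - round(frames)) > 1e-9:
        raise ValueError("segment length is not a whole number of frames")
    return round(frames)

# 30 fps with 6-second segments: keyframe every 180 frames (e.g. x264 -g 180)
g = keyframe_interval(30, 6)
```

Note what the ValueError catches: NTSC fractional rates like 29.97 fps do not give a whole number of frames in 6 seconds, which is one reason fractional rates keep breaking otherwise tidy pipelines.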

06 · Color

Color and HDR — HDR10, HDR10+, Dolby Vision, HLG, color spaces.

  • Color
    10-bit HEVC from 8-bit source — when it helps and when it's pointless
    Practical guide to encoding 10-bit HEVC from 8-bit source — encoder math benefits, banding reduction, when 10-bit is mandatory (HDR), bandwidth implications, hardware decoder support.
  • Color
    Limited vs full range color — TV range vs PC range and the conversion bugs
    Practical reference on video color range — TV range (16-235 for 8-bit), PC range (0-255), historical reasons, signaling, range mismatch artifacts, ffmpeg conversion.
  • Color
    HDR to SDR conversion — the full pipeline from PQ to BT.709
    Practical guide to HDR-to-SDR conversion — linearization, color primaries conversion, tone mapping, gamut handling, re-quantization, ffmpeg command construction, validation.
  • Color
    Tone mapping — converting HDR to SDR and adapting HDR for different displays
    Practical reference on tone mapping — HDR-to-SDR conversion algorithms (Reinhard, Hable, ACES, BT.2390), HDR-to-HDR adaptation, static vs dynamic, display-side vs encoder-side, FFmpeg implementation.
  • Color
    Chroma sub-sampling — 4:4:4 vs 4:2:2 vs 4:2:0 and what it means for video
    Practical reference on chroma sub-sampling — the 4:4:4 / 4:2:2 / 4:2:0 notation, the bit budget math, codec profile support, where each is used, and the visual impact on natural vs graphic content.
  • Color
    Bit depth — 8-bit vs 10-bit vs 12-bit and what it means for video
    Practical reference on video bit depth — banding in 8-bit, why 10-bit is HDR-mandatory, encoder and hardware decoder support, bandwidth implications, and bit-depth selection per use case.
  • Color
    Color spaces — Rec.709 vs Rec.2020 vs DCI-P3 and what they mean for video
    Practical reference on video color spaces — Rec.709, BT.2020, DCI-P3 primaries, white points, gamut comparison, conversion considerations, and signaling in container metadata.
  • Color
    HLG (Hybrid Log-Gamma) — the broadcast-compatible HDR format
    Practical reference on Hybrid Log-Gamma — BBC/NHK joint development, scene-referred transfer function, backward compatibility with SDR, broadcast use cases, and where HLG fits next to HDR10.
  • Color
    HDR10 — the open HDR baseline every premium video pipeline ships
    Practical reference on HDR10 — ST 2084 PQ transfer function, BT.2020 color space, static metadata, signaling in HEVC and AV1, the universal HDR baseline that every HDR-capable device decodes.
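
The limited-vs-full range entry above is, at bottom, a linear remap of code values; applying it twice, or skipping it when the signaling says otherwise, is the classic washed-out or crushed-blacks bug. A minimal sketch for 8-bit luma:

```python
def full_to_limited(y: int) -> int:
    """Map an 8-bit full/PC-range luma sample (0-255) into
    TV/limited range (16-235)."""
    return round(16 + y * 219 / 255)

def limited_to_full(y: int) -> int:
    """Inverse mapping; values outside 16-235 clip to the 0-255 code range."""
    return min(255, max(0, round((y - 16) * 255 / 219)))
```

The 219-code spread (235 − 16) is why a limited-range frame interpreted as full range looks washed out: its black sits at code 16 instead of 0 and its white at 235 instead of 255.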

07 · Captions

Captions and timed metadata — WebVTT, IMSC, 608/708, SCTE-35.

  • Captions
    Burn-in vs soft subtitles — when to render captions into video vs deliver as separate tracks
    Practical guide to subtitle delivery — burn-in (rendered into video) vs soft (separate track), use cases, ffmpeg implementation, player compatibility, cost tradeoffs.
  • Captions
    CEA-608/708 to WebVTT conversion with ccextractor — broadcast caption migration
    Practical guide to converting CEA-608/708 captions from broadcast sources to WebVTT for streaming — ccextractor usage, SEI message extraction, timing alignment, multi-service handling.
  • Captions
    CEA-608 and CEA-708 — broadcast caption standards in a streaming world
    Practical reference on CEA-608 and CEA-708 closed caption standards — broadcast caption legacy, embedding in MPEG-TS and SEI messages, conversion to WebVTT and IMSC for streaming.
  • Captions
    TTML and IMSC — XML-based timed text for premium video and broadcast
    Practical reference on TTML (Timed Text Markup Language) and the IMSC profile — XML-based caption format, rich styling capabilities, broadcast and premium streaming use cases, vs WebVTT.
  • Captions
    WebVTT — the W3C caption format every browser speaks
    Practical reference on WebVTT — file structure, styling, positioning support, integration with HLS and DASH, vs SRT and TTML, and when WebVTT is the right caption format.
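
WebVTT cue timing is plain HH:MM:SS.mmm arithmetic, which is where hand-rolled caption pipelines often first go wrong. A minimal sketch; the cue helper is a hypothetical convenience for illustration, not part of any library:

```python
def vtt_timestamp(seconds: float) -> str:
    """Format a time offset as a WebVTT cue timestamp: HH:MM:SS.mmm."""
    ms = round(seconds * 1000)
    hours, rem = divmod(ms, 3_600_000)
    minutes, rem = divmod(rem, 60_000)
    secs, ms = divmod(rem, 1000)
    return f"{hours:02d}:{minutes:02d}:{secs:02d}.{ms:03d}"

def cue(start: float, end: float, text: str) -> str:
    """Render one cue block: timing line, then the caption text."""
    return f"{vtt_timestamp(start)} --> {vtt_timestamp(end)}\n{text}"
```

Note the millisecond separator is a period, not the comma SRT uses; that one-character difference is a perennial source of silently ignored caption files.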
© 2026 MpegFlow, Inc.