
Tone mapping — converting HDR to SDR and adapting HDR for different displays

Practical reference on tone mapping — HDR-to-SDR conversion algorithms (Reinhard, Hable, ACES, BT.2390), HDR-to-HDR adaptation, static vs dynamic, display-side vs encoder-side, FFmpeg implementation.

By MpegFlow Engineering Team · Color · May 8, 2026 · 10 min read · 1,937 words
In this topic
  1. What tone mapping is
  2. Why tone mapping is hard
  3. Common tone mapping algorithms
  4. Static vs dynamic tone mapping
  5. Display-side vs encoder-side
  6. FFmpeg tone mapping
  7. libplacebo for advanced tone mapping
  8. Tone mapping quality assessment
  9. When tone mapping is wrong
  10. Operational considerations
  11. What MpegFlow does with tone mapping

Tone mapping is the process of converting between HDR and SDR (or between HDR and a different HDR display capability). It's how a 1000-nit movie looks correct on a 100-nit SDR TV, on a 600-nit consumer HDR TV, and on a 4000-nit reference HDR monitor. Without proper tone mapping, HDR content displays with washed-out shadows, blown-out highlights, or incorrect color rendering on devices that can't reproduce the original mastering range. The algorithm choices and where in the pipeline tone mapping happens determine the quality of the result. This page is the engineering reference.

#What tone mapping is

Tone mapping converts an image with one luminance range to a different luminance range while preserving as much perceptual quality as possible. The two common scenarios:

HDR-to-SDR — an HDR master (PQ-encoded, peak 1000-10,000 nits) needs to be displayed on an SDR display (peak 100 nits). The algorithm compresses the HDR luminance range into the SDR range.

HDR-to-HDR display adaptation — an HDR master (peak 4000 nits) needs to be displayed on a consumer HDR display (peak 600 nits). The algorithm adapts the master to the display's capability.

The fundamental challenge: information has to be discarded. The HDR source has more dynamic range than the SDR target can represent. The question is which information to discard and how to compensate perceptually for what's lost.

A naive approach (linear scaling) compresses everything proportionally, which produces uniformly dim output. Sophisticated tone mapping preserves apparent brightness in mid-tones (where most perceptual detail lives) while gracefully rolling off highlights and shadows.
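The difference is easy to see numerically. A minimal sketch in plain Python (illustrative values only; the sigmoid here is a Reinhard-style curve, not any specific production implementation):

```python
def linear_scale(nits, src_peak=1000.0, dst_peak=100.0):
    """Naive linear tone mapping: every value is dimmed proportionally."""
    return nits * dst_peak / src_peak

def reinhard_nits(nits, dst_peak=100.0, midpoint=100.0):
    """Simple Reinhard-style sigmoid: keeps midtones bright and
    compresses only the top of the range toward dst_peak."""
    return dst_peak * nits / (nits + midpoint)

# shadow / midtone / highlight from a 1000-nit master
for src in (20.0, 200.0, 1000.0):
    print(src, round(linear_scale(src), 1), round(reinhard_nits(src), 1))
```

Linear scaling sends a 200-nit midtone to a dim 20 nits; the sigmoid keeps it near 67 nits while still fitting the 1000-nit highlight under the 100-nit ceiling.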

#Why tone mapping is hard

The non-trivial parts:

Highlight detail vs midtone preservation tradeoff — preserving highlight detail means compressing mid-tones; preserving mid-tone fidelity means clipping highlights. Different content benefits from different choices.

Color shift on tone mapping — naive luminance-only tone mapping shifts color appearance. Saturated reds become orange, blues shift toward cyan. Color-aware tone mapping preserves hue while adjusting luminance.

Spatial vs global tone mapping — global algorithms apply the same curve to every pixel; spatial algorithms vary the mapping by region. Spatial preserves more local contrast but introduces halo artifacts.

Temporal consistency — across video frames, tone mapping must be temporally stable. Per-frame algorithms can introduce flicker; integrated tone mapping with temporal smoothing is more stable but less responsive to scene changes.

Subjective preference — different viewers prefer different tone mapping for the same content. There's no universal "correct" mapping; the goal is "perceptually acceptable" for the target audience.

The combination makes tone mapping a real engineering problem with multiple legitimate solutions.
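The color-shift problem above is easy to demonstrate. A minimal sketch contrasting per-channel tone mapping with a hue-preserving luminance-gain approach (BT.2020 luma weights; the curve choice is illustrative):

```python
def curve(x):
    # any tone curve; a simple normalized Reinhard here
    return x / (1.0 + x)

def tonemap_per_channel(r, g, b):
    """Naive: curve applied to each channel independently. Channel
    ratios (hue/saturation) change because the curve is nonlinear."""
    return curve(r), curve(g), curve(b)

def tonemap_hue_preserving(r, g, b):
    """Color-aware: curve the luminance, then scale all channels by the
    same gain so R:G:B ratios (hue) are preserved."""
    y = 0.2627 * r + 0.6780 * g + 0.0593 * b  # BT.2020 luma coefficients
    gain = curve(y) / y if y > 0 else 0.0
    return r * gain, g * gain, b * gain

# Saturated red: per-channel mapping compresses the strong channel far
# more than the weak ones, desaturating and shifting the color.
r, g, b = 4.0, 0.4, 0.1
print(tonemap_per_channel(r, g, b))
print(tonemap_hue_preserving(r, g, b))
```

The hue-preserving variant keeps the original 10:1 red-to-green ratio; the per-channel variant collapses it to roughly 2.8:1.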

#Common tone mapping algorithms

The major algorithms in production use:

Reinhard — academic algorithm (Reinhard et al., 2002). Simple sigmoid curve. Computationally cheap. Tends to produce flat-looking output with limited highlight detail. Widely implemented as a baseline; rarely the right choice for premium content.

Hable (Uncharted 2) — game industry algorithm popularized by John Hable's GDC talk on Uncharted 2. Uses a filmic curve that retains contrast better than Reinhard. Common in real-time graphics; viable for video.

ACES (Academy Color Encoding System) — film industry standard. The full ACES system covers source ingestion, working space, and output transforms; ACES output transforms include tone mapping. Default for film/cinema-grade content.

BT.2390 — ITU-R recommendation for HDR-to-SDR (and HDR-to-HDR) tone mapping. Designed for broadcast use. Specifies parameter values for various source/target combinations. The standard for broadcast workflows.

Hable filmic for HDR — Hable's later work on filmic curves adapted for HDR display targets. Used by some game studios and increasingly in video.

Mobius — variant of Reinhard with a smoother roll-off. Produces output with better highlight retention than Reinhard.

clip / linear — naive options. clip just clips out-of-range values to the target maximum (loses all highlight detail above the clip point). linear scales proportionally (results in dim output for HDR-to-SDR).

For 2026 production:

  • BT.2390 — broadcast and reference for HDR-to-SDR.
  • ACES output transforms — film/cinema-grade pipelines.
  • Hable filmic — many streaming services for HDR-to-SDR.
  • Reinhard — baseline / quick-and-dirty.
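As a reference point, the Reinhard and Hable curves above are small closed-form functions. A sketch using the constants from Hable's GDC talk, operating on relative linear light (illustrative, not production code):

```python
def hable_partial(x, A=0.15, B=0.50, C=0.10, D=0.20, E=0.02, F=0.30):
    """Hable's Uncharted 2 filmic segment (constants from the GDC talk)."""
    return ((x * (A * x + C * B) + D * E) / (x * (A * x + B) + D * F)) - E / F

def hable(x, white_point=11.2):
    """Normalize so the chosen white point maps exactly to 1.0."""
    return hable_partial(x) / hable_partial(white_point)

def reinhard(x):
    return x / (1.0 + x)

# Compare roll-off: the filmic curve keeps a toe in the shadows and
# reaches 1.0 at the white point instead of approaching it asymptotically.
for x in (0.18, 1.0, 4.0, 11.2):
    print(x, round(reinhard(x), 3), round(hable(x), 3))
```

The normalization step is why filmic output retains contrast: the curve actually uses the full output range rather than saturating below it.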

#Static vs dynamic tone mapping

Static tone mapping uses a single curve applied to the entire content. The curve is calibrated based on the content's mastering range (or the display's capability for HDR-to-HDR adaptation). Same curve applied to every frame.

Pros: simple, predictable, no flicker. Cons: can't adapt to scene-by-scene variations in actual content luminance.

Dynamic tone mapping varies the curve scene-by-scene (or even frame-by-frame) based on actual content characteristics. Implemented at the display side via dynamic metadata (HDR10+, Dolby Vision) or at the encoding/transcoding side via per-segment analysis.

Pros: optimal mapping per scene; better quality on content with significant scene-to-scene variation. Cons: more complex, requires metadata authoring or per-scene analysis, can introduce temporal artifacts if not done carefully.

For HDR-to-HDR display adaptation: dynamic tone mapping (via Dolby Vision or HDR10+ metadata) is the premium experience.

For HDR-to-SDR conversion: static tone mapping with a carefully chosen curve is the standard. Per-segment dynamic mapping is rare.
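The flicker tradeoff in dynamic tone mapping comes down to how the per-scene peak estimate is smoothed before it parameterizes the curve. A minimal sketch (plain Python; the alpha value is illustrative):

```python
def smoothed_scene_peak(frame_peaks, alpha=0.1, initial=1000.0):
    """Exponentially smooth measured per-frame peak luminance so the
    tone curve adapts to scene changes without frame-to-frame flicker.
    alpha trades responsiveness (high) against stability (low)."""
    peak = initial
    out = []
    for p in frame_peaks:
        peak = alpha * p + (1 - alpha) * peak
        out.append(peak)
    return out

# A bright scene cutting to a dark one: the adapted peak eases down
# over several frames instead of jumping (which would read as pumping).
peaks = [1000.0] * 3 + [200.0] * 5
print([round(p, 1) for p in smoothed_scene_peak(peaks)])
```

Real implementations additionally reset the smoother on detected scene cuts so a hard cut adapts immediately while intra-scene variation stays damped.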

#Display-side vs encoder-side

Tone mapping can happen in different places:

Display-side — the display does the tone mapping based on metadata. The HDR signal is delivered to the display; the display tone-maps to its capability.

Encoder-side / transcoding — the pipeline tone-maps before encoding. The output is SDR (or HDR adapted to a specific target); the display doesn't tone-map further.

Both — for premium streaming, the HDR master is delivered to HDR displays (with display-side tone mapping handling capability differences); a separately-produced SDR variant is delivered to SDR-only audiences (with encoder-side tone mapping baked in).

For pipelines:

  • Encoder-side HDR-to-SDR — when SDR variants need to be produced from HDR masters. Done once per content; SDR delivery is separate from HDR delivery.
  • Display-side HDR-to-HDR — when HDR is delivered universally and displays adapt to their own capability. The pipeline doesn't tone-map; the display does.
  • Both — for content delivered to mixed audiences (HDR + SDR), both happen. Encoder produces SDR variant via tone mapping; HDR variant is delivered as-is with display-side adaptation.

#FFmpeg tone mapping

FFmpeg implements tone mapping primarily via the tonemap filter and zscale filter. A typical HDR-to-SDR pipeline:

ffmpeg -i hdr_input.mp4 -vf \
  "zscale=t=linear:npl=100,format=gbrpf32le,zscale=p=bt709,tonemap=tonemap=hable:desat=0,zscale=t=bt709:m=bt709:r=tv,format=yuv420p" \
  -c:v libx264 -crf 22 sdr_output.mp4

Step by step:

  1. zscale=t=linear:npl=100 — convert to linear light, normalized to peak luminance of 100 nits (the SDR target).
  2. format=gbrpf32le — convert to floating-point GBR (necessary for linear-light operations).
  3. zscale=p=bt709 — convert color primaries from BT.2020 to BT.709.
  4. tonemap=tonemap=hable:desat=0 — apply Hable tone mapping; desat=0 disables desaturation (color preservation).
  5. zscale=t=bt709:m=bt709:r=tv — convert to the BT.709 transfer function and matrix coefficients, limited ("tv") range.
  6. format=yuv420p — convert to streaming-typical YUV 4:2:0 8-bit.

The pipeline is multi-step because tone mapping is fundamentally a multi-stage color science operation. Each step does one specific transformation; together they produce a properly tone-mapped SDR output.
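The first step's conversion to linear light is the PQ EOTF defined in SMPTE ST 2084. A minimal Python sketch of that transfer function (illustrative; zscale implements this internally):

```python
def pq_eotf(e, peak=10000.0):
    """SMPTE ST 2084 (PQ) EOTF: nonlinear signal e in [0, 1] -> nits.
    Constants are as defined in ST 2084."""
    m1 = 2610 / 16384          # 0.1593017578125
    m2 = 2523 / 4096 * 128     # 78.84375
    c1 = 3424 / 4096           # 0.8359375
    c2 = 2413 / 4096 * 32     # 18.8515625
    c3 = 2392 / 4096 * 32     # 18.6875
    p = e ** (1 / m2)
    y = (max(p - c1, 0.0) / (c2 - c3 * p)) ** (1 / m1)
    return y * peak

print(pq_eotf(0.0))   # 0 nits
print(pq_eotf(1.0))   # 10000 nits: the PQ code-value ceiling
```

This is why tone mapping operates in linear light: PQ code values are perceptually spaced, so arithmetic on them directly would distort luminance relationships.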

For different algorithms, change the tonemap filter parameter:

  • tonemap=tonemap=clip — naive clipping.
  • tonemap=tonemap=linear — linear scaling.
  • tonemap=tonemap=reinhard — Reinhard.
  • tonemap=tonemap=hable — Hable filmic (good default for streaming).
  • tonemap=tonemap=mobius — Mobius.

For ACES output transforms or BT.2390 specifically, FFmpeg's built-in tonemap filter doesn't directly support them; external tools (libplacebo via FFmpeg's libplacebo filter, or dedicated color management software) are used for these.

#libplacebo for advanced tone mapping

libplacebo (the reference video processing library from the mpv project) implements more sophisticated tone mapping than FFmpeg's built-in filter. FFmpeg can use libplacebo via the libplacebo filter:

ffmpeg -i hdr_input.mp4 -vf \
  "libplacebo=tonemapping=bt.2390:colorspace=bt709:color_primaries=bt709:color_trc=bt709:format=yuv420p" \
  -c:v libx264 -crf 22 output.mp4

libplacebo supports BT.2390, ACES-aligned tone mapping, and more advanced color science. For pipelines that need HDR-to-SDR conversion at premium quality, libplacebo is meaningfully better than FFmpeg's built-in filter.

#Tone mapping quality assessment

Quality assessment of tone mapping is partly subjective. Approaches:

Subjective panel — viewers compare tone-mapped output side-by-side with reference HDR display. Accurate; expensive; doesn't scale.

VMAF on tone-mapped output — compute VMAF score of tone-mapped SDR output against an SDR reference. Useful for relative comparison but doesn't capture HDR-to-SDR specific quality factors.

Histograms / waveforms — verify tone-mapped output preserves luminance distribution correctly. Useful for technical QC.

Color metrics — measure color accuracy via Delta E or similar metrics on representative pixels.
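A minimal sketch of the CIE76 form of Delta E, assuming the pixel values have already been converted to CIELAB (the conversion itself is omitted here):

```python
def delta_e76(lab1, lab2):
    """CIE76 color difference: Euclidean distance in CIELAB space.
    Roughly, differences around 1 are commonly treated as near the
    threshold of perception; larger values indicate visible error."""
    return sum((a - b) ** 2 for a, b in zip(lab1, lab2)) ** 0.5

# Reference vs tone-mapped sample, both as (L*, a*, b*)
print(delta_e76((50.0, 60.0, 30.0), (52.0, 58.0, 28.0)))
```

CIE76 is the simplest of the Delta E family; CIEDE2000 weights lightness, chroma, and hue differences more perceptually and is preferred for rigorous QC.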

For pipeline QC, automated metrics catch gross failures (clipping, complete blackout, color shifts). Subjective assessment by colorists catches subtler quality issues. Both are typically used in premium streaming workflows.
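An automated gross-failure check can be as simple as counting pixels pinned at the range extremes. A hedged sketch (the thresholds are illustrative, not drawn from any standard):

```python
def clipping_report(pixels, lo=16, hi=235, max_fraction=0.02):
    """Flag tone-mapped 8-bit limited-range output where too many pixels
    sit at the range extremes -- a cheap proxy for crushed shadows or
    blown highlights."""
    n = len(pixels)
    crushed = sum(1 for p in pixels if p <= lo) / n
    blown = sum(1 for p in pixels if p >= hi) / n
    return {
        "crushed_fraction": crushed,
        "blown_fraction": blown,
        "pass": crushed <= max_fraction and blown <= max_fraction,
    }

# A frame whose highlights clipped hard: 10% of pixels at peak white.
frame = [120] * 900 + [235] * 100
print(clipping_report(frame))
```

Checks like this run per-frame over content-representative test material; anything they flag goes to human review rather than failing the job outright.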

#When tone mapping is wrong

Common tone mapping bugs:

  • Crushed shadows — algorithm clips dark detail. Output looks like every dark scene is solid black.
  • Blown highlights — algorithm clips bright detail. Skies, lights, reflections look like solid white.
  • Color shift — algorithm tone-maps luminance without color awareness. Saturated colors shift hue.
  • Posterization — algorithm produces stepped tonal transitions visible as banding.
  • Halos — spatial algorithm produces visible bright outlines around dark objects (or vice versa).
  • Temporal flicker — per-frame tone mapping varies frame-to-frame; output has visible flicker.
  • SDR-on-HDR — content that is actually SDR but tagged as HDR creates the inverse problem (over-bright output on HDR displays).

For pipelines, tone mapping QC checks for these failure modes specifically, with content-representative test material.

#Operational considerations

Things that matter for tone mapping in production:

  • Choose the algorithm intentionally — Reinhard is fast but flat; Hable is the streaming default; BT.2390 is broadcast standard; ACES is film. Pick based on use case.
  • Calibrate to source mastering range — tone mapping needs the correct source peak luminance (1000 nits, 4000 nits, etc.). Wrong source assumptions produce wrong output.
  • Calibrate to target display — for HDR-to-HDR, target peak luminance affects the curve.
  • Color preservation parameters — desat=0 in FFmpeg's tonemap preserves saturation; default desat=2.0 reduces saturation in highlights. Match to source intent.
  • Verify output color space signaling — the tone-mapped output must be signaled correctly. Mis-signaled tone-mapped output causes downstream confusion.
  • Quality verification — run tone-mapped output through actual HDR-aware QC, not just metrics. Subtle issues require human review.

#What MpegFlow does with tone mapping

MpegFlow's DAG runtime expresses tone mapping as a discrete FfmpegExecutor stage via the tonemap filter. The partitioner persists the tone-map stage to job_stages with explicit dependency tracking; cross-stage data flow wires the upstream HDR rendition output into the tone-map stage's input; per-stage retry handles transient failures; sibling cancellation propagates fatal failures.

Default algorithm is Hable filmic via FFmpeg's tonemap filter; alternative algorithms (mobius, reinhard, clip) are selectable via filter parameters when the worker image's FFmpeg build exposes them.

For multi-format HDR delivery (HDR10 master + SDR variants), parallel sibling rendition stages produce the HDR and SDR outputs from the same upstream source; the tone-map stage feeds the SDR encode rendition while the HDR encode rendition runs without it. Dolby Vision and HDR10+ dynamic-metadata workflows that depend on metadata-aware tone mapping (DV display-mapping metadata, HDR10+ scene-by-scene tone mapping) are operator-side work today — those metadata flows aren't pipeline-native operations.

Workflow YAML configures tone-mapping algorithm, target peak luminance (for HDR-to-HDR), and color-preservation parameters per workflow.

The strict-broker security model handles tone-mapping work like any pipeline payload — workers carry no ambient credentials; content access flows through short-lived presigned URLs scoped per stage; access is disposed on completion.

For customers building HDR-to-SDR pipelines, the conversation typically focuses on algorithm choice (defaults work for most content; premium content benefits from tuning), color-preservation parameters (varies by content type — sports vs drama have different optimal settings), and validation procedures (how do we verify the SDR variant looks correct?). Tone mapping is one of the parts of HDR pipelines that's invisible when correct and visible (washed-out output, color shifts, banding) when not. Get it right; it's the difference between professional HDR delivery and amateur HDR delivery.

Tags
  • tone-mapping
  • hdr
  • sdr
  • color
  • bt2390
  • aces
  • hdr10
See also

Related topics and reading

  • HDR to SDR conversion — the full pipeline from PQ to BT.709
  • HDR10 — the open HDR baseline every premium video pipeline ships
  • Bit depth — 8-bit vs 10-bit vs 12-bit and what it means for video