
Bit depth — 8-bit vs 10-bit vs 12-bit and what it means for video

Practical reference on video bit depth — banding in 8-bit, why 10-bit is HDR-mandatory, encoder and hardware decoder support, bandwidth implications, and bit-depth selection per use case.

By MpegFlow Engineering Team · Color · May 8, 2026 · 8 min read · 1,673 words
In this topic
  1. What bit depth is
  2. Why bit depth matters — banding
  3. Bit depth and HDR
  4. Bit depth and codec profiles
  5. Hardware decode support
  6. Bandwidth implications
  7. When to choose each bit depth
  8. Encoding 10-bit
  9. Bit depth in the source
  10. What MpegFlow does with bit depth

Bit depth — how many bits represent each color channel per pixel — determines how subtly your video can encode color and brightness gradients. 8-bit is the SDR baseline. 10-bit is the HDR baseline (and increasingly the SDR premium baseline too). 12-bit is the cinema and Dolby Vision premium standard. The choice affects encoder support, hardware decode availability, bandwidth cost, and the visibility of banding artifacts in your output. This page is the engineering reference.

#What bit depth is

Bit depth is the number of bits used to encode each color channel per pixel. For 4:2:0 chroma sub-sampled video (the typical streaming format), each pixel has:

  • 1 luma sample (the brightness).
  • 0.5 chroma samples on average (one Cb and one Cr sample are shared across each 2×2 pixel group in 4:2:0, i.e. a quarter of each per pixel).

The bit depth applies to each sample. Common values:

  • 8-bit — 256 possible values per channel. Total: 24 bits per RGB pixel; ~12 bits per YUV-420 pixel.
  • 10-bit — 1024 possible values per channel. Total: 30 bits per RGB pixel; ~15 bits per YUV-420 pixel.
  • 12-bit — 4096 possible values per channel. Total: 36 bits per RGB pixel; ~18 bits per YUV-420 pixel.
  • 16-bit — 65,536 possible values per channel. Used in mastering and intermediate workflows (e.g. 16-bit TIFF or EXR frames); intermediate codecs like ProRes and DNxHR top out at 12-bit. Rarely used for consumer delivery.

The jump from 8 to 10 bits (4× the values per channel) matters more perceptually than the arithmetic suggests. 8-bit's 256 values can show visible banding in subtle gradients (skies, smoke, shadows); 10-bit's 1024 values eliminate banding in nearly all natural content.
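To make the arithmetic concrete, here is a quick shell sanity check of raw frame sizes at 1080p (illustrative numbers; assumes 4:2:0 and tightly packed samples):

# 1 luma + 0.5 chroma samples per pixel = 1.5 samples per pixel
echo $(( 1920 * 1080 * 3 / 2 ))           # 3110400 samples per frame
echo $(( 1920 * 1080 * 3 / 2 * 8 / 8 ))   # 8-bit:  3110400 bytes (~3.1 MB/frame)
echo $(( 1920 * 1080 * 3 / 2 * 10 / 8 ))  # 10-bit: 3888000 bytes (~3.9 MB/frame)

(Packed sizes, for intuition only. In-memory formats like ffmpeg's yuv420p10le store each sample in 16 bits, so actual buffers are larger.)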

#Why bit depth matters — banding

The characteristic artifact of 8-bit content is banding — visible step changes in regions of subtle color gradient. Look at a clear blue sky in 8-bit content; you'll often see horizontal bands where the color jumps from one quantization step to the next. The bands are perceptually obvious because human vision is highly sensitive to edges in smooth spatial gradients.

10-bit eliminates banding in nearly all natural content: four times as many quantization values produce transitions smooth enough that the steps fall below the visibility threshold. 12-bit eliminates banding in essentially all content, including cinema-grade material with extreme dynamic range.

Banding is content-dependent:

  • Worst on flat gradients — clear skies, fog, smoke, fade-to-black transitions.
  • Worst at low contrast — shadow detail and highlight roll-off.
  • Worst on large displays — each band covers more of the viewer's field of view, making individual steps easier to see.
  • Hidden by texture and noise — content with film grain, high detail, or motion is less banding-sensitive.

For talking-head content, 8-bit is usually fine. For premium content with extensive flat regions or HDR dynamic range, 10-bit is required to avoid visible banding artifacts.
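You can see the effect directly by synthesizing a flat luma ramp and encoding it at both depths. A minimal sketch using ffmpeg's geq filter to build the gradient (output names are placeholders):

ffmpeg -f lavfi -i color=c=black:s=1920x1080:d=5 \
  -vf "format=yuv420p,geq=lum='16+(X/W)*219':cb=128:cr=128" \
  -c:v libx265 -crf 24 ramp_8bit.mp4

ffmpeg -f lavfi -i color=c=black:s=1920x1080:d=5 \
  -vf "format=yuv420p10le,geq=lum='64+(X/W)*876':cb=512:cr=512" \
  -c:v libx265 -profile:v main10 -crf 24 ramp_10bit.mp4

The 8-bit ramp has only ~220 limited-range luma levels across 1920 columns, so each level spans roughly 9 columns and the steps are visible; the 10-bit ramp has ~877 levels, about 2 columns each, below the visibility threshold.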

#Bit depth and HDR

HDR makes 10-bit functionally mandatory rather than optional. The reason: HDR's extended dynamic range stretches the same number of code values across a wider luminance range:

  • 8-bit SDR — 256 values across 100 nits = ~0.4 nits per value. Banding visible in subtle areas.
  • 10-bit SDR — 1024 values across 100 nits = ~0.1 nits per value. No visible banding.
  • 8-bit HDR — 256 values across 1000 nits = ~4 nits per value. Banding very visible everywhere.
  • 10-bit HDR — 1024 values across 1000 nits = ~1 nit per value. Banding still occasionally visible in extreme cases.
  • 12-bit HDR — 4096 values across 10,000 nits = ~2.4 nits per value. Banding rarely visible.

(The per-value figures assume linear spacing and are only a rough intuition: the PQ transfer function actually distributes code values perceptually, concentrating precision in shadows and mid-tones. That is why 12-bit across 10,000 nits still bands less than 10-bit across 1,000 nits, despite the larger linear average.)

This is why HDR10 mandates 10-bit, Dolby Vision mandates 10-bit (with optional 12-bit), and HLG production typically uses 10-bit. 8-bit HDR exists in the spec but produces visibly worse output than 10-bit HDR.
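For a concrete encode, an HDR10 stream pairs Main 10 with PQ/BT.2020 signaling. A sketch assuming a 10-bit PQ-graded master (hdr_master.mov is a placeholder; static metadata such as master-display is omitted here):

ffmpeg -i hdr_master.mov -c:v libx265 -profile:v main10 -pix_fmt yuv420p10le \
  -x265-params "colorprim=bt2020:transfer=smpte2084:colormatrix=bt2020nc" \
  -crf 20 -an hdr10_out.mp4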

#Bit depth and codec profiles

Codec support for bit depths varies by profile:

H.264 (AVC):

  • High profile — 8-bit only.
  • High 10 profile — 10-bit support.
  • Production: Most H.264 content is 8-bit High profile.

HEVC:

  • Main profile — 8-bit only.
  • Main 10 profile — 10-bit support.
  • Main 12 profile — 12-bit support.
  • Production: HDR10 uses Main 10. Some premium SDR also uses Main 10 for banding reduction.

AV1:

  • Main profile — 8-bit and 10-bit.
  • High profile — adds 4:4:4 chroma (still 8/10-bit).
  • Professional profile — 12-bit.
  • Production: HDR uses 10-bit. SDR can be 8 or 10 depending on quality target.

VP9:

  • Profile 0 — 8-bit.
  • Profile 2 — 10/12-bit (with restrictions).
  • Production: Mostly 8-bit Profile 0; HDR VP9 is rare.

VVC:

  • Supports 8/10/12-bit profiles.
  • Production: Mostly 10-bit for any content using VVC.

For a 2026 production pipeline, the codec/profile/bit-depth combinations:

  • SDR universal: H.264 High (8-bit), HEVC Main (8-bit) — broad compatibility.
  • SDR premium: HEVC Main 10 (10-bit) — banding reduction.
  • HDR universal: HEVC Main 10 (10-bit) — HDR10 baseline.
  • HDR premium: AV1 (10-bit), HEVC Main 12 (12-bit for Dolby Vision profiles).
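Whichever combination you target, verify what the encoder actually produced; ffprobe reports the codec, profile, and pixel format together:

ffprobe -v error -select_streams v:0 \
  -show_entries stream=codec_name,profile,pix_fmt -of default=nw=1 output.mp4
# expected for a Main 10 encode:
# codec_name=hevc
# profile=Main 10
# pix_fmt=yuv420p10le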

#Hardware decode support

Bit depth affects hardware decode availability:

8-bit decode — universal. Every device that decodes a codec supports 8-bit. No issues.

10-bit decode — broad but not universal:

  • HEVC Main 10: every device that decodes HEVC supports 10-bit (essentially all consumer devices since 2017).
  • H.264 High 10: NOT supported by most consumer hardware decoders. Software-only on most platforms.
  • AV1 10-bit: supported by every device that decodes AV1.
  • VP9 Profile 2: limited hardware support; software fallback is common.

12-bit decode — limited:

  • HEVC Main 12: supported by some premium TVs and set-top boxes (especially those marketed for Dolby Vision Profile 7). Not universal.
  • AV1 Professional 12-bit: limited hardware support.

The practical implications:

  • HEVC 10-bit is safe for HDR; 99%+ of HDR-capable devices decode it in hardware.
  • H.264 10-bit breaks playback on most consumer devices. Avoid it for streaming.
  • 12-bit is only for premium Dolby Vision delivery to specifically certified devices.
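On the encode side you can at least enumerate what the local ffmpeg build exposes; actual client-device decode support still has to be tested on the devices themselves:

ffmpeg -hide_banner -hwaccels
ffmpeg -hide_banner -decoders | grep -i hevc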

#Bandwidth implications

10-bit content typically needs 10-25% more bitrate than 8-bit content for equivalent perceptual quality. The reasons:

  • More bits per sample = larger raw stream size.
  • Encoder rate-distortion optimization changes; some codec features designed for 8-bit aren't perfectly tuned for 10-bit.
  • The benefit (no banding) doesn't reduce the bit cost.

The bandwidth cost is content-dependent. For very flat content (animation, talking heads), 10-bit overhead may be 5-10%. For high-detail or high-motion content, the overhead can be 20-25%.

For premium streaming (HDR or premium SDR), the bandwidth cost of 10-bit is justified by the quality improvement. For mass-market streaming where bandwidth dominates economics, 8-bit may still win.
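To measure the overhead on your own content, encode the same asset twice at the same CRF and compare sizes. A rough sketch (identical CRF across bit depths is only approximately quality-matched, so treat the delta as an estimate):

ffmpeg -i sample.mp4 -c:v libx265 -pix_fmt yuv420p -crf 24 -an out_8bit.mp4
ffmpeg -i sample.mp4 -c:v libx265 -pix_fmt yuv420p10le -crf 24 -an out_10bit.mp4
ls -l out_8bit.mp4 out_10bit.mp4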

#When to choose each bit depth

The decision per use case:

8-bit — recommended for:

  • Mass-market streaming where bandwidth dominates.
  • Floor tier of multi-tier ladders (downlevel devices, bandwidth-constrained).
  • Internal video where banding is acceptable.
  • Anything constrained to H.264 (8-bit High profile).

10-bit — recommended for:

  • HDR content (mandatory).
  • Premium SDR streaming where banding visibility matters.
  • Top tiers of multi-tier ladders (where the audience has 10-bit capable hardware).
  • Source mastering and intermediate processing.

12-bit — recommended for:

  • Dolby Vision Profile 7 (Blu-ray premium).
  • Some cinema and high-end mastering workflows.
  • Generally not for consumer streaming delivery.

A typical premium streaming pipeline ships 10-bit HDR top tiers + 8-bit SDR fallback tiers. The 10-bit overhead is paid where it matters; the 8-bit floor reaches the broadest audience.
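In ffmpeg terms that pairing is two renditions of the same mezzanine. A simplified sketch (file names and rate targets are placeholders; an HDR master would also need tone mapping before the SDR floor — see the tone-mapping topic below):

ffmpeg -i mezzanine.mov -c:v libx265 -profile:v main10 -pix_fmt yuv420p10le \
  -vf scale=3840:2160 -crf 22 -an tier_top_10bit.mp4
ffmpeg -i mezzanine.mov -c:v libx264 -profile:v high -pix_fmt yuv420p \
  -vf scale=1280:720 -crf 23 -an tier_floor_8bit.mp4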

#Encoding 10-bit

ffmpeg with x265 producing 10-bit HEVC:

ffmpeg -i input -c:v libx265 -profile:v main10 -pix_fmt yuv420p10le \
  -preset medium -crf 24 -c:a copy output.mp4

The -pix_fmt yuv420p10le flag is the key — it tells ffmpeg to convert to a 10-bit pixel format. Without it, ffmpeg keeps the source's pixel format, so an 8-bit source stays 8-bit even with -profile:v main10 set. (Note the .mp4 output: the raw .hevc bitstream muxer can't carry the copied audio track.)

For SVT-AV1:

ffmpeg -i input -c:v libsvtav1 -pix_fmt yuv420p10le \
  -preset 6 -crf 28 -c:a copy output.mp4

For x264 with 10-bit (uncommon for streaming):

ffmpeg -i input -c:v libx264 -profile:v high10 -pix_fmt yuv420p10le \
  -preset medium -crf 22 -c:a copy output.mp4

(Reminder: x264 10-bit is software-only on most consumer hardware. Use only for non-streaming workflows or where you control the player.)

#Bit depth in the source

The bit depth of the source matters. Encoding an 8-bit source as 10-bit doesn't restore precision that was never captured — you can't manufacture quantization values that weren't there (though a 10-bit encode can avoid introducing new banding of its own; see the related topic below). Encoding a 10-bit source as 8-bit throws away precision and may introduce banding the source didn't have.

Verify your source bit depth (ffprobe -v error -select_streams v:0 -show_entries stream=pix_fmt input.mp4). Match the encoding bit depth to the source where possible. For pipelines processing 8-bit source, 8-bit output is the natural choice; for 10-bit source, 10-bit output preserves precision.
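The pix_fmt name encodes the bit depth in its suffix:

ffprobe -v error -select_streams v:0 \
  -show_entries stream=pix_fmt -of default=nw=1:nk=1 input.mp4
# yuv420p     -> 8-bit 4:2:0
# yuv420p10le -> 10-bit 4:2:0
# yuv422p10le -> 10-bit 4:2:2 (typical of ProRes-derived mezzanines)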

#What MpegFlow does with bit depth

MpegFlow's DAG runtime exposes bit depth per rendition through workflow YAML; each value flows into the corresponding FfmpegExecutor stage. The partitioner persists each rendition stage to job_stages with explicit dependency tracking and per-stage retry; sibling cancellation propagates fatal failures across rendition stages.

Default bit depths per format:

  • Rec.709 SDR streaming: 8-bit (universal compatibility).
  • Premium SDR streaming: 10-bit HEVC Main 10 (banding reduction for top tiers).
  • HDR streaming (HDR10, HLG): 10-bit (mandatory).
  • Dolby Vision Profile 7: 12-bit (premium / specific profile).

For multi-tier ladders, the partitioner emits parallel rendition stages with per-rung bit depth from one shared workflow value; a typical premium ladder might be 10-bit AV1 + HEVC top tiers + 8-bit HEVC + H.264 floor tiers, all running as parallel sibling stages on the appropriate worker pool.

The CMAF muxer signals bit depth correctly in the segment metadata; downstream packaging consumes the rendition outputs via cross-stage data flow.

The strict-broker security model handles bit-depth work like any pipeline payload — workers carry no ambient credentials; content access flows through short-lived presigned URLs scoped per stage; access is disposed on completion.

For customers asking whether 10-bit is worth it for premium SDR (the most common bit-depth question after HDR), the answer depends on content. We typically run a side-by-side test on representative content — encode at 8-bit and 10-bit, view in identical viewing conditions, measure bandwidth. For content with significant flat-gradient regions (anything with skies, smoke, drama lighting), 10-bit's banding reduction is visible and worth the bandwidth cost. For content that's busy and high-detail, 8-bit may be sufficient. The decision is content-driven; defaults are starting points that need calibration.

Tags
  • bit-depth
  • color
  • 10-bit
  • 12-bit
  • banding
  • hdr
  • encoding
See also

  • 10-bit HEVC from 8-bit source — when it helps and when it's pointless
  • Tone mapping — converting HDR to SDR and adapting HDR for different displays
  • Chroma sub-sampling — 4:4:4 vs 4:2:2 vs 4:2:0 and what it means for video