
HLS — the protocol that won the streaming wars and what to know about it

Practical reference on HTTP Live Streaming — m3u8 manifests, segment formats, ABR adaptation, low-latency HLS, FairPlay DRM integration, and when HLS is the right protocol.

By MpegFlow Engineering Team · Protocols · May 7, 2026 · 9 min read · 1,843 words
In this topic
  1. What HLS is
  2. TS vs fMP4 (CMAF)
  3. ABR — adaptive bitrate
  4. Live HLS
  5. Low-Latency HLS (LL-HLS)
  6. DRM integration
  7. HLS vs DASH
  8. A note on HLS players
  9. HLS edge cases worth knowing
  10. What MpegFlow does with HLS

HLS — HTTP Live Streaming — is Apple's protocol for streaming video over standard HTTP. Apple introduced it in 2009, the IETF published it as an informational RFC in 2017 (RFC 8216), and it has been the dominant streaming protocol on the internet since the iPhone became the dominant streaming client. Every consumer streaming service supports HLS as a delivery option, and every modern player implements an HLS client. If you have to ship video to consumers and can only support one protocol, HLS is the answer.

This page is the engineering reference: what HLS is, how it works, where it fits next to DASH, and what running HLS in production actually requires.

#What HLS is

HLS is a manifest-and-segment protocol. The video is split into short segments (typically 2-10 seconds each), each delivered as a regular HTTP file. A text manifest — the m3u8 file — describes the segments, their order, and metadata. Players fetch the manifest, then fetch segments, then play.

The key files:

  • Master playlist (.m3u8) — top-level manifest listing alternate variant streams (different resolutions/bitrates) and alternate audio/subtitle tracks. The player picks variants based on bandwidth and capability.
  • Media playlist (.m3u8) — per-variant manifest listing the segment URLs in order. Updated periodically for live; static for VOD.
  • Segments (.ts or .m4s/.mp4) — the actual media files. Originally MPEG-2 TS; modernized to fragmented MP4 (CMAF) since 2016.

A typical m3u8 manifest looks like:

#EXTM3U
#EXT-X-VERSION:6
#EXT-X-TARGETDURATION:6
#EXT-X-MEDIA-SEQUENCE:0
#EXTINF:6.0,
seg-0001.ts
#EXTINF:6.0,
seg-0002.ts
#EXTINF:6.0,
seg-0003.ts
...
#EXT-X-ENDLIST

For live, omit the #EXT-X-ENDLIST marker; the player keeps polling the manifest for new segments. For VOD, the marker tells the player the playlist is complete.
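A client's first pass over a media playlist can be sketched in a few lines. This is an illustrative parser of ours, not a spec-complete one — real playlists carry many more tags:

```python
def parse_media_playlist(text: str) -> dict:
    """Extract segment URIs/durations and liveness from a media playlist.

    Minimal sketch: ignores most tags; a production client needs a
    full RFC 8216 parser.
    """
    segments = []
    pending_duration = None
    is_vod = False
    for line in text.splitlines():
        line = line.strip()
        if line.startswith("#EXTINF:"):
            # "#EXTINF:6.0,optional title" -> 6.0
            pending_duration = float(line[len("#EXTINF:"):].split(",")[0])
        elif line == "#EXT-X-ENDLIST":
            # Playlist is complete: VOD semantics, stop polling.
            is_vod = True
        elif line and not line.startswith("#"):
            # Non-tag, non-blank line is a segment URI.
            segments.append((line, pending_duration))
            pending_duration = None
    return {"segments": segments, "vod": is_vod}
```

Feeding it the playlist above yields the ordered segment list with `vod` set by the presence of `#EXT-X-ENDLIST` — exactly the live/VOD distinction described here.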

#TS vs fMP4 (CMAF)

Original HLS used MPEG-2 Transport Stream segments — .ts files with the codec data muxed into the TS container format. Apple updated HLS in 2016 to support fragmented MP4 (fMP4) segments — the same container format DASH uses. The CMAF specification (ISO/IEC 23000-19) standardizes fMP4 in a way that both HLS and DASH players can consume.

When to use each:

  • Legacy HLS (TS segments) — required for compatibility with iOS 9 and earlier, plus older smart TVs and embedded set-top boxes from 2015 and earlier.
  • Modern HLS (fMP4 / CMAF) — preferred for everything else. Same segments work for HLS and DASH, which means one set of segments serves both protocols' players. Major operational win.

For a 2026 pipeline, the right answer is CMAF segments delivered via both HLS and DASH manifests, except for the long tail of legacy clients that need TS HLS.

#ABR — adaptive bitrate

HLS supports adaptive bitrate streaming via the master playlist. Each variant stream is a separate ladder rung at a different resolution/bitrate. The master playlist declares the variants:

#EXTM3U
#EXT-X-STREAM-INF:BANDWIDTH=2000000,RESOLUTION=1280x720,CODECS="avc1.4d401f,mp4a.40.2"
720p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=4000000,RESOLUTION=1920x1080,CODECS="avc1.4d4028,mp4a.40.2"
1080p.m3u8
#EXT-X-STREAM-INF:BANDWIDTH=8000000,RESOLUTION=1920x1080,CODECS="hvc1.2.4.L120.B0,mp4a.40.2"
1080p-hevc.m3u8
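Each `#EXT-X-STREAM-INF` line is an attribute list. A sketch of parsing one (ours, simplified — a real parser follows the RFC 8216 attribute-list grammar, which also defines hex and decimal-float value types):

```python
import re

def parse_stream_inf(line: str) -> dict:
    """Parse the attribute list of an EXT-X-STREAM-INF tag.

    Handles quoted and unquoted values, enough for
    BANDWIDTH / RESOLUTION / CODECS; illustrative only.
    """
    attrs = {}
    body = line[len("#EXT-X-STREAM-INF:"):]
    # Quoted values may contain commas (e.g. the CODECS list),
    # so try the quoted alternative first.
    for m in re.finditer(r'([A-Z0-9-]+)=("[^"]*"|[^,]*)', body):
        key, value = m.group(1), m.group(2).strip('"')
        attrs[key] = int(value) if value.isdigit() else value
    return attrs
```

Note the quoting rule this illustrates: `CODECS` contains a comma inside its quotes, so naive comma-splitting of the attribute list breaks.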

Player heuristics decide when to switch up or down: download speed, buffer level, decoded frame timing. Different player implementations make different choices — Apple's AVPlayer is conservative (prefers stable mid-tier delivery), hls.js is more aggressive (will reach for the top tier sooner). The encoder doesn't make these decisions; the player does.

The encoder's job is to produce the ladder. Typical 2026 ladders:

| Tier  | Resolution | Bitrate    | Codec        |
|-------|------------|------------|--------------|
| Top   | 4K (2160p) | 12-20 Mbps | AV1          |
| High  | 1440p      | 6-10 Mbps  | AV1          |
| High  | 1080p      | 4-6 Mbps   | AV1 + HEVC   |
| Mid   | 720p       | 2-4 Mbps   | HEVC + H.264 |
| Low   | 540p       | 1-2 Mbps   | H.264        |
| Floor | 360p       | 0.5-1 Mbps | H.264        |

The lower tiers exist for bandwidth-constrained playback, not just smaller screens. A viewer whose device decodes 1080p fine still needs the 720p rung when a flaky mobile network can't sustain the 1080p bitrate.
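The player-side selection this section describes can be sketched as the simplest throughput rule. This is a simplification of ours — real players (hls.js, AVPlayer) blend buffer level and download timing into the decision:

```python
def pick_variant(variants, measured_bps, safety=0.8):
    """Pick the highest-bandwidth variant that fits measured throughput.

    `safety` leaves headroom so one slow segment fetch doesn't stall
    playback. Sketch of a throughput-only ABR rule.
    """
    affordable = [v for v in variants if v["bandwidth"] <= measured_bps * safety]
    if not affordable:
        # Nothing fits: fall back to the floor rung rather than stall.
        return min(variants, key=lambda v: v["bandwidth"])
    return max(affordable, key=lambda v: v["bandwidth"])
```

With the example master playlist above, a 6 Mbps connection lands on the 4 Mbps 1080p rung (6 Mbps × 0.8 headroom rules out the 8 Mbps tier), and a 1 Mbps connection falls to the floor.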

#Live HLS

For live streaming, the m3u8 manifest is dynamic. The encoder produces segments continuously; the manifest is updated to reflect the available segments; the player polls the manifest periodically.

Key live-HLS considerations:

  • Target duration (#EXT-X-TARGETDURATION) — the maximum segment duration in seconds. Apple's recommendation: 6 seconds. Shorter segments enable tighter live-edge tracking but increase request overhead.
  • Live edge latency — how far behind real-time the playhead sits. Standard HLS lives at 15-45 seconds of latency due to segment-based delivery.
  • Discontinuity tags (#EXT-X-DISCONTINUITY) — used when timestamps reset (ad insertion, scene changes). Player resyncs after.
  • Manifest sliding window — for live, only recent segments are in the manifest. Older segments fall off as new ones arrive.
  • EVENT vs sliding-window playlists — #EXT-X-PLAYLIST-TYPE:EVENT keeps all segments in the manifest (DVR-style). There is no LIVE value for the tag; a sliding-window live playlist simply omits it. Most live streams use this implicit sliding-window model.
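The sliding-window mechanics in the list above can be sketched as a playlist renderer. An illustrative helper of ours, not a packager:

```python
def render_live_playlist(all_segments, window=5, target_duration=6):
    """Render a sliding-window live media playlist.

    Keeps only the last `window` segments and advances
    EXT-X-MEDIA-SEQUENCE by the number of segments that fell off.
    No EXT-X-ENDLIST, so players keep polling for updates.
    """
    window_segs = all_segments[-window:]
    # MEDIA-SEQUENCE is the sequence number of the first segment listed.
    media_sequence = len(all_segments) - len(window_segs)
    lines = [
        "#EXTM3U",
        "#EXT-X-VERSION:6",
        f"#EXT-X-TARGETDURATION:{target_duration}",
        f"#EXT-X-MEDIA-SEQUENCE:{media_sequence}",
    ]
    for uri, duration in window_segs:
        lines.append(f"#EXTINF:{duration:.1f},")
        lines.append(uri)
    return "\n".join(lines) + "\n"
```

The MEDIA-SEQUENCE bookkeeping is the part worth noticing: it's how a player re-polling the manifest knows which segments are new and which it has already played.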

#Low-Latency HLS (LL-HLS)

Apple introduced LL-HLS in 2019 to bring HLS latency from 15-45s down to 2-3s. The mechanism is partial segments and blocking playlist reload:

  • Partial segments — segments are split into smaller parts (typically 200-500ms each). Players can fetch parts as they're produced, before the full segment is complete.
  • Blocking playlist reload — players request the manifest with a hint about which sequence they're waiting for; the server holds the response open until the next part is available. Eliminates the polling-cycle latency.
  • Preload hints — manifest indicates which part will be next; player can preconnect or preload.
  • HTTP/2 server push — the original 2019 draft had the server push parts to players to eliminate per-part request latency. Apple dropped the push requirement in 2020 in favor of preload hints plus blocking requests, and current deployments don't rely on it.

LL-HLS is a real protocol now; Apple's implementation works on iOS 14+, Safari, and tvOS. The hls.js implementation supports LL-HLS in current versions. CDN support has caught up — Akamai, Cloudflare, Fastly, AWS CloudFront all support LL-HLS delivery.

For sub-3-second latency on consumer-grade infrastructure, LL-HLS is the practical answer. For sub-1-second, you're in WebRTC territory, with the trade-offs that brings.
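Blocking playlist reload is driven by the `_HLS_msn` and `_HLS_part` query parameters: the client names the media sequence number (and part) it is waiting for, and the server holds the response until that playlist state exists. A sketch of the server-side decision (ours, simplified — the real rule also covers delivery directives like skip boundaries):

```python
def should_block(req_msn, req_part, cur_msn, cur_part):
    """Decide whether a blocking playlist reload request must be held.

    (req_msn, req_part) come from _HLS_msn / _HLS_part query params;
    (cur_msn, cur_part) describe the newest published playlist state.
    Returns True if the server should hold the response open.
    """
    if req_msn < cur_msn:
        return False  # requested segment already fully published
    if req_msn == cur_msn:
        # Same segment: block only if the requested part isn't out yet.
        return req_part is not None and req_part > cur_part
    return True  # future segment: hold until it appears
```

Holding the request open is what removes the polling-cycle latency: the player gets the updated playlist the instant the next part lands, instead of on its next poll.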

#DRM integration

HLS supports two content-protection schemes out of the box:

  • FairPlay (Apple) — required for premium-content playback on Apple devices. Signaled via SAMPLE-AES #EXT-X-KEY tags; keys are delivered through Apple's FairPlay Streaming key-server protocol.
  • AES-128 — whole-segment encryption with the key fetched over HTTPS. No DRM-grade key protection, so it's useful for entitlement gating, not for premium content.

For multi-DRM support across HLS and DASH, the right approach is CMAF segments with Common Encryption (CENC, ISO/IEC 23001-7) in its 'cbcs' mode — AES-CBC sample encryption, the mode FairPlay requires and that Widevine and PlayReady also support. The same segments are decrypted by FairPlay (Apple) and Widevine + PlayReady (everything else). One encryption pass, multiple downstream DRM systems. This is why CMAF is operationally important.
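The two schemes show up in the playlist as differently shaped #EXT-X-KEY tags. A sketch of building them (the helper and its attribute choices are ours; real deployments also handle IVs, key rotation, and per-DRM signaling):

```python
def key_tag(method, uri, keyformat=None):
    """Build an EXT-X-KEY tag for the two shapes described above:
    plain AES-128 with an HTTPS key URI, and FairPlay via SAMPLE-AES
    with Apple's keyformat identifier. Illustrative only.
    """
    attrs = [f"METHOD={method}", f'URI="{uri}"']
    if keyformat:
        # FairPlay requires the keyformat so players know which
        # key-delivery system the URI belongs to.
        attrs.append(f'KEYFORMAT="{keyformat}"')
        attrs.append('KEYFORMATVERSIONS="1"')
    return "#EXT-X-KEY:" + ",".join(attrs)
```

For FairPlay the URI is an `skd://` key identifier resolved by the app's key-server logic; for AES-128 it's a plain HTTPS URL the player fetches directly — which is exactly why AES-128 offers no DRM-grade key protection.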

#HLS vs DASH

The streaming protocol war was DASH vs HLS for ~10 years. The honest 2026 answer:

  • HLS wins on the Apple ecosystem — Safari, iOS, and tvOS support HLS natively. DASH playback in Safari requires an MSE-based JavaScript player (dash.js or Shaka-equivalent), which is awkward — especially for live, and especially on iPhone, where MSE support arrived late.
  • DASH wins on standardization — MPEG-DASH is an ISO standard with a more rigorous spec. HLS is informational RFC + Apple-driven extensions.
  • DASH wins on flexibility — DASH manifests support more sophisticated content models (multiple periods, ad insertion, switchable language tracks, etc.).
  • HLS wins on simplicity — m3u8 is simpler than MPD. For most streaming use cases, the simplicity is an advantage.

The practical 2026 answer for most streaming services: deliver both. CMAF segments shared between HLS and DASH manifests. Player support determines which one the client uses. Operationally one set of segments, two manifests.

#A note on HLS players

HLS clients matter because the protocol is only as good as its players. The major implementations:

  • AVPlayer (Apple) — native iOS/macOS/tvOS player. Handles HLS automatically when you point it at an m3u8 URL. Conservative ABR heuristics. Battery-efficient. The reason HLS-on-Apple "just works" — you don't pick a player, the OS does it.
  • hls.js — open-source HLS player for browsers without native HLS (Chrome, Firefox, Edge on desktop). Most-deployed HLS player on the web. Aggressive ABR heuristics. Active development; LL-HLS support stable since 2022.
  • Shaka Player (Google) — DASH-first player that also supports HLS. Common choice for sites that need both protocols from one player.
  • Video.js — wraps either hls.js or Shaka. Common choice for sites that want a player UI library plus HLS playback.
  • ExoPlayer (Android) — Google's Android media player. Handles HLS and DASH. The de facto Android streaming player.

Player choice affects user experience more than most teams expect. The same HLS stream behaves differently across these players in ABR aggressiveness, error recovery, and live-edge tracking. Test your stream against the players your audience actually uses, not just one reference player.

#HLS edge cases worth knowing

A few things HLS doesn't handle well or where the spec is ambiguous:

  • Audio-only streams — supported but operationally awkward. Some players assume video presence and break on audio-only m3u8.
  • Trick play (fast-forward, scrub) — HLS supports trick play via I-frame-only playlists (#EXT-X-I-FRAME-STREAM-INF). Most consumer players don't generate them; players that do (Apple AVPlayer for some content) get smoother scrubbing. Producing them is straightforward at packaging time and worth doing for premium VOD.
  • Timed metadata (subtitles, captions, custom data) — handled via separate variant streams or in-band ID3/EMSG markers. Discovery is sometimes broken in older players.
  • Manifest-update frequency under load — for live HLS, if your CDN caches the manifest too long, players see stale segments and stutter. Manifest cache TTLs need to be lower than segment duration.

#What MpegFlow does with HLS

MpegFlow's DAG runtime expresses an HLS ladder as a multi-stage workflow. The partitioner splits the workflow into parallel rendition stages (one FfmpegExecutor stage per ladder rung) and a downstream packaging stage that emits the master playlist, the per-rendition variant playlists, and the segment files. Each stage is persisted to job_stages with explicit dependency tracking; per-stage retry handles transient executor failures; sibling cancellation propagates so a fatal failure on one rendition doesn't waste compute on dependents. Rendition-level partial-success reporting means the customer sees granular per-stage state when one rung fails and others succeed.

Today's HLS packaging path is FfmpegExecutor-driven (the HLS muxer in FFmpeg). Shaka Packager integration is on the Phase 2D roadmap and is not currently a runtime executor — customers needing Shaka-specific packaging features (advanced multi-DRM signaling, certain low-latency variants, specific manifest dialects) handle them in their own packaging tooling alongside MpegFlow today. The same applies to CENC/multi-DRM packaging: it's roadmap, not shipped.

For LL-HLS specifically, partial-segment generation is constrained by what the FFmpeg HLS muxer exposes; production-grade LL-HLS for tight latency targets typically calls for dedicated packagers, which is part of why Phase 2D matters.

The strict-broker security model handles HLS packaging like any pipeline payload — workers carry no ambient credentials; content access flows through short-lived presigned URLs scoped per stage; access is disposed on completion. There is no IAM role on the worker, no long-lived secret, no shared credential.

The packaging side of HLS is where most pipeline bugs live in our experience. Manifest validity, segment durations matching declared values, timestamp continuity across segments, codec string correctness in the master playlist — these are the things that fail silently and break playback in subtle ways. We exercise the standard HLS conformance tests (Apple Media Stream Validator, hls.js test corpus) against pipeline output during regression validation.

Tags
  • hls
  • streaming
  • protocols
  • m3u8
  • apple
  • cmaf
  • low-latency
See also

Related topics and reading

  • CMAF — the segment format that ended the HLS-vs-DASH duplicate-encoding problem
  • DASH — the standardized streaming protocol that won the spec war and lost the install base
  • Architecture: Live ingest + low-latency packaging — production architecture for live broadcast video
Building on this?

Join the MpegFlow beta.

We're shipping the encoder MVP this quarter. If you're wrangling protocols in production, the beta is built for you — no card, no console waiting.
