
Low-latency broadcasts can now be rewound 24+ hours

Viewers want to watch live broadcasts as close to real time as possible. This brings them closer to the event and makes them feel like actual participants. At the same time, viewers appreciate being able to rewind the broadcast if they miss an important moment.

DVR (Digital Video Recorder) is a rewind function that lets viewers replay a live broadcast. If they missed a phrase in a webinar, a slide in a presentation, an important statement from the event stage, or a key moment in the final of a sports tournament, they can go back to it.

Not all providers can ensure streaming with latencies of less than 4 seconds. And even fewer provide DVR since the function is quite challenging to implement on broadcasts with low latencies.

On G-Core Labs Streaming Platform, the DVR feature is available to all clients by default and works great with low latencies and high stream quality. Recently, we improved this feature, and now you can rewind a live broadcast by 24 hours or more.

Why is it hard to implement DVR in low-latency streaming?

Storing a DVR recording requires a large amount of memory. How much depends on:

  • DVR duration—the longer the recorded broadcast window, the more memory is needed
  • Stream quantity—the more streams there are, the more memory is needed
  • Stream quality—higher-quality streams also require more memory
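As a rough illustration, the memory footprint of a DVR window can be estimated from these factors. The bitrates and durations below are assumed example values, not actual Streaming Platform figures:

```python
# Back-of-the-envelope DVR memory estimate.
# All numbers here are illustrative assumptions, not platform internals.

def dvr_size_gib(duration_hours: float, bitrates_mbps: list[float]) -> float:
    """Approximate memory needed for one stream's DVR window across all renditions."""
    total_mbit = sum(bitrates_mbps) * duration_hours * 3600  # total megabits
    return total_mbit / 8 / 1024  # Mbit -> MB -> ~GiB

# A 24-hour DVR for a single stream with three example quality renditions:
renditions = [6.0, 3.0, 1.0]  # Mbps, e.g. 1080p / 720p / 480p (assumed values)
print(round(dvr_size_gib(24, renditions), 1))  # ~105.5 GiB for just one stream
```

Multiply that by thousands of simultaneous streams and it becomes clear why simply buying more RAM does not scale.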

Moreover, users do not just want to be able to rewind the broadcast. They want to watch broadcasts as close to real time as possible, from different cameras, and in the highest possible quality. So, we need high-quality, low-latency streaming.

Low-latency technologies let you divide a video not just into segments but into 0.1–0.5-second microsegments and deliver it in parts. However, these microsegments must be cached efficiently before the full fragment has been transferred, all while keeping the rewind function intact.

Effective caching requires even more memory.

Video fragments and DVR are usually stored in the server memory so that the content can be quickly delivered to end users. But server memory has its limitations. Therefore, making a long enough DVR with low latency is a real challenge.

How was this task addressed before?

The standard solution is to increase the amount of memory and the number of disks on the servers. On a small scale, this works well. However, it turns out to be highly inefficient for long broadcasts with millions of viewers.

Besides this, not all CDNs support caching files before they are fully downloaded from the origin. Such a CDN must first receive the entire .ts or .m4v file, and only then can it pass the file to the player for playback. Microsegments cannot be transferred and played as they arrive, which increases latency.
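The startup-latency cost of whole-file caching is easy to see with some arithmetic. The segment and microsegment durations below are assumed example values:

```python
# Illustrative startup-latency comparison (assumed numbers, not measurements).
SEGMENT_S = 6.0  # duration of one full .ts segment, example value
MICRO_S = 0.5    # duration of one microsegment, within the 0.1-0.5 s range

# A CDN that must download the whole segment before forwarding it adds
# at least one full segment duration before playback can begin.
whole_file_startup = SEGMENT_S

# With chunked delivery, the first bytes reach the player after one microsegment.
chunked_startup = MICRO_S

print(whole_file_startup - chunked_startup)  # 5.5 s shaved off startup latency
```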

The task required a different, more efficient solution. And we found it.

How does it work now on our Streaming Platform?

In the new version of our live-streaming architecture, we deliver video over HTTP/2 or HTTP/1.1 Chunked Transfer Encoding. This allows the player to start downloading the next video stream file before the server has finished creating it. At the same time, responses to the GET and HEAD requests passing through the server are cached in RAM according to a special algorithm:

  1. The viewer who first requests a video fragment receives its microsegments with minimal latency, as soon as they appear on the server. This ensures an ultra-low TTFB (Time To First Byte). The first microsegments are sent right away, and the rest as soon as they are received from the source.
  2. Viewers who request the same fragment later immediately receive all microsegments already in the cache at maximum speed, and the remaining microsegments with minimal latency as they arrive from the source.
  3. All microsegments are saved in a separate cache. They are then merged and cached as a regular full video fragment. It is these full fragments that are played back when rewinding.
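The three steps above can be sketched as a small in-memory fragment cache. This is an assumed simplification for illustration only; the platform's actual implementation is not public:

```python
# Minimal sketch of the caching algorithm described above
# (an assumed simplification, not the real Streaming Platform code).

class FragmentCache:
    """RAM cache for one video fragment that is still being produced."""

    def __init__(self):
        self.micro = []        # microsegments received from the source so far
        self.complete = False  # True once the full fragment has been merged

    def on_microsegment(self, chunk: bytes):
        # Step 1: each new microsegment is relayed to waiting viewers
        # right away and kept in the cache for viewers who join later.
        self.micro.append(chunk)

    def cached_burst(self) -> list[bytes]:
        # Step 2: a viewer who requests the fragment later receives
        # everything already cached at maximum speed, then follows the live tail.
        return list(self.micro)

    def on_fragment_complete(self):
        # Step 3: microsegments are merged into a regular full fragment;
        # these merged fragments are what DVR rewind plays back.
        self.full_fragment = b"".join(self.micro)
        self.complete = True


cache = FragmentCache()
cache.on_microsegment(b"m0")
cache.on_microsegment(b"m1")
print(cache.cached_burst())   # a late-joining viewer bursts [b'm0', b'm1']
cache.on_microsegment(b"m2")
cache.on_fragment_complete()
print(cache.full_fragment)    # b'm0m1m2' is stored for rewind
```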

New algorithm for transferring and caching video segments

What are the advantages of the new solution?

  • Live broadcasts with latencies of no more than 4 seconds
  • High video quality
  • Ability to rewind broadcasts by 1–24+ hours
  • Adaptive bitrate
  • Multiple streams at the same time
  • Broadcasts for an audience of 1 to 10 million viewers

A 24+ hour DVR with latencies of less than 4 seconds is already available by default to all clients of our Streaming Platform. If you’re not a client yet, take advantage of the 14-day trial period to test the improved functionality for yourself.

Hold large-scale broadcasts for 10+ million viewers with our Streaming Platform.
