Viewers want to watch live broadcasts as close to real time as possible. This helps them get closer to the event and feel like they actually participate in it. At the same time, viewers appreciate the ability to rewind the broadcast if they miss an important moment.
DVR (Digital Video Recorder) is a rewind function that allows viewers to replay a live broadcast. If they missed a phrase in a webinar, a slide in a presentation, an important statement from the event stage, or a key moment in the final of a sports tournament, they can return to it.
Not all providers can ensure streaming with latencies of less than 4 seconds. And even fewer provide DVR since the function is quite challenging to implement on broadcasts with low latencies.
On G-Core Labs Streaming Platform, the DVR feature is available to all clients by default and works great with low latencies and high stream quality. Recently, we improved this feature, and now you can rewind a live broadcast by 24 hours or more.
Storing a DVR requires quite a large amount of memory. How much depends on the stream bitrate, the number of quality renditions, and the length of the rewind window.
Moreover, users do not just want to be able to rewind the broadcast. They want to watch broadcasts as close to real time as possible, from different cameras, and in the highest possible quality. So, we need high-quality, low-latency streaming.
Low-latency technologies allow you to divide a video not into ordinary segments but into 0.1–0.5-second microsegments and deliver it piece by piece. However, these microsegments must be cached effectively before the full fragment has been transferred, while the rewind function is preserved.
Effective caching requires even more memory.
Video fragments and DVR are usually stored in the server memory so that the content can be quickly delivered to end users. But server memory has its limitations. Therefore, making a long enough DVR with low latency is a real challenge.
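To get a feel for the memory involved, here is a back-of-envelope sketch. The bitrate and rendition count are illustrative assumptions, not our platform's actual figures:

```python
def dvr_bytes(bitrate_mbps: float, hours: float, renditions: int) -> float:
    """Bytes needed to hold `hours` of video at `bitrate_mbps` per rendition."""
    seconds = hours * 3600
    bits = bitrate_mbps * 1e6 * seconds * renditions
    return bits / 8

# A 24-hour DVR of a 4 Mbps stream in three quality renditions
# already approaches the RAM capacity of a typical edge server:
size_gb = dvr_bytes(4, 24, 3) / 1e9
print(f"{size_gb:.0f} GB")  # 130 GB
```

Multiply that by the number of concurrent broadcasts on one server, and it becomes clear why simply adding RAM does not scale.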
The standard solution is to increase the amount of memory and the number of disks on the servers. On a small scale, this works well. However, for long broadcasts with millions of viewers, it turns out to be highly inefficient.
Besides this, not all CDNs support caching files before they have been fully downloaded from the origin. That means the CDN must first receive the entire .ts or .m4v file, and only then can it pass the file to the player for playback. This makes it impossible to transfer and play microsegments as they arrive, which increases latency.
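A rough rule of thumb shows why whole-file caching hurts latency: a player typically buffers a few segments before starting playback, so the latency floor scales with segment duration. A sketch under assumed figures (6-second segments, a 3-segment buffer; these numbers are illustrative, not measurements of any particular CDN or player):

```python
def buffer_latency(segment_seconds: float, segments_buffered: int) -> float:
    """Latency floor when the player must buffer whole segments before playback."""
    return segment_seconds * segments_buffered

whole_segments = buffer_latency(6.0, 3)   # classic HLS: 3 x 6 s segments
microsegments = buffer_latency(0.5, 3)    # 3 x 0.5 s microsegments
print(whole_segments, microsegments)  # 18.0 1.5
```

With whole segments, the player is tens of seconds behind the live edge; with microsegments, the same buffering strategy keeps it within a couple of seconds.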
The task required a different, more efficient solution. And we found it.
In the new version of our live-streaming architecture, we deliver video using HTTP/2 or HTTP/1.1 Chunked Transfer Encoding. This allows the player to start downloading the next video file before the server has finished creating it. At the same time, the GET and HEAD requests passing through the server are cached in RAM following a special algorithm.
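For readers unfamiliar with Chunked Transfer Encoding, here is a minimal sketch of the wire framing it uses: each piece of the body is sent as a hexadecimal length, CRLF, the payload, and a trailing CRLF, with a zero-size chunk marking the end. This lets the server start sending a segment while the encoder is still producing it. The function names are our own illustration, not part of any platform API:

```python
def encode_chunk(data: bytes) -> bytes:
    """Frame one piece of data as an HTTP/1.1 chunk: hex size, CRLF, payload, CRLF."""
    return f"{len(data):X}\r\n".encode() + data + b"\r\n"

def chunked_stream(pieces):
    """Yield the wire bytes of a chunked response body, ending with the 0-size terminator."""
    for piece in pieces:
        if piece:  # a zero-length chunk would prematurely terminate the stream
            yield encode_chunk(piece)
    yield b"0\r\n\r\n"

# Each microsegment can be flushed to the player as soon as it exists:
wire = b"".join(chunked_stream([b"abc", b"de"]))
print(wire)  # b'3\r\nabc\r\n2\r\nde\r\n0\r\n\r\n'
```

HTTP/2 achieves the same effect with its own framing layer, so no explicit chunk markers are needed there.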
A 24+ hour DVR with latencies of less than 4 seconds is already available by default for all clients of our Streaming Platform. If you’re not our client yet, you can take advantage of the 14-day trial period to test our solution with improved functionality on your own.
Hold large-scale broadcasts for 10+ million viewers with our Streaming Platform.