Streaming in 4K with Deep Learning's help

2021, Sep 20    

Live streaming is all the rage, and nothing beats doing it in 4K! However, the available bandwidth on mobile devices can be a formidable obstacle for most streamers. To tackle this problem, researchers from the Korea Advanced Institute of Science and Technology infused the power of Deep Learning into a new client-server architecture called LiveNAS. The idea, in a nutshell, is to let users stream video at whatever quality their bandwidth permits and have a server (for example, Amazon Web Services, Google Cloud, or Microsoft Azure) upscale the video frames using a neural network that continuously learns to map low-resolution images to high-resolution ones.
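
To make this concrete, here is a minimal sketch of the kind of server-side network involved: a tiny PyTorch super-resolution model that maps each low-resolution frame to one 4x larger. This is an illustrative toy, not LiveNAS's actual architecture, and all layer sizes and names here are my own choices.

```python
# Toy sketch of a server-side super-resolution network (NOT LiveNAS's
# exact model): a small CNN with sub-pixel upscaling that maps a
# low-resolution RGB frame to a frame 4x wider and taller.
import torch
import torch.nn as nn

class TinySR(nn.Module):
    """Toy super-resolution net: RGB in, 4x-upscaled RGB out."""
    def __init__(self, scale: int = 4, channels: int = 32):
        super().__init__()
        self.body = nn.Sequential(
            nn.Conv2d(3, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(channels, channels, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            # Produce scale^2 * 3 channels, then rearrange them into
            # a frame that is `scale` times larger in each dimension.
            nn.Conv2d(channels, 3 * scale ** 2, kernel_size=3, padding=1),
            nn.PixelShuffle(scale),
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        return self.body(x)

# A 270p frame (batch, RGB, H, W) comes out as a 1080p frame.
lr = torch.rand(1, 3, 270, 480)
print(TinySR()(lr).shape)  # torch.Size([1, 3, 1080, 1920])
```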

The main challenge here is to develop an algorithm capable of continuously learning and adapting in real time. To meet it, LiveNAS takes advantage of the power of parallel computing and exploits a beautiful property of our world: most of what we experience is highly redundant!

Redundant? Yes! Pixels of similar colors cluster into large areas within a frame and stay together across subsequent frames; this is why our eyes evolved to detect edges, the most informative parts of natural scenes.

Thanks to this redundancy, sending only ten small patches to the deep neural network instead of the complete frame is enough for it to learn how to upscale low-resolution video frames to high resolution. This trick significantly cuts the training time!
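
Here is what such patch-based online training could look like, as a hedged sketch: the function below samples a few low/high-resolution patch pairs from a frame pair and takes a single gradient step. The function name, patch size, and L1 loss are my own illustrative choices, not necessarily the paper's exact values.

```python
# Hypothetical sketch of patch-based online training: instead of the
# whole high-quality frame, only a handful of small patches feed the
# network, and the model is fine-tuned on just those.
import torch
import torch.nn.functional as F

def train_on_patches(model, optimizer, lr_frame, hr_frame,
                     num_patches=10, lr_patch=32, scale=4):
    """One online-learning step on `num_patches` random patch pairs."""
    _, _, h, w = lr_frame.shape
    lr_crops, hr_crops = [], []
    for _ in range(num_patches):
        # Pick a random low-res patch and its high-res counterpart.
        y = torch.randint(0, h - lr_patch + 1, (1,)).item()
        x = torch.randint(0, w - lr_patch + 1, (1,)).item()
        lr_crops.append(lr_frame[:, :, y:y + lr_patch, x:x + lr_patch])
        hr_crops.append(hr_frame[:, :,
                                 y * scale:(y + lr_patch) * scale,
                                 x * scale:(x + lr_patch) * scale])
    lr_batch = torch.cat(lr_crops)   # (num_patches, 3, 32, 32)
    hr_batch = torch.cat(hr_crops)   # (num_patches, 3, 128, 128)

    optimizer.zero_grad()
    loss = F.l1_loss(model(lr_batch), hr_batch)  # pixel-wise L1 loss
    loss.backward()
    optimizer.step()
    return loss.item()
```

With `model = TinySR()` from the earlier sketch and `optimizer = torch.optim.Adam(model.parameters(), lr=1e-4)`, calling this once per incoming frame pair keeps the model adapting to the live content.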

LiveNAS also relies on the availability of “super-resolution processors” on the server side, that is, powerful GPUs that can process multiple frames simultaneously. The result is fast, automatic upscaling that gives end-users (TikTok followers, for example) the impression of watching a 4K stream even though their favorite influencer is broadcasting in a low-quality format! 🎥
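
As a rough illustration of why those GPUs matter, a whole buffer of frames can be upscaled in a single batched GPU pass. The one-layer model below is just a stand-in for a trained super-resolution network, and the batch size and resolutions are made up.

```python
# Rough sketch of batched server-side inference: process a buffer of
# frames in one GPU pass, which is what keeps the upscaling real-time.
import torch
import torch.nn as nn

device = "cuda" if torch.cuda.is_available() else "cpu"
sr = nn.Sequential(                      # stand-in for a trained SR model
    nn.Conv2d(3, 3 * 4 ** 2, kernel_size=3, padding=1),
    nn.PixelShuffle(4),                  # 4x spatial upscale
).to(device).eval()

frames = torch.rand(8, 3, 270, 480, device=device)  # 8 buffered 270p frames
with torch.no_grad():                    # inference only, no gradients
    out = sr(frames)                     # (8, 3, 1080, 1920) in one pass
```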