Deep Render raises £1.6M for image compression tech that mimics ‘neural processes of the human eye’

Deep Render, a London startup and spin-out of Imperial College that is applying machine learning to image compression, has raised £1.6 million in seed funding. Leading the round is Pentech, with participation from Speedinvest.

Founded in mid-2017 by Arsalan Zafar and Chri Besenbruch, who met while studying Computer Science at Imperial College London, Deep Render wants to help solve the data consumption problem that sees internet connections choke, especially during peak periods, a strain exacerbated by the lockdowns currently in place in many countries.

Specifically, the startup is taking what it claims is an entirely new approach to image compression, noting that image and video data comprises more than 80% of internet traffic, driven by video-on-demand and live streaming.

“Our ‘Biological Compression’ technology rebuilds media compression from scratch by using the advances of the machine learning revolution and by mimicking the neural processes of the human eye,” explains Deep Render co-founder and CEO Chri Besenbruch.

“Our secret sauce, so to speak, is in the way the data is compressed and sent across the network. The traditional technology relies on various modules each connected to each other – but which don’t actually ‘talk’ to each other. An image is optimised for module one before moving to module two, and it’s then optimised for module two and so on. This not only causes delays, it can cause losses in data which can ultimately reduce the quality and accuracy of the resulting image. Plus, if one stage of optimisation doesn’t work, the other modules don’t know about it so can’t correct any mistakes”.
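To make that concrete, here is a minimal sketch of the kind of hand-built, stage-by-stage pipeline Besenbruch is describing. The specific stages below (a block DCT transform and a uniform quantiser) are illustrative assumptions rather than a description of any particular codec; the point is that each stage is designed in isolation, and any quality loss only becomes visible once every stage has run.

```python
import numpy as np
from scipy.fftpack import dct, idct

# Stage 1: a fixed transform, designed on its own.
def transform(block):
    return dct(dct(block.T, norm="ortho").T, norm="ortho")

def inverse_transform(coeffs):
    return idct(idct(coeffs.T, norm="ortho").T, norm="ortho")

# Stage 2: a uniform quantiser, tuned separately; it cannot tell stage 1 what it discards.
def quantise(coeffs, step=0.5):
    return np.round(coeffs / step) * step

block = np.random.rand(8, 8).astype(np.float32)
reconstructed = inverse_transform(quantise(transform(block)))

# No stage can correct another's mistakes; the error only shows up at the end.
print("per-block MSE:", float(np.mean((block - reconstructed) ** 2)))
```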

[Image: the Deep Render team]

To remedy this, Besenbruch says Deep Render’s image compression technology replaces all of these individual components with one very large component that talks across its entire domain. This means that each step of compression logic is connected to the others in what’s known as an “end-to-end” training method.
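In code, that contrast looks roughly like the sketch below: a generic learned-compression model in PyTorch (an assumption for illustration, not Deep Render's actual architecture) in which the encoder, a quantisation stand-in and the decoder sit in one differentiable graph, so a single loss can push gradients through every step at once.

```python
import torch
import torch.nn as nn

class EndToEndCodec(nn.Module):
    """Toy learned codec: every stage lives in one differentiable graph."""
    def __init__(self, channels=32):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(3, channels, 5, stride=2, padding=2), nn.ReLU(),
            nn.Conv2d(channels, channels, 5, stride=2, padding=2),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(channels, channels, 5, stride=2,
                               padding=2, output_padding=1), nn.ReLU(),
            nn.ConvTranspose2d(channels, 3, 5, stride=2,
                               padding=2, output_padding=1),
        )

    def forward(self, x):
        latent = self.encoder(x)
        # Additive uniform noise as a differentiable stand-in for quantisation during training.
        latent_hat = latent + torch.rand_like(latent) - 0.5
        return self.decoder(latent_hat), latent_hat

model = EndToEndCodec()
optimiser = torch.optim.Adam(model.parameters(), lr=1e-4)

images = torch.rand(4, 3, 64, 64)          # placeholder batch
reconstruction, latent_hat = model(images)

# One joint objective: distortion plus a crude rate proxy. Gradients reach
# every stage at once, which is what "end-to-end" training means here.
distortion = torch.mean((images - reconstruction) ** 2)
rate_proxy = latent_hat.abs().mean()
loss = distortion + 0.01 * rate_proxy
loss.backward()
optimiser.step()
```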

“What’s more, Deep Render trains its machine learning platform with the end goal in mind,” adds Besenbruch. “This has the benefit of both boosting the efficiency and accuracy of the linear functions and extending the software’s capability to model and perform non-linear functions. Think of it as a line and a curve. An image, by its nature, has a lot of curvature from changes in tone, light, brightness and colour. Expanding the compression software’s ability to consider each of these curves means it’s also able to tell which images are more visually pleasing. As humans, we do this intuitively. We know when colour is a little off, or the landscape doesn’t look quite right. We don’t even realise we do this most of the time, but it plays a major role in how we assess images and videos”.
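Training “with the end goal in mind” comes down to the choice of objective. A purely pixel-wise error treats every pixel the same, whereas a perceptually motivated term can weight the structure (edges and tonal “curves”) that viewers actually notice. The gradient-based term below is a generic, hypothetical illustration of that idea, not the metric Deep Render uses.

```python
import torch

def pixel_loss(x, y):
    # Plain per-pixel error: blind to whether a difference is visually noticeable.
    return torch.mean((x - y) ** 2)

def structural_loss(x, y):
    # Compare local gradients (edges / tonal transitions) instead of raw pixels,
    # a crude proxy for the structure human viewers tend to notice.
    dx_a, dx_b = x[..., :, 1:] - x[..., :, :-1], y[..., :, 1:] - y[..., :, :-1]
    dy_a, dy_b = x[..., 1:, :] - x[..., :-1, :], y[..., 1:, :] - y[..., :-1, :]
    return torch.mean((dx_a - dx_b) ** 2) + torch.mean((dy_a - dy_b) ** 2)

def training_loss(original, reconstruction, alpha=0.5):
    # Blend the two so the network is optimised for perceived quality,
    # not just for any one intermediate stage.
    return (1 - alpha) * pixel_loss(original, reconstruction) \
        + alpha * structural_loss(original, reconstruction)

a, b = torch.rand(1, 3, 64, 64), torch.rand(1, 3, 64, 64)
print(float(training_loss(a, b)))
```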

As a proof of concept, Deep Render carried out a fairly large-scale Amazon MTurk study comprising 5,000 participants to test its image compression algorithm against BPG (a market standard for image compression, and part of the video compression standard H.265). When asked to compare perceptual quality over the CLIC-Vision dataset, over 95% of participants rated Deep Render’s images as more visually pleasing, even though they were just half the file size.

“Our technological breakthrough represents the foundation for a new class of compression methods,” claims the Deep Render co-founder.

Asked to name direct competitors, Besenbruch says a past competitor was Magic Pony, the image compression company bought by Twitter for a reported $150 million a year after being founded.

“Magic Pony was also looking at deep learning for solving the challenges of image and video compression,” he explains. “However, Magic Pony looked at improving the traditional compression pipeline via post and pre-processing steps using AI, and thus was ultimately still limited by its restrictions. Deep Render does not want to ‘improve’ the traditional compression pipeline; we are out to destroy it and rebuild it from its ashes”.

Beyond that, Besenbruch says the only similar competitors to Deep Render at present are WaveOne, based in Silicon Valley, and TuCodec, based in Shanghai. “Deep Render is the European answer to the war about the future of compression technology. All three companies incorporated roughly at the same time,” he adds.
