Success Stories: ViLynx

We are delighted to present one of our ePlus entrepreneurs, ViLynx Inc, and to acknowledge recent developments that are a testament to their success. Case in point: ViLynx is working with machine learning and deep learning teams at Amazon and Intel as part of the MINETUR-supported "VIDATA" research project!

Oscar Chabrera delivering the ViLynx pitch as one of our ePlus entrepreneurs!

Statement by Oscar Chabrera, ViLynx Co-Founder & EU Manager:

I am glad to announce that ViLynx Inc is working with machine learning and deep learning teams at Amazon and Intel, thanks to the MINETUR-supported research project "VIDATA" (VIdeo big DATA).

As you may already know, ViLynx has been powering Intel's AGM website since April 2016 and has been using AWS since 2011.

The post written by Joe Spisak of Intel (co-written by Andres Rodriguez of Intel, Ravi Panchumarthy of Intel, Hendrik van der Meer of ViLynx, and Juan Carlos Riveiro of ViLynx) shows how ViLynx technology can turn the video discoverability challenge into an opportunity and make sense of it. The key driver behind this initiative aligns with Cisco's published report, which states that 75% of the world's mobile data traffic will be video by 2020 and that mobile video will increase 11-fold between 2015 and 2020.

Thanks to CDTI support for the ViLynx "SEGMENTA" project, ViLynx developed a way for mobile (iOS) and PC viewers to watch an automatically curated 5-second preview of a video's most interesting scene with just a mouse-over or a finger swipe. This gives viewers the opportunity to quickly preview a video before deciding to press play, much like watching a movie trailer. This technology was extended to Android (mobile and HDTV) thanks to MINECO's support through the "ConTVlab" project. Thanks to our machine learning and preview technology, users everywhere can make better decisions about the videos they watch.
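As a rough illustration of what serving such a preview involves (a minimal sketch, not ViLynx's actual pipeline), the 5-second clip for the most interesting scene can be cut out of the source video ahead of time. The example below assumes ffmpeg is installed and that a separate scoring model has already chosen the start time; the function name and parameters are ours:

```python
import subprocess

def extract_preview(video_path: str, start_seconds: float,
                    out_path: str, duration: float = 5.0) -> None:
    """Cut a short preview clip starting at the moment a scoring
    model flagged as most interesting."""
    subprocess.run(
        [
            "ffmpeg",
            "-ss", str(start_seconds),  # seek to the chosen moment
            "-i", video_path,
            "-t", str(duration),        # keep only `duration` seconds
            "-c", "copy",               # stream-copy: fast, no re-encode
            out_path,
        ],
        check=True,
    )

# Example: extract_preview("talk.mp4", start_seconds=72.0, out_path="preview.mp4")
```

The front end then only has to swap in the pre-cut clip on mouse-over or swipe, which keeps the interaction instant.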

This year ViLynx launched a product focused on publishers that leverages this technology to generate higher click-through rates and longer video engagement times on their websites. This improves their users' experience and keeps them on site longer, watching more videos. The same technology also works for social media, expanding the reach of videos through the preview and through automatic tagging with keywords that drive amplification. More video views equal more branding opportunities for publishers and advertisers, and ultimately more revenue.

Thanks to CDTI and EUREKA-CELTIC support, ViLynx is researching how to apply these technologies to the medical video field, which is opening up many new opportunities. ViLynx leads the EUREKA-CELTIC project "E3" (Ehealth Services Everywhere and for Everybody).

The Problem Statement: Video Discoverability is Broken
People love watching online videos; they consume over 10 trillion videos every year, and viewing trends continue to accelerate. On YouTube alone, over 300 hours of content is uploaded every minute. Today, viewers must endure a painful and time-consuming process to search for and discover interesting videos. This can include watching up to 30 seconds of pre-roll advertisements before being able to view the video and scrub for relevant content. ViLynx's current products offer a better method for previewing videos so viewers can quickly find what they are looking for and skip over what they aren't.

But this is not enough.
Automatically extracting the most interesting clips from each video in a time-sensitive manner requires heavy-duty computing capabilities. Thanks to the VIDATA project co-funded by MINETUR, ViLynx has been working closely with Intel to improve the performance and efficiency of the machine learning and deep learning algorithms that enable these automated video searches, solving the bottleneck of analyzing high-resolution images at big-data scale without having to reduce the size of the data or the model parameters, and without CPU-to-GPU transfers becoming a significant bottleneck.
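One generic way to keep CPU-to-GPU transfers off the critical path (a common technique, not necessarily the one ViLynx and Intel implemented) is to pin host memory and issue asynchronous copies, so that moving the next batch of frames overlaps with computation on the current one. A minimal PyTorch sketch with hypothetical stand-ins for the frame data and scoring model:

```python
import torch
from torch.utils.data import DataLoader, TensorDataset

# Hypothetical stand-ins: random "frames" and a trivial scoring model.
frames = TensorDataset(torch.randn(1024, 3, 224, 224))
model = torch.nn.Sequential(torch.nn.Flatten(), torch.nn.LazyLinear(1)).cuda()

# pin_memory=True gives page-locked host buffers, which is what allows
# non_blocking=True copies to run asynchronously alongside GPU compute.
loader = DataLoader(frames, batch_size=64, pin_memory=True, num_workers=2)

with torch.no_grad():
    for (batch,) in loader:
        # This copy can overlap with GPU work still in flight from the
        # previous iteration instead of stalling the pipeline.
        batch = batch.cuda(non_blocking=True)
        scores = model(batch)  # per-frame "interest" scores
```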

Few workloads are more compute-intensive than video processing today. Adding deep learning into the mix increases the level of complexity and computation beyond what can be achieved with single-server configurations. Within the ViLynx stack, video processing and machine learning are used to select the relevant moments from hours of video and store them in a long-term memory. Once stored, deep learning algorithms use audience preferences to select and display video clips. Finally, semi-supervised machine learning algorithms are fed matching keywords, metadata, social network data and web data to obtain the most relevant set of keywords for a specific video.
The completion of this project proves that it is possible to build massive and intelligent deep neural networks that can understand video content using commodity cloud compute instances – without a high-cost, dedicated hardware solution.
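Read as a pipeline, the description above breaks down into three stages: detect and store moments, pick a clip from audience preferences, and enrich moments with keywords. The sketch below is a highly simplified, hypothetical rendering of that data flow; the names and types are ours, not ViLynx's:

```python
from dataclasses import dataclass, field

@dataclass
class Moment:
    """A candidate 'relevant moment' detected inside a video."""
    video_id: str
    start: float                  # seconds into the video
    end: float
    score: float                  # model-assigned interest score
    keywords: list[str] = field(default_factory=list)

# Stage 1: video processing + machine learning select relevant
# moments and store them in a long-term memory (here, a plain list).
long_term_memory: list[Moment] = []

def ingest(detected: list[Moment], threshold: float = 0.5) -> None:
    long_term_memory.extend(m for m in detected if m.score >= threshold)

# Stage 2: audience preferences drive which stored clip is displayed
# (the deep learning model is approximated by a keyword weight table).
def pick_clip(video_id: str, preferences: dict[str, float]) -> Moment:
    candidates = [m for m in long_term_memory if m.video_id == video_id]
    return max(candidates, key=lambda m: m.score + sum(
        preferences.get(k, 0.0) for k in m.keywords))

# Stage 3: semi-supervised enrichment merges keywords from metadata,
# social networks and web data into each moment.
def enrich(moment: Moment, external_keywords: list[str]) -> None:
    moment.keywords = sorted(set(moment.keywords) | set(external_keywords))
```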

This solution is not only able to learn user preferences and automatically display the content selected as most relevant; it also enables video discovery and search via a rich set of keywords that are matched to internal moments. In short, for the first time, we are enabling automation of full search functionality inside a video.
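To make "search inside a video" concrete, here is a toy inverted index from keywords to moments (reusing the hypothetical Moment shape from the sketch above, redefined so the snippet is self-contained). Querying a keyword returns the timestamped moments where it applies, which is what turns per-moment keywords into in-video search:

```python
from collections import defaultdict
from dataclasses import dataclass, field

@dataclass
class Moment:                     # same hypothetical shape as above
    video_id: str
    start: float
    end: float
    score: float
    keywords: list[str] = field(default_factory=list)

# Inverted index: keyword -> moments in which that keyword applies.
index: dict[str, list[Moment]] = defaultdict(list)

def index_moment(moment: Moment) -> None:
    for kw in moment.keywords:
        index[kw.lower()].append(moment)

def search(query: str) -> list[Moment]:
    """Return moments matching every query term, best-scored first,
    so a player can seek straight to the relevant timestamp."""
    terms = query.lower().split()
    if not terms:
        return []
    results = [
        m for m in index.get(terms[0], [])
        if all(t in {k.lower() for k in m.keywords} for t in terms[1:])
    ]
    return sorted(results, key=lambda m: m.score, reverse=True)
```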

For more information about ViLynx's current products and research activities, please visit our website: http://www.vilynx.com