The following Great Innovative Idea is from Jacob Chakareski from the University of Alabama. Jacob published the “Viewport-adaptive Navigable 360-Degree Video Delivery” paper with Xavier Corbillon (IMT Atlantique), Alisa Devlic (Huawei), and Gwendal Simon (IMT Atlantique). It won the best paper award in the Communications Software, Services and Multimedia Applications category at the 2017 IEEE International Conference on Communications.
Virtual and augmented reality (VR/AR) hold tremendous potential to advance our society and are commonly seen as the fourth major disruptive technology wave, after the PC, the Internet/Web, and mobile. Together with another pair of emerging technologies, 360-degree video and holographic video, they can suspend our disbelief of being at a remote location or of having remote objects/people present in our immediate surroundings, akin to virtual human/object teleportation. Presently limited to offline operation and synthetic content, and targeting gaming and entertainment, VR/AR are expected to reach their potential when deployed online and with real remote scene content, enabling novel applications in disaster relief and public safety, the environmental sciences, transportation, medicine, and quality of life.
There are considerable challenges on the road to such a future due to infrastructure costs and technology limitations. My research aims to address many of these challenges via multiple NSF and industry projects. The raw data rates (5-60 Gbps) required to enable an online immersion experience indistinguishable from real life dramatically exceed the FCC requirements for future broadband networks. Thus, simply introducing more bandwidth (business as usual) will not bridge this gap, as the scales of demand vs. supply are very different. This necessitates exploring holistic solutions that go beyond the traditional networking domain and integrate the capture, coding, networking, and user navigation of VR/AR data. Moreover, due to their heuristic design choices, emerging services such as YouTube 360 and Facebook 360 are extremely inefficient in bandwidth utilization and data management, thereby considerably degrading the user experience. Finally, further critical aspects such as wireless operation, ultra-low latency, system scalability, edge computing, and end-to-end reliability are yet to be considered.
The paper aims to address the challenges of present 360 streaming practices by designing multiple representations of the same 360 content, characterized by different encoding data rates and quality-emphasized regions (QERs). The rest of the 360 panorama outside a QER is encoded at minimum quality. Viewport-adaptive streaming is then carried out: the user is served the representation whose data rate matches their present network bandwidth and whose QER matches their present viewing direction (viewport), interactively, as the user navigates the 360 content over space and time. The proposed streaming framework ensures an effective utilization of the available network bandwidth, while consistently delivering an uninterrupted high quality of experience to the user. Further advances introduced by the paper are an analysis of the required size of a QER, an analysis of the minimum number of QER-characterized 360 representations required, and easy integration with the present MPEG DASH streaming standard. This last advance, together with the strong gains in operational efficiency enabled by the proposed framework, then led to contributions to the ongoing MPEG VR standardization forum. The paper has been highly influential, accruing 100 citations in one year. Its software and documentation have been posted online and have already inspired a vibrant community of follow-up work and researchers.
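The selection logic described above can be sketched as follows. This is a minimal illustration, not the paper's actual algorithm: the representation fields, the yaw-only viewport model, and the selection rule (highest feasible bitrate, closest QER) are simplifying assumptions made here for clarity.

```python
from dataclasses import dataclass

@dataclass
class Representation:
    bitrate_kbps: int        # encoding data rate of this representation
    qer_center_yaw: float    # yaw (degrees) of the quality-emphasized region's center

def angular_distance(a: float, b: float) -> float:
    """Smallest absolute difference between two yaw angles, in degrees."""
    d = abs(a - b) % 360.0
    return min(d, 360.0 - d)

def select_representation(reps, bandwidth_kbps, viewport_yaw):
    """Hypothetical viewport-adaptive selection: among representations that
    fit the available bandwidth, prefer the one whose QER center is closest
    to the current viewing direction, breaking ties by higher bitrate."""
    feasible = [r for r in reps if r.bitrate_kbps <= bandwidth_kbps]
    if not feasible:
        # No representation fits; fall back to the lowest-rate one.
        feasible = [min(reps, key=lambda r: r.bitrate_kbps)]
    return min(feasible,
               key=lambda r: (angular_distance(r.qer_center_yaw, viewport_yaw),
                              -r.bitrate_kbps))

# Example: four representations, two rates x two QER orientations.
reps = [Representation(2000, 0.0), Representation(2000, 90.0),
        Representation(4000, 0.0), Representation(4000, 90.0)]
chosen = select_representation(reps, bandwidth_kbps=3000, viewport_yaw=80.0)
```

With 3000 kbps of bandwidth and the viewport near yaw 90, the sketch picks the 2000 kbps representation whose QER faces yaw 90; with more bandwidth it would upgrade to the 4000 kbps version of the same orientation.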
This paper is part of my broader research program that explores the fundamental principles behind the above challenges of present VR/AR technologies, leveraging the acquired knowledge to investigate holistically the four critical system aspects of a networked VR/AR application noted above and simultaneously make an impact on emerging VR/AR industry practices, as the paper highlights. Major topics I investigate include: (i) fundamentals of VR/AR data capture and its integration with UAV-IoT, (ii) fast structure-aware online machine learning for VR/AR IoT sensor scheduling, (iii) rate-distortion optimized viewport-driven six-degrees-of-freedom (6DOF) 360 streaming and 5G edge delivery, (iv) integration of non-legacy networking technologies, e.g., millimeter wave, free-space optics, and edge computing, to enable much higher data rates and lower latencies, and (v) interdisciplinary VR/AR. The last topic includes investigations of the integration of VR and soft exosuits for 5D VReality patient rehabilitation, the application of UAV-IoT and VR for first responders and advanced forest fire monitoring, and the delivery of required training for low-vision patient rehabilitation via networked VR and machine intelligence. My research has been generously supported by the NSF, AFOSR, Adobe Research, Tencent Research, NVIDIA, and Microsoft Research.
Presently, I am an assistant professor of Electrical and Computer Engineering at the University of Alabama. Prior to that, I was a postdoctoral scholar and senior research scientist at EPFL, in Switzerland. And before that, I completed my PhD thesis in the Information Systems Laboratory at Stanford University, advised by Prof. Bernd Girod. I actively engage in professional service every year and passionately pursue novel technologies via start-up ventures; e.g., I served as a system architect, research scientist, and board member of Vidyo and Frame, two technology pioneers in Internet telepresence and mobile visual cloud computing. Frame was recently acquired by Nutanix. My grand vision is to establish and lead an interdisciplinary research center on next-generation VR/AR that will bring together diverse faculty, graduate students, and resident entrepreneurs, to explore fundamental and applied problems arising on the road to the envisioned VR/AR future and develop related societal applications and technology.
My website can be found here: www.jakov.org