A European Union (EU) research project called SceneNet, funded under the EU's Seventh Framework Programme (FP7), aims to develop technology that combines video feeds from mobile phones held up by different spectators around an arena to reconstruct the event in 3D.
SceneNet involves several technological challenges: on-device preprocessing that demands immense computing power, efficient transmission of the video streams, development of accurate and fast methods for registration between the video streams, and 3D reconstruction. All of these tasks have to run at near-real-time rates.
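The article does not detail SceneNet's own registration or reconstruction algorithms, but the 3D-reconstruction step rests on a standard building block: triangulating a point seen from two cameras whose poses are known after registration. The sketch below is a generic linear (DLT-style) triangulation in plain Python, not SceneNet code; the camera matrices and image coordinates are purely illustrative.

```python
def triangulate(P1, P2, pt1, pt2):
    """Linear (DLT-style) triangulation of one 3D point from two views.

    P1, P2 : 3x4 camera projection matrices (lists of lists).
    pt1, pt2 : (x, y) normalized image coordinates in each view.
    Returns (X, Y, Z) solved in a least-squares sense.
    """
    # Each view contributes two linear equations in the homogeneous
    # point [X, Y, Z, 1]:  (x * P[2] - P[0]) . X4 = 0, and likewise for y.
    rows = []
    for P, (x, y) in ((P1, pt1), (P2, pt2)):
        rows.append([x * P[2][j] - P[0][j] for j in range(4)])
        rows.append([y * P[2][j] - P[1][j] for j in range(4)])
    # Move the constant (last) column to the right-hand side:
    # A . [X, Y, Z] = b, a 4x3 overdetermined system.
    A = [r[:3] for r in rows]
    b = [-r[3] for r in rows]
    # Normal equations M x = v with M = A^T A (3x3), v = A^T b.
    M = [[sum(A[k][i] * A[k][j] for k in range(4)) for j in range(3)]
         for i in range(3)]
    v = [sum(A[k][i] * b[k] for k in range(4)) for i in range(3)]
    det = lambda m: (m[0][0] * (m[1][1] * m[2][2] - m[1][2] * m[2][1])
                     - m[0][1] * (m[1][0] * m[2][2] - m[1][2] * m[2][0])
                     + m[0][2] * (m[1][0] * m[2][1] - m[1][1] * m[2][0]))
    d = det(M)
    # Cramer's rule: replace column i of M with v to solve for each unknown.
    sol = []
    for i in range(3):
        Mi = [row[:] for row in M]
        for r in range(3):
            Mi[r][i] = v[r]
        sol.append(det(Mi) / d)
    return tuple(sol)

# Illustrative setup: two unit-focal cameras, the second shifted one
# unit along x; both observe the world point (1, 1, 5).
P1 = [[1, 0, 0, 0], [0, 1, 0, 0], [0, 0, 1, 0]]
P2 = [[1, 0, 0, -1], [0, 1, 0, 0], [0, 0, 1, 0]]
X, Y, Z = triangulate(P1, P2, (0.2, 0.2), (0.0, 0.2))
```

In a full pipeline this per-point step is preceded by feature matching across feeds (the registration task above) and repeated for many points per frame, which is where the parallel-computing workload comes from.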
Israelis Chen and Nizan Sagiv got the idea when they were at a Depeche Mode concert in Tel Aviv five years ago. "While I was busy looking at the show, Nizan was watching the crowds," says Chen, SceneNet project coordinator. "He could not help noticing the huge number of faint lights from mobile-phone screens. People were taking videos of the show. Nizan thought that combining all the videos taken by individuals into a synergetic, enhanced, and possibly 3D video could be an interesting idea. We discussed the concept for many months, but it looked too futuristic, risky, and complicated."
They went to the Israel-Europe R&D directorate (ISERD; Tel Aviv) for advice and contacted Peter Maass of the University of Bremen (Bremen, Germany) and Pierre Vandergheynst of the École Polytechnique Fédérale de Lausanne (EPFL; Switzerland), with whom Chen had worked on an earlier FP7 project.
The result is SceneNet, awarded €1.33 million by the European Commission and coordinated by Chen and Nizan's company SagivTech (Ra'anana, Israel), a specialist in computer vision and parallel computing. SceneNet runs until January 2016 and includes four European partners: the University of Bremen, EPFL, Steinbeis Innovation (Stuttgart, Germany), and European Research Services (Münster, Germany).
The first year of the project has seen the team develop the mobile infrastructure for the video feeds, a mechanism for tagging them, and their transmission to a cloud server. They've also developed basic tools for a human-computer interface that will let users view the 3D video from any vantage point "in the arena" and edit the footage themselves. This, they believe, will help create online communities to share the content. With this in mind, the partners plan to study privacy and intellectual-property issues during the next two years of the project.
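The article does not specify SceneNet's tagging format, but a feed tag of this kind typically bundles device identity, a timestamp, and coarse location and orientation so the server can group uploads by event and roughly pre-align them before frame-accurate registration. The schema below is entirely hypothetical, sketched in plain Python to show the idea.

```python
import json
import time
import uuid

def make_feed_tag(device_id, lat, lon, heading_deg):
    """Build a metadata tag for one video feed (hypothetical schema).

    Tags like this would travel with the stream to the cloud server,
    letting it cluster feeds from the same event and pre-align them.
    """
    return {
        "feed_id": str(uuid.uuid4()),   # unique id for this upload
        "device_id": device_id,         # which phone sent it
        "timestamp_utc": time.time(),   # coarse time synchronization
        "location": {"lat": lat, "lon": lon},
        "heading_deg": heading_deg,     # compass bearing of the camera
    }

# Illustrative values only: a phone in Tel Aviv facing roughly west.
tag = make_feed_tag("phone-42", 32.0853, 34.7818, 271.5)
payload = json.dumps(tag)  # what would accompany the video stream
```

Real systems would refine this coarse alignment with audio or visual cues, since phone clocks and GPS are only approximately synchronized.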
Rights and privacy concerns permitting, the technology might also be used to recreate other events in 3D, such as breaking news or sports, or in the tourism or surveillance sectors. The partners are also looking at shooting static as well as moving objects from various angles to create instructions that can be sent to 3D printers.
For more info, see http://scenenet.uni-bremen.de/