High-speed Cameras for Real-time Mixed Reality Scenarios
Imagine being able to stand next to your favorite performer during a once-in-a-lifetime concert, to claim reserved seating at the 50-yard line for every game, or to "run" beside star players as they charge the goal during a league championship. Now imagine having these experiences while sitting in your living room, commuting home from a busy day at work, or playing the latest multiplayer game.
Image 1 | Condense Reality uses high-speed GigE Vision cameras from Emergent Vision Technologies to produce immersive mixed reality experiences. – Image: Condense Reality

You-are-there experiences are what immersive media promise to deliver with real-time mixed reality (MR). The format uses volumetric video data to create 3D images as an event is occurring, and multiple people can view the scene from different angles on a range of devices.

Capturing Reality in 3D Is Hard

Media companies have been early adopters of technology formats such as 360° video, virtual reality (VR), augmented reality (AR), and MR. Conventional, as opposed to real-time, MR blends physical and digital objects in 3D and is usually produced in dedicated spaces with green screens and hundreds of precisely calibrated cameras. Processing the massive amounts of volumetric data captured in each scene requires hours or even days of postproduction time. Real-time MR has proven even more technically and economically challenging for content developers and has, so far, made the format impractical. "Capturing and synchronizing high-resolution, high-frame-rate video in a controlled space is challenging enough," said John Ilett, CEO and founder of Emergent Vision Technologies, a manufacturer of high-speed imaging cameras. "Doing these things in real time in live venues is even harder."
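
To put the throughput problem in perspective, consider a rough calculation (the figures below are illustrative assumptions, not specifications from this article): a single 12-megapixel camera delivering 8-bit raw frames at 60 fps already produces several gigabits per second, more than standard Gigabit Ethernet can carry, and a volumetric rig multiplies that by dozens of cameras. A minimal back-of-the-envelope sketch:

```python
# Back-of-the-envelope raw bandwidth for one camera.
# All numbers are illustrative assumptions, not vendor specifications.
megapixels = 12        # assumed sensor resolution in megapixels
bits_per_pixel = 8     # assumed 8-bit raw output
fps = 60               # assumed frame rate

gbits_per_second = megapixels * 1e6 * bits_per_pixel * fps / 1e9
print(f"{gbits_per_second:.1f} Gb/s per camera")  # ~5.8 Gb/s, well above 1GigE
```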

Deep Learning Needs an Assist

One startup thought it had a strategy for overcoming those issues. Condense Reality, a volumetric video company, had a plan for capturing objects, reconstructing scenes, and streaming MR content at multiple resolutions to end-user devices. From start to finish, each frame in a live stream would require only milliseconds to complete. "Our software calculates the size and shape of objects in the scene," said Condense Reality CEO Nick Fellingham. "If there are any objects the cameras cannot see, the software uses deep learning to fill in the blanks, throws out what isn't needed, and then streams 3D motion images to phones, tablets, computers, game consoles, headsets, and smart TVs and glasses." But there was a hitch. For the software to work in real-world applications, Fellingham needed a high-resolution, high-frame-rate camera that content creators could set up easily in a sports stadium, concert venue, or remote location. The company tested cameras, but the models it tried severely limited data throughput and the cable distance between the cameras and the system's data-processing unit. To move forward, Condense Reality needed a broadcast-quality camera that could handle volumetric data at high speeds.
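
To make the sequence Fellingham describes concrete, here is a minimal, self-contained sketch of one frame's journey: a hardware-synchronized grab, fusion into a volume, a stand-in for the learned occlusion fill, and multi-resolution outputs for different devices. Every name and number below is a hypothetical illustration; this is not Condense Reality's software or any camera vendor's SDK.

```python
"""Minimal sketch of a per-frame volumetric pipeline (illustrative only)."""

import numpy as np

def grab_synchronized_frames(num_cameras: int, h: int = 1080, w: int = 1920):
    """Stand-in for a hardware-triggered grab: all cameras expose on the same
    pulse so the frames describe one instant. Here we just fabricate frames."""
    return [np.random.rand(h, w, 3).astype(np.float32) for _ in range(num_cameras)]

def reconstruct_volume(frames, grid: int = 64) -> np.ndarray:
    """Toy stand-in for multi-view reconstruction into a voxel occupancy grid
    (a real system would carve a calibrated visual hull or fuse depth maps)."""
    occupancy = np.zeros((grid, grid, grid), dtype=np.float32)
    for frame in frames:
        occupancy += frame.mean() / len(frames)  # each view adds "evidence"
    return occupancy

def fill_occlusions(volume: np.ndarray) -> np.ndarray:
    """Placeholder for the deep-learning step that infers surfaces no camera
    sees; a simple 3x3x3 box filter stands in for the learned prior here."""
    padded = np.pad(volume, 1, mode="edge")
    out = np.zeros_like(volume)
    g = volume.shape[0]
    for dx in range(3):
        for dy in range(3):
            for dz in range(3):
                out += padded[dx:dx + g, dy:dy + g, dz:dz + g]
    return out / 27.0

def encode_resolutions(volume: np.ndarray, levels: int = 3):
    """Downsample the volume so each client device can pull a stream matching
    its display and bandwidth."""
    streams, v = [], volume
    for _ in range(levels):
        streams.append(v)
        g = v.shape[0] // 2
        v = v.reshape(g, 2, g, 2, g, 2).mean(axis=(1, 3, 5))  # 2x downsample
    return streams

if __name__ == "__main__":
    frames = grab_synchronized_frames(num_cameras=32)
    volume = fill_occlusions(reconstruct_volume(frames))
    for i, s in enumerate(encode_resolutions(volume)):
        print(f"level {i}: {s.shape}")
```

In a production system, every one of these placeholder stages would be replaced by calibrated, GPU-accelerated components, and the whole loop would have to finish within the few milliseconds the article describes.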
