At MWC Los Angeles this week, Nvidia CEO Jensen Huang demonstrated the company’s CloudXR platform, which is designed to stream cloud-rendered AR and VR content over 5G connections. Built to support SteamVR/OpenVR content out of the box, CloudXR will ship as an SDK that Nvidia says will enable companies to offer AR and VR content from the cloud.
Nvidia wants to leverage GPU-based cloud infrastructures to enable businesses to render high-end AR and VR visuals remotely and deliver them to customers over 5G. The idea is to remove VR’s high-end hardware barrier by rendering the visuals in the cloud and streaming them to a host device which itself doesn’t need particularly beefy or expensive hardware. Nvidia already offers a very similar service called GeForce Now, but it’s for traditional games rather than VR.
Now the company says it has developed a cloud-rendering pipeline, specifically supporting SteamVR/OpenVR content, which it calls CloudXR. Rather than offering this service directly to customers (as it does with GeForce Now), Nvidia is positioning CloudXR as a set of tools that other businesses can use to bring AR/VR streaming to their customers. This approach makes sense because one of the key pieces of the puzzle is a 5G network (thanks to its potential for low latency), and Nvidia hopes that carriers building out 5G networks will want to offer CloudXR streaming as a way to attract customers.
NVIDIA CloudXR SDK
Nvidia has talked about the possibility of AR/VR cloud rendering in the past, but this week the company is formally announcing an early release of the CloudXR SDK which can be used as the basis for bringing cloud-rendered AR/VR content to customers. The SDK includes:
- Server driver that runs in the data center
- Easy-to-use client library to enable VR/AR streaming for a multitude of OpenVR applications to Android and Windows devices
- SDK for portable client devices that lets application developers easily stream rendered content from the cloud
The system is designed to work with SteamVR/OpenVR content out of the box and be able to stream to client software running on Windows or Android, which could include Windows host PCs, Android-based standalone headsets, or even handheld devices (for handheld AR).
On stage at MWC Los Angeles this week, Nvidia demonstrated CloudXR in a handheld AR mode. A high-fidelity 3D model of a car (rendered in the cloud) was projected onto the stage using a phone as the augmented reality platform (presumably an Android device running a CloudXR client integrated with ARCore tracking).
Nvidia says the system works to “dynamically optimize streaming parameters and maximize image quality and frame rates, so XR experiences can maintain optimal quality under any network condition.” But the company isn’t talking specifics about latency requirements, other than to say that CloudXR offers “no detectable latency difference” compared to a locally rendered view (a claim we’ve heard many times before, but one that rarely stands up to scrutiny).
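Nvidia hasn’t published how this dynamic optimization works. As a general illustration of the technique (adaptive bitrate control driven by measured network conditions), here’s a minimal sketch — the function name, thresholds, and scaling factors are our own hypothetical choices, not anything from the CloudXR SDK:

```python
# Hypothetical sketch of adaptive streaming-parameter control -- NOT
# Nvidia's actual CloudXR implementation. The idea: lower the encode
# bitrate when measured round-trip latency climbs (congestion), and
# probe upward gently when the link has headroom.

def adjust_bitrate(current_kbps, rtt_ms, target_rtt_ms=20,
                   min_kbps=5_000, max_kbps=50_000):
    """Return a new encode bitrate given the measured round-trip time."""
    if rtt_ms > target_rtt_ms * 1.5:
        # Latency well above target: back off sharply to drain queues.
        new = current_kbps * 0.7
    elif rtt_ms < target_rtt_ms:
        # Below target: probe upward gently to reclaim image quality.
        new = current_kbps * 1.05
    else:
        new = current_kbps
    return max(min_kbps, min(max_kbps, int(new)))
```

For example, a stream at 20 Mbps seeing a 40 ms round trip would be cut to 14 Mbps, while the same stream at a 10 ms round trip would creep up to 21 Mbps — the same basic trade-off any low-latency video streamer has to make between image quality and queueing delay.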
The exact setup of the demo on stage isn’t clear at this point, so we aren’t sure if it was a demonstration of the complete data center-to-device pipeline, or a locally rendered example just showing the capabilities of CloudXR without including a networked transmission.
VR/AR Cloud Rendering, 5G, and Edge Networks
Streaming AR and VR from the cloud has long been a technical possibility, one held back much more by latency than by bandwidth limitations. While CloudXR itself is of course designed to be low latency, another major source of latency in the cloud-rendering pipeline is network delivery: once a frame leaves the data center, it needs to reach the headset without adding much more delay.
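To put rough numbers on that budget (our own back-of-the-envelope figures, not published CloudXR requirements): a 90 Hz headset presents a new frame roughly every 11 ms, and motion-to-photon latency beyond about 20 ms is commonly cited as perceptible in VR, so encode, transit, and decode together have to fit into a very small allowance:

```python
# Back-of-the-envelope VR latency budget. All numbers are illustrative
# assumptions, not Nvidia's published CloudXR requirements.

refresh_hz = 90
frame_time_ms = 1000 / refresh_hz          # ~11.1 ms between presented frames

motion_to_photon_budget_ms = 20            # commonly cited comfort threshold
local_pipeline_ms = 11                     # assumed tracking + render + scanout
network_allowance_ms = motion_to_photon_budget_ms - local_pipeline_ms

print(f"frame time: {frame_time_ms:.1f} ms")
print(f"left for encode + network + decode: {network_allowance_ms} ms")
```

Under these assumptions only single-digit milliseconds remain for everything the network and codecs do — which is why techniques like asynchronous reprojection on the client are typically used to mask transport latency in cloud-rendered XR.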
While 5G does theoretically have lower latency than many existing network infrastructures, so-called ‘edge computing’ is another important piece of the latency puzzle. Low latency is as much a function of the physical distance between the data center and the endpoint as it is a function of the network’s capabilities; edge computing is the concept of locating cloud data centers physically near users to reduce latency.
A single data center in the middle of the continental US, for instance, may add too much latency by the time its frames reach the country’s coasts for a viable CloudXR experience. Edge computing proposes a distributed cluster of data centers, allowing rendering to happen at the data center nearest each individual user, thereby reducing the latency that comes from physical distance.
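The physics bear this out. Using rough figures of our own (not Nvidia’s): light travels through optical fiber at about 200,000 km/s, so round-trip propagation delay grows by roughly 1 ms for every 100 km of one-way fiber distance, before any routing or queuing delay is added:

```python
# Rough propagation delay through optical fiber. Illustrative only:
# real routes are longer than straight-line distance, and switching and
# queuing add further delay on top.

SPEED_IN_FIBER_KM_PER_MS = 200.0   # ~2/3 the speed of light in vacuum

def rtt_ms(distance_km):
    """Round-trip propagation delay for a given one-way fiber distance."""
    return 2 * distance_km / SPEED_IN_FIBER_KM_PER_MS

print(rtt_ms(2000))   # central US to a coast, ~2,000 km -> 20.0 ms
print(rtt_ms(100))    # nearby edge data center       -> 1.0 ms
```

A 2,000 km one-way path thus burns around 20 ms on propagation alone — already past a typical motion-to-photon comfort budget — while an edge node within ~100 km costs only about 1 ms, which is the core argument for edge computing here.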
It isn’t clear what kind of latency requirements Nvidia is recommending to make CloudXR viable, but being within range of an edge computing node may be as important to the equation as 5G.
Read more here: Road to VR