
BMW Group is building a virtual factory simulated end-to-end within NVIDIA Omniverse Enterprise, an open platform for virtual collaboration and real-time, physically accurate simulation that scales across multiple graphics processing units (GPUs). For many years, federal workers, from military personnel to agency and bureau analysts and decision makers, have used the compute and graphics performance of GPUs to convert vast amounts of data into actionable intelligence and information for their missions and tasks.

Advancements in GPU platforms now open a new world of virtual collaboration and simulation: 3D design teams across a large ecosystem of public sector agencies and contractors can work in real time in a shared virtual space while using different software applications simultaneously.

The power of an advanced GPU platform lies in its ability to handle very large, complex data environments, according to Tim Woodard, a senior solutions architect with NVIDIA. Many existing modeling solutions simply cannot scale across massive amounts of computer hardware to perform realistic simulations.

To that end, NVIDIA is creating true-to-reality, physically based environments to validate the artificial intelligence (AI) models used in autonomous vehicles. "In order to do that, you have to provide a very realistic representation of the world to stimulate the sensors that will actually be used in a real-world vehicle," Woodard said. "You want this vehicle to believe it is driving in the real world, and that requires a tremendous amount of scene complexity and realism in the way that scene is rendered."

Real-time ray tracing (RTX) is the game changer here.

Rasterization versus ray tracing

There are two primary ways to handle rendering in computer graphics: rasterization and ray tracing. Real-time computer graphics have long used rasterization to display three-dimensional (3D) objects on a two-dimensional (2D) screen. With rasterization, objects on the screen are built from a mesh of virtual triangles, or polygons, that form 3D models. The computer then converts the triangles of those models into pixels, or dots, on the 2D screen, and for each pixel it calculates a color using shaders that account for the positions of lights in the scene and the surface material properties of the polygons. Rasterization is fast, but ray tracing looks far more realistic.
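To make the distinction concrete, here is a minimal rasterization sketch in Python. It is illustrative only, not the actual GPU pipeline: the triangle vertices, light direction, flat normal, and color are assumptions chosen for the example.

```python
import numpy as np

def edge(a, b, p):
    # Cross product of (b - a) and (p - a): its sign says which side of
    # edge a->b the point p falls on.
    return (b[0] - a[0]) * (p[1] - a[1]) - (b[1] - a[1]) * (p[0] - a[0])

def rasterize_triangle(v0, v1, v2, width, height, base_color, light_dir, normal):
    image = np.zeros((height, width, 3))
    # Flat Lambertian shading: brightness depends on how directly the
    # surface faces the light.
    n = np.asarray(normal, dtype=float); n /= np.linalg.norm(n)
    l = np.asarray(light_dir, dtype=float); l /= np.linalg.norm(l)
    shade = max(float(np.dot(n, l)), 0.0)
    for y in range(height):
        for x in range(width):
            p = (x + 0.5, y + 0.5)  # sample at the pixel center
            # Coverage test: with vertices ordered consistently, interior
            # pixels sit on the same side of all three edges.
            if (edge(v0, v1, p) >= 0 and edge(v1, v2, p) >= 0
                    and edge(v2, v0, p) >= 0):
                image[y, x] = np.asarray(base_color) * shade
    return image

img = rasterize_triangle((10, 10), (110, 30), (50, 110), 128, 128,
                         base_color=(0.8, 0.2, 0.2),
                         light_dir=(0.2, 0.3, 1.0),
                         normal=(0.0, 0.0, 1.0))
```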

Instead of transforming objects into pixels, ray tracing works in the opposite direction: for each pixel of the image, it shoots a ray into the scene and determines which object that ray intersects. At the point of intersection, additional rays can be spawned, for example toward light sources or along reflections. The outcome is a very realistic simulation of how light interacts with the environment.
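The same idea in miniature: a sketch of a one-sphere ray tracer, again illustrative rather than production code. The camera, sphere, and light placement are assumptions; on each hit, a secondary shadow ray shows how additional rays are spawned at the intersection point.

```python
import numpy as np

def hit_sphere(origin, direction, center, radius):
    # Return the nearest positive ray parameter t, or None on a miss.
    oc = origin - center
    b = 2.0 * np.dot(oc, direction)
    c = np.dot(oc, oc) - radius * radius
    disc = b * b - 4.0 * c  # direction is unit length, so a == 1
    if disc < 0:
        return None
    t = (-b - np.sqrt(disc)) / 2.0
    return t if t > 0 else None

width, height = 64, 64
eye = np.array([0.0, 0.0, 0.0])
sphere_c, sphere_r = np.array([0.0, 0.0, -3.0]), 1.0
light_pos = np.array([2.0, 2.0, 0.0])
image = np.zeros((height, width))

for y in range(height):
    for x in range(width):
        # Primary ray through this pixel on a virtual image plane at z = -1.
        u = (x + 0.5) / width * 2.0 - 1.0
        v = 1.0 - (y + 0.5) / height * 2.0
        d = np.array([u, v, -1.0])
        d /= np.linalg.norm(d)
        t = hit_sphere(eye, d, sphere_c, sphere_r)
        if t is not None:
            p = eye + t * d                    # intersection point
            n = (p - sphere_c) / sphere_r      # surface normal
            to_light = light_pos - p
            to_light /= np.linalg.norm(to_light)
            # Secondary (shadow) ray, offset slightly to avoid
            # re-intersecting the surface it starts on.
            in_shadow = hit_sphere(p + 1e-4 * n, to_light, sphere_c, sphere_r)
            image[y, x] = 0.0 if in_shadow else max(np.dot(n, to_light), 0.0)
```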

But ray tracing requires a tremendous amount of computation, so it has typically been relegated to offline processing. Ray tracing is what makes movie images look real, for example, but for the graphic artists working on a film, rendering footage that takes only a minute to view may take days, weeks, or months.

NVIDIA has solved that problem by putting dedicated silicon, known as RT cores, on its GPUs, allowing graphics cards to speed up ray tracing by an order of magnitude, Woodard said. In addition, AI accelerates the process further by inferring what the final rendered frame will look like from only a few pixel samples. Normally, ray tracing a scene requires hundreds or thousands of samples per pixel to "de-noise" the image. Using AI, designers can take a noisy ray-traced image produced quickly with RT cores and infer how the final image will look.
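The workflow Woodard describes can be suggested with a toy example. A real AI denoiser is a trained neural network running on dedicated hardware; the sketch below substitutes a simple Gaussian filter purely to show the shape of the idea: render with few samples, get a noisy image, then infer a clean one. The scene, sample counts, and noise level are all assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_filter

rng = np.random.default_rng(0)
# Stand-in "scene": a smooth horizontal gradient as the ideal image.
ground_truth = np.linspace(0.0, 1.0, 256).reshape(1, -1).repeat(256, axis=0)

def render(samples_per_pixel):
    # Monte Carlo estimate: average noisy samples around the true value.
    noise = rng.normal(0.0, 0.3, size=(samples_per_pixel, *ground_truth.shape))
    return np.clip((ground_truth + noise).mean(axis=0), 0.0, 1.0)

converged = render(samples_per_pixel=1024)    # slow, nearly noise-free
noisy = render(samples_per_pixel=4)           # fast, visibly noisy
denoised = gaussian_filter(noisy, sigma=2.0)  # stand-in for the AI denoiser

# The denoised few-sample image approaches the converged one at a
# fraction of the sampling cost.
print(np.abs(noisy - converged).mean(), np.abs(denoised - converged).mean())
```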

"With that capability, we can interactively in real-time render exceedingly complex environments in a realistic manner across multiple domains for many use cases from simulation—training of robots or developing autonomous vehicles—to design and manufacturing," Woodard said. 

Going forward

NVIDIA Omniverse's GPU-based virtual collaboration and simulation platform runs on devices ranging from laptops to servers, transforming complex 3D production workflows. It is also built on an open framework, Pixar's Universal Scene Description (USD), so creators, designers, and engineers can unite their assets, libraries, and software applications within the platform and collaborate on design concepts in real time. Individual users can accelerate any workflow with one-click interoperability between leading software tools, while teams can collaborate seamlessly in an interactive, simulated world, whether onsite or as a cloud solution in a virtualized environment.
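Because USD is an open framework with public Python bindings (the pxr module, available via the usd-core package), a short sketch can show the kind of shared scene description that different tools read and write. The file names and prim paths below are hypothetical.

```python
from pxr import Usd, UsdGeom

# Create a new stage: the shared scene that tools and teams collaborate on.
stage = Usd.Stage.CreateNew("factory.usda")
world = UsdGeom.Xform.Define(stage, "/World")

# Author a simple placeholder asset directly on the stage.
robot = UsdGeom.Cube.Define(stage, "/World/RobotArmProxy")
robot.GetSizeAttr().Set(2.0)

# Reference an asset authored in another application; because USD is an
# open interchange format, that tool only needs to write USD too.
cell = stage.DefinePrim("/World/AssemblyCell")
cell.GetReferences().AddReference("assembly_cell.usd")  # hypothetical file

stage.GetRootLayer().Save()
```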

WWT and NVIDIA are working together to bring Omniverse Enterprise to the Advanced Technology Center, giving users a secure, consistent, and high-quality experience.

We have extensive experience helping government agencies define their cloud strategies, and building virtualized cloud solutions can bring it all together: collaborative virtualized environments let end users be more productive when working remotely and simplify their workflows.
