NVIDIA is collaborating with top academic researchers from 14 universities to present a record 16 research papers at SIGGRAPH 2022. Their collective work tackles graphics challenges with advancements in content creation, virtual reality, real-time rendering, and 3D simulation. These university collaborations produced a reinforcement learning model that smoothly simulates athletic movements, ultra-thin holographic glasses for virtual reality, and a real-time rendering technique for objects illuminated by hidden light sources.
The papers cover the breadth of graphics research, with advances in neural content creation tools, display and human perception, the mathematical foundations of computer graphics, and neural rendering. These projects will be on display at SIGGRAPH 2022, taking place August 8-11 in Vancouver and online.
Neural tool for versatile simulated characters
When a reinforcement learning model is used to develop a physics-based animated character, the AI typically learns one skill at a time: walking, running, or perhaps cartwheeling. But researchers from UC Berkeley, the University of Toronto, and NVIDIA have created a framework that allows the AI to learn a whole repertoire of skills – demonstrated above with a warrior character who can wield a sword, use a shield, and get back up after a fall.
Achieving these smooth, lifelike motions for animated characters is usually time-consuming and labor-intensive, with developers training the AI anew for each task. As the paper shows, the research team enabled the reinforcement learning AI to reuse previously learned skills in new scenarios, improving efficiency and reducing the need for additional motion data.
Tools like this can be used by creators in the fields of animation, robotics, games, and therapeutics. NVIDIA researchers will also present papers on 3D neural tools for surface reconstruction from point clouds and interactive shape editing, plus 2D tools for AI to better understand gaps in vector sketches and improve the visual quality of time-lapse videos.
Bringing Virtual Reality to Lightweight Glasses
3D digital worlds are usually accessed through bulky head-mounted displays, but researchers are working on lightweight alternatives that look like standard eyeglasses. In collaboration with Stanford, NVIDIA researchers have packed the technology needed for 3D holographic images into a wearable display just a few millimeters thick. The 2.5-millimeter display is less than half the thickness of other thin VR displays, known as pancake lenses, which use a technique called folded optics that can only support 2D images. The researchers accomplished this by treating display quality and display size as a joint computational problem and co-designing the optics with an AI-powered algorithm.
While previous VR displays required a gap between a magnifying eyepiece and a display panel to create a hologram, this new design uses a spatial light modulator, a device that can form holograms right in front of the user's eyes, eliminating the need for that space. Additional components – a pupil-replicating waveguide and a geometric phase lens – further shrink the device's footprint.
This is one of two research papers from Stanford and NVIDIA in this area; the other presents a new computer-generated holography framework that improves image quality while optimizing bandwidth usage. A third paper in display and perception research, co-authored with scientists from New York University and Princeton University, measures how rendering quality affects the speed at which users respond to information on the screen.
Lightbulb Moment: new levels of real-time lighting complexity
Accurately simulating light paths in a scene in real time has long been considered the "holy grail" of graphics. Work detailed in a paper by the University of Utah School of Computing and NVIDIA presents a path resampling algorithm that enables real-time rendering of scenes with complex lighting, including hidden light sources.
The paper highlights the use of statistical resampling techniques – the algorithm reuses calculations thousands of times while tracing these complex light paths – to efficiently approximate the light paths in real time. The researchers applied the algorithm to a classically difficult scene in computer graphics, pictured below: a set of indirectly lit metal, ceramic, and glass teapots.
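The statistical idea behind this family of work, resampled importance sampling, can be sketched in a few lines: draw many cheap candidate samples, then keep just one, chosen in proportion to how much it matters. This is a generic, simplified illustration of the technique, not the paper's actual algorithm; the function names and the toy integrand are assumptions for the example.

```python
import random

def ris_pick(target_pdf, m=32):
    """Resampled importance sampling: draw m cheap uniform candidates on
    [0, 1) and keep one via a weighted reservoir, biased toward target_pdf."""
    chosen, w_sum = None, 0.0
    for _ in range(m):
        x = random.random()          # cheap proposal; source pdf = 1
        w = target_pdf(x)            # resampling weight = target / source
        w_sum += w
        if w_sum > 0.0 and random.random() * w_sum < w:
            chosen = x               # keep candidate with probability w / w_sum
    if chosen is None or target_pdf(chosen) == 0.0:
        return None, 0.0
    # Unbiased contribution weight: f(chosen) * ucw estimates the integral of f.
    return chosen, w_sum / (m * target_pdf(chosen))

def estimate(trials=2000):
    """Toy use: estimate the integral of f(x) = x^2 on [0, 1] (exact: 1/3)."""
    f = lambda x: x * x              # target proportional to the integrand
    total = 0.0
    for _ in range(trials):
        x, ucw = ris_pick(f, m=8)
        if x is not None:
            total += f(x) * ucw
    return total / trials
```

Averaging `f(x) * ucw` over many calls converges to the integral, while each pick concentrates samples where the target is large – the same principle the paper scales up to reuse light-path computations across pixels and frames.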
Related NVIDIA papers include a new sampling strategy for inverse volume rendering, a novel mathematical representation for manipulating 2D shapes, software for creating samplers with improved uniformity for rendering and other applications, and a way to turn biased rendering algorithms into more efficient unbiased ones.
Neural Rendering: NeRFs and GANs Power Synthetic Scenes
Neural rendering algorithms learn from real-world data to create synthetic images, and NVIDIA research projects are developing state-of-the-art tools to do this in 2D and 3D.
In 2D, the StyleGAN-NADA model, developed in collaboration with Tel Aviv University, generates images in specific styles based on user text prompts, without requiring example images for reference. For example, users can generate images of vintage cars, turn their dog into a painting, or transform houses into huts:
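StyleGAN-NADA steers a pretrained generator by aligning the direction an image moves in embedding space with the direction between two text prompts (e.g. "photo" to "painting"). A minimal sketch of that directional loss, assuming embeddings are already available as plain vectors – the lists here are placeholders, not real encoder outputs:

```python
import math

def directional_loss(img_src, img_gen, txt_src, txt_gen):
    """1 - cosine similarity between the image-space edit direction
    and the text-space edit direction."""
    d_img = [g - s for g, s in zip(img_gen, img_src)]   # how the image moved
    d_txt = [g - s for g, s in zip(txt_gen, txt_src)]   # how the prompt moved
    dot = sum(a * b for a, b in zip(d_img, d_txt))
    norm = (math.sqrt(sum(a * a for a in d_img))
            * math.sqrt(sum(b * b for b in d_txt)))
    return 1.0 - dot / norm   # 0 when directions agree, 2 when opposed
```

Minimizing this loss pushes every generated image along the prompt's direction, which is how a text description alone can restyle the whole generator.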
In 3D, NVIDIA researchers are working with the University of Toronto to develop tools that can support the creation of large-scale virtual worlds. Instant neural graphics primitives, the NVIDIA paper behind the popular Instant NeRF tool, will also be presented.
NeRFs – 3D scenes reconstructed from a collection of 2D images – are just one capability of the neural graphics primitives technique, which can represent any complex spatial information, with applications such as image compression, highly accurate representations of 3D shapes, and ultra-high-resolution images.
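At the heart of every NeRF is a simple volume-rendering step: densities and colors sampled along a camera ray are composited front to back. A minimal single-channel sketch, assuming the samples come from an already-trained field (hard-coded lists stand in for network queries here):

```python
import math

def composite_ray(densities, colors, deltas):
    """Front-to-back compositing along one ray:
    C = sum_i T_i * (1 - exp(-sigma_i * delta_i)) * c_i."""
    color, transmittance = 0.0, 1.0
    for sigma, c, delta in zip(densities, colors, deltas):
        alpha = 1.0 - math.exp(-sigma * delta)  # opacity of this ray segment
        color += transmittance * alpha * c      # segment's visible contribution
        transmittance *= 1.0 - alpha            # light reaching deeper samples
    return color
```

An opaque first sample hides everything behind it, and zero density renders nothing; NeRF training backpropagates through a differentiable version of exactly this loop to fit the field to the input photos.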
That work is complemented by another University of Toronto collaboration that compresses 3D neural graphics primitives much as JPEG compresses 2D images, which could help users store and share 3D maps and entertainment experiences across small devices such as phones and robots.
NVIDIA has more than 300 researchers worldwide, with teams focused on topics including AI, computer graphics, computer vision, self-driving cars, and robotics.
Learn more about NVIDIA research here.
Dan Sarto is the publisher and editor of Animation World Network.