Get Ready to be Blown Away! NVIDIA's Crazy New Neural Engine is Redefining Realism in Graphics!

Real-Time Neural Appearance Models

Hello everyone! Today, we’re talking about the latest breakthrough from #NVIDIA. They’ve developed a new neural engine that can render ultra-realistic, film-quality materials in real time. This means we’re entering a new era of graphics processing that’s faster and more advanced than ever before. In this video, we’ll explore what this new technology can do and how it’s set to revolutionize a range of industries. So, without further ado, let’s dive in!

Our Discord server ⤵️
https://bit.ly/SECoursesDiscord

If I have been of assistance to you and you would like to show your support for my work, please consider becoming a patron on Patreon 🥰 ⤵️
https://www.patreon.com/SECourses

Technology & Science: News, Tips, Tutorials, Tricks, Best Applications, Guides, Reviews ⤵️
https://www.youtube.com/playlist?list=PL_pbwdIyffsnkay6X91BWb9rrfLATUMr3

Playlist of Stable Diffusion Tutorials, Automatic1111 and Google Colab Guides, DreamBooth, Textual Inversion / Embedding, LoRA, AI Upscaling, Pix2Pix, Img2Img ⤵️
https://www.youtube.com/playlist?list=PL_pbwdIyffsmclLl0O144nQRnezKlNdx3

Paper and video link ⤵️
https://research.nvidia.com/labs/rtr/neural_appearance_models/

Info from the paper

Abstract
We present a complete system for real-time #rendering of scenes with complex appearance previously reserved for offline use. This is achieved with a combination of algorithmic and system-level innovations.
Our appearance model utilizes learned hierarchical textures that are interpreted using neural decoders, which produce reflectance values and importance-sampled directions. To best utilize the modeling capacity of the decoders, we equip the decoders with two graphics priors. The first prior—transformation of directions into learned shading frames—facilitates accurate reconstruction of mesoscale effects. The second prior—a microfacet sampling distribution—allows the neural decoder to perform importance sampling efficiently. The resulting appearance model supports anisotropic sampling and level-of-detail rendering, and allows baking deeply layered material graphs into a compact unified neural representation.
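
To make the decoder idea above concrete, here is a minimal, hypothetical Python/NumPy sketch. All names, layer sizes, and the QR-based frame construction are our own assumptions for illustration, not details from the paper; the second prior (the microfacet sampling distribution) is omitted for brevity.

import numpy as np

rng = np.random.default_rng(0)
LATENT, HIDDEN = 8, 32

# Stand-ins for trained weights; in the real system these are optimized per material.
W_frame = rng.standard_normal((LATENT, 9)) * 0.1   # latent -> 3x3 shading frame
W1 = rng.standard_normal((LATENT + 6, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1

def decode_reflectance(latent, wi, wo):
    # Prior 1: rotate directions into a learned, latent-dependent shading frame.
    R, _ = np.linalg.qr((W_frame.T @ latent).reshape(3, 3))
    x = np.concatenate([latent, R @ wi, R @ wo])
    h = np.maximum(W1.T @ x, 0.0)   # small ReLU MLP
    return np.exp(W2.T @ h)         # positive RGB reflectance

latent = rng.standard_normal(LATENT)   # would be fetched from the hierarchical neural texture
wi = np.array([0.0, 0.0, 1.0])
wo = np.array([0.3, 0.0, 0.95]); wo /= np.linalg.norm(wo)
print(decode_reflectance(latent, wi, wo))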

By exposing hardware-accelerated tensor operations to ray tracing shaders, we show that it is possible to inline and execute the neural decoders efficiently inside a real-time path tracer. We analyze scalability with an increasing number of neural materials and propose to improve performance using code optimized for coherent and divergent execution. Our neural material shaders can be over an order of magnitude faster than non-neural layered materials. This opens the door to using film-quality visuals in real-time applications such as games and live previews.
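
As a loose illustration of why coherent execution matters (our sketch, not NVIDIA's shader code): rays shaded with the same neural material can evaluate the decoder as one batched matrix multiply, which is exactly the shape of work that tensor hardware accelerates, while fully divergent rays degenerate into many tiny evaluations that cannot be batched.

import numpy as np

rng = np.random.default_rng(1)
N_RAYS, IN, HIDDEN = 4096, 14, 32
W1 = rng.standard_normal((IN, HIDDEN)) * 0.1
W2 = rng.standard_normal((HIDDEN, 3)) * 0.1
inputs = rng.standard_normal((N_RAYS, IN))

def coherent_eval(x):
    # One large matrix multiply per layer: all rays share the same weights.
    return np.maximum(x @ W1, 0.0) @ W2

def divergent_eval(x):
    # Per-ray evaluation: the same math, but nothing to amortize over.
    return np.stack([np.maximum(r @ W1, 0.0) @ W2 for r in x])

assert np.allclose(coherent_eval(inputs), divergent_eval(inputs))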

1 INTRODUCTION
Recent progress in rendering algorithms, light transport methods, and ray tracing hardware has pushed the limits of image quality that can be achieved in real time. However, progress in real-time material models has noticeably lagged behind. While deeply layered materials and sophisticated node graphs are commonplace in offline rendering, such approaches are often far too costly for real-time applications. Aside from computational cost, sophisticated materials pose additional challenges for importance sampling and filtering: highly detailed materials will alias severely under minification, and the complex multi-lobe reflectance of layered materials causes high variance if not sampled properly.

Recent work in neural appearance modeling [Kuznetsov et al. 2022; Sztrajman et al. 2021; Zheng et al. 2021] has shown that multilayer perceptrons (MLPs) can be an effective tool for appearance modeling, importance sampling, and filtering. Nevertheless, these models do not support film-quality appearance; a scalable solution that can handle high-fidelity visuals in real time has yet to be demonstrated.
In this paper, we set our goal accordingly: to render film-quality materials, such as those used in the #VFX industry, in real time. These materials prioritize realism and visual fidelity, relying on very high-resolution textures. Layering of reflectance components, rather than an uber-shader, is used to generate material appearance, yielding arbitrary BRDF combinations with hundreds of parameters. For these reasons, porting such materials to real-time applications is challenging.

In order to render film-quality appearance in real time, we i) carefully cherry-pick components from prior works, ii) introduce algorithmic innovations, and iii) develop a scalable solution for inlining neural networks in the innermost rendering loop, both for classical rasterization and path tracing. We choose to forgo editability in favor of performance, effectively “baking” the reference material into a neural texture interpreted by neural networks. Our model can thus be viewed as an optimized representation for fast rendering, which is baked (via optimization) after editing has taken place.
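
The “baking via optimization” step can be pictured with a toy fitting loop (our illustration; the paper's actual optimizer, losses, and architecture differ): gradient descent jointly fits a per-texel latent code and decoder weights until the baked representation reproduces reference reflectance samples.

import numpy as np

rng = np.random.default_rng(2)
LATENT, N, LR = 8, 1024, 0.1

dirs = rng.standard_normal((N, 6))                       # concatenated wi, wo per sample
ref = dirs @ np.array([0.8, -0.3, 0.5, 0.2, -0.6, 0.1])  # stand-in reference reflectance

latent = rng.standard_normal(LATENT) * 0.1               # one "neural texture" texel
M = rng.standard_normal((6, LATENT)) * 0.1               # toy linear decoder weights

for step in range(2001):
    pred = dirs @ M @ latent                             # decode reflectance
    err = pred - ref
    # Analytic gradients of mean squared error w.r.t. both decoder and latent.
    g_M = (2.0 / N) * dirs.T @ np.outer(err, latent)
    g_latent = (2.0 / N) * (dirs @ M).T @ err
    M -= LR * g_M
    latent -= LR * g_latent
    if step % 500 == 0:
        print(step, float(np.mean(err ** 2)))            # loss shrinks as the material bakes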
