I collaborated on the creation of an e-book about lighting and rendering in Unity’s High Definition Render Pipeline. It is a comprehensive guide that helps technical artists and developers further explore the high-end, physically based lighting techniques available in HDRP on both PC and console.
In particular, we cover anti-aliasing methods, physically based lighting and cameras, environment effects (e.g. sky, fog, and clouds), shadows, reflections, post-processing, and ray tracing.
Over the past few months, I have worked with a graphics engineer to provide a fast, easy-to-set-up solution for volumetric clouds in Unity’s High Definition Render Pipeline (HDRP). I published an article on the official Unity website describing the technology behind this new rendering feature, its ease of use, as well as its known limitations and upcoming improvements.
For the promotion of this new system, I created all the corresponding marketing material, such as the entire trailer above and various beauty shots, including terrain and lighting.
In this half-hour session produced for SIGGRAPH 2021, I presented some of the most exciting features introduced in Unity 2021.2, for the High Definition Render Pipeline (HDRP) and Universal Render Pipeline (URP).
I first teach users about sky rendering in Unity, from HDRI sky with distortion support to animated Cloud Layers (up to 8 texture-driven layers). I then move on to showcase the brand-new ray marched Volumetric Clouds for the High Definition Render Pipeline (HDRP).
In the second half, I present the new Lens Flares, which help creators simulate camera lens effects in a couple of clicks, and the new Light Anchor system, which dramatically speeds up the production of cutscene and product lighting thanks to handy controls and presets.
In this webinar originally created for NVIDIA, I demonstrate how to use the ray tracing effects provided by Unity’s High Definition Render Pipeline (HDRP). I showcase in detail ray-traced ambient occlusion, reflections, global illumination, shadows, and path tracing.
I also explain how to use the various debug views to isolate these effects and offer recommendations to ensure a suitable frame rate for a wide variety of use cases, from games to high-end visualizations.
Ray Traced Ambient Occlusion (RTAO)
Screen-space ambient occlusion (SSAO) has been a staple of real-time rendering for games for more than a decade. It is used to simulate the environment’s diffuse occlusion, in order to improve visual contact between objects in the world and darken the lighting in concave areas. However, when pushed too far, this effect can produce halos around geometry, and even a cartoony look. On top of that, one of the main drawbacks of this screen-space technique is its inability to generate occlusion from objects that reside outside the frame, as it relies solely on the depth information available in the Z-buffer. On the plus side, this effect is still great at handling micro-occlusion of small areas in the camera’s perspective, for a relatively low performance cost.
Fortunately, thanks to Ray Tracing, rays can be shot at surfaces beyond the camera frustum, and are therefore able to reach objects located outside the frame. This way, you can get great macro-occlusion from large objects located all around the camera. Although AO is technically only a rough approximation of environment lighting, it can complement other lighting techniques such as lightmaps or light probes, whose resolution or density is limited and therefore unable to capture micro-occlusion.
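To illustrate the core idea, here is a minimal Python sketch (a toy model, not HDRP code): ambient occlusion is essentially the fraction of rays, distributed over the hemisphere above a surface point, that escape to the sky without hitting geometry.

```python
import math
import random

def cosine_sample_hemisphere(rng):
    # Cosine-weighted direction around the +z normal (Malley's method).
    u1, u2 = rng.random(), rng.random()
    r, phi = math.sqrt(u1), 2.0 * math.pi * u2
    return (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))

def ambient_occlusion(occluded, n_rays=20000, seed=1):
    # Fraction of cosine-weighted rays that escape to the sky.
    rng = random.Random(seed)
    visible = sum(0 if occluded(cosine_sample_hemisphere(rng)) else 1
                  for _ in range(n_rays))
    return visible / n_rays

# Toy occluder: an infinite wall filling the half-space x < 0,
# next to a shading point on the ground (surface normal = +z).
wall = lambda direction: direction[0] < 0.0

print(ambient_occlusion(wall))  # close to 0.5: half the hemisphere is blocked
```

A screen-space technique can only test such rays against the depth buffer, while a ray-traced version can test them against the full scene, which is exactly why it captures occluders outside the frame.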
Ray Traced Reflections (RTR)
In a similar fashion to SSAO, screen-space reflections (SSR) can only reflect objects located in the frame: again, surfaces that aren’t immediately visible to the camera cannot be reflected. For instance, when looking down at the floor, SSR has hardly any on-screen information to work with. Therefore, SSR is very approximate, and the technique has many detractors, including yours truly, as a good placement of static Reflection Probes can often provide more appealing and less distracting results for most static scenarios. However, one area where SSR literally shines is planar reflections on surfaces parallel to the view direction, such as floors, walls, and ceilings. An optimal use case for SSR would be a camera whose pitch is locked, such as in a racing game.
With Ray Tracing, however, we can access information that resides outside the screen and, as a consequence, offer a more accurate reflection of the world, at least within a certain radius around the camera, defined by the Light Cluster and the length of the rays.
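The mathematics of the bounce itself is the same in both techniques; the difference is what the reflected ray is allowed to hit. As a small illustrative Python sketch of the standard mirror-reflection formula:

```python
def reflect(d, n):
    # Reflect incoming direction d about the unit surface normal n:
    # r = d - 2 (d . n) n
    dot = sum(di * ni for di, ni in zip(d, n))
    return tuple(di - 2.0 * dot * ni for di, ni in zip(d, n))

# A ray travelling down at 45 degrees hits a floor (normal +y)
# and bounces back up at 45 degrees.
print(reflect((1.0, -1.0, 0.0), (0.0, 1.0, 0.0)))  # (1.0, 1.0, 0.0)
```

SSR marches this reflected direction through the depth buffer and fails as soon as it leaves the frame, whereas a ray-traced implementation intersects it against the actual scene geometry around the camera.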
Ray Traced Global Illumination (RTGI)
One of the most impressive features of Ray Tracing is the ability to generate real-time global illumination, that is, the simulation of indirect lighting or, simply put, light bouncing around the environment.
Typically, in game engines, indirect lighting is handled with pre-computed (baked) techniques, such as light probes or lightmaps, which can greatly slow down iteration times for artists and designers working on lighting.
Thankfully, HDRP offers two modes for RTGI: Performance and Quality. The former is geared towards high-frame-rate scenarios in direct light, whereas the latter can provide very accurate results in more complex interiors thanks to multiple bounces and samples, albeit at a very high computational cost.
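At its heart, RTGI is a Monte Carlo estimate of the light arriving at a surface from every direction. The following toy Python sketch (an illustration of the principle, not HDRP’s implementation) estimates one bounce of indirect diffuse lighting on a matte surface; the sample count is exactly the quality-versus-performance knob the two modes trade on.

```python
import math
import random

def indirect_diffuse(albedo, incoming_radiance, n_samples=10000, seed=7):
    # Monte Carlo estimate of one-bounce indirect diffuse lighting on a
    # Lambertian surface. With cosine-weighted sampling, the cosine term
    # and the 1/pi in the BRDF cancel, leaving:
    #   L_out = albedo * mean(L_in(direction))
    rng = random.Random(seed)
    total = 0.0
    for _ in range(n_samples):
        u1, u2 = rng.random(), rng.random()
        r, phi = math.sqrt(u1), 2.0 * math.pi * u2
        d = (r * math.cos(phi), r * math.sin(phi), math.sqrt(1.0 - u1))
        total += incoming_radiance(d)
    return albedo * total / n_samples

# Uniform white sky of radiance 1: a 50% grey surface receives 0.5.
print(round(indirect_diffuse(0.5, lambda d: 1.0), 2))  # 0.5
```

A real renderer traces each sample direction into the scene (and recurses for extra bounces), which is why multi-bounce, multi-sample Quality mode is so much more expensive than Performance mode.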
Out of the box, when using the High shadow filtering quality (PCSS), HDRP provides great-looking shadow maps that simulate the natural softness of shadows while keeping them sharp near the shadow casters, as in real life. However, when using the cheaper Medium filtering quality, results can be underwhelming, as the entire shadow map is filtered uniformly, regardless of the distance between casters and receivers.
Results can be improved dramatically with Ray Traced shadows, which shoot rays from surfaces towards the lights to determine the amount of occlusion between them. This provides an extremely realistic approximation of the shadowing, for a moderate performance cost. In addition, HDRP supports transparent shadows!
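The caster-to-receiver distance matters because of simple similar-triangles geometry, which is what PCSS-style filtering estimates. A minimal Python sketch of that penumbra estimate (illustrative only, not HDRP’s exact formula):

```python
def penumbra_width(light_size, blocker_depth, receiver_depth):
    # PCSS-style penumbra estimate for an area light (similar triangles):
    # the further the receiver is from its blocker, the softer the shadow.
    return light_size * (receiver_depth - blocker_depth) / blocker_depth

# A 0.5 m light, blocker 2 m from the light, receiver 4 m from the light:
print(penumbra_width(0.5, 2.0, 4.0))  # 0.5

# Receiver just behind the blocker: a crisp, contact-hardened shadow.
print(round(penumbra_width(0.5, 2.0, 2.1), 3))  # 0.025
```

Uniform (Medium) filtering ignores this relationship and blurs everything by the same amount, while ray-traced shadows sample the light’s area directly and get the penumbra right by construction.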
I was a leading force behind the new scene template for the High Definition Render Pipeline, from early requirements to final delivery. It showcases most of HDRP’s key features, from the volume system to physically based lighting and exposure. Furthermore, I single-handedly created the following trailer.
The new template is set up in a physically based way, with a realistic sun intensity at 100,000 lux and correct exposures for each location. Beginners now have a good setup to start lighting their scenes, and they can experiment confidently with this template, knowing that the lighting is already correctly tuned.
To help users understand how the lighting is set up, I prepared a cheat sheet with important values, with color temperatures and intensities of common light sources, as well as exposure values of typical scenes.
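To give a flavor of the kind of data such a cheat sheet contains, here is a small illustrative Python snippet with a few representative, approximate values (rounded reference numbers, not the cheat sheet itself):

```python
# Approximate color temperatures of common light sources (illustrative).
color_temperature_kelvin = {
    "candle flame":      1850,
    "incandescent bulb": 2700,
    "halogen lamp":      3000,
    "midday sun":        5500,
    "overcast sky":      7000,
}

# Approximate illuminance of typical scenes, in lux (illustrative).
illuminance_lux = {
    "moonlit night":        0.25,
    "living room at night": 50,
    "office lighting":      400,
    "overcast day":         1500,
    "direct sunlight":      100000,
}

for source, kelvin in color_temperature_kelvin.items():
    print(f"{source}: {kelvin} K")
```

Values like these make it much easier to sanity-check a physically based setup: if an interior at night reads like direct sunlight, something is off.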
For Unite Now, Unity’s online conference, I created an hour-long video session about Unity’s High Definition Render Pipeline. HDRP is tailored for high-quality visuals on high-end platforms, such as PC and consoles.
Unity’s Lights, Shadows, Reflections, and Volumetric Fog settings are covered in detail, to help Unity users maximize the visual quality in their projects. I end the session by talking about some of the most important post-processing effects, such as tonemapping, white balance, and depth of field.
I present several key features of HDRP, such as the volume system, the anti-aliasing techniques, the exposure system, the volumetric effects, and the different lighting components required to set up the lighting in Unity.
I also describe precisely the physically-based lighting concepts and the photography theory required to light a scene correctly with HDRP, using real-world exposures and light intensities for natural and artificial lights.
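One cornerstone of that photography theory is the exposure value at ISO 100 (EV100), which physical camera models are built on. A minimal Python sketch of the standard formula (illustrative, not HDRP code):

```python
import math

def ev100(aperture_f, shutter_s, iso):
    # Exposure value normalized to ISO 100:
    #   EV100 = log2(N^2 / t) - log2(S / 100)
    # where N is the f-number, t the shutter time, S the ISO sensitivity.
    return math.log2(aperture_f ** 2 / shutter_s) - math.log2(iso / 100.0)

# "Sunny 16" rule: f/16 at 1/125 s, ISO 100 gives roughly EV 15,
# the classic exposure for a scene in direct sunlight.
print(round(ev100(16.0, 1.0 / 125.0, 100)))  # 15
```

Working in these real-world units is what lets a 100,000-lux sun and an indoor lamp coexist in one scene and still expose correctly.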
I produced these real-time renders with Unity’s High Definition Render Pipeline. They were the fruit of a collaboration between Lexus and Unity in September 2019 to showcase Unity as a virtual production tool for automotive. I was in charge of all the visual quality aspects of this demo, from lighting to materials, as well as post-processing and color grading setups.
In the following official Unity article, I explain in detail how the lighting was set up for this demo, and how physically correct lighting and camera setups can help Unity users reach an impressive level of visual fidelity for high-end visualizations.
I have published an official and comprehensive Unity expert guide (PDF) on advanced techniques for creating high-quality light fixtures for real-time applications. Read it to find out how you can use light cookies and advanced shaders to create convincing artificial light sources in any project made with Unity, from games and architectural visualizations to films and more!
After pointing out some of the common lighting mistakes still found in CGI nowadays, and giving recommendations on how to avoid them, the expert guide walks you through all the steps required to generate beautiful, noise-free cookies with a variety of 2D and 3D programs, such as Photoshop, 3ds Max, and Unity itself.
Moreover, I explain how to set up critical post-processing settings in Unity, such as Exposure and Tone Mapping, so that your interior scenes can be lit in a more physically correct way, one of HDRP’s mottos.
Finally, I introduce an original workflow for generating appealing caustics that bring the final ultra-realistic touch to your light sources, adding micro-details that simulate the self-reflections of the light fitting and the structural imperfections found in reflectors and lampshades.
Shortly after joining Unity, I wrote a Best Practice Guide for Unity’s official documentation about the lighting and rendering pipelines in Unity, because the multitude of lighting features and permutations can initially seem daunting. Indeed, Unity provides several lighting systems and rendering pipelines to accommodate a very large variety of platforms and project types, and it can be difficult to get a good overview of the lighting pipeline.
In the document, I explain important lighting principles and how the renderer generates a frame. To facilitate the understanding of these concepts, I have created several diagrams, decision flowcharts, and tables to give a high-level perspective of the lighting pipeline and to help users decide which render pipeline and which global lighting settings would best suit their requirements.