
Unity Publications

At Unity, we do research in Graphics, AI, Performance, and much more. We share that research with you and the community through talks, conferences, and journals. See below for the latest publications.

2022

Practical Real-Time Hex-Tiling

Morten S. Mikkelsen - Journal of Computer Graphics Techniques (JCGT)

To provide a convenient and easy-to-adopt approach to randomly tiled textures in the context of real-time graphics, we propose an adaptation of the by-example noise algorithm of Heitz and Neyret. The original method preserves contrast using a histogram-preserving method which requires a precomputation step to convert the source texture into a transform and inverse-transform texture, both of which must be sampled in the shader rather than the original source texture. Thus, deep integration into the application is required for this to appear opaque to the author of the shader and material. In our adaptation, we omit histogram preservation and replace it with a novel blending method which allows us to sample the original source texture. This omission is particularly sensible for a normal map, since it represents the partial derivatives of a height map. In order to diffuse the transition between hex tiles, we introduce a simple metric to adjust the blending weights. For color textures, we reduce the loss of contrast by applying a contrast function directly to the blending weights. Though our method works for color, we emphasize the use case of normal maps in our work because non-repetitive noise is ideal for mimicking surface detail by perturbing normals.
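
To make the weight adjustment concrete, here is a minimal Python sketch of a contrast function applied directly to blending weights. The power-and-renormalize form and the `exponent` parameter are illustrative assumptions, not the paper's exact function:

```python
def sharpen_blend_weights(w1, w2, w3, exponent=8.0):
    """Illustrative contrast adjustment for hex-tile blending weights:
    raising the weights to a power and renormalizing sharpens the
    transition between tiles, which counteracts the loss of contrast
    that plain linear blending produces."""
    ws = [w1 ** exponent, w2 ** exponent, w3 ** exponent]
    total = sum(ws)
    return [w / total for w in ws]
```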

ProtoRes: Proto-Residual Architecture for Deep Modeling of Human Pose

Boris N. Oreshkin, Florent Bocquelet, Félix G. Harvey, Bay Raitt, Dominic Laflamme - ICLR 2022 (Oral, top 5% of accepted papers)

Our work focuses on the development of a learnable neural representation of human pose for advanced AI assisted animation tooling. Specifically, we tackle the problem of constructing a full static human pose based on sparse and variable user inputs (e.g. locations and/or orientations of a subset of body joints). To solve this problem, we propose a novel neural architecture that combines residual connections with prototype encoding of a partially specified pose to create a new complete pose from the learned latent space. We show that our architecture outperforms a baseline based on Transformer, both in terms of accuracy and computational efficiency. Additionally, we develop a user interface to integrate our neural model in Unity, a real-time 3D development platform. Furthermore, we introduce two new datasets representing the static human pose modeling problem, based on high-quality human motion capture data, which will be released publicly along with model code.

Htex: Per-Halfedge Texturing for Arbitrary Mesh Topologies

Wilhem Barbier, Jonathan Dupuy - HPG 2022

We introduce per-halfedge texturing (Htex), a GPU-friendly method for texturing arbitrary polygon meshes without an explicit parameterization. Htex builds upon the insight that halfedges encode an intrinsic triangulation for polygon meshes, where each halfedge spans a unique triangle with direct adjacency information. Rather than storing a separate texture per face of the input mesh, as is done by previous parameterization-free texturing methods, Htex stores a square texture for each halfedge and its twin. We show that this simple change from face to halfedge induces two important properties for high-performance parameterization-free texturing. First, Htex natively supports arbitrary polygons without requiring dedicated code for, e.g., non-quad faces. Second, Htex leads to a straightforward and efficient GPU implementation that uses only three texture fetches per halfedge to produce continuous texturing across the entire mesh. We demonstrate the effectiveness of Htex by rendering production assets in real time.

A Data-Driven Paradigm for Precomputed Radiance Transfer

Laurent Belcour, Thomas Deliot, Wilhem Barbier, Cyril Soler - HPG 2022

In this work, we explore a change of paradigm to build Precomputed Radiance Transfer (PRT) methods in a data-driven way. This paradigm shift allows us to alleviate the difficulties of building traditional PRT methods, such as defining a reconstruction basis, coding a dedicated path tracer to compute a transfer function, etc. Our objective is to pave the way for machine-learned methods by providing a simple baseline algorithm. More specifically, we demonstrate real-time rendering of indirect illumination in hair and surfaces from a few measurements of direct lighting. We build our baseline from pairs of direct and indirect illumination renderings, using only standard tools such as Singular Value Decomposition (SVD) to extract both the reconstruction basis and the transfer function.
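
A hedged numpy sketch of the kind of baseline the abstract describes (the function name, the least-squares formulation, and the truncation rank are our assumptions, not the paper's exact pipeline): fit a linear transfer operator from paired direct/indirect renderings via a truncated SVD pseudo-inverse.

```python
import numpy as np

def fit_transfer(D, G, rank=64):
    """Least-squares fit of a linear transfer operator T with G ~ T @ D.
    D, G: (pixels, n) column-stacked flattened direct / indirect renderings.
    The pseudo-inverse is computed through an SVD truncated to `rank`."""
    U, s, Vt = np.linalg.svd(D, full_matrices=False)
    k = min(rank, len(s))
    D_pinv = Vt[:k].T @ np.diag(1.0 / s[:k]) @ U[:, :k].T
    return G @ D_pinv

# At runtime, indirect illumination would be reconstructed as T @ d
# from a few measurements d of direct lighting.
```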

A Halfedge Refinement Rule for Parallel Loop Subdivision

Bringing Linearly Transformed Cosines to Anisotropic GGX

Aakash KT, Eric Heitz, Jonathan Dupuy, P.J. Narayanan - I3D 2022

Linearly Transformed Cosines (LTCs) are a family of distributions that are used for real-time area-light shading thanks to their analytic integration properties. Modern game engines use an LTC approximation of the ubiquitous GGX model, but currently this approximation only exists for isotropic GGX and thus anisotropic GGX is not supported. While the higher dimensionality presents a challenge in itself, we show that several additional problems arise when fitting, post-processing, storing, and interpolating LTCs in the anisotropic case. Each of these operations must be done carefully to avoid rendering artifacts. We find robust solutions for each operation by introducing and exploiting invariance properties of LTCs. As a result, we obtain a small 8^4 look-up table that provides a plausible and artifact-free LTC approximation to anisotropic GGX and brings it to real-time area-light shading.
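
For context on what the look-up table stores, here is a minimal numpy sketch of how a single LTC is evaluated from a fitted transform matrix (the standard LTC density; the 8^4 table indexing over view angle, roughness, and anisotropy parameters is omitted, and variable names are ours):

```python
import numpy as np

def eval_ltc(M_inv, w):
    """Evaluate a Linearly Transformed Cosine at unit direction w, given
    the inverse M_inv of the fitted 3x3 transform of a clamped cosine."""
    wo = M_inv @ w
    length = np.linalg.norm(wo)
    cosine = max(wo[2] / length, 0.0) / np.pi          # clamped cosine lobe
    jacobian = abs(np.linalg.det(M_inv)) / length**3   # change of variables
    return cosine * jacobian
```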

Rendering Layered Materials with Diffuse Interfaces

Heloise de Dinechin, Laurent Belcour - I3D 2022

In this work, we introduce a novel method to render, in real time, Lambertian surfaces with a rough dielectric coating. We show that the appearance of such configurations is faithfully represented with two microfacet lobes accounting for direct and indirect interactions respectively. We numerically fit these lobes based on the first-order directional statistics (energy, mean, and variance) of light transport using 5D tables and narrow them down to 2D + 1D with analytical forms and dimension reduction. We demonstrate the quality of our method by efficiently rendering rough plastics and ceramics, closely matching ground truth. In addition, we improve a state-of-the-art layered material model to include Lambertian interfaces.

2021-2019

FC-GAGA: Fully Connected Gated Graph Architecture for Spatio-Temporal Traffic Forecasting

Boris N. Oreshkin, Arezou Amini, Lucy Coyle, Mark J. Coates - AAAI 2021

Forecasting of multivariate time-series is an important problem that has applications in traffic management, cellular network configuration, and quantitative finance. A special case of the problem arises when there is a graph available that captures the relationships between the time-series. In this paper we propose a novel learning architecture that achieves performance competitive with or better than the best existing algorithms, without requiring knowledge of the graph. The key element of our proposed architecture is the learnable fully connected hard graph gating mechanism that enables the use of the state-of-the-art and highly computationally efficient fully connected time-series forecasting architecture in traffic forecasting applications. Experimental results for two public traffic network datasets illustrate the value of our approach, and ablation studies confirm the importance of each element of the architecture.

A Sliced Wasserstein Loss for Neural Texture Synthesis

Eric Heitz, Kenneth Vanhoey, Thomas Chambon, Laurent Belcour - CVPR 2021

We address the problem of computing a textural loss based on the statistics extracted from the feature activations of a convolutional neural network optimized for object recognition (e.g. VGG-19). The underlying mathematical problem is the measure of the distance between two distributions in feature space. The Gram-matrix loss is the ubiquitous approximation for this problem but it is subject to several shortcomings. Our goal is to promote the Sliced Wasserstein Distance as a replacement for it. It is theoretically proven, practical, simple to implement, and achieves results that are visually superior for texture synthesis by optimization or training generative neural networks.
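
A minimal numpy sketch of the loss (assuming equal sample counts; in the paper the point clouds are VGG-19 feature activations): project both distributions onto random directions and compare the sorted projections, which solves the 1D optimal transport problem in closed form.

```python
import numpy as np

def sliced_wasserstein_sq(X, Y, n_proj=64, seed=0):
    """Average squared 2-Wasserstein distance between 1D projections of
    two point clouds X, Y of shape (n, d). Sorting each projection gives
    the 1D optimal transport plan in closed form."""
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_proj):
        v = rng.normal(size=X.shape[1])
        v /= np.linalg.norm(v)                # random unit direction
        total += np.mean((np.sort(X @ v) - np.sort(Y @ v)) ** 2)
    return total / n_proj
```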

Improved Shader and Texture Level of Detail Using Ray Cones

Tomas Akenine-Möller, Cyril Crassin, Jakub Boksansky, Laurent Belcour, Alexey Panteleev, Oli Wright - Published in Journal of Computer Graphics Techniques (JCGT)

In real-time ray tracing, texture filtering is an important technique to increase image quality. Current games, such as Minecraft with RTX on Windows 10, use ray cones to determine texture-filtering footprints. In this paper, we present several improvements to the ray-cones algorithm that improve image quality and performance and make it easier to adopt in game engines. We show that the total time per frame can decrease by around 10% in a GPU-based path tracer, and we provide a public-domain implementation.

Bringing an Accurate Fresnel to Real-Time Rendering: a Preintegrable Decomposition

Laurent Belcour, Megane Bati, Pascal Barla - Published in ACM SIGGRAPH 2020 Talks and Courses

We introduce a new approximate Fresnel reflectance model that enables the accurate reproduction of ground-truth reflectance in real-time rendering engines. Our method is based on an empirical decomposition of the space of possible Fresnel curves. It is compatible with the preintegration of image-based lighting and area lights used in real-time engines. Our work permits the use of a reflectance parametrization [Gulbrandsen 2014] that was previously restricted to offline rendering.

Concurrent Binary Trees

Jonathan Dupuy - HPG 2020

We introduce the concurrent binary tree (CBT), a novel concurrent representation to build and update arbitrary binary trees in parallel. Fundamentally, our representation consists of a binary heap, i.e., a 1D array, that explicitly stores the sum-reduction tree of a bitfield. In this bitfield, each one-valued bit represents a leaf node of the binary tree encoded by the CBT, which we locate algorithmically using a binary search over the sum-reduction. We show that this construction allows us to dispatch down to one thread per leaf node and that, in turn, these threads can safely split and/or remove nodes concurrently via simple bitwise operations over the bitfield. The practical benefit of CBTs lies in their ability to accelerate binary-tree-based algorithms with parallel processors. To support this claim, we leverage our representation to accelerate a longest-edge-bisection-based algorithm that computes and renders adaptive geometry for large-scale terrains entirely on the GPU. For this specific algorithm, the CBT accelerates processing speed linearly with the number of processors.
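
A minimal Python sketch of the core lookup, assuming the sum-reduction tree is stored level by level (the GPU version packs everything into a single 1D heap and updates the bitfield with atomic bitwise operations):

```python
def build_sum_tree(bitfield):
    """Level 0 is the bitfield itself; each further level sums pairs,
    so the last level holds the total number of leaf nodes (set bits)."""
    levels = [list(bitfield)]
    while len(levels[-1]) > 1:
        prev = levels[-1]
        levels.append([prev[2*k] + prev[2*k + 1] for k in range(len(prev) // 2)])
    return levels

def leaf_to_bit(levels, i):
    """Locate the position of the i-th one-valued bit by descending
    the sum-reduction tree with a binary search."""
    node = 0
    for level in reversed(levels[:-1]):
        if i < level[2*node]:
            node = 2*node           # descend left
        else:
            i -= level[2*node]
            node = 2*node + 1       # descend right
    return node

levels = build_sum_tree([1, 0, 0, 1, 1, 0, 1, 0])
assert [leaf_to_bit(levels, i) for i in range(levels[-1][0])] == [0, 3, 4, 6]
```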

Can’t Invert the CDF? The Triangle-Cut Parameterization of the Region under the Curve

Eric Heitz - EGSR 2020

We present an exact, analytic and deterministic method for sampling densities whose Cumulative Distribution Functions (CDFs) cannot be inverted analytically. Indeed, the inverse-CDF method is often considered the way to go for sampling non-uniform densities. If the CDF is not analytically invertible, the typical fallback solutions are either approximate, numerical, or nondeterministic, such as acceptance-rejection. To overcome this problem, we show how to compute an analytic area-preserving parameterization of the region under the curve of the target density. We use it to generate random points uniformly distributed under the curve of the target density; their abscissae are thus distributed with the target density. Technically, our idea is to use an approximate analytic parameterization whose error can be represented geometrically as a triangle that is simple to cut out. This triangle-cut parameterization yields exact and analytic solutions to sampling problems that were presumably not analytically resolvable.
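
The method builds on a classical fact that is easy to check numerically: if (x, y) is uniform in the region 0 ≤ y ≤ p(x), then x has density p. The sketch below verifies this with rejection sampling for p(x) = 2x on [0, 1]; the paper's contribution is an exact, analytic parameterization of that region that removes the rejection step.

```python
import numpy as np

rng = np.random.default_rng(1)
p = lambda x: 2.0 * x                 # target density on [0, 1]

# Uniform points in the bounding box, kept only if under the curve.
x = rng.uniform(0.0, 1.0, 200_000)
y = rng.uniform(0.0, 2.0, 200_000)    # 2.0 bounds max(p)
x = x[y <= p(x)]                      # uniform points under the curve

print(np.mean(x))                     # ~2/3, the mean of p(x) = 2x
```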

Rendering Layered Materials with Anisotropic Interfaces

Philippe Weier, Laurent Belcour - Published in Journal of Computer Graphics Techniques (JCGT)

We present a lightweight and efficient method to render layered materials with anisotropic interfaces. Our work extends our previously published statistical framework to handle anisotropic microfacet models. A key insight of our work is that when projected on the tangent plane, BRDF lobes from an anisotropic GGX distribution are well approximated by ellipsoidal distributions aligned with the tangent frame: their covariance matrix is diagonal in this space. We leverage this property and perform the isotropic layered algorithm on each anisotropy axis independently. We further update the mapping of roughness to directional variance and the evaluation of the average reflectance to account for anisotropy.

Integration and Simulation of Bivariate Projective-Cauchy Distributions within Arbitrary Polygonal Domains

Jonathan Dupuy, Laurent Belcour, Eric Heitz - Technical Report 2019

Consider a uniform variate on the unit upper-half sphere of dimension d. It is known that the straight-line projection through the center of the unit sphere onto the plane above it distributes this variate according to a d-dimensional projective-Cauchy distribution. In this work, we leverage the geometry of this construction in dimension d=2 to derive new properties for the bivariate projective-Cauchy distribution. Specifically, we reveal via geometric intuitions that integrating and simulating a bivariate projective-Cauchy distribution within an arbitrary domain translates into, respectively, measuring and sampling the solid angle subtended by the geometry of this domain as seen from the origin of the unit sphere. To make this result practical for, e.g., generating truncated variants of the bivariate projective-Cauchy distribution, we extend it in two respects. First, we provide a generalization to Cauchy distributions parameterized by location-scale-correlation coefficients. Second, we provide a specialization to polygonal domains, which leads to closed-form expressions. We provide a complete MATLAB implementation for the case of triangular domains, and briefly discuss the case of elliptical domains and how to further extend our results to bivariate Student distributions.
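
A small numpy sketch of the construction for d = 2 (notation ours): uniform directions on the upper half-sphere, projected through the center onto the plane z = 1, yield bivariate projective-Cauchy samples whose marginals are standard Cauchy.

```python
import numpy as np

rng = np.random.default_rng(0)

# Uniform directions on the unit sphere, folded onto the upper half.
v = rng.normal(size=(100_000, 3))
v /= np.linalg.norm(v, axis=1, keepdims=True)
v[:, 2] = np.abs(v[:, 2])

# Straight-line projection through the center onto the plane z = 1.
xy = v[:, :2] / v[:, 2:3]

# Sanity check: each marginal is standard Cauchy, so median(|x|) ~ 1.
print(np.median(np.abs(xy[:, 0])))
```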

Surface Gradient Based Bump Mapping Framework

Morten S. Mikkelsen - Journal of Computer Graphics Techniques (JCGT), 2020

In this paper, we propose a new framework for layering/compositing bump/normal maps, with support for both multiple sets of texture coordinates and procedurally generated texture coordinates and geometry. Furthermore, we provide proper support and integration for bump maps defined on a volume, such as decal projectors, triplanar projection, and noise-based functions.
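
A minimal numpy sketch of the two operations such a framework revolves around, as we read the abstract (function names are ours): project any volume gradient onto the tangent plane, accumulate surface gradients across layers, and resolve the perturbed normal once at the end.

```python
import numpy as np

def surface_gradient_from_volume_gradient(n, g):
    """Project a volume gradient g (e.g. from a decal projector or a
    noise function) onto the tangent plane of the unit normal n."""
    return g - np.dot(n, g) * n

def resolve_normal(n, surf_grad):
    """Resolve the perturbed normal from an accumulated surface gradient.
    Layering bump contributions amounts to summing (or blending) their
    surface gradients before this single final resolve."""
    perturbed = n - surf_grad
    return perturbed / np.linalg.norm(perturbed)
```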

Multi-Stylization of Video Games in Real-Time guided by G-buffer Information

Adèle Saint-Denis, Kenneth Vanhoey, Thomas Deliot - HPG 2019

We investigate how to take advantage of modern neural style transfer techniques to modify the style of video games at runtime. Recent style transfer neural networks are pre-trained, and allow for fast style transfer of any style at runtime. However, a single style applies globally, over the full image, whereas we would like to provide finer authoring tools to the user. In this work, we allow the user to assign styles (by means of a style image) to various physical quantities found in the G-buffer of a deferred rendering pipeline, like depth, normals, or object ID. Our algorithm then interpolates those styles smoothly according to the scene to be rendered: e.g., a different style arises for different objects, depths, or orientations.

2019-2018

Distributing Monte Carlo Errors as a Blue Noise in Screen Space by Permuting Pixel Seeds Between Frames

Eric Heitz, Laurent Belcour - EGSR 2019

We introduce a sampler that generates per-pixel samples achieving high visual quality thanks to two key properties related to the Monte Carlo errors that it produces. First, the sequence of each pixel is an Owen-scrambled Sobol sequence that has state-of-the-art convergence properties. The Monte Carlo errors thus have low magnitudes. Second, these errors are distributed as a blue noise in screen space. This makes them visually even more acceptable. Our sampler is lightweight and fast. We implement it with a small texture and two xor operations. Our supplemental material provides comparisons against previous work for different scenes and sample counts.

A Low-Discrepancy Sampler that Distributes Monte Carlo Errors as a Blue Noise in Screen Space

Eric Heitz, Laurent Belcour - ACM SIGGRAPH Talk 2019

We introduce a sampler that generates per-pixel samples achieving high visual quality thanks to two key properties related to the Monte Carlo errors that it produces. First, the sequence of each pixel is an Owen-scrambled Sobol sequence that has state-of-the-art convergence properties. The Monte Carlo errors thus have low magnitudes. Second, these errors are distributed as a blue noise in screen space. This makes them visually even more acceptable. Our sampler is lightweight and fast. We implement it with a small texture and two xor operations. Our supplemental material provides comparisons against previous work for different scenes and sample counts.
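
As a hedged sketch of what "a small texture and two xor operations" can look like in practice (array names, tile size, and the 32-bit integer convention below are our assumptions, not the authors' released data): a per-pixel tile of ranking keys reorders the Sobol samples, and a tile of scrambling keys decorrelates neighboring pixels.

```python
def sample(sobol_table, ranking_tile, scrambling_tile,
           px, py, sample_index, dim, tile=128):
    """Hypothetical per-pixel blue-noise sampler: sobol_table holds
    32-bit integer Owen-scrambled Sobol samples, and the two key tiles
    hold per-pixel integers (all arrays assumed precomputed)."""
    i, j = px % tile, py % tile                       # wrap the key tile
    ranked = sample_index ^ ranking_tile[i, j, dim]   # xor 1: reorder samples
    value = sobol_table[ranked, dim]                  # fetch Sobol sample
    value ^= scrambling_tile[i, j, dim]               # xor 2: scramble value
    return value / 2**32                              # map to [0, 1)
```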

A Low-Distortion Map Between Triangle and Square

Eric Heitz - Tech Report 2019

We introduce a low-distortion map between triangle and square. This mapping yields an area-preserving parameterization that can be used for sampling random points with a uniform density in arbitrary triangles. This parameterization presents two advantages compared to the square-root parameterization typically used for triangle sampling. First, it has lower distortion and better preserves the blue-noise properties of the input samples. Second, its computation relies only on arithmetic operations (+, *), which makes it faster to evaluate.
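
The mapping is short enough to quote; a Python sketch following the construction in the report (one comparison, then additions and multiplications only), returning barycentric coordinates:

```python
def square_to_triangle(u, v):
    """Low-distortion, area-preserving map from the unit square to
    barycentric coordinates: cut the square along its diagonal and
    fold one half onto the other."""
    if v > u:
        b0 = 0.5 * u
        b1 = v - b0
    else:
        b1 = 0.5 * v
        b0 = u - b1
    return b0, b1, 1.0 - b0 - b1

# Usage: p = b0 * A + b1 * B + b2 * C for triangle vertices A, B, C.
```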

Sampling the GGX Distribution of Visible Normals

Eric Heitz - JCGT 2018

Importance sampling microfacet BSDFs using their Distribution of Visible Normals (VNDF) yields significant variance reduction in Monte Carlo rendering. In this article, we describe an efficient and exact sampling routine for the VNDF of the GGX microfacet distribution. This routine leverages the property that GGX is the distribution of normals of a truncated ellipsoid and sampling the GGX VNDF is equivalent to sampling the 2D projection of this truncated ellipsoid. To do that, we simplify the problem by using the linear transformation that maps the truncated ellipsoid to a hemisphere. Since linear transformations preserve the uniformity of projected areas, sampling in the hemisphere configuration and transforming the samples back to the ellipsoid configuration yields valid samples from the GGX VNDF.
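
The routine is compact; below is a numpy transcription of the sampling procedure described in the article (tangent-space view direction Ve with z up, roughnesses alpha_x and alpha_y, uniform variates U1 and U2):

```python
import numpy as np

def sample_ggx_vndf(Ve, alpha_x, alpha_y, U1, U2):
    """Sample a visible normal of the GGX distribution (tangent space)."""
    # Transform the view direction to the hemisphere configuration.
    Vh = np.array([alpha_x * Ve[0], alpha_y * Ve[1], Ve[2]])
    Vh /= np.linalg.norm(Vh)
    # Build an orthonormal basis around Vh.
    lensq = Vh[0]**2 + Vh[1]**2
    T1 = (np.array([-Vh[1], Vh[0], 0.0]) / np.sqrt(lensq)
          if lensq > 0.0 else np.array([1.0, 0.0, 0.0]))
    T2 = np.cross(Vh, T1)
    # Sample the projected area: a disk, warped to account for visibility.
    r, phi = np.sqrt(U1), 2.0 * np.pi * U2
    t1 = r * np.cos(phi)
    t2 = r * np.sin(phi)
    s = 0.5 * (1.0 + Vh[2])
    t2 = (1.0 - s) * np.sqrt(1.0 - t1**2) + s * t2
    # Reproject onto the hemisphere.
    Nh = t1 * T1 + t2 * T2 + np.sqrt(max(0.0, 1.0 - t1**2 - t2**2)) * Vh
    # Transform the normal back to the ellipsoid configuration.
    Ne = np.array([alpha_x * Nh[0], alpha_y * Nh[1], max(0.0, Nh[2])])
    return Ne / np.linalg.norm(Ne)
```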

Analytical Calculation of the Solid Angle Subtended by an Arbitrarily Positioned Ellipsoid to a Point Source

Eric Heitz - Nuclear Instruments and Methods in Physics Research 2018

We present a geometric method for computing an ellipse that subtends the same solid-angle domain as an arbitrarily positioned ellipsoid. With this method we can extend existing analytical solid-angle calculations of ellipses to ellipsoids. Our idea consists of applying a linear transformation on the ellipsoid such that it is transformed into a sphere from which a disk that covers the same solid-angle domain can be computed. We demonstrate that by applying the inverse linear transformation on this disk we obtain an ellipse that subtends the same solid-angle domain as the ellipsoid. We provide a MATLAB implementation of our algorithm and we validate it numerically.

A note on track-length sampling with non-exponential distributions

Eric Heitz, Laurent Belcour - Tech Report 2018

Track-length sampling is the process of sampling random intervals according to a distance distribution. It means that, instead of sampling a punctual distance from the distance distribution, track-length sampling generates an interval of possible distances. The track-length sampling process is correct if the expectation of the intervals is the target distance distribution. In other words, averaging all the sampled intervals should converge towards the distance distribution as their number increases. In this note, we emphasize that the distance distribution that is used for sampling punctual distances and the track-length distribution that is used for sampling intervals are not the same in general. This difference can be surprising because, to our knowledge, track-length sampling has been mostly studied in the context of transport theory where the distance distribution is typically exponential: in this special case, the distance distribution and the track-length distribution happen to be the same exponential distribution. However, they are not the same in general when the distance distribution is non-exponential.

Combining Analytic Direct Illumination and Stochastic Shadows

Eric Heitz, Stephen Hill (Lucasfilm), Morgan McGuire (NVIDIA) - I3D 2018 (short paper) (Best Paper Presentation Award)

In this paper, we propose a ratio estimator of the direct-illumination equation that allows us to combine analytic illumination techniques with stochastic raytraced shadows while maintaining correctness. Our main contribution is to show that the shadowed illumination can be split into the product of the unshadowed illumination and the illumination-weighted shadow. These terms can be computed separately — possibly using different techniques — without affecting the exactness of the final result given by their product. This formulation broadens the utility of analytic illumination techniques to raytracing applications, where they were hitherto avoided because they did not incorporate shadows. We use such methods to obtain sharp and noise-free shading in the unshadowed-illumination image and we compute the weighted-shadow image with stochastic raytracing. The advantage of restricting stochastic evaluation to the weighted-shadow image is that the final result exhibits noise only in the shadows. Furthermore, we denoise shadows separately from illumination so that even aggressive denoising only overblurs shadows, while high-frequency shading details (textures, normal maps, etc.) are preserved.
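
In symbols (notation ours, with L the radiance of the light, f the BRDF-cosine term, and V the visibility), the split reads:

```latex
\int_\Omega L\, f\, V \,\mathrm{d}\omega
\;=\;
\underbrace{\int_\Omega L\, f \,\mathrm{d}\omega}_{\text{analytic, unshadowed}}
\;\times\;
\underbrace{\frac{\int_\Omega L\, f\, V \,\mathrm{d}\omega}{\int_\Omega L\, f \,\mathrm{d}\omega}}_{\text{illumination-weighted shadow}}
```

The first factor is evaluated analytically (e.g., with linearly transformed cosines), while only the second factor, the illumination-weighted shadow, is estimated stochastically and denoised.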

Non-Periodic Tiling of Procedural Noise Functions

Aleksandr Kirillov - HPG 2018

Procedural noise functions have many applications in computer graphics, ranging from texture synthesis to atmospheric effect simulation or to landscape geometry specification. Noise can either be precomputed and stored into a texture, or evaluated directly at application runtime. This choice offers a trade-off between image variance, memory consumption and performance.

Advanced tiling algorithms can be used to decrease visual repetition. Wang tiles allow a plane to be tiled in a non-periodic way, using a relatively small set of textures. Tiles can be arranged in a single texture map to enable the GPU to use hardware filtering.

In this paper, we present modifications to several popular procedural noise functions that directly produce texture maps containing the smallest complete Wang tile set. The findings presented in this paper enable non-periodic tiling of these noise functions and textures based on them, both at runtime and as a preprocessing step. These findings also allow decreasing repetition of noise-based effects in computer-generated images at a small performance cost, while maintaining or even reducing the memory consumption.

High-Performance By-Example Noise using a Histogram-Preserving Blending Operator

Eric Heitz, Fabrice Neyret (Inria) - HPG 2018 (Best Paper Award)

We propose a new by-example noise algorithm that takes as input a small example of a stochastic texture and synthesizes an infinite output with the same appearance. It works on any kind of random-phase inputs as well as on many non-random-phase inputs that are stochastic and non-periodic, typically natural textures such as moss, granite, sand, bark, etc. Our algorithm achieves high-quality results comparable to state-of-the-art procedural-noise techniques but is more than 20 times faster.
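
At the heart of the method is a variance-preserving blending operator; a numpy sketch (in the paper it is applied to a Gaussianized version of the input, which is mapped back through the inverse histogram transformation afterwards):

```python
import numpy as np

def variance_preserving_blend(g1, g2, g3, w, mean):
    """Blend three tile fetches with weights w (summing to 1) while
    preserving both the mean and the variance of the input."""
    w = np.asarray(w, dtype=float)
    linear = w[0] * g1 + w[1] * g2 + w[2] * g3              # preserves the mean
    return (linear - mean) / np.sqrt(np.sum(w * w)) + mean  # restores the variance
```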

Unsupervised Deep Single-Image Intrinsic Decomposition using Illumination-Varying Image Sequences

Louis Lettry (ETH Zürich), Kenneth Vanhoey, Luc Van Gool (ETH Zürich) - Pacific Graphics 2018 / Computer Graphics Forum

Intrinsic Decomposition decomposes a photographed scene into albedo and shading. Removing shading makes it possible to "delight" images, which can then be reused in virtually relit scenes. We propose an unsupervised learning method to solve this problem.

Recent techniques use supervised learning: it requires a large set of known decompositions, which are hard to obtain. Instead, we train on unannotated images by using time-lapse imagery captured by static webcams. We exploit the assumption that albedo is static by definition, while shading varies with lighting. We transcribe this into a siamese training scheme for deep learning.

2018-2016

Efficient Rendering of Layered Materials using an Atomic Decomposition with Statistical Operators

Laurent Belcour - ACM SIGGRAPH 2018

We derive a novel framework for the efficient analysis and computation of light transport within layered materials. Our derivation consists of two steps. First, we decompose light transport into a set of atomic operators that act on its directional statistics. Specifically, our operators consist of reflection, refraction, scattering, and absorption, whose combinations are sufficient to describe the statistics of light scattering multiple times within layered structures. We show that the first three directional moments (energy, mean, and variance) already provide an accurate summary. Second, we extend the adding-doubling method to support arbitrary combinations of such operators efficiently. During shading, we map the directional moments to BSDF lobes. We validate that the resulting BSDF closely matches the ground truth in a lightweight and efficient form. Unlike previous methods, we support an arbitrary number of textured layers, and demonstrate a practical and accurate rendering of layered materials with both an offline and a real-time implementation that are free from per-material precomputation.

An Adaptive Parameterization for Material Acquisition and Rendering

Jonathan Dupuy and Wenzel Jakob (EPFL) - ACM SIGGRAPH Asia 2018

One of the key ingredients of any physically based rendering system is a detailed specification characterizing the interaction of light and matter of all materials present in a scene, typically via the Bidirectional Reflectance Distribution Function (BRDF). Despite their utility, access to real-world BRDF datasets remains limited: this is because measurements involve scanning a four-dimensional domain at sufficient resolution, a tedious, time-consuming, and often infeasible process. We propose a new parameterization that automatically adapts to the behavior of a material, warping the underlying 4D domain so that most of the volume maps to regions where the BRDF takes on non-negligible values, while irrelevant regions are strongly compressed. This adaptation only requires a brief 1D or 2D measurement of the material's retro-reflective properties. Our parameterization is unified in the sense that it combines several steps that previously required intermediate data conversions: the same mapping can simultaneously be used for BRDF acquisition, storage, and efficient Monte Carlo sample generation.

Adaptive GPU Tessellation with Compute Shaders

Jad Khoury, Jonathan Dupuy, and Christophe Riccio - GPU Zen 2

GPU rasterizers are most efficient when primitives project into more than a few pixels. Below this limit, the Z-buffer starts aliasing, and shading rate decreases dramatically [Riccio 12]; this makes the rendering of geometrically complex scenes challenging, as any moderately distant polygon will project to sub-pixel size. In order to minimize such sub-pixel projections, a simple solution consists of procedurally refining coarse meshes as they get closer to the camera. In this chapter, we are interested in deriving such a procedural refinement technique for arbitrary polygon meshes.
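
A hedged Python sketch of the kind of refinement criterion this leads to (constants, names, and the pinhole-camera model are our assumptions): choose a subdivision depth so that a refined edge projects to roughly a target number of pixels.

```python
import math

def subdivision_depth(edge_len, dist, fovy, screen_h, target_px=8.0):
    """Pick a subdivision depth so a refined edge spans roughly
    target_px pixels; each subdivision step halves the edge length."""
    # World-space length covering target_px pixels at distance dist
    # (pinhole camera, vertical FOV fovy, screen_h pixels tall).
    world_per_px = 2.0 * dist * math.tan(fovy / 2.0) / screen_h
    target_len = target_px * world_per_px
    return max(0, math.ceil(math.log2(edge_len / target_len)))

print(subdivision_depth(10.0, dist=5.0, fovy=math.radians(60), screen_h=1080))
```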

Real-Time Line- and Disk-Light Shading with Linearly Transformed Cosines

Eric Heitz (Unity Technologies) and Stephen Hill (Lucasfilm) - ACM SIGGRAPH Courses 2017

We recently introduced a new real-time area-light shading technique dedicated to lights with polygonal shapes. In this talk, we extend this area-lighting framework to support lights shaped as lines, spheres and disks in addition to polygons.

Microfacet-based Normal Mapping for Robust Monte Carlo Path Tracing

Vincent Schüssler (KIT), Eric Heitz (Unity Technologies), Johannes Hanika (KIT) and Carsten Dachsbacher (KIT) - ACM SIGGRAPH ASIA 2017

Normal mapping imitates visual details on surfaces by using fake shading normals. However, the resulting surface model is geometrically impossible and normal mapping is thus often considered a fundamentally flawed approach with unavoidable problems for Monte Carlo path tracing: it breaks either the appearance (black fringes, energy loss) or the integrator (different forward and backward light transport). In this paper, we present microfacet-based normal mapping, an alternative way of faking geometric details without corrupting the robustness of Monte Carlo path tracing such that these problems do not arise.

A Spherical Cap Preserving Parameterization for Spherical Distributions

Jonathan Dupuy, Eric Heitz and Laurent Belcour - ACM SIGGRAPH 2017

We introduce a novel parameterization for spherical distributions that is based on a point located inside the sphere, which we call a pivot. The pivot serves as the center of a straight-line projection that maps solid angles onto the opposite side of the sphere. By transforming spherical distributions in this way, we derive novel parametric spherical distributions that can be evaluated and importance-sampled from the original distributions using simple, closed-form expressions. Moreover, we prove that if the original distribution can be sampled and/or integrated over a spherical cap, then so can the transformed distribution. We exploit the properties of our parameterization to derive efficient spherical lighting techniques for both real-time and offline rendering. Our techniques are robust, fast, easy to implement, and achieve quality that is superior to previous work.
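
Geometrically, the mapping is a ray-sphere intersection through the pivot; a small numpy sketch of the projection (notation ours) that sends a unit direction to the opposite side of the sphere:

```python
import numpy as np

def pivot_transform(x, p):
    """Straight-line projection through a pivot p (|p| < 1): returns the
    second intersection of the line through x and p with the unit sphere.
    Applying the map twice gives back x."""
    d = x - p
    d /= np.linalg.norm(d)
    b = np.dot(p, d)
    # Intersections of p + t*d with the unit sphere solve
    # t^2 + 2*b*t + (|p|^2 - 1) = 0; the positive root is x itself.
    t = -b - np.sqrt(b * b + 1.0 - np.dot(p, p))   # the other (negative) root
    return p + t * d
```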

A Practical Extension to Microfacet Theory for the Modeling of Varying Iridescence

Laurent Belcour (Unity), Pascal Barla (Inria) - ACM SIGGRAPH 2017

Thin-film iridescence permits the reproduction of the appearance of leather. However, this theory requires spectral rendering engines (such as Maxwell Render) to correctly integrate the change of appearance with respect to viewpoint (known as goniochromatism). This is due to aliasing in the spectral domain, as real-time renderers only work with three components (RGB) for the entire range of visible light. In this work, we show how to anti-alias a thin-film model, how to incorporate it in microfacet theory, and how to integrate it in a real-time rendering engine. This widens the range of reproducible appearances with microfacet models.

Linear-Light Shading with Linearly Transformed Cosines

Eric Heitz, Stephen Hill (Lucasfilm) - GPU Zen (book)

In this book chapter, we extend our area-light framework based on Linearly Transformed Cosines to support linear (or line) lights. Linear lights are a good approximation for cylindrical lights with a small but non-zero radius. We describe how to approximate these lights with linear lights that have similar power and shading, and discuss the validity of this approximation.

A Practical Introduction to Frequency Analysis of Light Transport

Laurent Belcour - ACM SIGGRAPH Courses 2016

Frequency Analysis of Light Transport expresses Physically Based Rendering (PBR) using signal processing tools. It is thus tailored to predict sampling rates, perform denoising, perform anti-aliasing, etc. Many methods have been proposed to deal with specific cases of light transport (motion, lenses, etc.). This course aims to introduce concepts and present practical application scenarios of frequency analysis of light transport in a unified context. To ease the understanding of theoretical elements, frequency analysis is introduced in tandem with an implementation.

2016

Real-Time Polygonal-Light Shading with Linearly Transformed Cosines

Eric Heitz, Jonathan Dupuy, Stephen Hill (Ubisoft), David Neubelt (Ready at Dawn Studios) - ACM SIGGRAPH 2016

Shading with area lights adds a great deal of realism to CG renders. However, it requires solving spherical equations that make it challenging for real-time rendering. In this project, we develop a new spherical distribution that allows us to shade physically based materials with polygonal lights in real time.
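
For context, cosine lobes are the right family because their integral over a spherical polygon has a classical closed form (notation ours; the ω_i are the polygon vertices projected onto the unit sphere, indices taken modulo n, and n is the shading normal); a linearly transformed cosine reduces to this case after the polygon is transformed by the inverse matrix:

```latex
E \;=\; \frac{1}{2\pi} \sum_{i=1}^{n}
\arccos\!\left(\omega_i \cdot \omega_{i+1}\right)\,
\frac{\left(\omega_i \times \omega_{i+1}\right)\cdot \mathbf{n}}
     {\left\lVert \omega_i \times \omega_{i+1} \right\rVert}
```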