Los Angeles Convention Center, West Hall B
Graphics in games are continuously improving, driven by the parallel development of hardware and software. Still, for a 60-fps game, everything in a frame must be computed in roughly 16 milliseconds. Given this tight time budget, certain effects are too expensive to compute and are simply not simulated, sacrificing realism to reallocate resources to other aspects of the game. This demo shows two techniques that aim to improve real-time rendering of humans.
The first enables simulation of subsurface scattering in just over one millisecond per frame, making it a practical option for even the most challenging game scenarios. Previous real-time approaches approximate the non-separable diffusion kernel with a sum of Gaussians, which requires several (usually six) 1D convolutions. Instead, this approach decomposes the exact 2D diffusion kernel into only two 1D functions, so subsurface scattering can be rendered with just two convolutions, reducing both time and memory without a decrease in quality.
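The core idea behind the speedup can be sketched numerically: if a 2D kernel is the outer product of two 1D profiles, applying it as a horizontal pass followed by a vertical pass gives the same result as the full 2D convolution at a fraction of the cost. The sketch below uses a Gaussian profile purely as a stand-in; the demo's actual 1D profiles, which approximate the exact diffusion kernel, are not reproduced here.

```python
import numpy as np

def conv1d_rows(img, k):
    # Convolve each row of img with the 1D kernel k (zero padding,
    # 'same' output size). k is symmetric here, so correlation ==
    # convolution.
    pad = len(k) // 2
    padded = np.pad(img, ((0, 0), (pad, pad)))
    out = np.zeros_like(img)
    for i in range(len(k)):
        out += k[i] * padded[:, i:i + img.shape[1]]
    return out

def separable_blur(img, k):
    # Two 1D passes: horizontal, then vertical (via transpose).
    h = conv1d_rows(img, k)
    return conv1d_rows(h.T, k).T

# Stand-in 1D profile: a normalized Gaussian. The demo's real
# profiles are fit to the exact 2D diffusion kernel instead.
x = np.arange(-3, 4)
k = np.exp(-x**2 / 2.0)
k /= k.sum()

rng = np.random.default_rng(0)
img = rng.random((16, 16))
two_pass = separable_blur(img, k)

# Reference: brute-force 2D convolution with the rank-1 kernel
# outer(k, k) -- O(n^2) taps per pixel instead of O(2n).
K = np.outer(k, k)
pad = len(k) // 2
padded = np.pad(img, pad)
brute = np.zeros_like(img)
for dy in range(len(k)):
    for dx in range(len(k)):
        brute += K[dy, dx] * padded[dy:dy + 16, dx:dx + 16]

# two_pass and brute agree to floating-point precision.
```

For a kernel of width n, the two-pass version touches 2n texels per pixel instead of n², which is what brings the cost down to the two convolutions mentioned above.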
The demo also shows recent advances in photorealistic eye rendering, including realistic reflections, view and light refraction, caustics, ambient occlusion, asset modelling, and tear-fluid representation.
Universidad de Zaragoza