Skin Rendering
GPU Graphics
Gary J. Katz
University of Pennsylvania CIS 665
Adapted from David Gosselin's PowerPoint and ShaderX article, "Real-Time Skin Rendering"
Overview
- Background
- Offline rendering
- Texture space lighting
- Blur and dilation
- Shadows
- Specular with shadows
Why Skin is Hard
- Most of the light leaving skin has undergone subsurface scattering
- Skin color comes mainly from the epidermis; pink/red tones come mainly from blood in the dermis
- The Lambertian model is designed for "hard" surfaces with little subsurface scattering, so it does not work well for skin
[Figure: cross section of rough skin — incoming light passes from air into the epidermis and dermis, scatters beneath the surface, and exits as outgoing light]
Offline Rendering
Approach from "The Matrix Reloaded":
- Render a 2D light map: the illumination of the skin is rendered to a texture map
- Simulate subsurface diffusion in the image domain (different for each color component): the map is blurred to simulate the effect of subsurface scattering
- Use traditional ray tracing for areas where light can pass all the way through (ears)
Algorithm
1. Create a shadow map from the point of view of the key light
2. Render diffuse illumination to a 2D light map
3. Dilate the boundaries and blur the light map to approximate subsurface scattering
4. Render the final mesh (using the blurred light map for diffuse illumination)
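As an illustrative sketch (not code from the slides), the four steps above can be wired together as a per-frame loop; the pass functions here are placeholders for the GPU passes described in the rest of the deck, and their names are invented for the example:

```python
# Sketch of the four-step skin-rendering pipeline as a frame loop.
# Each entry in `passes` stands in for a GPU rendering pass.

def render_skin_frame(passes):
    shadow_map = passes["shadow_map"]()               # 1. from the key light
    light_map = passes["diffuse"](shadow_map)         # 2. texture-space diffuse
    light_map = passes["dilate_and_blur"](light_map)  # 3. approximate subsurface scattering
    return passes["final"](light_map)                 # 4. final mesh + specular

# Toy stand-ins that just record the data flow:
frame = render_skin_frame({
    "shadow_map": lambda: "shadow",
    "diffuse": lambda s: f"diffuse({s})",
    "dilate_and_blur": lambda l: f"blur({l})",
    "final": lambda l: f"final({l})",
})
assert frame == "final(blur(diffuse(shadow)))"
```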
[Diagram: structure of the texture-space algorithm — geometry → shadow map (distance from light written) → light rendered in texture space → blur → texture-space light sampled → back buffer]
[Images: texture-space subsurface scattering — from "The Matrix Reloaded"; current real-time skin from ATI's Sushi engine]
Creating the Shadow Map
1. Compute the view-projection matrix to render from the point of view of the light, using a frustum that tightly bounds the bounding sphere of the geometry
2. Use the bounding sphere of the head to ensure that the most texture space is used
3. Store the depth values in the alpha channel
4. The vertex shader passes depth to the pixel shader
5. The pixel shader outputs the interpolated depth
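A minimal sketch of step 1, assuming a directional key light and an orthographic projection (the slides do not specify the projection type, so this is an assumption); the function name and parameters are illustrative:

```python
# Sketch: fitting a frustum tightly around the geometry's bounding sphere,
# for rendering the shadow map from the light's point of view.
# Assumes a directional light with an orthographic projection.

def light_frustum_from_sphere(center_dist, radius):
    """Return (left, right, bottom, top, near, far) for an orthographic
    frustum that tightly bounds the sphere.

    center_dist: distance from the light to the sphere center along
                 the light direction.
    radius:      bounding-sphere radius.
    """
    near = max(center_dist - radius, 0.0)
    far = center_dist + radius
    # Seen along the light direction, the sphere covers a disc of the same
    # radius, so the orthographic extents are just +/- radius.
    return (-radius, radius, -radius, radius, near, far)

assert light_frustum_from_sphere(10.0, 2.0) == (-2.0, 2.0, -2.0, 2.0, 8.0, 12.0)
```

A tight frustum matters here because it maximizes the depth and texel precision spent on the head.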
Creating the Shadow Map: Translucent Shadows
Use a second pass:
1. Render the translucent geometry with z-testing turned on and z-writing turned off
2. Accumulate the opacity of the samples into the RGB channels of the shadow map texture using additive blending
Computing Shadows from the Map
- If a sample is not shadowed by a translucent object, its diffuse contribution is attenuated by the value in the alpha channel
- Opaque geometry shadows itself; translucent geometry shadows opaque geometry
[Images: the shadow map (depth) and the resulting shadowed lit texture — shadows in texture space]
Render to Light Map: The Algorithm
- Render diffuse lighting into an off-screen texture, using the texture coordinates as the position
- Blur the off-screen diffuse lighting
- Read the texture back and add specular lighting in a subsequent pass
- Use the bump map only for the specular lighting pass
Rendering Diffuse Illumination: Creating the 2D Light Map
The vertex shader sets the output position to the texture coordinates of the vertex:
1. Remaps the texture coordinates from [0,1] to [-1,1]
2. Computes the position of the vertex from the point of view of the light and passes the result to the pixel shader for the depth test

[Image: example light map]
// Vertex Shader Code
o.pos.xy = i.texCoord * 2.0 - 1.0;
o.pos.z = 0.0;
o.pos.w = 1.0;
Texture Coordinates as Position
- The model must be lit as a 3D model but drawn into a texture
- By passing the texture coordinates as the "position", the rasterizer performs the unwrap
- Light vectors are computed from the 3D position and interpolated
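The remap in the vertex shader snippet is simple enough to check on the CPU; this is a pure-Python stand-in for the HLSL one-liner (note that depending on the graphics API, the v coordinate may additionally need flipping, which the slides do not address):

```python
# Sketch: remapping [0,1] texture coordinates to [-1,1] clip space,
# so the rasterizer "unwraps" the mesh into the light map.

def texcoord_to_clip(u, v):
    # Equivalent to: o.pos.xy = i.texCoord * 2.0 - 1.0
    return (u * 2.0 - 1.0, v * 2.0 - 1.0)

assert texcoord_to_clip(0.0, 0.0) == (-1.0, -1.0)  # corner maps to corner
assert texcoord_to_clip(0.5, 0.5) == (0.0, 0.0)    # center maps to center
assert texcoord_to_clip(1.0, 1.0) == (1.0, 1.0)
```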
Rendering Diffuse Illumination: Creating the 2D Light Map
The pixel shader computes the diffuse contribution of the light, attenuated by the shadow:
// Pixel Shader Code
// if the light is not in shadow
if (lightPos.z < textureMap[x, y] && dot(Normal, Light) > 0)
{
    // compute translucency
    alpha = pow(textureMap[x, y], shadowCoefficient);
    shadowFactor = lerp(occluded, white, alpha);
}
else
{
    shadowFactor = occluded;
}
diffuse = shadowFactor * saturate(dot(Normal, Light)) * lightColor;
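The shadow-factor branch above can be mirrored on the CPU for clarity; `occluded` (the darkened shadow value) and `shadow_coefficient` (the translucency falloff) are illustrative defaults, not values from the slides:

```python
# CPU-side sketch of the shadow-factor logic in the pixel shader above.

def lerp(a, b, t):
    return a + (b - a) * t

def shadow_factor(light_depth, map_depth, n_dot_l,
                  occluded=0.2, white=1.0, shadow_coefficient=4.0):
    if light_depth < map_depth and n_dot_l > 0.0:
        # Not in shadow: blend toward full light based on translucency.
        alpha = map_depth ** shadow_coefficient
        return lerp(occluded, white, alpha)
    return occluded  # in shadow: fully attenuated

# Lit sample: depth test passes, surface faces the light.
assert abs(shadow_factor(0.5, 1.0, 0.5) - 1.0) < 1e-12
# Shadowed sample: depth test fails, so only `occluded` remains.
assert shadow_factor(1.0, 0.5, 0.5) == 0.2
```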
Blur
- Used to simulate the subsurface component of skin lighting
- Uses a growable Poisson disc filter
- The kernel size is read from a texture, which allows varying the subsurface effect: higher for places like the ears and nose, lower for places like the cheeks

[Images: blur-size map (created by the artist) and the blurred lit texture]
Blur
All work is done in the pixel shader:
1. A Poisson distribution is created (hard-coded in the shader)
2. Read the center sample and the blur kernel size from the alpha channel
3. Scale the other sample offsets by the kernel size
4. Sum the samples and divide by the number of samples
5. Write the blur size into the alpha channel to allow further blurring passes
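The steps above can be sketched as follows; the tap offsets are a small hand-picked set standing in for a real Poisson distribution, and `sample_fn` abstracts the light-map fetch (both are assumptions for the example, not values from the slides):

```python
# Sketch of a growable Poisson-disc blur: fixed 2D tap offsets, scaled
# per pixel by a kernel size read from the artist-painted blur-size map.

POISSON_TAPS = [(-0.94, -0.40), (0.94, -0.77), (-0.09, 0.93),
                (0.34, 0.29), (-0.52, -0.06), (0.68, 0.68)]

def blurred_sample(sample_fn, x, y, kernel_size):
    """sample_fn(x, y) -> light-map value; kernel_size is larger near
    the ears/nose and smaller on the cheeks."""
    total = sample_fn(x, y)  # center sample
    for dx, dy in POISSON_TAPS:
        # Grow the disc by scaling the tap offsets by the kernel size.
        total += sample_fn(x + dx * kernel_size, y + dy * kernel_size)
    return total / (len(POISSON_TAPS) + 1)

# Averaging a constant light map leaves it unchanged:
assert blurred_sample(lambda x, y: 0.5, 10.0, 10.0, 4.0) == 0.5
```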
Texture Boundary Dilation
The problem: boundary artifacts are created when fetching from the light map
The solution: dilate the texture prior to blurring
Texture Boundary Dilation
- Modify the Poisson disc filter shader to check whether a given sample is just outside the boundary of useful data
- If it is outside, copy from an interior neighboring sample instead
- This is more expensive, but it is only used in the first blurring pass
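A stand-alone sketch of the dilation idea (in the slides it is folded into the first blur pass rather than run separately); the `coverage` mask marking "useful data" inside the unwrap is an assumption introduced for the example:

```python
# Sketch of boundary dilation: texels just outside the region of useful
# light-map data copy a covered interior neighbor, so later blur taps
# never fetch garbage from outside the unwrap.

def dilate(texels, coverage):
    """texels/coverage: 2D row-major lists; coverage[y][x] is 1 inside
    the unwrapped mesh, 0 outside."""
    h, w = len(texels), len(texels[0])
    out = [row[:] for row in texels]
    for y in range(h):
        for x in range(w):
            if coverage[y][x]:
                continue  # already valid data
            # Copy the first covered 4-neighbor, if any.
            for nx, ny in ((x - 1, y), (x + 1, y), (x, y - 1), (x, y + 1)):
                if 0 <= nx < w and 0 <= ny < h and coverage[ny][nx]:
                    out[y][x] = texels[ny][nx]
                    break
    return out

# One covered texel spreads into its uncovered neighbors:
tex = [[0.0, 0.0], [0.0, 0.9]]
cov = [[0, 0], [0, 1]]
assert dilate(tex, cov) == [[0.0, 0.9], [0.9, 0.9]]
```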
Specular
- Use the bump map for specular lighting, with a per-pixel exponent
- The specular term also needs to be shadowed, but the shadow map is hard to blur directly
- Instead, modulate the specular from the shadowing light by the luminance of the texture-space light map
- This darkens the specular in shadowed areas but preserves it in unshadowed areas

// compute luminance of the light map sample
float lum = dot(float3(0.2125, 0.7154, 0.0721), cLightMap.rgb);
// possibly scale and bias lum here, depending on the light setup
// multiply the specular term by the luminance
specular *= lum;
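The same modulation, mirrored on the CPU for clarity (the weights are the Rec. 709 luma coefficients used in the snippet above; the function names are illustrative):

```python
# CPU-side sketch of the specular shadowing above: scale specular by the
# luminance of the light-map sample, so specular fades out wherever the
# diffuse light map is dark (i.e. shadowed).

def luminance(r, g, b):
    # Same weights as the HLSL dot(float3(.2125, .7154, .0721), rgb)
    return 0.2125 * r + 0.7154 * g + 0.0721 * b

def shadowed_specular(specular, light_map_rgb):
    return specular * luminance(*light_map_rgb)

assert shadowed_specular(1.0, (0.0, 0.0, 0.0)) == 0.0          # fully shadowed
assert abs(shadowed_specular(1.0, (1.0, 1.0, 1.0)) - 1.0) < 1e-9  # fully lit
```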
Specular Shadow Dimming Results
[Images: specular without shadows vs. specular with shadows]
Acceleration Techniques: Frustum Culling
- Before rendering the light map, clear the z-buffer to 1
- Perform a texture-space rendering pass where the z value is set to 0 for all rendered samples
- On all further texture-space passes, set the z value to 0 in the vertex shader and set the z test to 'equal'
- If the model lies outside the view frustum, the hardware's early-z culling prevents the pixels from being processed
Acceleration Techniques: Backface Culling
- The dot product of the view vector and the normal is computed in the vertex shader
- If the vertex is front-facing, a 0 is written to the z-buffer; otherwise the pixel is clipped, leaving the z-buffer untouched
- Note: bias the result of the dot product by 0.3 so that slightly back-facing samples are not culled
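The biased backface test above amounts to a one-line predicate; this CPU-side sketch uses the 0.3 bias quoted in the slides, with an invented function name:

```python
# Sketch of the biased backface test: a sample is kept for the light map
# when it is front-facing or only slightly back-facing, so lighting near
# the silhouette does not get culled away.

def keep_for_light_map(v_dot_n, bias=0.3):
    # Keep (write z = 0) when dot(view, normal) + bias is positive;
    # otherwise the pixel is clipped and the z-buffer stays at 1.
    return v_dot_n + bias > 0.0

assert keep_for_light_map(0.5)        # front-facing: kept
assert keep_for_light_map(-0.2)       # slightly back-facing: still kept
assert not keep_for_light_map(-0.5)   # clearly back-facing: culled
```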
Acceleration Techniques: Distance Culling
- If the model is very far away from the camera, the z value is set to 1 and the light map from the previously rendered frame is reused
- Specular illumination is still computed
- All of the acceleration techniques can be performed in one pass
Demo