Face Rendering: Specularity, Subsurface Scattering, & Raytracing
Part 3 - Raytracing vs. raymarching
For the third part of my quest for human skin rendering domination, I focused on researching ray tracing and ray marching and how one might implement them in Unreal Engine 4.
Ray marching effectively shoots a ray through each pixel and tests it at intervals within a domain described by a Signed Distance Field (SDF). For skin rendering, this lets us measure how much material light has passed through to reach a point on the surface, which in turn lets us better approximate the amount of light penetration. It has the added benefit of being possible to implement in a pixel shader.
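To make the interval-testing idea concrete, here is a minimal C++ sketch (my own illustration, not UE4 or shader code; the `sphereSDF` function, the step count, and all names are assumptions): we march a ray in fixed steps and accumulate distance whenever the SDF reports we are inside the medium.

```cpp
#include <cmath>

// Hypothetical signed distance function: negative inside a unit sphere at the origin.
float sphereSDF(float x, float y, float z) {
    return std::sqrt(x * x + y * y + z * z) - 1.0f;
}

// March a ray from `origin` along `dir` (unit length) in fixed steps,
// accumulating the distance travelled while the SDF reports "inside".
float marchThickness(const float origin[3], const float dir[3],
                     float maxDist, int maxSteps) {
    float stepSize = maxDist / static_cast<float>(maxSteps);
    float thickness = 0.0f;
    for (int i = 0; i < maxSteps; ++i) {
        float t = stepSize * static_cast<float>(i);
        float px = origin[0] + dir[0] * t;
        float py = origin[1] + dir[1] * t;
        float pz = origin[2] + dir[2] * t;
        if (sphereSDF(px, py, pz) < 0.0f)   // sample point is inside the medium
            thickness += stepSize;
    }
    return thickness;
}
```

A ray shot straight through the center of the unit sphere should accumulate a thickness close to the sphere's diameter of 2, with accuracy limited by the step size.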
The shading-model question that comes up below is doubly important, as it is also relevant to implementing ray tracing.
From the research I have been able to do by snooping in the UE4 source code, it seems that most of the ray-traced effects in current versions of UE4 (4.24.1 as of this writing) use a common ray format: a finely tuned package of data that can be moved and edited as quickly as possible. Since it would be limiting for Epic Games to keep these shaders GPU-specific forever, I conclude that this approach could be made hardware agnostic; it is simply limited to Nvidia's top-of-the-line GPUs at the moment for the sake of playable frame rates rather than because of any actual hardware specificity.
I believe ray-traced subsurface scattering could be achieved in UE4 by recreating or modifying the ray tracing techniques already present in 4.24. A technique like the one used by EA SEED (see Part 2 of this series) could then be used in UE4, but it would require a new shading model so that lighting calculations can be performed at the hit locations in preparation for a final gather. It would also likely require denoising the gathered luminances before specularity is applied.
Part 2 - Subsurface Scattering
After getting some direction from an expert in the field, I started work on subsurface scattering (SSS). I looked into a few different techniques, including sampling a cube map, applying SSS with pre-integrated falloff sampling, and ray marching.
Cube-Map Sampling
Originally I had gone about SSS by sampling the scene texture, which allowed me to get my head around the problem but had many issues: it forced the object into the translucent queue, it only took into account light behind the object from the camera's perspective, and it removed shadow casting by the object. After that, I began implementing a version that used a cube-map scene capture.
Ray Marching
To get rid of this bleed, I believe the most accurate method would be a ray trace that samples the luminance, thickness, and distance to the point on the mesh opposite the normal of the pixel being rendered.
float accumdist = 0;
// Bring the camera vector into the primitive's local space so the march happens in texture space.
float3 localcamvec = normalize( mul(CameraVector, GetPrimitiveData(Parameters.PrimitiveId).WorldToLocal) );
// Use floating-point division; "1 / MaxSteps" would truncate to zero if MaxSteps is an integer.
float StepSize = 1.0f / (float) MaxSteps;
// CurrentPos is assumed to be initialized to the ray's entry point in texture space.
for (int i = 0; i < MaxSteps; i++)
{
    // Sample the density texture at the current position along the ray.
    float cursample = Texture2DSample(Tex, TexSampler, saturate(CurrentPos).xy ).r;
    accumdist += cursample * StepSize;
    // Step backward along the camera direction.
    CurrentPos += -localcamvec * StepSize;
}
return accumdist;
The above snippet is HLSL modified from Ryan Brucks' series on ray marching. The code performs similarly to a ray trace, but it uses interval testing to determine the thickness of the media along a ray based on how many of the tests land inside the media, instead of testing if and when the ray hits a surface and then performing the more expensive computations of a traditional ray tracing algorithm. I used ray marching even though UE4 has added real-time ray tracing because of the limitations of my at-home hardware setup. My hope is that by garnering an understanding of SSS algorithms using ray marching, I can later apply the same concepts with UE4's ray tracing support.
I moved forward with the idea of partially emulating EA SEED's work using real-time ray tracing for translucency (see gallery at left). I figured I could take a similar approach to theirs but replace the ray tracing with ray marching.
With all of this research and testing, I began to formulate a hypothetical method that I could then work on implementing. I decided that a hybrid approach might produce the best results: from pre-integrated SSS we get fine detail that is easy to apply in the pixel shader, and from ray marching we get correct luminance at the intersection points. Below I have written out the method I have hypothesized and begun work on. The biggest issue currently is writing a new shading model; the UE4 Editor does not take easily to such things, and it is a much less documented corner of the engine.
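As a sketch of how the hybrid's two inputs might be combined (this is purely my hypothetical framing; the function names, the Beer-Lambert-style falloff, and the absorption parameter are all assumptions, not a finished method), the pre-integrated term could supply the high-frequency surface detail while the marched thickness attenuates the light that actually penetrates:

```cpp
#include <algorithm>
#include <cmath>

// Hypothetical combination step for the hybrid approach; names and the
// exponential falloff are placeholders, not engine code.
// `preintegrated`    = detail term from the pre-integrated LUT (0..1),
// `marchedLuminance` = light arriving at the point found by ray marching,
// `thickness`        = material thickness accumulated along the marched ray,
// `absorption`       = how strongly the medium absorbs per unit thickness.
float hybridSubsurface(float preintegrated, float marchedLuminance,
                       float thickness, float absorption) {
    // Beer-Lambert-style attenuation of the transmitted light by thickness.
    float transmitted = marchedLuminance * std::exp(-absorption * thickness);
    // Surface detail modulates what finally leaves the skin.
    float result = preintegrated * transmitted;
    return std::max(0.0f, std::min(result, 1.0f));
}
```

The intended behavior: at zero thickness all the marched luminance survives, and thicker material transmits exponentially less, while the pre-integrated term keeps the fine falloff detail regardless of thickness.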
Potential Difficulties
By virtue of being an amalgamation of several techniques, this proposal is quite complex. It also requires writing a new shading model within UE4 so that we can perform the lighting calculations ourselves; not an impossible task, though it does require re-implementing a physically based shading model specifically for translucent media.
In conclusion
I will be continuing work on this but wanted to give an update. I've learned and refreshed a whole lot of graphics math, which is super fun. At any rate, thank you for reading!
Part 1 - Specularity
Breakdown
Through this research, the lion's share of which was informed by Next Generation Character Rendering as well as An Intro to Real-Time Subsurface Scattering, I felt that the most portable of the many techniques used for face rendering was specularity. I came to this conclusion after seeing how subsurface scattering was most helpful in more dynamic lighting, whereas specularity was clearly visible in all situations. And just as SSS hints at the underlying structure of skin, specularity approximates its surface: specifically the dull layer of the epidermis and the shiny layer of oil that sits atop it. We model this with a two-lobe specular formula.
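To make "two lobes" concrete, here is a minimal C++ sketch assuming simple Blinn-Phong lobes (the exponents and blend weight are illustrative assumptions; production skin shaders typically blend two physically based lobes, but the structure is the same):

```cpp
#include <algorithm>
#include <cmath>

// Single Blinn-Phong lobe: (N.H)^exponent, where ndoth is the cosine of the
// angle between the surface normal and the half vector.
float blinnLobe(float ndoth, float exponent) {
    return std::pow(std::max(ndoth, 0.0f), exponent);
}

// Two-lobe skin specular: a broad, dull lobe for the epidermis plus a tight,
// shiny lobe for the oil layer, blended by `oilWeight` (all values assumed).
float twoLobeSpecular(float ndoth, float oilWeight) {
    float dullLobe  = blinnLobe(ndoth, 16.0f);    // broad epidermis lobe
    float shinyLobe = blinnLobe(ndoth, 256.0f);   // tight oil-layer lobe
    return (1.0f - oilWeight) * dullLobe + oilWeight * shinyLobe;
}
```

The tight lobe falls off much faster away from the mirror direction than the broad lobe, which is exactly what gives skin its characteristic sharp oily glint sitting on top of a soft sheen.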
And above you can see where I lost a bit of steam. The “pores” are not contributing much to reducing the “plastic” feel of the skin. Getting two-lobe specularity figured out is a challenge, and one that I haven't surmounted just yet. But I am working on how one could render much of the detail of skin, specifically its microstructures, purely procedurally. My theory is that if one gathered the base color of someone's skin from pictures, a la photogrammetry, and then overlaid high-density procedural microstructure speculars, you could get a good real-time digital representation of someone's face. To be continued…