Simple GPU Path Tracing, Part 11 : Multiple Importance Sampling

 

In this post I'll be covering the theory and implementation of multiple importance sampling in our path tracer.

Here's the code for this post.

Just as a side note, there have been quite a few changes in the code since the last post : refactoring, bug fixes and GUI work, which are not too interesting to write about (Here are the commits).

There have also been some more significant changes :

- Added some scene serialization functions to save a scene in a file, which will be quite handy to test the feature we're going to implement today. (commit)

- Added a new 3D model loader based on the Assimp library, which is also quite handy for supporting a wider variety of file formats. (commit)

- Integrated the great ImGuizmo library so we're now able to move objects in the scene, which is very cool! (commit)

- Added a render export function to save renders to files (commit)

The code is now a little cleaner and faster, which is great as we continue on our path tracing journey :)

Theory

For now, we use importance sampling to sample the next direction the ray is going to travel. This means we don't generate the next direction completely randomly; instead, we try to favor directions where the result of the rendering equation that we're trying to solve is higher.

This does help reduce the variance of the Monte Carlo integration, but there's another technique called multiple importance sampling (or MIS, I'll be using this acronym from here on) that improves the results even more. It was introduced by Eric Veach in his thesis, which is freely available online here.

As said earlier, to get a better Monte Carlo estimator, we try to draw samples where the function is high. But remember, the rendering equation is the integral of a product of several terms, so its value can be dominated by any of them.

The two most important terms are L and f (the incoming radiance and the BRDF), which is why, so far, we sample either the lights term or the BRDF term.
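As a reminder, the quantity we're estimating at each bounce is indeed an integral of a product, the BRDF times the incoming radiance times a cosine term :

$$L_o(x, \omega_o) = \int_{\Omega} f(x, \omega_o, \omega_i)\, L_i(x, \omega_i)\, (n \cdot \omega_i)\, d\omega_i$$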

This will always cause poor results in some specific cases. 

To illustrate this, we'll be using our own version of the famous Veach MIS test scene (You can download it here), which features multiple planes, each with a different roughness value, and multiple spherical lights of different sizes.

When the material is smooth (top planes) and we sample only the lights, it works for small area lights, but as soon as the light gets bigger, we get a noisy result. Note that it works okay for the rougher surfaces on the bottom planes.

Conversely, when the material is rough and we sample only the BSDF, we almost never generate directions towards the lights, which also introduces noise on the bottom planes. Note that it works okay on the top planes, because the sampled directions stay close to the perfect reflection direction, pointing directly at the lights.

So far, we've been using a combination of those two techniques, resulting in this render :

It's still better than the two previous ones, but still quite noisy. Note that all these images were rendered with 16 samples per pixel.

With MIS, we draw samples from multiple distributions at the same time (as opposed to either/or).

In our case, that's one sample from the lights and one from the BSDF, hoping that at least one of the two will match where the function is high, without knowing in advance which one it will be.

We then multiply the result of each sample by a MIS weight w that we calculate using the "balance heuristic" function :
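$$w_i(x) = \frac{n_i\, p_i(x)}{\sum_k n_k\, p_k(x)}$$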

with n_i the number of samples taken from distribution i, and p_i the probability of taking sample x from distribution i.

Let's get a sense of why and how this helps reduce the variance :

Suppose a sample X was taken from a distribution built to importance-sample f, and f(X) turns out to be low. The probability of drawing X from that distribution would then be low too, because importance sampling draws more samples where the function is high.

So we have a low pdf value, and when we multiply by the other term in the integrand, g(X), it could be that this other term is high. With the importance sampling formula we've been using so far, we would have something like this :
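$$\frac{f(X)\, g(X)}{p_f(X)}$$

where p_f is the pdf of the distribution we drew X from.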

 

The result of this estimator would be quite high, because the denominator p_f(X) is low, and that introduces variance.

If we were to weigh that result using the balance heuristic, we would end up with this formula :
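$$\frac{f(X)\, g(X)}{p_f(X)}\, w_f(X) = \frac{f(X)\, g(X)}{p_f(X) + p_g(X)}$$

(taking one sample from each distribution, so n_f = n_g = 1).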

 

If we look at the same case, f(X) was low and g(X) was high, which means p_f(X) was low and p_g(X) was high. With this formula, the denominator now includes p_g(X), which is higher; this balances the result and makes it lower.

In practice, the "power heuristic" is even better at reducing the variance :
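$$w_i(x) = \frac{\big(n_i\, p_i(x)\big)^\beta}{\sum_k \big(n_k\, p_k(x)\big)^\beta}$$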


It's almost the same equation, but with an exponent β that is set to 2 in practice.

 
You can read a more detailed and probably clearer explanation of all that here.

Implementation

First, let's add our PowerHeuristic function :
// Power heuristic with beta = 2 : (p0^2) / (p0^2 + p1^2)
FN_DECL float PowerHeuristic(float PDF0, float PDF1)
{
    return (PDF0 * PDF0) / (PDF0 * PDF0 + PDF1 * PDF1);
}

That's a simple implementation of the formula we saw earlier.

We will only implement MIS for regular materials, i.e. not volumetric or delta materials. So this will work for PBR and matte materials for now.

We will mainly be replacing the bit of code where we used to sample either the BSDF or the lights.

Before, we sampled a direction from one of those distributions, evaluated the BRDF for that direction and the pdf of choosing it, and multiplied the path weight by those values.

Now, when we hit a supported material, we explicitly sample a direction towards a light in the scene, calculate the radiance coming from that light, and calculate the pdf associated with that direction.

Then, we evaluate the probability of sampling that same direction, but according to the BSDF distribution this time.

We can then calculate our power heuristic value from those two pdfs :

// Sample a direction towards one of the lights
Incoming = SampleLights(Position, RandomUnilateral(Isect.RandomState), RandomUnilateral(Isect.RandomState), Random2F(Isect.RandomState));
if (Incoming == vec3{0, 0, 0}) break;
// Evaluate the BSDF and the pdfs of both strategies for that direction
vec3 BSDFCos   = EvalBSDFCos(Material, Normal, OutgoingDir, Incoming);
float LightPDF = SampleLightsPDF(Position, Incoming);
float BSDFPDF  = SampleBSDFCosPDF(Material, Normal, OutgoingDir, Incoming);
// MIS weight for the light sample, divided by its pdf
float MisWeight = PowerHeuristic(LightPDF, BSDFPDF) / LightPDF;

We can now evaluate the radiance coming from that direction :

if (BSDFCos != vec3(0, 0, 0) && MisWeight != 0)
{
    // Shoot a ray towards the light sample
    sceneIntersection Isect = IntersectTLAS(MakeRay(Position, Incoming, 1.0f / Incoming), Sample, 0);
    vec3 Emission = vec3(0, 0, 0);
    if (Isect.Distance == MAX_LENGTH) {
        // Missed the scene : gather the environment radiance
        Emission = EvalEnvironment(Incoming);
    } else {
        // Hit a surface : gather its emission
        materialPoint Material = EvalMaterial(Isect);
        Emission      = EvalEmission(Material, EvalShadingNormal(-Incoming, Isect), -Incoming);
    }
    Radiance += Weight * BSDFCos * Emission * MisWeight;
}

We shoot a ray in the sampled direction, evaluate the emission at the hit point, and add it to the radiance, multiplied by the MIS weight we just calculated.

Then, we do the same thing, but sampling the BSDF distribution instead of the lights :

// Sample a direction from the BSDF distribution this time
Incoming = SampleBSDFCos(Material, Normal, OutgoingDir, RandomUnilateral(Isect.RandomState), Random2F(Isect.RandomState));
if (Incoming == vec3{0, 0, 0}) break;
vec3 BSDFCos   = EvalBSDFCos(Material, Normal, OutgoingDir, Incoming);
float LightPDF = SampleLightsPDF(Position, Incoming);
float BSDFPDF  = SampleBSDFCosPDF(Material, Normal, OutgoingDir, Incoming);
// MIS weight for the BSDF sample, divided by its pdf
float MisWeight = PowerHeuristic(BSDFPDF, LightPDF) / BSDFPDF;
if (BSDFCos != vec3(0, 0, 0) && MisWeight != 0) {
    // Store the hit so the next bounce can reuse it
    MisIntersection = IntersectTLAS(MakeRay(Position, Incoming, 1.0f / Incoming), Sample, 0);
    vec3 Emission = vec3(0, 0, 0);
    if (MisIntersection.Distance == MAX_LENGTH) {
        Emission = EvalEnvironment(Incoming);
    } else {
        materialPoint Material = EvalMaterial(MisIntersection);
        Emission      = Material.Emission;
    }
    Radiance += Weight * BSDFCos * Emission * MisWeight;
}
UseMisIntersection = true;

Here, note that when we call IntersectTLAS(), we store the result in a MisIntersection variable.

This is a small optimization : we will reuse this intersection when we loop to the next bounce, instead of re-shooting the ray into the scene.

We also keep track of a UseMisIntersection variable that tells us whether MisIntersection was set. It is set to true when we hit a material that supports MIS. When we hit a special material, we set it to false and do our path tracing as before when starting the next bounce :

..... 
    for(int Bounce=0; Bounce < GET_ATTR(Parameters, Bounces); Bounce++)
    {
        sceneIntersection Isect = UseMisIntersection ?  MisIntersection : IntersectTLAS(Ray, Sample, Bounce);
.... 
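For completeness, the switch between the two paths could look roughly like this. It's only a sketch : IsDeltaMaterial is a placeholder name, not necessarily the helper used in the actual code base.

// Rough sketch, not the exact code : decide per bounce whether MIS applies.
// IsDeltaMaterial() is a hypothetical helper standing in for whatever
// material classification the code base actually uses.
if (!IsDeltaMaterial(Material))
{
    // ... light + BSDF MIS sampling shown above ...
    UseMisIntersection = true;
}
else
{
    // Special material : sample as before and re-shoot the ray next bounce
    UseMisIntersection = false;
}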

And that's all, it wasn't too involved!

Here are some comparisons between the different sampling strategies, all at 16 samples per pixel :

 

You can see that we get quite impressive results with MIS compared to the previous approach!


Links

 
 
