Simple GPU Path Tracing, Part. 3.1 : Matte Material
- Simple GPU Path Tracing : Introduction
- Simple GPU Path Tracing, Part. 1 : Project Setup
- Simple GPU Path Tracing, Part. 1.1 : Adding a cuda backend to the project
- Simple GPU Path Tracing, Part. 2.0 : Scene Representation
- Simple GPU Path Tracing, Part. 2.1 : Acceleration structure
- Simple GPU Path Tracing, Part. 3.0 : Path Tracing Basics
- Simple GPU Path Tracing, Part. 3.1 : Matte Material
- Simple GPU Path Tracing, Part. 3.2 : Physically Based Material
- Simple GPU Path Tracing, Part. 3.4 : Small Improvements, Camera and wrap up
- Simple GPU Path Tracing, Part. 4.0 : Mesh Loading
- Simple GPU Path Tracing, Part. 4.1 : Textures
- Simple GPU Path Tracing, Part. 4.2 : Normal Mapping & GLTF Textures
- Simple GPU Path Tracing, Part. 5.0 : Sampling lights
- Simple GPU Path Tracing, Part 6 : GUI
- Simple GPU Path Tracing, Part 7.0 : Transparency
- Simple GPU Path Tracing, Part 7.1 : Volumetric materials
- Simple GPU Path Tracing, Part 7.2 : Refractive material
- Simple GPU Path Tracing, Part 8 : Denoising
- Simple GPU Path Tracing, Part 9 : Environment Lighting
- Simple GPU Path Tracing, Part 10 : Little Optimizations
- Simple GPU Path Tracing, Part 11 : Multiple Importance Sampling
We now have a basic setup for path tracing a scene. The results don't look great yet, but we will now start improving how we calculate lighting, which will greatly improve the visuals.
The code for this post is on this branch of the GitHub repo.
Materials
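As a quick reminder of the scene representation, a matte material only needs a couple of fields. Here's a sketch following the series' naming style (the actual struct in the repo carries more fields):

```cpp
struct vec3 { float x, y, z; };

// Minimal matte material description (illustrative).
struct material
{
    vec3 Colour;    // diffuse albedo
    vec3 Emission;  // radiance emitted by the surface
};
```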
Emission
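When a surface is emissive, the idea is simply that whenever a ray hits it, we add its emission to the accumulated radiance, scaled by the path weight gathered so far. A minimal sketch, where Radiance, Weight and Material are illustrative names following the series' style:

```cpp
// On each bounce, accumulate the light emitted by the surface we just hit,
// attenuated by everything the path has bounced off of so far.
Radiance += Weight * Material.Emission;
```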
Matte BRDF
Diffuse BRDF model
We want a function that evaluates the BRDF for a diffuse surface, something like EvalBSDF(Incoming, Outgoing). The diffuse BRDF is the simplest form of BRDF: it's constant for all incoming directions, and it doesn't even depend on the outgoing direction.
While we're at it, we also want the EvalBSDF function to include the cosine term of the rendering equation. That's because in some cases the cosine term cancels out with terms from the BRDF, so folding it in is a small optimization.
Before we dive in, let's quickly review the concept of importance sampling, which we'll be using in the implementation.
Remember, we want to approximate an integral (the rendering equation integral) by sampling the function we're integrating many times. We could just generate completely uniform random samples and evaluate the function at them, but that converges slowly. What we can do instead is sample the function more often where it has high values than where it has low values. We still sample everywhere, just preferentially where the function is high.
However, if we just take more samples where the function is high and average them all as we did before, every sample carries the same weight, so the regions we rarely sample end up under-represented in the result, which biases the estimate.
Therefore, we also weight each sample's result by the probability of having chosen it. It makes sense if you think about it:
- If we sample where the function is high, the probability of choosing that sample is high (because we sample more where the function is high), so we will sample nearby many more times in the future; we therefore want the weight of that sample to be low.
- If we sample where the function is low, the probability of choosing that sample is low, so we probably won't sample much around it in the future; we therefore want the weight of that sample to be high.
How do we do that? We simply divide each sample's result by the probability of sampling it.
This will divide by a number close to 1 for likely samples, and by a number close to 0 for unlikely samples.
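Written out, this is the classic Monte Carlo estimator with importance sampling, where p is the probability density we draw our samples x_i from:

```latex
\int_{\Omega} f(x)\,\mathrm{d}x \;\approx\; \frac{1}{N}\sum_{i=1}^{N}\frac{f(x_i)}{p(x_i)},
\qquad x_i \sim p
```

When p is constant, every sample gets the same weight and we are back to uniform sampling.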
Now, how do we generate directions that are more likely to point where our lighting function is high? That obviously depends on the BRDF we're using, but it also depends on the cosine term.
For now, we use a diffuse BRDF, which is constant for all directions over the hemisphere. Therefore, there's no direction where the BRDF itself evaluates higher, so we can't importance-sample the BRDF term.
What we can do instead is generate directions that produce higher values for the cosine term. We already have a function that does that, which we used in the previous post: SampleHemisphereCosine().
It is more likely to generate directions whose angle with the surface normal is small (i.e. the cosine of this angle is close to 1).
The way this function works is that it generates uniform samples on a disk and projects those samples up onto the hemisphere. This is called "Malley's method", and you can read more about it here.
But how do we generate uniform samples on a disk? We could draw a random value for an angle and a random value for a radius, and use those polar coordinates to get a point, but the resulting distribution would not be uniform:
We see that more samples get generated around the center of the disk, which is indeed not uniform!
That's because the area of a ring on the disk increases as we move away from the center:
Therefore, if we pick the radius R uniformly, the samples end up more concentrated towards the center:
Here, we had equal chances of falling into any of the 3 regions inside that wedge, and you can see that the point density is greater towards the center. To solve this, we have to pick the random radius from a non-uniform distribution, and to do that we take the square root of our original r value. I won't dive deep into the details, but here's a funny and good explanation.
When we do that, we get a nice uniform distribution over the disk:
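A minimal sketch of that (the names here are illustrative, not the repo's exact helpers):

```cpp
#include <cmath>

struct vec2 { float x, y; };

// Uniform sampling of the unit disk. U1 and U2 are independent uniform
// random numbers in [0, 1). Taking the square root of U1 for the radius
// compensates for the fact that the area of a ring grows with its radius.
vec2 SampleDiskUniform(float U1, float U2)
{
    float R = std::sqrt(U1);
    float Theta = 2.0f * 3.14159265359f * U2;
    return { R * std::cos(Theta), R * std::sin(Theta) };
}
```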
And to project the points up onto the hemisphere, we can think of the hemisphere from a side view:
We need to find the y coordinate. We know that the y position equals sin(theta), and the distance from the origin in the x/z plane is cos(theta), which is our R value.
Using the trigonometric identity sin²(θ) + cos²(θ) = 1, we can solve for y.
We know that cos(theta) is our R value, so sin(theta) = sqrt(1 - R * R), which gives us our y value!
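Putting it all together, here's a minimal sketch of the sampler in the local frame where the normal is (0, 1, 0); the actual function in the repo also rotates the result into the frame of the surface normal:

```cpp
#include <algorithm>
#include <cmath>

struct vec3 { float x, y, z; };

// Cosine-weighted hemisphere sampling via Malley's method, expressed in the
// local frame where the surface normal is (0, 1, 0). U1 and U2 are
// independent uniform random numbers in [0, 1).
vec3 SampleHemisphereCosine(float U1, float U2)
{
    // Uniform sample on the unit disk (the x/z plane).
    float R = std::sqrt(U1);
    float Theta = 2.0f * 3.14159265359f * U2;
    float X = R * std::cos(Theta);
    float Z = R * std::sin(Theta);

    // Project up onto the hemisphere: sin(theta) = sqrt(1 - cos(theta)^2).
    float Y = std::sqrt(std::max(0.0f, 1.0f - R * R));

    return { X, Y, Z };
}
```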
So that's it for the explanation of the SampleHemisphereCosine() function.
Evaluation:
Here's what the Eval function boils down to:
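A minimal sketch, assuming the material's diffuse albedo is called Colour (helper types and names are illustrative; the exact code in the repo may differ slightly):

```cpp
#include <cmath>

struct vec3 { float x, y, z; };

inline vec3 operator*(vec3 V, float S) { return { V.x * S, V.y * S, V.z * S }; }
inline float Dot(vec3 A, vec3 B) { return A.x * B.x + A.y * B.y + A.z * B.z; }

// Diffuse BRDF evaluation: the albedo divided by pi, multiplied by the
// cosine term of the rendering equation.
vec3 EvalBSDF(vec3 Colour, vec3 Normal, vec3 Incoming)
{
    const float PI = 3.14159265359f;
    return Colour * (1.0f / PI) * std::fabs(Dot(Normal, Incoming));
}
```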
As you can see, this does not take the outgoing direction into account: it's perfectly isotropic, scattering light uniformly in all directions. The abs(dot(Normal, Incoming)) factor is the cosine term from the rendering equation (the dot product of 2 normalized vectors equals the cosine of the angle between them).
Here's a profile of the BRDF lobe:
Here, each white line is a sample generated from that function. The length of each line is proportional to the value of the BRDF for that direction. You can see that it's higher towards the surface normal.
PDF:
Now, remember that we also need to divide the sample value by the probability of generating the sampled direction.
To do that, we'll use a SampleBSDFPDF function that takes a direction and returns the probability of generating that direction with our sampling scheme.
The probability of generating a direction v is proportional to cos(θ), where θ is the angle between v and the normal (it's in the name: cosine-weighted sampling).
The total probability over the hemisphere must be 1, so to normalize we divide by the integral of cos(θ) over the hemisphere, which is π.
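As a sketch, reusing the vec3 and Dot helpers from the Eval sketch above, the PDF is then just cos(θ) / π for directions above the surface:

```cpp
// PDF of cosine-weighted hemisphere sampling: cos(theta) / pi for directions
// in the upper hemisphere, 0 below the surface.
float SampleBSDFPDF(vec3 Normal, vec3 Incoming)
{
    const float PI = 3.14159265359f;
    float CosTheta = Dot(Normal, Incoming);
    return CosTheta > 0.0f ? CosTheta / PI : 0.0f;
}
```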
Now, in the main function, we can put it all together:
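Here's a sketch of the relevant part of the per-pixel loop, under the same naming assumptions as above; hit, ray, Random01() and TransformToWorld() are hypothetical stand-ins for the repo's actual types and helpers:

```cpp
vec3 Radiance = { 0, 0, 0 };
vec3 Weight   = { 1, 1, 1 };
for (int Bounce = 0; Bounce < MaxBounces; ++Bounce)
{
    hit Hit = IntersectScene(Ray);
    if (!Hit.Found) break;

    // Accumulate the light emitted by the surface we just hit.
    Radiance += Weight * Hit.Material.Emission;

    // Importance-sample the cosine term: draw a local-frame direction and
    // rotate it into the frame of the surface normal.
    vec3 Incoming = TransformToWorld(Hit.Normal,
                                     SampleHemisphereCosine(Random01(), Random01()));

    // Weight the path by BRDF / PDF for the sampled direction.
    Weight *= EvalBSDF(Hit.Material.Colour, Hit.Normal, Incoming)
            / SampleBSDFPDF(Hit.Normal, Incoming);

    // Continue the path from the hit position in the sampled direction.
    Ray = ray{ Hit.Position, Incoming };
}
```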
and we get this result:
It's not too different from what we had before, but the calculations are a little more physically correct.
Links
Next Post : Simple GPU Path Tracing, Part. 3.2 : Physically Based Material