Simple GPU Path Tracing, Part. 3.1 : Matte Material

 

We now have a basic setup for path tracing a scene. It doesn't look great yet, but we will now start working on improving how we calculate lighting, which will greatly improve the visuals.


The code for this post will be on this branch of the github repo.

Materials

 Let's add a material system to the application. 
We will create a material structure : 

struct material
{
    glm::vec3 Emission = {};
    float Padding0;
   
    glm::vec3 Colour = {};
    float Padding1;
   
    int MaterialType = 0;
    glm::ivec3 Padding2;
};
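
Note the Padding fields: they round each member up to 16 bytes so the C++ layout matches the layout the GPU-side buffer expects. A quick compile-time sanity check could look like this (just a sketch, assuming glm's default packing):

// Each vec3 + padding pair occupies 16 bytes, so the whole struct is 48 bytes.
// If this fires, indexing into the materials buffer on the GPU will be off.
static_assert(sizeof(material) == 48, "material does not match the expected GPU layout");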

We can then add a material index to the instance struct : 


struct instance
{
    glm::vec3 Position;
    glm::vec3 Rotation;
    glm::vec3 Scale = glm::vec3(1);
    int Shape = InvalidID;
    int Material = InvalidID;
    glm::mat4 GetModelMatrix() const;
};

And add a list of materials into the scene : 
struct scene
{
   
    std::vector<camera> Cameras = {};
    std::vector<instance> Instances = {};
    std::vector<shape> Shapes = {};
    std::vector<material> Materials = {};
   
    std::vector<std::string> CameraNames = {};
    std::vector<std::string> InstanceNames = {};
    std::vector<std::string> ShapeNames = {};
    std::vector<std::string> MaterialNames = {};
.... 

Ok, now we can create some materials in the CreateCornellBox() function :

Here's just one example, but we add a material to each of the instances, and also push a name into the name array for each material :
    Scene->Shapes.emplace_back();
    shape& LeftWallShape       = Scene->Shapes.back();
    LeftWallShape.Positions   = {{-1, 0, 1}, {-1, 0, -1}, {-1, 2, -1}, {-1, 2, 1}};
    LeftWallShape.Triangles   = {{0, 1, 2}, {2, 3, 0}};
    LeftWallShape.TexCoords = {{0, 1}, {1, 1}, {1, 0}, {0, 0}};
    auto& LeftWallMaterial    = Scene->Materials.emplace_back();
    LeftWallMaterial.Colour    = {0.63, 0.065, 0.05f};    
    Scene->Instances.emplace_back();
    auto& LeftWallInstance    = Scene->Instances.back();
    LeftWallInstance.Shape    = (int)Scene->Shapes.size() - 1;
    LeftWallInstance.Material = (int)Scene->Materials.size() - 1;
    Scene->ShapeNames.push_back("LeftWall");
    Scene->InstanceNames.push_back("LeftWall");
    Scene->MaterialNames.push_back("LeftWall");

Now, we need to change the BVH construction to take this into account : 

First, we add MaterialIndex to the bvhInstance struct and add it into the constructor : 
bvhInstance(std::vector<mesh*> *Meshes, uint32_t MeshIndex, glm::mat4 Transform,  
            uint32_t Index, uint32_t MaterialIndex) : MeshIndex(MeshIndex), 
            Index(Index), MaterialIndex(MaterialIndex)
 
We then pass the materialIndex to this function in CreateBVH : 
    //Build the array of instances
    for(size_t i=0; i<Scene->Instances.size(); i++)
    {
        Result->Instances.push_back(
            bvhInstance(&Result->Meshes, Scene->Instances[i].Shape,
                        Scene->Instances[i].GetModelMatrix(), (uint32_t)i,
                        Scene->Instances[i].Material)
        );
    }
 
Now, we need to build a materials buffer and pass it to the kernel :
Inside of InitGpuObjects(), for OpenGL :
MaterialBuffer = std::make_shared<bufferGL>(sizeof(material) * Scene->Materials.size(),
                                         Scene->Materials.data());
 and for CUDA :
MaterialBuffer = std::make_shared<bufferCu>(sizeof(material) * Scene->Materials.size(),  
                                            Scene->Materials.data());
We then pass those buffers to the kernels as usual.
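
On the shader side, the new buffer is then declared next to the other scene buffers so the kernel can index into it. A sketch of what that GLSL declaration might look like (the binding slot and block name are assumptions, not the repo's actual values):

// Mirrors the C++ material struct; indexed later with Isect.MaterialIndex.
layout(std430, binding = 6) buffer MaterialsBuffer
{
    material Materials[];
};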
 
In the PathTracingCode, we now need to evaluate the material at the hit point. 
To do that, we add MaterialIndex to the sceneIntersection struct : 


struct sceneIntersection
{
    float Distance;
    float U;
    float V;
   
    uint InstanceIndex;
    uint PrimitiveIndex;
   
    mat4 InstanceTransform;

    vec3 Normal;
    randomState RandomState;
    uint MaterialIndex;

};
 
When we test the intersection with a BLAS, we get the material index using the TLASInstancesBuffer : 
uint MaterialIndex = TLASInstancesBuffer[InstanceIndex].MaterialIndex;
and we pass it to RayTriangleIntersection :
FN_DECL void RayTriangleIntersection(ray Ray, triangle Triangle,  
    INOUT(sceneIntersection) Isect, uint InstanceIndex, uint PrimitiveIndex,         
    uint MaterialIndex)
 
and then in the ray triangle intersection, we can set the MaterialIndex field of the intersection object : 
    if(t > 0.0000001f && t < Isect.Distance) {
        Isect.U = u;
        Isect.V = v;
        Isect.InstanceIndex = InstanceIndex;
        Isect.Distance = t;
        Isect.PrimitiveIndex = PrimitiveIndex;
        Isect.MaterialIndex = MaterialIndex;
    }

 
We now define the EvalMaterial function, which simply builds a materialPoint struct containing information about the hit material. It is different from the material struct because it holds the material data at the hit point, which may vary over the surface of the shape (with texturing, for example, the colour changes over the surface, and materialPoint returns the exact material colour for a given point) :

FN_DECL materialPoint EvalMaterial(INOUT(sceneIntersection) Isect)
{
    material Material = Materials[Isect.MaterialIndex];
    materialPoint Point;
    Point.Colour = Material.Colour;
    Point.Emission = Material.Emission;
    return Point;
}
We call this function after we hit something, and use its result in the Weight modulation :

...                    
vec3 Position = Tri.v1 * Isect.U + Tri.v2 * Isect.V + Tri.v0 * (1 - Isect.U - Isect.V);
materialPoint Material = EvalMaterial(Isect);
Weight *= vec3(Material.Colour); 
... 


...And here's the result! Not too dissimilar to what we had before, for now.

Emission

We also now have an emissive material in the scene, the light that's on the ceiling.
We can now account for this emission in the code, so that it can light up the scene : 
Luckily that's a very easy thing to do: we just add the emission value of the hit material to our Radiance variable, as we did for the sky :
materialPoint Material = EvalMaterial(Isect);
Radiance += Weight * Material.Emission;
Weight *= vec3(Material.Colour);
 
And here's the result : 

 
Ok, it's starting to look better now !

Matte BRDF

Now, it's time to get a bit more physically correct in our calculations of lighting.
We've been using a simplification of the rendering equation, but there are 3 things we need to do to improve our results : 
- Using a proper BRDF in the lighting calculation
- Using the cosine term we mentioned in the previous post
- Generating rays according to the BRDF we're using.

Diffuse BRDF model

We'll be using a diffuse (or Lambertian) BRDF model for now. 
Instead of doing this : 
Weight *= vec3(Material.Colour);

We want to use the BRDF formula for a diffuse surface, something like EvalBSDF(Incoming, Outgoing). This diffuse BRDF is the simplest form of BRDF, because it's constant: it doesn't depend on the incoming or the outgoing direction.

But while we're at it, we also want the EvalBSDF function to calculate the cosine term of the rendering equation. That's because in some cases, the cosine term might cancel out with terms from the BRDF, so it's a small optimization.

Before we dive in, let's quickly review the concept of importance sampling, that we'll be using in the implementation.

Remember, we want to approximate an integral (the rendering equation integral) by sampling the function that we're integrating many times. We could just generate completely random samples and evaluate the function at those samples, but that converges slowly. What we can do instead is try to sample the function where it has higher values more often than where it has lower values. We will still be sampling everywhere, but more often where the function is high. 

However, if we just take more samples where the function is high and average them all as we did before, we give the same importance to every sample. The rarely-sampled regions then end up under-represented in the average, which biases the result.

Therefore, we will also weight each sample result by the probability of choosing that sample. It makes sense if you think about it : 

If we sample where the function is high, it means we have a high probability of choosing this sample (Because we sample more where the function is high), meaning that we will sample nearby many times again in the future, so we want the weight of that sample to be low. 

If we sample where the function is low, we have low probability of choosing this sample, meaning that will probably not sample a lot around it in the future, so we want the weight of that sample to be high. 

How to do that ? We can simply divide the result of the sample by the probability of sampling it.

This will divide by a number close to 1 for likely samples, and by a number close to 0 for unlikely samples.
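
Written out, this is the standard importance-sampled Monte Carlo estimator, where f is the integrand (for us, the integrand of the rendering equation) and p is the probability density we draw the samples from:

\int f(x)\,dx \;\approx\; \frac{1}{N} \sum_{i=1}^{N} \frac{f(x_i)}{p(x_i)}, \qquad x_i \sim p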

Now, how do we generate a direction that's more likely to be pointing where our lighting function is high ? That will depend on the BRDF that we're using obviously, but it will also depend on the cosine term.

For now, we use a diffuse BRDF, meaning that it is equal for all directions around the hemisphere. Therefore, we can't really generate a set of directions that will evaluate to higher values for this BRDF.

What we can do instead is generate directions that produce higher values for the cosine term. We already have a function that does that, and we used it in the previous post : SampleHemisphereCosine()

It is more likely to generate directions whose angle with the surface normal is small (i.e. the cosine of this angle is close to 1).

FN_DECL vec3 SampleHemisphereCosine(vec3 Normal, vec2 UV)
{
    float R = sqrt(UV.y);
    float Z = sqrt(1 - R * R);
 
    float Phi = 2 * PI_F * UV.x;
    vec3 LocalDirection = vec3(R * cos(Phi), R * sin(Phi), Z);    
    return TransformDirection(BasisFromZ(Normal), LocalDirection);
}

The way this function works is that it generates uniform samples on a disk, and projects those samples up onto the hemisphere. This is called "Malley's Method" and you can read more about it here.

But how do we generate uniform samples on a disk ? We could draw random values for an angle, and random values for a radius, and use those polar coordinates to get a point, but the distribution would not be uniform : 

 

We see that more samples get generated around the center of the disk, which is not so uniform indeed !

That's because the surface area on a disk increases as we move away from the center : 

 

Therefore, if we pick a uniform number for R, it will look more concentrated towards the center : 

 

Here, we had equal chances of falling into any of the 3 regions inside that wedge, and you can see that the point density is greater towards the center. To solve this issue, we have to pick the random radius from a non-uniform distribution, and to do that, we take the square root of our original r value. I'm not going to dive deep into the details, but here's a funny and good explanation.
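
In code, the difference between the naive and the corrected disk sampling is just that square root on the radius (these two helpers are only for illustration, they are not part of the codebase):

FN_DECL vec2 SampleDiskNaive(vec2 UV)
{
    // Uniform radius: points bunch up near the center
    float R = UV.y;
    float Phi = 2 * PI_F * UV.x;
    return vec2(R * cos(Phi), R * sin(Phi));
}
 
FN_DECL vec2 SampleDiskUniform(vec2 UV)
{
    // sqrt() on the radius compensates for the area growing with the radius
    float R = sqrt(UV.y);
    float Phi = 2 * PI_F * UV.x;
    return vec2(R * cos(Phi), R * sin(Phi));
}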

When doing that, we get a nice uniform distribution over the disk : 

 

And to project the points up to the hemisphere, we need to think of that hemisphere from a side view : 


We need to find the y coordinate. We know that the y position equals sin(theta), and the position on the x/z plane is cos(theta), which is our R value.

Using the trigonometric identity sin²(θ) + cos²(θ) = 1, we can solve for y :

We know that cos(theta) is our R value, so sin(theta) = sqrt(1 - R * R), which gives us our Y value!

So that's it for the explanation of the SampleHemisphereCosine() function.

Evaluation :

Weight *= EvalBSDFCos(Material, Normal, OutgoingDir, Incoming);

 Here's the content of the Eval function : 

return Colour / vec3(PI_F) * abs(dot(Normal, Incoming));

 As you can see, this does not take the outgoing direction into account: it's perfectly isotropic, scattering light uniformly in all directions. The abs(dot(Normal, Incoming)) is the cosine term from the rendering equation (the dot product of 2 normalized vectors is equal to the cosine of the angle between them).
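
In equation form, this is just the Lambertian BRDF multiplied by that cosine term (Colour plays the role of the albedo):

f_d(\omega_o, \omega_i) = \frac{\text{Colour}}{\pi}, \qquad \text{EvalBSDFCos} = \frac{\text{Colour}}{\pi}\,\lvert n \cdot \omega_i \rvert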

Here's a profile of the BRDF lobe :

Here, each white line is a sample generated from that function. The length of the line is proportional to the value of the BRDF for that direction. You can see that it's higher towards the normal of the shape.


PDF :

Now, remember that we also need to divide the sample value by the probability of generating a given direction.

To do that, we'll use the SampleBSDFCosPDF function, which takes a direction and returns the probability of generating this direction based on the BRDF.

FN_DECL float SampleHemisphereCosinePDF(INOUT(vec3) Normal, INOUT(vec3) Direction)
{
    float CosW = dot(Normal, Direction);
    return (CosW <= 0) ? 0 : CosW / PI_F;
}

The probability of generating a direction v is proportional to cos(θ) (it's in the name).
The total probability over the hemisphere should be 1. So, to normalize, we divide by the integral of cos(θ) over the hemisphere, which is π.
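
For completeness, here is why that integral is π (integrating the cosine over the hemisphere in spherical coordinates), which gives the PDF used above:

\int_{\Omega} \cos\theta \, d\omega = \int_{0}^{2\pi}\!\!\int_{0}^{\pi/2} \cos\theta \,\sin\theta \, d\theta \, d\phi = \pi \quad\Rightarrow\quad p(\omega) = \frac{\cos\theta}{\pi}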

Now in the main function, we can do that : 

Weight *= EvalBSDFCos(Material, Normal, OutgoingDir, Incoming) /
                          SampleBSDFCosPDF(Material, Normal, OutgoingDir, Incoming);

and we get this result : 

 

It's not too different from what we had before, but the calculations are a little more physically correct. 
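
That's expected: for a diffuse BRDF sampled with the cosine-weighted distribution, the cosine term and the π cancel out in the division, which is exactly the kind of cancellation hinted at earlier:

\frac{\dfrac{\text{Colour}}{\pi}\,\cos\theta}{\dfrac{\cos\theta}{\pi}} = \text{Colour}

So for a matte surface, the weight update effectively reduces to the same multiplication by the colour we had before; the importance-sampling machinery will really pay off with the materials of the next parts.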

Links

 

Next Post : Simple GPU Path Tracing, Part. 3.2 : Physically Based Material
