- Simple GPU Path Tracing : Introduction
- Simple GPU Path Tracing, Part. 1 : Project Setup
- Simple GPU Path Tracing, Part. 1.1 : Adding a cuda backend to the project
- Simple GPU Path Tracing, Part. 2.0 : Scene Representation
- Simple GPU Path Tracing, Part. 2.1 : Acceleration structure
- Simple GPU Path Tracing, Part. 3.0 : Path Tracing Basics
- Simple GPU Path Tracing, Part. 3.1 : Matte Material
- Simple GPU Path Tracing, Part. 3.2 : Physically Based Material
- Simple GPU Path Tracing, Part. 3.4 : Small Improvements, Camera and wrap up
- Simple GPU Path Tracing, Part. 4.0 : Mesh Loading
- Simple GPU Path Tracing, Part. 4.1 : Textures
- Simple GPU Path Tracing, Part. 4.2 : Normal Mapping & GLTF Textures
- Simple GPU Path Tracing, Part. 5.0 : Sampling lights
- Simple GPU Path Tracing, Part 6 : GUI
- Simple GPU Path Tracing, Part 7.0 : Transparency
- Simple GPU Path Tracing, Part 7.1 : Volumetric materials
- Simple GPU Path Tracing, Part 7.2 : Refractive material
- Simple GPU Path Tracing, Part 8 : Denoising
- Simple GPU Path Tracing, Part 9 : Environment Lighting
- Simple GPU Path Tracing, Part 10 : Little Optimizations
- Simple GPU Path Tracing, Part 11 : Multiple Importance Sampling
Today, I want to bring a small change to our path tracer that will greatly improve its visual quality.
Here are the 2 commits of this post.
Remember from the BRDF posts: we talked about how we generate ray directions based on the shape of the BRDF, so that more samples go towards directions where the BRDF is high.
That's good, but another term that greatly influences the value of the rendering equation is the incoming light (L) term. So a good idea is to also shoot more rays towards directions that we know a lot of light comes from, i.e. towards shapes that have a positive "Emission" value.
To do that, we will need to keep track of all the shapes that have an emissive factor, and we'll build a list of them.
In the path tracer, we will then randomly either shoot a ray based on light sampling, or based on BRDF sampling.
We will then take into account both the PDF for shooting a ray towards the light and the PDF for sampling a direction from the BRDF.
We will create some structs that will allow us to keep track of lights :
struct light
{
    int Instance = InvalidID;
    int CDFCount = 0;
    glm::ivec2 Pad0;
    float CDF[MAX_CDF];
};

struct lights
{
    glm::uvec3 Pad0;
    uint32_t LightsCount = 0;
    light Lights[MAX_LIGHTS];
};
Note that we keep track of a CDF, which is a cumulative distribution function.
In probability theory, the cumulative distribution function of a random variable X is an increasing function that, given a value x, returns the probability that X is less than or equal to x.
We will see how we use it in practice.
Building the lights struct
Once our scene is complete and ready to go in the path tracer, we will fill up a lights struct, and upload it into a gpu buffer.
In application::Init(), we call the GetLights() function and store the result as a member of application :
...
BVH = CreateBVH(Scene);
Params = GetTracingParameters();
Lights = GetLights(Scene, Params);
...
And here's the body of that function :
lights GetLights(std::shared_ptr<scene> Scene, tracingParameters &Parameters)
{
    // Returns all the emissive shapes in the scene.
    lights Lights = {};
    for (size_t i = 0; i < Scene->Instances.size(); i++)
    {
        // Check if the object is emissive
        const instance &Instance = Scene->Instances[i];
        const material &Material = Scene->Materials[Instance.Material];
        if(Material.Emission == glm::vec3{0,0,0}) continue;
        // Check if the object contains geometry
        const shape &Shape = Scene->Shapes[Instance.Shape];
        if(Shape.Triangles.empty()) continue;
        // Initialize the light
        light &Light = AddLight(Lights);
        Light.Instance = i;
        // Calculate the cumulative distribution function for the shape,
        // which is essentially the running sum of its triangle areas.
        Light.CDFCount = Shape.Triangles.size();
        for(size_t j=0; j<Light.CDFCount; j++)
        {
            const glm::ivec3 &Tri = Shape.Triangles[j];
            Light.CDF[j] = TriangleArea(Shape.Positions[Tri.x], Shape.Positions[Tri.y], Shape.Positions[Tri.z]);
            if(j != 0) Light.CDF[j] += Light.CDF[j-1];
        }
    }
    return Lights;
}
Calculating a CDF will come in useful when we want to sample an emissive shape. We will want to generate a ray towards that shape. To do that, we pick a triangle on that shape, and inside this triangle, we pick a point.
But we don't want to pick just any random triangle: imagine a shape with one massive triangle and lots of small ones. If we picked a triangle uniformly, we would get a result similar to this :
And this is not desirable: we only sampled the 2 big triangles twice, whereas they contribute far more to the lighting than all the small triangles.
We want to sample those big triangles more often, because they contribute more to lighting the scene than the small ones, and we will do that using the CDF.
The way we build the CDF is by adding each triangle's area to the running total of the previous ones. That makes sense: bigger triangles should have a higher chance of being sampled than small ones.
Note that this is not a normalized CDF, meaning that it doesn't sum to 1. We will apply a normalization operation when we actually sample from it.
Now, we'll create a lights gpu buffer in the InitGpuObjects() function :
#if API==API_GL
    PathTracingShader = std::make_shared<shaderGL>("resources/shaders/PathTrace.glsl");
    TonemapShader = std::make_shared<shaderGL>("resources/shaders/Tonemap.glsl");
    RenderTexture = std::make_shared<textureGL>(Window->Width, Window->Height, 4);
    TonemapTexture = std::make_shared<textureGL>(Window->Width, Window->Height, 4);
    TracingParamsBuffer = std::make_shared<uniformBufferGL>(sizeof(tracingParameters), &Params);
    MaterialBuffer = std::make_shared<bufferGL>(sizeof(material)
        * Scene->Materials.size(), Scene->Materials.data());
    LightsBuffer = std::make_shared<bufferGL>(sizeof(lights), &Lights);
#elif API==API_CU
    TonemapTexture = std::make_shared<textureGL>(Window->Width, Window->Height, 4);
    RenderBuffer = std::make_shared<bufferCu>(Window->Width * Window->Height
        * 4 * sizeof(float));
    TonemapBuffer = std::make_shared<bufferCu>(Window->Width * Window->Height
        * 4 * sizeof(float));
    RenderTextureMapping = CreateMapping(TonemapTexture);
    TracingParamsBuffer = std::make_shared<bufferCu>(sizeof(tracingParameters), &Params);
    MaterialBuffer = std::make_shared<bufferCu>(sizeof(material)
        * Scene->Materials.size(), Scene->Materials.data());
    LightsBuffer = std::make_shared<bufferCu>(sizeof(lights), &Lights);
#endif
And as usual, we pass this buffer to the kernels. For OpenGL :
PathTracingShader->SetSSBO(LightsBuffer, 10);
For CUDA, we add it to the kernel arguments; as usual, we also declare it in the global scope and add it to the INIT() macro.
We're now ready to use this in our path tracer !
Sampling lights
Here's how we will be sampling the next direction from now on :
vec3 Incoming = vec3(0);
if(RandomUnilateral(Isect.RandomState) < 0.5f)
{
    Incoming = SampleBSDFCos(Material, Normal, OutgoingDir,
                             RandomUnilateral(Isect.RandomState), Random2F(Isect.RandomState));
}
else
{
    Incoming = SampleLights(Position, RandomUnilateral(Isect.RandomState),
                            RandomUnilateral(Isect.RandomState), Random2F(Isect.RandomState));
}
if(Incoming == vec3(0,0,0)) break;

Weight *= EvalBSDFCos(Material, Normal, OutgoingDir, Incoming) /
          vec3(0.5 * SampleBSDFCosPDF(Material, Normal, OutgoingDir, Incoming) +
               0.5f * SampleLightsPDF(Position, Incoming));
So as I said in the intro, we either sample the BSDF or a light in the scene, each with probability 0.5.
Then, when we evaluate the BSDF, we divide by the average of the two pdfs of the sampled direction: 0.5 times the BSDF pdf plus 0.5 times the light pdf.
Let's now see how to generate a direction towards a light in the scene in SampleLights :
FN_DECL vec3 SampleLights(INOUT(vec3) Position, float RandL, float RandEl, vec2 RandUV)
{
    // Pick a random light index
    int LightID = SampleUniform(int(LightsCount), RandL);
    // Returns a vector that points towards a light in the scene.
    if(Lights[LightID].Instance != INVALID_ID)
    {
        bvhInstance Instance = TLASInstancesBuffer[Lights[LightID].Instance];
        indexData IndexData = IndexDataBuffer[Instance.MeshIndex];
        uint TriangleStartInx = IndexData.triangleDataStartInx;
        uint TriangleCount = IndexData.TriangleCount;
        // Sample a triangle on the shape, proportionally to its area
        int Element = SampleDiscrete(LightID, RandEl);
        // Sample a point on the triangle
        vec2 UV = TriangleCount > 0 ? SampleTriangle(RandUV) : RandUV;
        // Calculate the position
        triangle Tri = TriangleBuffer[TriangleStartInx + Element];
        vec3 LightPos =
            Tri.v1 * UV.x +
            Tri.v2 * UV.y +
            Tri.v0 * (1 - UV.x - UV.y);
        LightPos = TransformPoint(Instance.Transform, LightPos);
        // Return the normalized direction
        return normalize(LightPos - Position);
    }
    else
    {
        return vec3(0,0,0);
    }
}
So we first pick a number uniformly to choose which emissive shape we're going to sample.
Then, we sample one of the triangles inside the shape, based on the cumulative distribution function (Remember, it's based on the area of each triangle).
We then pick a random point on that chosen triangle, and generate a direction from the hit position to that sampled position.
And here are the sampling routines used in this function :
FN_DECL vec2 SampleTriangle(vec2 UV){
    return vec2(
        1 - sqrt(UV.x),
        UV.y * sqrt(UV.x)
    );
}
This function generates barycentric coordinates that will be uniformly distributed over the triangle area.
FN_DECL int SampleUniform(int Size, float Rand)
{
    // Returns a random index in the range [0, Size-1]
    return clamp(int(Rand * Size), 0, Size-1);
}
This function simply maps a random number in [0, 1) to an integer index in [0, Size-1].
FN_DECL int SampleDiscrete(int LightInx, float R)
{
    // Remap R from [0,1) to [0, total area of the shape)
    int CDFCount = Lights[LightInx].CDFCount;
    float LastValue = Lights[LightInx].CDF[CDFCount-1];
    R = clamp(R * LastValue, 0.0f, LastValue - 0.00001f);
    // Return the index of the first CDF entry that's greater than R.
    // (The CDF is sorted, so a binary search would also work here.)
    int Inx = 0;
    while(Inx < CDFCount-1 && Lights[LightInx].CDF[Inx] <= R) Inx++;
    return clamp(Inx, 0, CDFCount-1);
}
This samples from the CDF. It takes a random number, remaps it to the CDF's range, and returns the first index in the CDF array whose value is greater than the remapped number.
Great, now we just need to get the pdf of generating a given direction on a light, and here's how we do it :
FN_DECL float SampleLightsPDF(INOUT(vec3) Position, INOUT(vec3) Direction)
{
    // Initialize the pdf to 0
    float PDF = 0.0f;
    // Loop through all the lights
    for(int i=0; i<LightsCount; i++)
    {
        if(Lights[i].Instance != INVALID_ID)
        {
            float LightPDF = 0.0f;
            // Check if the ray intersects the light. If it doesn't, skip this light.
            ray Ray;
            Ray.Origin = Position;
            Ray.Direction = Direction;
            sceneIntersection Isect;
            Isect.Distance = 1e30f;
            IntersectInstance(Ray, Isect, Lights[i].Instance);
            if(Isect.Distance == 1e30f) continue;

            mat4 InstanceTransform = TLASInstancesBuffer[Lights[i].Instance].Transform;
            // Get the point on the light
            triangle Tri = TriangleBuffer[Isect.PrimitiveIndex];
            vec3 LightPos =
                Tri.v1 * Isect.U +
                Tri.v2 * Isect.V +
                Tri.v0 * (1 - Isect.U - Isect.V);
            LightPos = TransformPoint(InstanceTransform, LightPos);
            // Get the normal at that point
            triangleExtraData ExtraData = TriangleExBuffer[Isect.PrimitiveIndex];
            vec3 LightNormal =
                ExtraData.Normal1 * Isect.U +
                ExtraData.Normal2 * Isect.V +
                ExtraData.Normal0 * (1 - Isect.U - Isect.V);
            LightNormal = TransformDirection(InstanceTransform, LightNormal);
            // Find the probability that this point was sampled
            float Area = Lights[i].CDF[Lights[i].CDFCount-1];
            LightPDF += DistanceSquared(LightPos, Position) /
                        (abs(dot(LightNormal, Direction)) * Area);
            // Accumulate into the total pdf
            PDF += LightPDF;
        }
    }
    // Multiply the PDF with the probability to pick one light in the scene.
    PDF *= SampleUniformPDF(int(LightsCount));
    return PDF;
}
So to find the pdf of sampling a given direction towards the lights, we sum the pdf contribution of every light the ray could hit, then multiply by the probability of picking any single light (1 / LightsCount).
So, for each light, we check whether the generated ray would hit that light's shape.
If it does, we calculate the probability that the hit point was sampled, based on the distance to it and on the area and orientation of the light relative to the ray direction.
And that's it ! This helps the path tracer converge much more rapidly. Here's the difference between before and after, with only 32 samples :
Links