Simple GPU Path Tracing, Part 3.4: Small Improvements, Camera and Wrap-up

 

We now have a pretty solid base for our path tracer.

I just want to add a way of moving the camera so we can better explore our scene, and also improve our camera model.

Also, there are a few simple things we can add to improve the quality of our renders, so we'll do that too.


 

The code for this post will be on this branch of the github repo.

Camera Controller

We will add a way of moving the camera in the scene with the mouse. We will implement a very simple orbital camera controller.

Here's the core of the camera controller code:

bool orbitCameraController::Update()
{
    if(Locked) return false;
    ImGuiIO &io = ImGui::GetIO();

    bool ShouldRecalculate=false;
    // Shift + left mouse drag: zoom in/out by changing the orbit distance
    if(io.MouseDownDuration[0]>0 && io.KeyShift)
    {
        float Offset = io.MouseDelta.y * 0.001f * this->MouseSpeedWheel * this->Distance;
        this->Distance -= Offset;
        if(Distance < 0.1f) Distance = 0.1f;
        ShouldRecalculate=true;
    }
    // Left mouse drag: orbit around the target by updating the spherical angles
    else if(io.MouseDownDuration[0]>0)
    {
        this->Phi += io.MouseDelta.x * 0.001f * this->MouseSpeedX;
        this->Theta -= io.MouseDelta.y * 0.001f * this->MouseSpeedY;
        ShouldRecalculate=true;
    }
    // Right mouse drag: pan the target in the camera plane
    else if(io.MouseDownDuration[1]>0)
    {
        glm::vec3 Right = glm::column(ModelMatrix, 0);
        glm::vec3 Up = glm::column(ModelMatrix, 1);

        this->Target -= Right * io.MouseDelta.x * 0.01f * this->MouseSpeedX;
        this->Target += Up * io.MouseDelta.y * 0.01f * this->MouseSpeedY;
        ShouldRecalculate=true;
    }

    // Mouse wheel: zoom in/out
    if(io.MouseWheel != 0)
    {
        float Offset = io.MouseWheel * 0.1f * this->MouseSpeedWheel * this->Distance;
        this->Distance -= Offset;
        if(Distance < 0.1f) Distance = 0.1f;
        ShouldRecalculate=true;
    }

    if(ShouldRecalculate)
    {
        Recalculate();
        return true;
    }
    return false;
}
This function will be called every frame. We use the ImGui IO object to get the mouse delta values and the mouse button states.
We keep track of Phi and Theta variables that represent spherical coordinates around the target, and update them based on mouse movement.
We also allow zooming in and out with Shift + left drag or the mouse wheel, and panning the camera with the right mouse button.

Recalculate() then computes the model matrix for the camera that we will use in the path tracing code.
This Update() function returns true if any input changed the camera during this frame, false otherwise.
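For reference, here's a minimal sketch of what Recalculate() could look like, assuming the controller stores Theta, Phi, Distance and Target (the actual code in the repo may differ):

// Sketch only: turn the spherical coordinates around Target into a position,
// then build a look-at view matrix and invert it to get the camera-to-world frame.
// Needs <glm/gtc/matrix_transform.hpp> for glm::lookAt.
void orbitCameraController::Recalculate()
{
    glm::vec3 Position = Target + Distance * glm::vec3(
        sin(Theta) * cos(Phi),
        cos(Theta),
        sin(Theta) * sin(Phi)
    );
    glm::mat4 View = glm::lookAt(Position, Target, glm::vec3(0, 1, 0));
    ModelMatrix = glm::inverse(View);
}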
 
Let's add this controller to our App class and start using it.
In the main loop, we call the Update() function and set the scene camera's frame to the controller's model matrix:
ResetRender |= Controller.Update();
Scene->Cameras[0].Frame = Controller.ModelMatrix;
 
Notice that we're now using a ResetRender boolean variable. This will allow us to restart the render whenever we move the camera.
We also now need to update the camera GPU buffer every frame:
Scene->CamerasBuffer->updateData(Scene->GpuData.CamInx * sizeof(camera), 
                                 Scene->Cameras.data(), Scene->Cameras.size() * sizeof(camera));
 
We'll also reset the CurrentSample count to 0 at the start of every frame if ResetRender is true:
if(ResetRender)
{    
    Params.CurrentSample=0;
}
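Putting it all together, the relevant part of the main loop now looks roughly like this (the ordering is what matters; the surrounding loop and the Trace() call are assumed):

// Per-frame ordering (sketch)
ResetRender = false;
ResetRender |= Controller.Update();                 // true if the camera moved this frame
Scene->Cameras[0].Frame = Controller.ModelMatrix;   // copy the new frame into the scene camera

// Re-upload the camera data to the GPU
Scene->CamerasBuffer->updateData(Scene->GpuData.CamInx * sizeof(camera), 
                                 Scene->Cameras.data(), Scene->Cameras.size() * sizeof(camera));

if(ResetRender)
{
    Params.CurrentSample=0;   // restart the accumulation from scratch
}

Trace();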
 
 And we're done with the camera controller.


Camera Model

Let's also start doing a better job at generating rays. We've been using a very simplified camera model, but now is a good time to make it a bit more complex!


Before, we were just assuming that the camera plane was 1 unit away from the camera; now we use a camera Lens parameter that lets us shrink or grow the field of view.
We also use a Film parameter in the camera to set the size of the film plane, which gets modulated by the camera's aspect ratio.
 
Before, the origin of the ray was always (0,0,0); now we use a random point on the lens aperture to shift this origin, which allows us to create a depth of field effect.

Here's the new GetRay code:
FN_DECL ray GetRay( vec2 ImageUV, vec2 LensUV)
{
    camera Camera = Cameras[0];

    vec2 Film = Camera.Aspect >= 1 ?
               vec2(Camera.Film, Camera.Film / Camera.Aspect):
               vec2(Camera.Film * Camera.Aspect, Camera.Film);
   
    // Point on the film
    vec3 Q = vec3(
        Film.x * (0.5f - ImageUV.x),
        Film.y * (0.5f - ImageUV.y),
        Camera.Lens
    );
    vec3 RayDirection = -normalize(Q);
    vec3 PointOnFocusPlane = RayDirection * Camera.Focus / abs(RayDirection.z);
   
    // Jitter the point on the lens (this is what creates depth of field)
    vec3 PointOnLens = vec3(LensUV.x * Camera.Aperture / 2, LensUV.y * Camera.Aperture / 2, 0);

    vec3 FinalDirection = normalize(PointOnFocusPlane - PointOnLens);

    // Transform the ray origin and direction into world space
    ray Ray = MakeRay(
        TransformPoint(Camera.Frame, PointOnLens),
        TransformDirection(Camera.Frame, FinalDirection),
        vec3(0)
    );
    return Ray;
}
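For context, here's roughly how GetRay could be called from the path tracing kernel. The pixel coordinates (GlobalID), image size (Width, Height) and the use of RandomUnilateral here are illustrative assumptions; the repo may sample the lens differently (e.g. on a disk):

// Hypothetical call site: jitter the pixel for anti-aliasing, sample the lens for depth of field
vec2 ImageUV = (vec2(GlobalID) + vec2(RandomUnilateral(Isect.RandomState), RandomUnilateral(Isect.RandomState))) / vec2(Width, Height);
vec2 LensUV = vec2(RandomUnilateral(Isect.RandomState), RandomUnilateral(Isect.RandomState)) * 2.0f - 1.0f; // in [-1, 1]
ray Ray = GetRay(ImageUV, LensUV);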
 
 Here's a little drawing of the different quantities, with a top-down view of a camera:
 

Visual Improvements

Now, let's just do a few quick things that will improve our visual quality:
 

Russian roulette

This is a very simple trick to exit the ray bounce loop early.
The way it works is the following:
past an arbitrary number of bounces (here 3), we ask whether the current path is still likely to contribute something significant to the final image. If it isn't, we may as well end it early.
To do that, we derive a survival probability from the current Weight value and compare it with a random number.
If the random number is higher than the survival probability, we break out of the loop. This makes sense because when the weight is low, the path is unlikely to contribute anything relevant to the image, and the random number will likely exceed the probability, so we terminate it.
If the path survives, we divide its weight by the survival probability. This boosts the surviving paths to compensate for the terminated ones, which keeps the result unbiased:
 
if(Weight == vec3(0,0,0) || !IsFinite(Weight)) break;

if(Bounce > 3)
{
    // Probability of continuing the path, based on its current weight
    float RussianRouletteProb = min(0.99f, max3(Weight));
    if(RandomUnilateral(Isect.RandomState) >= RussianRouletteProb) break;
    // Compensate the surviving path for the terminated ones
    Weight *= 1.0f / RussianRouletteProb;
}
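The snippet above uses two small helpers, max3 and IsFinite, that are defined elsewhere in the codebase. Plausible definitions (shown only so the snippet reads on its own; the actual ones may differ) would be:

FN_DECL float max3(vec3 V) { return max(V.x, max(V.y, V.z)); }
FN_DECL bool IsFinite(float X) { return !isnan(X) && !isinf(X); }
FN_DECL bool IsFinite(vec3 V) { return IsFinite(V.x) && IsFinite(V.y) && IsFinite(V.z); }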
 
 

NaN management

If we're not careful, NaN values can creep into the image and corrupt pixels (they usually show up as black). This can especially happen when the PDF of a sample is 0, which causes a division by zero.
To prevent that, we can just add a little check and reset the radiance value if it's not finite once we've finished tracing:
if(!IsFinite(Radiance)) Radiance = vec3(0,0,0);
 

Radiance Clamping

We also want to clamp radiance values that are too large, to prevent fireflies (isolated, overly bright pixels caused by rare high-energy samples):
 
if(max3(Radiance) > 10) Radiance = Radiance * (10 / max3(Radiance));
 

Tonemapping

At the moment, we're just outputting the raw colour buffer from the path tracing kernel straight to the screen.
What we can do is apply some image processing to improve the colours of the image, and also to fit them into standard 8-bit RGB values.

We will be doing that in another kernel after the path tracing one is finished.
 
First, let's add the new shader, texture and buffer members to App.h:
#if API==API_GL
    std::shared_ptr<shaderGL> PathTracingShader;
    std::shared_ptr<shaderGL> TonemapShader;

    std::shared_ptr<textureGL> RenderTexture;
    std::shared_ptr<textureGL> TonemapTexture;
    std::shared_ptr<uniformBufferGL> TracingParamsBuffer;
    std::shared_ptr<bufferGL> MaterialBuffer;
#elif API==API_CU
    std::shared_ptr<bufferCu> TracingParamsBuffer;
    std::shared_ptr<bufferCu> RenderBuffer;
    std::shared_ptr<bufferCu> TonemapBuffer;    
    std::shared_ptr<textureGL> RenderTexture;
    std::shared_ptr<cudaTextureMapping> RenderTextureMapping;
    std::shared_ptr<bufferCu> MaterialBuffer;

#endif
 
Next, we create those new objects in the InitGpuObjects function:
#if API==API_GL
    PathTracingShader = std::make_shared<shaderGL>("resources/shaders/PathTrace.glsl");
    TonemapShader = std::make_shared<shaderGL>("resources/shaders/Tonemap.glsl");
    RenderTexture = std::make_shared<textureGL>(Window->Width, Window->Height, 4);
    TonemapTexture = std::make_shared<textureGL>(Window->Width, Window->Height, 4);    
    TracingParamsBuffer = std::make_shared<uniformBufferGL>(sizeof(tracingParameters), &Params);
    MaterialBuffer = std::make_shared<bufferGL>(sizeof(material) * Scene->Materials.size(), Scene->Materials.data());
#elif API==API_CU
    RenderTexture = std::make_shared<textureGL>(Window->Width, Window->Height, 4);
    RenderBuffer = std::make_shared<bufferCu>(Window->Width * Window->Height * 4 * sizeof(float));
    TonemapBuffer = std::make_shared<bufferCu>(Window->Width * Window->Height * 4 * sizeof(float));
    RenderTextureMapping = CreateMapping(RenderTexture);    
    TracingParamsBuffer = std::make_shared<bufferCu>(sizeof(tracingParameters), &Params);
    MaterialBuffer = std::make_shared<bufferCu>(sizeof(material) * Scene->Materials.size(), Scene->Materials.data());
#endif
 
Then, at the end of the Trace() function, we run the tonemapping pass:
#if API==API_GL
    TonemapShader->Use();
    TonemapShader->SetTexture(0, RenderTexture->TextureID, GL_READ_WRITE);
    TonemapShader->SetTexture(1, TonemapTexture->TextureID, GL_READ_WRITE);
    TonemapShader->Dispatch(Window->Width / 16 + 1, Window->Height / 16 + 1, 1);
#elif API==API_CU
    dim3 blockSize(16, 16);
    dim3 gridSize((Window->Width / blockSize.x)+1, (Window->Height / blockSize.y) + 1);
    TonemapKernel<<<gridSize, blockSize>>>((glm::vec4*)RenderBuffer->Data, (glm::vec4*)TonemapBuffer->Data, Window->Width, Window->Height);
    cudaMemcpyToArray(RenderTextureMapping->CudaTextureArray, 0, 0, TonemapBuffer->Data, Window->Width * Window->Height * sizeof(glm::vec4), cudaMemcpyDeviceToDevice);
#endif
 
Here we'll be using separate code paths for CUDA and OpenGL because I'm lazy, and this code is not going to change much, if at all, in the future, so it's not too much of a problem.
 
I'll only show the GLSL code, but the CUDA version is essentially the same:
 
float ToSRGB(float Col) {
  return (Col <= 0.0031308f) ? 12.92f * Col
                             : (1 + 0.055f) * pow(Col, 1 / 2.4f) - 0.055f;
}

vec3 ToSRGB(vec3 Col)
{
    return vec3(
        ToSRGB(Col.x),
        ToSRGB(Col.y),
        ToSRGB(Col.z)
    );
}

void main() {
    uvec2 GlobalID = gl_GlobalInvocationID.xy;
    vec3 Col = ToSRGB(imageLoad(inputImage, ivec2(GlobalID)).xyz);
    imageStore(outputImage, ivec2(GlobalID), vec4(Col, 1));
}
 
We simply load the colour from the input image, pass it through the ToSRGB() function, and store the result in the output image.
ToSRGB converts the colour from the linear colour space we use for all the path tracing calculations to the sRGB colour space.
The sRGB colour space is the standard colour space used by most imaging applications and devices like displays and cameras.
Its non-linear transfer curve roughly matches the response of old CRT screens, and it spreads the limited precision of 8-bit values more evenly across perceived brightness, so dark tones aren't crushed when we quantize our linear values. You can read more about it here.
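For completeness, here's a sketch of what the CUDA TonemapKernel launched at the end of Trace() could look like. The body is an assumption based on the GLSL shader above; the only addition is a bounds check, since we launch one extra block in each dimension:

__device__ float ToSRGB(float Col) {
    return (Col <= 0.0031308f) ? 12.92f * Col
                               : (1 + 0.055f) * powf(Col, 1 / 2.4f) - 0.055f;
}

__global__ void TonemapKernel(glm::vec4 *Input, glm::vec4 *Output, int Width, int Height)
{
    int X = blockIdx.x * blockDim.x + threadIdx.x;
    int Y = blockIdx.y * blockDim.y + threadIdx.y;
    if(X >= Width || Y >= Height) return; // guard the extra threads at the edges

    glm::vec4 Col = Input[Y * Width + X];
    Output[Y * Width + X] = glm::vec4(ToSRGB(Col.x), ToSRGB(Col.y), ToSRGB(Col.z), 1.0f);
}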
 

Conclusion

And this is the end result:
 
 
It's really starting to look good! We've come a long way from the start of this series, but we still have many exciting things to add.
 
This Cornell Box is cool, but I really want to see something else now. So next, we will be importing meshes from files and rendering them in our path tracer.
