Simple GPU Path Tracing, Part 9 : Environment Lighting

 

So far, we've only been using shapes as light emitters, but having an environment that lights our scene can greatly improve renders.


 

Here are the commits for this post

The way it works is that when a ray misses the scene, i.e. it doesn't hit anything, it will hit the environment instead, which may or may not return some radiance depending on its direction.


The simplest way of doing that is to just add some radiance whenever the ray misses : 


                IntersectTLAS(Ray, Isect);
                if(Isect.Distance == 1e30f)
                {
                    Radiance += vec3(1,1,1);
                    break;
                }

Doing that means that the environment is composed of a uniform light of value 1,1,1 from all directions : 

 

That's cool, but not super realistic. What would be nice is being able to use a texture as an environment, like an equirectangular HDR texture, something like that : 

(Image : an equirectangular HDR environment texture of a nature landscape)

That way, when the ray misses the scene and hits the ground part of the texture, not much radiance comes in, but when it hits the sun, lots of radiance is emitted.

So that's what we'll be doing in this post, step by step as always.


Code Setup

So let's first do some basic setup for environments : 
we'll create a struct for storing environment information, and add a vector of that struct into our scene struct : 

struct environment
{
    // The padding fields keep the members 16-byte aligned so the CPU layout
    // matches the layout of the GPU buffer.
    glm::vec3 Emission;
    float pad0;

    glm::ivec3 pad1;
    int EmissionTexture = InvalidID;
};

we can then add an environment into our CreateCornellBox() function :
    Scene->Environments.emplace_back();
    Scene->EnvironmentNames.push_back("Sky");
    environment &Sky = Scene->Environments.back();
    Sky.Emission = {2,2,2};


and then we'll add an environment gpu buffer to the scene struct : 

#if API==API_GL
    std::shared_ptr<bufferGL> CamerasBuffer;
    std::shared_ptr<bufferGL> EnvironmentsBuffer;
    std::shared_ptr<textureArrayGL> TexArray;
#elif API==API_CU
    std::shared_ptr<bufferCu> EnvironmentsBuffer;
    std::shared_ptr<bufferCu> CamerasBuffer;
    std::shared_ptr<textureArrayCu> TexArray;
#endif

and then, we'll create this buffer after filling our scene :


#if API==API_GL
    Scene->CamerasBuffer = std::make_shared<bufferGL>(Scene->Cameras.size() 
                 * sizeof(camera), Scene->Cameras.data());
    Scene->EnvironmentsBuffer = std::make_shared<bufferGL>(Scene->Environments.size()  
                 * sizeof(environment), Scene->Environments.data());
#elif API==API_CU
    Scene->CamerasBuffer = std::make_shared<bufferCu>(Scene->Cameras.size()  
                 * sizeof(camera), Scene->Cameras.data());
    Scene->EnvironmentsBuffer = std::make_shared<bufferCu>(Scene->Environments.size()  
                 * sizeof(environment), Scene->Environments.data());
#endif    

Cool, so now we have environment data that we can access from our gpu path tracer. 
We also need to set the environments count, and for that we'll just use a uniform variable for OpenGL, and an int argument for CUDA.
Let's pass those buffers into the kernels :
        PathTracingShader->SetSSBO(Scene->EnvironmentsBuffer, 11);
        PathTracingShader->SetInt("EnvironmentsCount", Scene->Environments.size()); 

And for CUDA, as usual, we pass the buffer as an argument of the kernel, and we declare the environments in the global scope as well.
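
Here's a hedged sketch of what that can look like on the CUDA side, following the same pattern as the other scene buffers (the actual kernel signature and names in the project differ, this is just to illustrate the idea) :

// Illustrative sketch only : mirror the environments buffer into device
// globals so the shared path tracing code can access them, like we do for
// the other scene buffers. Names are placeholders.
__device__ environment *Environments;
__device__ int EnvironmentsCount;

__global__ void PathTraceKernel(environment *EnvironmentsBufferData, int EnvCount
                                /*, ...other scene buffers... */)
{
    Environments = EnvironmentsBufferData;
    EnvironmentsCount = EnvCount;
    // ... rest of the path tracing kernel
}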


Let's add some functions in our PathTraceCode to evaluate an environment for a given direction : 
FN_DECL vec3 EvalEnvironment(INOUT(environment) Env, vec3 Direction)
{
    return Env.Emission;
}

FN_DECL vec3 EvalEnvironment(vec3 Direction)
{
    vec3 Emission = vec3(0,0,0);
    for(int i=0; i< EnvironmentsCount; i++)
    {
        Emission += EvalEnvironment(Environments[i], Direction);
    }
    return Emission;
}

And when the ray misses, we call the EvalEnvironment function : 

                IntersectTLAS(Ray, Isect);
                if(Isect.Distance == 1e30f)
                {
                    Radiance += Weight * EvalEnvironment(Ray.Direction);
                    // The ray left the scene, so we can stop the path here.
                    break;
                }

So we now have the same result as before, except the environment data is now coming from gpu buffers.


Textured environments

Now that we have this basic setup, let's add textures to our environments !
The environment textures we'll be using are HDR textures, meaning their pixel values don't range from 0 to 255 but are stored as floats. This allows pixel values above 1, which gives much more accurate radiance values, especially for very bright features like the sun.

So the first thing we need to do is change our texture struct to also store float values, by adding a vector of floats as a member : 
    std::vector<float> PixelsF = {};
 
and inside of the scene struct, we'll add an array of environment textures that we can then reference from our environment struct : 
std::vector<texture> EnvTextures = {};
 
Now, we can add an environment texture to the scene like that : 
    Scene->EnvTextures.emplace_back();
    texture &SkyTex = Scene->EnvTextures.back();
    SkyTex.SetFromFile("resources/textures/Sky.hdr", Scene->EnvTextureWidth, Scene->EnvTextureHeight);
    Scene->EnvTextureNames.push_back("Sky");    
 
and use it in an environment : 
    Scene->Environments.emplace_back();
    Scene->EnvironmentNames.push_back("Sky");
    environment &Sky = Scene->Environments.back();
    Sky.Emission = {2,2,2};
    Sky.EmissionTexture = 0;
 
 
But here, SetFromFile has changed a little : 
void texture::SetFromFile(const std::string &FileName, int Width, int Height)
{
    if(IsHDR(FileName))
    {
        int NumChannels=4;
        ImageFromFile(FileName, this->PixelsF, Width, Height, NumChannels);
        this->NumChannels = this->PixelsF.size() / (Width * Height);
        this->Width = Width;
        this->Height = Height;
    }
    else
    {
        int NumChannels=4;
        ImageFromFile(FileName, this->Pixels, Width, Height, NumChannels);
        this->NumChannels = this->Pixels.size() / (Width * Height);
        this->Width = Width;
        this->Height = Height;
    }
}
 We first check whether the file is an HDR file, and call ImageFromFile with PixelsF or Pixels accordingly.
The ImageFromFile function for HDR images is exactly the same as the uint8 one, except it calls stbi_loadf instead of stbi_load.
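
For reference, here's a minimal sketch of what the float overload could look like, assuming stb_image is used like for the LDR path (the real function presumably also resizes to the requested Width/Height, which I'm skipping here) :

// Hedged sketch only : load the HDR file with stbi_loadf and copy the float
// pixels into the output vector. The real ImageFromFile presumably also
// resizes the image to the requested Width/Height.
void ImageFromFile(const std::string &FileName, std::vector<float> &Pixels,
                   int Width, int Height, int NumChannels)
{
    int SrcWidth, SrcHeight, SrcChannels;
    float *Data = stbi_loadf(FileName.c_str(), &SrcWidth, &SrcHeight,
                             &SrcChannels, NumChannels);
    if(Data == nullptr) return;
    Pixels.assign(Data, Data + (size_t)SrcWidth * SrcHeight * NumChannels);
    stbi_image_free(Data);
}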

So now we can load HDR files, great !
We also need to upload them to the gpu so they're available from the kernels. To do that, we will use another texture array for environment textures : 
#if API==API_GL
    std::shared_ptr<bufferGL> CamerasBuffer;
    std::shared_ptr<bufferGL> EnvironmentsBuffer;
    std::shared_ptr<textureArrayGL> TexArray;
    std::shared_ptr<textureArrayGL> EnvTexArray;
#elif API==API_CU
    std::shared_ptr<bufferCu> EnvironmentsBuffer;
    std::shared_ptr<bufferCu> CamerasBuffer;
    std::shared_ptr<textureArrayCu> TexArray;
    std::shared_ptr<textureArrayCu> EnvTexArray;
#endif    
 
and in the ReloadTextureArray() function, we create this array and fill it with the data :
    EnvTexArray->CreateTextureArray(EnvTextureWidth, EnvTextureHeight, EnvTextures.size(), true);
    for (size_t i = 0; i < EnvTextures.size(); i++)
    {
        EnvTexArray->LoadTextureLayer(i, EnvTextures[i].PixelsF, EnvTextureWidth, EnvTextureHeight);
    }
 
Note that we've added an IsFloat argument to CreateTextureArray. If true, it creates a texture with float pixel values, otherwise uint8.
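To give a rough idea of what that flag changes on the OpenGL side, here's a hedged sketch (not the project's exact code, member names are illustrative) :

// Hedged sketch : IsFloat mainly selects the internal format and pixel type
// of the texture array. TextureID is an illustrative member name.
void textureArrayGL::CreateTextureArray(int Width, int Height, int Layers, bool IsFloat)
{
    glGenTextures(1, &TextureID);
    glBindTexture(GL_TEXTURE_2D_ARRAY, TextureID);
    glTexImage3D(GL_TEXTURE_2D_ARRAY, 0,
                 IsFloat ? GL_RGBA32F : GL_RGBA8,
                 Width, Height, Layers, 0,
                 GL_RGBA, IsFloat ? GL_FLOAT : GL_UNSIGNED_BYTE, nullptr);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MIN_FILTER, GL_LINEAR);
    glTexParameteri(GL_TEXTURE_2D_ARRAY, GL_TEXTURE_MAG_FILTER, GL_LINEAR);
}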
 
So now that our environment textures are created, we also need to pass them to the kernels, as usual : 
        PathTracingShader->SetTextureArray(Scene->EnvTexArray, 14, "EnvTextures");
 
 Now, in our pathTraceCode, we need to sample the texture in EvalEnvironment. 
FN_DECL vec3 EvalEnvironment(INOUT(environment) Env, vec3 Direction)
{
    vec2 TexCoord = vec2(
        atan(Direction.x, Direction.z) / (2 * PI_F),
        acos(clamp(Direction.y, -1.0f, 1.0f)) / PI_F
    );
    if(TexCoord.x < 0) TexCoord.x += 1.0f;

    return Env.Emission * vec3(EvalEnvTexture(Env.EmissionTexture, TexCoord, false));
   
}
 
To do that, we map the direction vector to texture coordinates. 
What we're essentially doing is converting the direction from cartesian to spherical coordinates, getting the phi and theta angles with these formulas (see this link for more explanations) :

$$\phi = \operatorname{atan2}(x, z) \qquad\qquad \theta = \arccos\left(\frac{y}{r}\right)$$

where r is 1 because the vector is normalized, and y and z swap roles compared to the usual z-up convention because we're using y-up, z-forward coordinates. Dividing phi by 2*PI and theta by PI then gives texture coordinates in the [0, 1] range.
 
And here is the result : 

 

Sampling environments

Now that we have environments emitting light, we can also include them in our importance sampling algorithm. Remember that we keep track of all emitting objects in the scene, and sample them when generating new ray directions. Well, environments are no exception, and we will apply the same process to them.
 
First, we'll make some changes to how we store lights and how we send them to the gpu. We were storing the CDF in the light struct as a fixed-size array, which was okay because we were only using small shapes as emitters. 
But for emissive environment maps, the CDF has as many entries as the environment texture has pixels, and since fixed-size arrays live inline in the struct (and often end up on the stack), this would cause stack overflows.
So we will change the light structs as follows : 
struct light
{
    int Instance = -1;
    int CDFCount = 0;
    int Environment = -1;
    int CDFStart = 0;
};

struct lights
{
    std::vector<light> Lights;
    std::vector<float> LightsCDF;
};
 
The lights struct contains an array of lights, and we will store all the CDFs of all lights in the scene in a big float buffer. 
We then have 2 gpu buffers, one for the lights, and one for the CDFs : 
std::shared_ptr<bufferGL> LightsBuffer;
std::shared_ptr<bufferGL> LightsCDFBuffer;   
 
LightsBuffer = std::make_shared<bufferGL>(sizeof(light) * Lights.Lights.size(), Lights.Lights.data());
LightsCDFBuffer = std::make_shared<bufferGL>(sizeof(float) * Lights.LightsCDF.size(), Lights.LightsCDF.data());
 
We also have to change the way we pass the lights to the gpu, but I'm not going to show the full code as it's quite straightforward : we just pass the 2 buffers instead of the single one we had before.
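Just to give an idea, on the OpenGL side it ends up looking something like this (the binding slots here are placeholders, not necessarily the ones the project uses) :

        // Illustrative : bind both lights buffers, like the other SSBOs. Slots are placeholders.
        PathTracingShader->SetSSBO(LightsBuffer, 12);
        PathTracingShader->SetSSBO(LightsCDFBuffer, 13);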
 
We then change our GetLights() function to take this into account : 
 
    for(size_t i=0; i<Scene->Environments.size(); i++)
    {
        const environment &Environment = Scene->Environments[i];
        if(Environment.Emission == glm::vec3{0,0,0}) continue;

        light &Light = AddLight(Lights);
        Light.Instance = InvalidID;
        Light.Environment = (int)i;
        if(Environment.EmissionTexture != InvalidID)
        {
            texture& Texture = Scene->EnvTextures[Environment.EmissionTexture];
            Light.CDFCount = Texture.Width * Texture.Height;
            Light.CDFStart = (int)Lights.LightsCDF.size();
            Lights.LightsCDF.resize(Lights.LightsCDF.size() + Light.CDFCount);

            for (int j=0; j<Light.CDFCount; j++) {
                glm::ivec2 IJ(j % Texture.Width, j / Texture.Width);
                float Theta = (IJ.y + 0.5f) * PI_F / Texture.Height;
                glm::vec4 Value = Texture.SampleF(IJ);
                Lights.LightsCDF[Light.CDFStart + j] = MaxElem(Value) * sin(Theta);
                if (j != 0) Lights.LightsCDF[Light.CDFStart + j] += Lights.LightsCDF[Light.CDFStart + j - 1];
            }
        }
    }  
 
Remember, the CDF is the accumulation of the probabilities of each event.
So what we will do is loop through all the pixels, calculate the PDF of that pixel being sampled, and accumulate.
We want the pixels that have high intensities to be more likely to be sampled, so their pdf will be higher, which is why we take the maximum element of the pixel as the base for our pdf calculation.
We then multiply by the sine of the polar angle theta. 
The reason we have to do this multiplication is how the equirectangular map wraps onto the sphere, and how the solid angle covered by each texel changes over the sphere.
The solid angle near the zenith (the top of the sphere) is smaller than the solid angle at the horizon (as theta approaches 90 degrees) : 
 
So if we sample theta with uniformly distributed random numbers, the resulting directions will be packed more densely towards the zenith, because the solid angle there is smaller.

A more intuitive explanation is to consider this image, and imagine we put one sample inside each square : 
 
When we then map that equirectangular image to spherical coordinates, we get this projection :
 
And it becomes quite clear that all the samples on the top row would be concentrated on a very small area.

If we consider an environment map with constant values, this would lead to more samples being generated towards the zenith. To correct this distortion, we multiply by sin(theta), which gives more weight to directions towards the horizon and produces a distribution that is uniform over the sphere rather than over the texture.
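
To put it in formulas (just a recap of the standard result, with W and H the texture width and height), the solid angle element on the unit sphere, and the approximate solid angle covered by one texel of the equirectangular map, are :

$$d\omega = \sin\theta \, d\theta \, d\phi \qquad\qquad \Delta\omega_{texel} \approx \frac{2\pi}{W} \cdot \frac{\pi}{H} \cdot \sin\theta$$

Texels near the poles cover less solid angle, which is exactly what the sin(theta) weight in the CDF compensates for.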
You can read more about this here.
 
 
Ok so now we're ready to sample our environment in our path tracing code.
The 2 functions we have to edit are SampleLights and SampleLightsPDF.
To SampleLights, we'll just add this bit of code : 
    else if(Lights[LightID].Environment != INVALID_ID)
    {
        environment Env = Environments[Lights[LightID].Environment];
        if (Env.EmissionTexture != INVALID_ID) {
            int SampleInx = SampleDiscrete(LightID, RandEl);
            vec2 UV = vec2(((SampleInx % EnvTexturesWidth) + 0.5f) / EnvTexturesWidth,
                ((SampleInx / EnvTexturesWidth) + 0.5f) / EnvTexturesHeight);
           
            return TransformDirection(Env.Transform, vec3(cos(UV.x * 2 * PI_F) * sin(UV.y * PI_F),
                        cos(UV.y * PI_F),
                        sin(UV.x * 2 * PI_F) * sin(UV.y * PI_F)));
        } else {
            return SampleSphere(RandUV);
        }      
    }

Here, we sample a pixel from the environment map using the SampleDiscrete function that we were already using for sampling triangles on an emissive shape.
We then transform that pixel index into normalized coordinates between 0 and 1, which correspond to the spherical angles phi and theta (for the x and y pixel positions). From those we get a cartesian direction using the usual spherical to cartesian conversion.
That cartesian vector is the direction towards the sampled pixel of the environment map. We can then transform it with the environment's Transform matrix, which effectively lets us rotate the environment map. (Note that this means the environment struct also carries a Transform matrix, which I didn't show earlier.)

I also just wanted to mention that the SampleDiscrete function has changed as well : 

FN_DECL int SampleDiscrete(int LightInx, float R)
{
    //Remap R from 0 to the size of the distribution
    int CDFStart = Lights[LightInx].CDFStart;
    int CDFCount = Lights[LightInx].CDFCount;

    float LastValue = LightsCDF[CDFStart + CDFCount-1];

    R = clamp(R * LastValue, 0.0f, LastValue - 0.00001f);
    // Returns the index of the first CDF entry that's greater than R.
    int Inx= UpperBound(CDFStart, CDFCount, R);
    return clamp(Inx, 0, CDFCount-1);
}

We now read the CDF values from the big CDF buffer, using CDFStart and CDFCount to index into it.
We also now call the UpperBound() function, which searches for the index of the first value in the buffer that's greater than the given value. Before, we were iterating through the buffer sequentially until the searched value was found, but now we do it with a binary search, which is much faster. We can do that because we know CDF buffers are always increasing.
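
For completeness, here's a minimal sketch of what such a binary search over the CDF slice can look like (the actual UpperBound in the code may differ slightly) :

// Hedged sketch of a binary search over the CDF slice [CDFStart, CDFStart + CDFCount).
// Returns the index (relative to CDFStart) of the first entry greater than Value.
FN_DECL int UpperBound(int CDFStart, int CDFCount, float Value)
{
    int Low = 0;
    int High = CDFCount;
    while(Low < High)
    {
        int Mid = (Low + High) / 2;
        if(LightsCDF[CDFStart + Mid] > Value)
            High = Mid;
        else
            Low = Mid + 1;
    }
    return Low;
}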

Ok, now on to SampleLightsPDF, to which we added this part : 
        else if(Lights[i].Environment != INVALID_ID)
        {
            environment Env = Environments[Lights[i].Environment];
            if (Env.EmissionTexture != INVALID_ID) {
                vec3 WorldDir = TransformDirection(inverse(Env.Transform), Direction);

                vec2 TexCoord = vec2(atan2(WorldDir.z, WorldDir.x) / (2 * PI_F),
                                     acos(clamp(WorldDir.y, -1.0f, 1.0f)) / PI_F);
                if (TexCoord.x < 0) TexCoord.x += 1;
               
                int u = clamp(
                    (int)(TexCoord.x * EnvTexturesWidth), 0, EnvTexturesWidth - 1);
                int v    = clamp((int)(TexCoord.y * EnvTexturesHeight), 0,
                    EnvTexturesHeight - 1);
                float Probability = SampleDiscretePDF(
                                Lights[i].CDFStart, Lights[i].CDFCount, v * EnvTexturesWidth + u) /
                            LightsCDF[Lights[i].CDFStart + Lights[i].CDFCount -1];
                float Angle = (2 * PI_F / EnvTexturesWidth) *
                            (PI_F / EnvTexturesHeight) *
                            sin(PI_F * (v + 0.5f) / EnvTexturesHeight);
                PDF += Probability / Angle;
            } else {
                PDF += 1 / (4 * PI_F);
            }            
        }
 We transform the direction to UV coordinates on the texture by going back from cartesian to spherical coordinates. Note that this mapping has to match the one used when sampling, otherwise the PDF won't correspond to the directions we actually generate.
Then, we get the discrete probability of that texel with SampleDiscretePDF (which I'll discuss below), normalized by the last value of the CDF.
Finally, we divide that probability by the solid angle covered by the texel, (2*PI/Width) * (PI/Height) * sin(theta). This converts the per-texel probability into a probability density over directions; the sin(theta) is the same factor we baked into the CDF earlier.
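
Written out (again, just a recap of what the code above computes), the PDF over directions is the discrete texel probability divided by the solid angle covered by that texel :

$$p(\omega) = \frac{P_{texel}}{\Delta\omega_{texel}} \qquad\qquad \Delta\omega_{texel} = \frac{2\pi}{W} \cdot \frac{\pi}{H} \cdot \sin\Big(\pi \, \frac{v + 0.5}{H}\Big)$$

which is exactly the Probability / Angle computation in the code.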
 
To get that per-texel value, we call SampleDiscretePDF, which returns the individual (unnormalized) value for a given index : 

FN_DECL float SampleDiscretePDF(int CDFStart, int CDFCount, int Inx) {
  // The CDF stores accumulated weights, so the weight of a single entry is
  // the difference with the previous entry.
  if (Inx == 0) return LightsCDF[CDFStart];
  return LightsCDF[CDFStart + Inx] - LightsCDF[CDFStart + Inx - 1];
}

We do that because, as you know by now, the CDF is the accumulation of PDF values, so subtracting the previous value from the current one gives back an individual PDF value. For example, if the unnormalized weights are {2, 1, 3}, the CDF is {2, 3, 6}, and taking the differences recovers 2, 1 and 3.

Annnd we're done here !
Here's a little capture of the result : 
