- Simple GPU Path Tracing : Introduction
- Simple GPU Path Tracing, Part. 1 : Project Setup
- Simple GPU Path Tracing, Part. 1.1 : Adding a cuda backend to the project
- Simple GPU Path Tracing, Part. 2.0 : Scene Representation
- Simple GPU Path Tracing, Part. 2.1 : Acceleration structure
- Simple GPU Path Tracing, Part. 3.0 : Path Tracing Basics
- Simple GPU Path Tracing, Part. 3.1 : Matte Material
- Simple GPU Path Tracing, Part. 3.2 : Physically Based Material
- Simple GPU Path Tracing, Part. 3.4 : Small Improvements, Camera and wrap up
- Simple GPU Path Tracing, Part. 4.0 : Mesh Loading
- Simple GPU Path Tracing, Part. 4.1 : Textures
- Simple GPU Path Tracing, Part. 4.2 : Normal Mapping & GLTF Textures
- Simple GPU Path Tracing, Part. 5.0 : Sampling lights
- Simple GPU Path Tracing, Part 6 : GUI
- Simple GPU Path Tracing, Part 7.0 : Transparency
- Simple GPU Path Tracing, Part 7.1 : Volumetric materials
- Simple GPU Path Tracing, Part 7.2 : Refractive material
- Simple GPU Path Tracing, Part 8 : Denoising
- Simple GPU Path Tracing, Part 9 : Environment Lighting
- Simple GPU Path Tracing, Part 10 : Little Optimizations
- Simple GPU Path Tracing, Part 11 : Multiple Importance Sampling
In the previous post, we saw how to use textures in the path tracer.
Here are the two commits for this post.
Normal Mapping
So now, we will be implementing the normal mapping technique to get better surface detail on our shapes. I won't go into too much detail on the technique itself, but you can read about it in this great tutorial.
We already have a NormalTexture field in the material, so we can set it for the floor :
FloorMaterial.NormalTexture = 1;
In normal mapping, we need a way of calculating the tangent space of the surface we're shading. That's because the normals stored in a normal map point roughly towards +z (hence the blue look of normal maps), but the shaded surface can face any direction, so we have to remap the normals from the map onto the surface's own frame, and we do that with a tangent space transform.
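Concretely, if T, B and N are the world-space tangent, bitangent and normal at the hit point, a tangent-space normal n is brought into world space by using them as a basis. Here's a minimal sketch of that mapping (TangentToWorld is a hypothetical helper, the same math as the TBN multiply we'll write below) :
FN_DECL vec3 TangentToWorld(vec3 T, vec3 B, vec3 N, vec3 n)
{
    // Each component of the tangent-space normal scales one basis vector.
    // A map normal of (0, 0, 1) comes out as the geometric normal N.
    return normalize(n.x * T + n.y * B + n.z * N);
}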
So we'll add Tangent and Bitangent fields to the sceneIntersection struct :
vec3 Tangent;
vec3 Bitangent;
Then we can calculate those fields when we hit a point in the main loop :
vec3 OutgoingDir = -Ray.Direction;
vec3 Normal = EvalShadingNormal(OutgoingDir, Isect);
vec3 Position = TransformPoint(Isect.InstanceTransform,
                               Tri.v1 * Isect.U + Tri.v2 * Isect.V + Tri.v0 * (1 - Isect.U - Isect.V));
Isect.Normal = Normal;

vec4 Tangent = ExtraData.Tangent1 * Isect.U + ExtraData.Tangent2 * Isect.V +
               ExtraData.Tangent0 * (1 - Isect.U - Isect.V);
Isect.Tangent = TransformDirection(NormalTransform, vec3(Tangent));
Isect.Bitangent = TransformDirection(NormalTransform,
                                     normalize(cross(Isect.Normal, vec3(Tangent)) * Tangent.w));
We already have the tangent from the model, and we compute the bitangent by taking the cross product of the normal and the tangent, scaled by the tangent's w component, which stores the handedness of the basis.
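This is the standard glTF convention for reconstructing bitangents. As a small restatement of the code above (ComputeBitangent is a hypothetical helper, not a function from the repo) :
FN_DECL vec3 ComputeBitangent(vec3 Normal, vec4 Tangent)
{
    // glTF stores tangents as vec4 : w is +1 or -1 and flips the
    // bitangent so that meshes with mirrored UVs still shade correctly.
    return normalize(cross(Normal, vec3(Tangent)) * Tangent.w);
}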
Now we have to use those fields in the path tracing code. At the moment, this is how we get the normal of the hit surface in the path tracing loop :
vec3 Normal = TransformDirection(NormalTransform, ExtraData.Normal1 * Isect.U +
ExtraData.Normal2 * Isect.V +
ExtraData.Normal0 * (1 - Isect.U - Isect.V));
We will change this line to this :
vec3 Normal = EvalShadingNormal(OutgoingDir, Isect);
And here's the EvalShadingNormal function :
FN_DECL vec3 EvalShadingNormal(INOUT(vec3) OutgoingDir, INOUT(sceneIntersection) Isect)
{
    vec3 Normal = EvalNormalMap(Isect.Normal, Isect);
    return dot(Normal, OutgoingDir) >= 0 ? Normal : -Normal;
}
This flips the interpolated normal so that it always faces the outgoing ray direction, and calls EvalNormalMap, which looks like this :
FN_DECL vec3 EvalNormalMap(vec3 Normal, INOUT(sceneIntersection) Isect)
{
    vec2 UV = EvalTexCoord(Isect);
    if(Materials[Isect.MaterialIndex].NormalTexture != INVALID_ID)
    {
        // Remap the sampled normal from [0, 1] to [-1, 1]
        vec3 NormalTex = vec3(2) * vec3(EvalTexture(Materials[Isect.MaterialIndex].NormalTexture, UV, false)) - vec3(1);
        // Bring it from tangent space to world space
        mat3 TBN = GetTBN(Isect, Normal);
        Normal = TBN * normalize(NormalTex);
        return Normal;
    }
    return Normal;
}
Here, we evaluate the texture, remap the sampled normal from [0, 1] to [-1, 1], and transform it to a world space normal using the tangent space matrix TBN. For example, a "flat" normal map texel (0.5, 0.5, 1.0) remaps to (0, 0, 1), which the TBN matrix maps back onto the geometric normal.
GetTBN() simply builds a 3x3 matrix whose columns are the tangent, bitangent and normal :
FN_DECL mat3 GetTBN(INOUT(sceneIntersection) Isect, vec3 Normal)
{
    return mat3(Isect.Tangent, Isect.Bitangent, Normal);
}
And there we go, we have a normal-mapped floor :
GLTF Textures
To wrap up texturing, we just need to load the textures from glTF models, and we'll be done.
Let's add a LoadTextures() function in GLTFLoader :
void LoadTextures(tinygltf::Model &GLTFModel, std::shared_ptr<scene> Scene)
{
    std::vector<texture> &Textures = Scene->Textures;
    std::vector<std::string> &TextureNames = Scene->TextureNames;

    uint32_t BaseIndex = Textures.size();
    Textures.resize(Textures.size() + GLTFModel.textures.size());
    TextureNames.resize(TextureNames.size() + GLTFModel.textures.size());

    for (size_t i = 0; i < GLTFModel.textures.size(); i++)
    {
        tinygltf::Texture &GLTFTex = GLTFModel.textures[i];
        // Take a reference rather than a copy : images can be large
        tinygltf::Image &GLTFImage = GLTFModel.images[GLTFTex.source];

        // Fall back to the image URI when the texture has no name
        std::string TexName = GLTFTex.name;
        if(GLTFTex.name.empty())
        {
            TexName = GLTFImage.uri;
        }
        TextureNames[BaseIndex + i] = TexName;

        // We only handle 8-bit RGBA images for now
        assert(GLTFImage.component == 4);
        assert(GLTFImage.bits == 8);

        texture &Texture = Textures[BaseIndex + i];
        Texture.NumChannels = GLTFImage.component;
        Texture.Width = Scene->TextureWidth;
        Texture.Height = Scene->TextureHeight;

        // If the image is not the scene texture size, resize it
        if(Scene->TextureWidth != GLTFImage.width || Scene->TextureHeight != GLTFImage.height)
        {
            // Resize the image using stbir_resize_uint8 (part of stb_image_resize.h)
            stbi_uc *ResizedImage = new stbi_uc[Scene->TextureWidth * Scene->TextureHeight * 4]; // RGBA format
            int Result = stbir_resize_uint8(GLTFImage.image.data(), GLTFImage.width, GLTFImage.height, 0,
                                            ResizedImage, Scene->TextureWidth, Scene->TextureHeight, 0, 4);
            if(!Result)
            {
                std::cout << "Failed to resize GLTF image" << std::endl;
                delete[] ResizedImage;
                return;
            }
            // Allocate the pixel storage and copy the resized data into it
            Texture.Pixels.resize(Scene->TextureWidth * Scene->TextureHeight * 4);
            memcpy(Texture.Pixels.data(), ResizedImage, Texture.Pixels.size());
            delete[] ResizedImage;
        }
        else
        {
            // Same size : copy the pixel data straight over
            Texture.Pixels.resize(GLTFImage.image.size());
            memcpy(Texture.Pixels.data(), GLTFImage.image.data(), GLTFImage.image.size());
        }
    }
}
Tinygltf reads the texture files for us, so we just need to copy the data out of the image objects.
Note that we still have to resize the textures when they don't match the size the scene requires.
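For context, LoadTextures has to run before LoadMaterials, so that the materials can offset the glTF texture indices by the number of textures already in the scene. Here's a hedged sketch of the call site (the exact entry point and signatures in the repo may differ) :
// In the glTF loading code, before linking materials :
uint32_t BaseTextureIndex = Scene->Textures.size();
LoadTextures(GLTFModel, Scene);
// LoadMaterials can now offset glTF texture indices by BaseTextureIndex
LoadMaterials(GLTFModel, Scene, BaseTextureIndex);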
In LoadMaterials, we can now link the materials with the textures :
Material.ColourTexture = BaseTextureIndex + PBR.baseColorTexture.index;
Material.RoughnessTexture = BaseTextureIndex + PBR.metallicRoughnessTexture.index;
Material.NormalTexture = BaseTextureIndex + GLTFMaterial.normalTexture.index;
Material.EmissionTexture = BaseTextureIndex + GLTFMaterial.emissiveTexture.index;
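One caveat worth guarding against (not shown in the snippet above) : tinygltf uses an index of -1 when a material doesn't reference a given texture, so adding BaseTextureIndex to it would produce a bogus but valid-looking index. A small hypothetical helper can map missing textures to our INVALID_ID sentinel instead :
int RemapTextureIndex(int GLTFIndex, uint32_t BaseTextureIndex)
{
    // tinygltf reports -1 for "no texture"
    if(GLTFIndex < 0) return INVALID_ID;
    return (int)BaseTextureIndex + GLTFIndex;
}

Material.ColourTexture = RemapTextureIndex(PBR.baseColorTexture.index, BaseTextureIndex);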
The last thing we need to do is read the texture coordinates and the tangents from the primitives :
//Tangents
tinygltf::Accessor TangentAccessor;
tinygltf::BufferView TangentBufferView;
const uint8_t *TangentBufferAddress = 0;
int TangentStride = 0;
if(TangentIndex >= 0)
{
    TangentAccessor = GLTFModel.accessors[TangentIndex];
    TangentBufferView = GLTFModel.bufferViews[TangentAccessor.bufferView];
    const tinygltf::Buffer &TangentBuffer = GLTFModel.buffers[TangentBufferView.buffer];
    TangentBufferAddress = TangentBuffer.data.data();
    // 4 * float : glTF tangents are VEC4, with handedness in w
    TangentStride = tinygltf::GetComponentSizeInBytes(TangentAccessor.componentType) * tinygltf::GetNumComponentsInType(TangentAccessor.type);
    if(TangentBufferView.byteStride > 0) TangentStride = (int)TangentBufferView.byteStride;
}
//UV
tinygltf::Accessor UVAccessor;
tinygltf::BufferView UVBufferView;
const uint8_t *UVBufferAddress = 0;
int UVStride = 0;
if(UVIndex >= 0)
{
    UVAccessor = GLTFModel.accessors[UVIndex];
    UVBufferView = GLTFModel.bufferViews[UVAccessor.bufferView];
    const tinygltf::Buffer &UVBuffer = GLTFModel.buffers[UVBufferView.buffer];
    UVBufferAddress = UVBuffer.data.data();
    // 2 * float
    UVStride = tinygltf::GetComponentSizeInBytes(UVAccessor.componentType) * tinygltf::GetNumComponentsInType(UVAccessor.type);
    if(UVBufferView.byteStride > 0) UVStride = (int)UVBufferView.byteStride;
}
......
glm::vec4 Tangent;
if(TangentIndex >= 0)
{
    const uint8_t *Address = TangentBufferAddress + TangentBufferView.byteOffset + TangentAccessor.byteOffset + (k * TangentStride);
    memcpy(&Tangent, Address, sizeof(glm::vec4)); // 16 bytes
    Shape.Tangents[k] = Tangent;
}

glm::vec2 UV;
if(UVIndex >= 0)
{
    const uint8_t *Address = UVBufferAddress + UVBufferView.byteOffset + UVAccessor.byteOffset + (k * UVStride);
    memcpy(&UV, Address, sizeof(glm::vec2)); // 8 bytes
    Shape.TexCoords[k] = UV;
}
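The same accessor / buffer view / stride dance repeats for every vertex attribute, so it's worth seeing the address computation in one place. A minimal sketch (ElementAddress is a hypothetical helper, not a function from the repo) :
const uint8_t *ElementAddress(const tinygltf::Model &Model, const tinygltf::Accessor &Accessor, size_t k)
{
    const tinygltf::BufferView &View = Model.bufferViews[Accessor.bufferView];
    const tinygltf::Buffer &Buffer = Model.buffers[View.buffer];
    // Fall back to the tightly packed element size when byteStride is 0
    int Stride = tinygltf::GetComponentSizeInBytes(Accessor.componentType) *
                 tinygltf::GetNumComponentsInType(Accessor.type);
    if(View.byteStride > 0) Stride = (int)View.byteStride;
    return Buffer.data.data() + View.byteOffset + Accessor.byteOffset + k * Stride;
}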
And that's all. The materials now come with all their textures loaded, and here's our car with textures on :
Links
Next Post : Simple GPU Path Tracing, Part. 5.0 : Sampling lights