The code from the previous post already provides the base of our ray tracer. In this post I will provide three examples of how we can extend this functionality.
Ray trace with two light sources
The following snippet describes how two light sources affect how the sphere is rendered:

```cpp
Vec3 intersect, normal;
double depth = IntersectSphere(p0, p1, s, intersect, normal);
if (depth < 1)
{
    Vec3 finalColor1 = l1.CalculateDiffuse(intersect, normal, s.color);
    Vec3 finalColor2 = l2.CalculateDiffuse(intersect, normal, s.color);
    Vec3 finalColor(finalColor1.x + finalColor2.x,
                    finalColor1.y + finalColor2.y,
                    finalColor1.z + finalColor2.z);
    finalColor.Clamp(0.f, 1.f);
    DrawPixel(x, y, finalColor);
}
else
{
    DrawPixel(x, y, s_bgColor);
}
```
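The Clamp call above can be assumed to clip each color channel to the [0, 1] range, so that adding two diffuse contributions cannot push a channel past white. A hypothetical stand-alone sketch (this Vec3 is a simplified stand-in, not the post's actual class):

```cpp
#include <algorithm>
#include <cassert>

struct Vec3
{
    double x, y, z;

    // Clips each component to the [lo, hi] range.
    void Clamp(double lo, double hi)
    {
        x = std::min(hi, std::max(lo, x));
        y = std::min(hi, std::max(lo, y));
        z = std::min(hi, std::max(lo, z));
    }
};
```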
To recap, we first identify the intersection between the line segment and the sphere (moved to a function). If the intersection lies within the drawing box (depth between 0 and 1), then we consider that the sphere has been hit.
Then, the diffuse color component from each light source is calculated (using the intersection normal and the light color).
Finally, the diffuse components from the two lights are added up, and the final color is the contribution of both.
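For reference, CalculateDiffuse can be assumed to compute a standard Lambertian term: the surface color scaled by the light color and by the cosine of the angle between the surface normal and the direction to the light. A minimal self-contained sketch (the Vec3 and Light types here are simplified stand-ins for the post's classes):

```cpp
#include <algorithm>
#include <cassert>
#include <cmath>

// Simplified stand-ins for the post's types.
struct Vec3
{
    double x, y, z;
};

static double Dot(const Vec3& a, const Vec3& b)
{
    return a.x * b.x + a.y * b.y + a.z * b.z;
}

static Vec3 Normalize(const Vec3& v)
{
    double len = std::sqrt(Dot(v, v));
    return { v.x / len, v.y / len, v.z / len };
}

struct Light
{
    Vec3 position;
    Vec3 color;

    // Lambertian diffuse: surface color scaled by the light color and
    // the cosine between the normal and the direction to the light.
    Vec3 CalculateDiffuse(const Vec3& intersect, const Vec3& normal,
                          const Vec3& surfaceColor) const
    {
        Vec3 toLight = Normalize({ position.x - intersect.x,
                                   position.y - intersect.y,
                                   position.z - intersect.z });
        double nDotL = std::max(0.0, Dot(normal, toLight));
        return { surfaceColor.x * color.x * nDotL,
                 surfaceColor.y * color.y * nDotL,
                 surfaceColor.z * color.z * nDotL };
    }
};
```

A point facing the light directly receives the full surface color; a point facing away receives black, which is why clamping only matters once several lights are summed.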
Ray trace two Spheres
Ray tracing two spheres has an increased level of complexity. In this case, we need to calculate which sphere is hit first by the ray; that is, the intersection closest to the screen. Calculating which object a ray intersects is one of the heaviest steps in ray tracing, and proper space-subdivision techniques are required to achieve good performance.
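For context, an IntersectSphere routine matching the convention used here (a returned depth in [0, 1) means a hit inside the drawing box, anything at or above 1 means a miss) can be sketched by solving the quadratic |p0 + t(p1 - p0) - center|² = radius². This is a hypothetical reconstruction, not the post's actual implementation:

```cpp
#include <cassert>
#include <cmath>

struct Vec3 { double x, y, z; };
struct Sphere { Vec3 center; double radius; };

static Vec3 Sub(const Vec3& a, const Vec3& b) { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static double Dot(const Vec3& a, const Vec3& b) { return a.x * b.x + a.y * b.y + a.z * b.z; }

// Returns the smallest root t in [0, 1) of the ray-sphere quadratic,
// filling in the intersection point and its surface normal; any value
// >= 1 means "no hit inside the drawing box".
double IntersectSphere(const Vec3& p0, const Vec3& p1, const Sphere& s,
                       Vec3& intersect, Vec3& normal)
{
    Vec3 d = Sub(p1, p0);           // segment direction (not normalized)
    Vec3 m = Sub(p0, s.center);     // from sphere center to segment start
    double a = Dot(d, d);
    double b = 2.0 * Dot(m, d);
    double c = Dot(m, m) - s.radius * s.radius;
    double disc = b * b - 4.0 * a * c;
    if (disc < 0.0)
        return 1.0;                 // ray misses the sphere entirely
    double t = (-b - std::sqrt(disc)) / (2.0 * a); // nearest root
    if (t < 0.0)
        t = (-b + std::sqrt(disc)) / (2.0 * a);    // segment starts inside
    if (t < 0.0 || t >= 1.0)
        return 1.0;                 // intersection outside the segment
    intersect = { p0.x + t * d.x, p0.y + t * d.y, p0.z + t * d.z };
    double inv = 1.0 / s.radius;    // normal of a sphere points out from center
    normal = { (intersect.x - s.center.x) * inv,
               (intersect.y - s.center.y) * inv,
               (intersect.z - s.center.z) * inv };
    return t;
}
```

Returning 1 for a miss keeps the caller's comparison simple: the smaller of two returned depths is always the sphere to draw, and a value below 1 is the hit test.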
Using the depth from the intersection, we consider that the smallest value represents the intersection closest to the screen:
```cpp
Vec3 intersect1, normal1;
double d1 = IntersectSphere(p0, p1, s1, intersect1, normal1);
Vec3 intersect2, normal2;
double d2 = IntersectSphere(p0, p1, s2, intersect2, normal2);

double d = 1;
Vec3 finalColor;
if (d1 < d2)
{
    d = d1;
    finalColor = l.CalculateDiffuse(intersect1, normal1, s1.color);
}
else
{
    d = d2;
    finalColor = l.CalculateDiffuse(intersect2, normal2, s2.color);
}

if (d < 1)
{
    DrawPixel(x, y, finalColor);
}
else
{
    DrawPixel(x, y, s_bgColor);
}
```
The final pixel color is the diffuse component of the closest object to the camera:
Depth Test Rendering
We can slightly tweak the code to visualize the depth value from the intersection calculations. Note that the value lies between 0 and 1:

```cpp
Vec3 intersect, normal;
double d1 = IntersectSphere(p0, p1, s1, intersect, normal);
double d2 = IntersectSphere(p0, p1, s2, intersect, normal);
double depth = d1 < d2 ? d1 : d2;
Vec3 finalColor(1 - depth, 1 - depth, 1 - depth);
DrawPixel(x, y, finalColor);
```
In the end result, lighter means closer to the camera, and darker means farther.