In this assignment, you will first implement a basic ray tracer. As seen in class, a ray tracer sends a ray for each pixel and intersects it with all the objects in the scene. Your ray tracer will support orthographic and perspective cameras and several primitives (spheres, planes, and triangles), as well as several shading modes and visualizations (constant and Phong shading, depth and normal visualization). You will also extend the implementation with recursive ray tracing for shadows and reflective materials.
Requirements (maximum 15p) on top of which you can do extra credit
0. Displaying image coordinates (0.5p)
1. Generating rays; ambient lighting (1.5p)
2. Visualizing depth (1p)
3. Perspective camera (1.5p)
4. Phong shading; directional and point lights (3p)
5. Plane intersection (1p)
6. Triangle intersection (1.5p)
7. Shadows (1.5p)
8. Mirror reflection (1.5p)
9. Antialiasing (2p)
1 Getting Started
This time, in addition to being a graphical OpenGL application, the assignment can be run as a console application that takes arguments on the command line, reads a scene file and outputs a PNG image. There's an interactive preview mode that lets you fly around the scene and ray trace an image from any viewpoint, but for any test you want to repeat, it's more convenient to use a test script. We supply such scripts in the exe folder; you can copy and modify them to make your own. As usual, you also get an example.exe binary whose results you can (and should!) compare to yours. The scene files are plain
text, so you can easily open and read them to know exactly what a particular scene is supposed to be and what features it uses. You won’t have to write any loading code.
The interactive preview lets you move around using the WASD keys and look around with the mouse by dragging with the right mouse button down. There are buttons and sliders for most of the important ray tracer and camera settings. Pressing enter renders an image using the ray tracer from the current viewpoint and using all of the current settings. Pressing space toggles between the ray traced image and the interactive preview. If you click anywhere on the ray traced image, a debug visualisation ray is traced and its path (and intersection normals) appears as line segments in the preview; this is especially useful when debugging geometry intersections, reflections and refractions.
The scripts work as follows: render_default.bat renders a single scene with the default settings. You can also drag scene files onto the .bat file to start it. render_options.bat also renders a single scene, but it takes a set of arguments so you can change the rendering settings. render_all.bat uses these and contains manually tweaked details for some scenes. You can add or change some of the calls in it to render a specific scene in higher resolution or add a test scene of your own, for example. The ./exe folder includes versions of the scripts that call the model solution and render into the same folder as your code but with different file names so you can compare the results.
If you plan on doing extra credit, it's a very good idea to create a test scene where your new feature is clearly visible (and maybe have two render calls for it; one with and one without the feature). These already exist for some of the recommended extras. Also add the scenes to your render_all.bat.
The batch files assume that you have an exe file compiled in x64 release mode. Switch that on in Visual Studio, or edit the exe path accordingly (.bats are just text files).
Note that this assignment is more work than the previous ones. Please start as early as possible.
2 Application structure
The render function in main.cpp is the main driver routine for your application.
The base code is designed to render individual scanlines in parallel on multiple processor cores using OpenMP. This speeds up rendering by a lot and we recommend you use it. It is initially disabled to avoid surprises; to enable it, uncomment the line #pragma omp parallel for inside main.cpp and go to Project -> Properties -> Configuration -> C/C++ -> Language and set OpenMP support on. If you choose to enable multithreading, you have to keep your own code thread-safe. All the requirement code should be thread-safe if you write it naturally, but some extras, like Film/Filter, will need special care to stay thread-safe. If you get strange bugs, first try disabling multithreading to rule it out as a cause.
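If you do enable OpenMP, the structure of the parallel loop might look roughly like the sketch below; the loop bounds and variable names are assumptions, not the exact names used in the base code.

// Sketch only: parallelizing the scanline loop (variable names are assumptions).
// Each iteration writes only to its own row of pixels, so no locking is needed
// as long as the per-pixel work does not touch shared mutable state.
#pragma omp parallel for
for (int y = 0; y < image_height; ++y) {
    for (int x = 0; x < image_width; ++x) {
        // ... generate the ray for pixel (x, y), trace it, store the resulting color ...
    }
}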
The different shape primitives are designed in an object hierarchy. A generic Object3D class serves as the parent class for all 3D primitives. The individual primitives (such
as Sphere, Plane, Triangle, Group, and Transform) are subclassed from the generic Object3D.
We provide you with a Ray class and a Hit class to manipulate camera rays and their intersection points, and an abstract Material class. A Ray is represented by its origin and direction vectors. The Hit class stores information about the closest intersection point and normal, the value of the ray parameter t, and a pointer to the Material of the object at the intersection. The Hit data structure must be initialized with a very large t value (such as FLT_MAX). It is modified by the intersection computation to store the new closest t and the Material of the intersected object.
3 Detailed instructions
R0 Displaying image coordinates (0.5p)
As an intro, we’ll render a simple combination of color gradients to introduce the structure of the renderer in this assignment. Instead of preparing a list of attributes to be sent to the GPU for realtime rendering, we’ll be forming images pixel by pixel on the CPU.
The image we’ll be generating is this:
The color value of each pixel in the image is a linear function of its coordinates in the image. If the coordinate system of the image is such that the top left corner is at (0, 0) and the bottom right corner is at (1, 1), we map the x coordinate to the red channel, the y coordinate to the green channel and keep the blue channel at 1. You should implement this mapping in the function render in main.cpp. Familiarize yourself with the loop structure in render and set the sample color to match the definition of this image if args.display_uv is true.
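As a concrete reference, the per-pixel color could be computed along these lines; the helper function and its arguments are illustrative only, since in the base code this happens directly inside the loops of render() in main.cpp.

struct Color { float r, g, b; };

// Map pixel indices to the [0,1]^2 image coordinate system described above
// (origin at the top-left corner) and turn them into a color.
Color uvColor(int x, int y, int width, int height) {
    float u = (x + 0.5f) / width;    // 0 at the left edge, 1 at the right
    float v = (y + 0.5f) / height;   // 0 at the top edge, 1 at the bottom
    return Color{ u, v, 1.0f };      // red = x, green = y, blue = 1
}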
To test your solution, you should open the scene file r0_empty.txt and press Enter or the Raytrace button, or run the command line version with the uv flag. You can also
enable the UV rendering via a button in the UI or by pressing 0 in any scene (this is the default mode for empty scenes, hence r0_empty).
Note that your solution might appear tiled; this is due to the default Downscale Factor of 16. We expect each pixel of our images to take some time to compute so we render a lower resolution image to get a quicker preview. You can drag the Downscale Factor slider on the right down to 1 to see a smooth end result, but even this simple image will take a bit to render.
R1 Generating rays; ambient lighting (1.5p)
Our first task related to ray tracing is to cast some rays into a simple scene with a red sphere and an orthographic camera (r1+r2_01_single_sphere).
The virtual Camera class is subclassed by OrthographicCamera and PerspectiveCamera.
The Camera class has two pure virtual methods:
virtual Ray generateRay(const Vec2f& point) = 0;
virtual float getTMin() const = 0;
The first is used to generate rays for each screen-space coordinate, described as a Vec2f. The direction of the rays generated by an orthographic camera is always the same, but the origin varies. The getTMin() method will be useful when tracing rays through the scene. For an orthographic camera, rays always start at infinity, so tmin will be a large negative value.
An orthographic camera is described by an orthonormal basis (one point and three vectors) and an image size (one float). The constructor takes as input the center of the image, the direction vector, an up vector, and the image size. The input direction might not be a unit vector and must be normalized. The input up vector might not be a unit vector or perpendicular to the direction. It must be modified so that it is a unit vector orthogonal to the direction.
The third basis vector, the horizontal vector of the image plane, is deduced from the direction and the up vector (hint: remember your linear algebra and cross products).
The origins of the rays generated by the camera span the whole image plane.
The screen coordinates on the plane vary from (-1, -1) to (1, 1). The corresponding world coordinates (where the origin lives) vary from center - (size * up)/2 - (size * horizontal)/2 to center + (size * up)/2 + (size * horizontal)/2.
The camera does not know about screen resolution. Image resolution is handled in your main loop. For non-square image ratios, just crop the screen coordinates accordingly.
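Putting the pieces together, OrthographicCamera::generateRay might look roughly like the sketch below. The member names (center, direction, up, horizontal, size) are assumptions matching the constructor arguments described above, and point is assumed to already be in the [-1,1]^2 screen range.

// Sketch only; assumes the framework's Vec2f/Vec3f/Ray types and that the
// constructor has already normalized direction and made up orthogonal to it.
Ray OrthographicCamera::generateRay(const Vec2f& point) {
    // The direction is constant; only the origin sweeps over the image plane.
    Vec3f origin = center
                 + 0.5f * size * point.x * horizontal
                 + 0.5f * size * point.y * up;
    return Ray(origin, direction);
}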
Implement the normalizedImageCoordinateFromPixelCoordinate method in Camera as well as OrthographicCamera::generateRay. Because the base code already lets you intersect rays with Group and Sphere objects, you should now see the sphere as a flat white circle. To complete the requirement, head to RayTracer::traceRay and add one line of code that creates ambient lighting for the object using the ambient light of the scene and the diffuse color of the object. After this is done, you should see the sphere in its actual color:
Figure 1: r1+r2_01_single_sphere: colors, depth, and normals
There is also another test scene with five spheres that end up overlapping each other in the picture frame:
Figure 2: r1+r2_02_five_spheres: colors, depth
R2 Visualizing depth (1p)
In the render function, implement a second rendering style to visualize the depth t of objects in the scene. Depth arguments to the application are given as -depth 9 10 depth_file.png, where the two numbers specify the range of depth values which should
be mapped to shades of gray in the visualization (depth values outside this range should be clamped) and the filename specifies the output file. The depth rendering can be performed simultaneously with normal output image rendering.
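One way to do the mapping is a simple linear ramp with clamping, sketched below. Whether the near value maps to white or black is a convention you should verify against the example binary's output; the function name is just illustrative.

#include <algorithm>

// Sketch: map a hit distance t into [0,1] gray, clamping outside [depth_min, depth_max].
// Here depth_min maps to white (1.0) and depth_max to black (0.0); flip if needed.
float depthToGray(float t, float depth_min, float depth_max) {
    float g = (depth_max - t) / (depth_max - depth_min);
    return std::min(1.0f, std::max(0.0f, g));
}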
See the ready-made scripts in the exe folder for details and good depth values for a few scenes. Feel free to fill in your own.
Note that the base code already supports normal visualization. Try the visualization of surface normals by adding another input argument for the executable, -normals <normal_file.png>, to specify the output file for this visualization. This may prove useful when debugging shading and intersection code.
R3 Perspective camera (1.5p)
To complete this requirement, implement the generateRay method for PerspectiveCamera.
Note that in a perspective camera, the value of tmin has to be zero to correctly clip objects behind the viewpoint.
Hint: In class, we often talk about a "virtual screen" in space. You can calculate the location and extents of this "virtual screen" using some simple trigonometry. You can then interpolate over points on the virtual screen in the same way you interpolated over points on the screen for the orthographic camera. Direction vectors can then be calculated by subtracting the camera center point from the screen point. Don't forget to normalize! In contrast, if you interpolate over the camera angle to obtain your direction vectors, your scene will look distorted - especially for large camera angles, which will give the appearance of a fisheye lens. (The distance to the image plane and the size of the image plane are unnecessary. Why?)
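A minimal sketch of this idea follows, assuming the camera stores a vertical field-of-view angle fov in radians and the same orthonormal basis as the orthographic camera; all member names are assumptions about the base code.

#include <cmath>

// Sketch only. Place the virtual screen at distance 1 in front of the camera;
// the actual distance is irrelevant because only the ray direction matters.
Ray PerspectiveCamera::generateRay(const Vec2f& point) {
    float half_extent = tanf(0.5f * fov);       // half the screen size at distance 1
    Vec3f dir = direction
              + point.x * half_extent * horizontal
              + point.y * half_extent * up;
    dir.normalize();                            // don't forget to normalize!
    return Ray(center, dir);
}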
Figure 3: r3_spheres_perspective: a familiar scene again with a perspective camera (colors and normals)
R4 Phong shading; directional and point lights (3p)
Provide an implementation for DirectionalLight::getIncidentIllumination to support directional lights.
Implement diffuse shading in PhongMaterial::shade.
Extend RayTracer::traceRay to make use of the new implementations. The class variable scene (in RayTracer) is a pointer to a SceneParser. Use the SceneParser to loop through the light sources in the scene. For each light source, ask for the incident illumination with Light::getIncidentIllumination.
Diffuse shading is our first step toward modeling the interaction of light and materials. Given the direction to the light L and the normal N, we can compute the diffuse shading as a clamped dot product:

    d = L · N   if L · N > 0
    d = 0       otherwise

If the visible object has color c_object = (r, g, b), and the light source has color c_light = (L_r, L_g, L_b), then the pixel color is c_pixel = (r L_r d, g L_g d, b L_b d). Multiple light sources are handled by simply summing their contributions. We can also include an ambient light with color c_ambient, which can be very helpful for debugging. Without it, parts facing away from the light source appear completely black. Putting this all together, the formula is:

    c_pixel = c_ambient * c_object + Σ_i clamp(L_i · N) * c_light_i * c_object

Color vectors are multiplied term by term. (The framework's vector multiplication is already implemented as element-wise multiplication, so c_object * c_pixel is enough). Note that if the ambient light color is (1, 1, 1) and the light source color is (0, 0, 0), then you have constant shading.
Implement Phong shading in PhongMaterial::shade
Implement PointLight::getIncidentIllumination
Directional lights have no falloff. That is, the distance to the light source has no impact on the intensity of light received at a particular point in space. With point light sources, the distance from the surface to the light source will be important. The getIncidentIllumination method in PointLight, which you should implement, will return the scaled light color with this distance factored in.
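For illustration, a point light's incident illumination could be computed roughly as below; the output-parameter style and the member names (position, color, attenuation coefficients) are assumptions about the base-code interface, not its exact signature.

// Sketch only: scale the light color by a distance-dependent falloff
// 1 / (a + b*r + c*r^2), which covers both constant and quadratic attenuation.
void PointLight::getIncidentIllumination(const Vec3f& p, Vec3f& dir_to_light,
                                         Vec3f& intensity, float& distance) const {
    Vec3f d = position - p;
    distance = d.length();
    dir_to_light = d * (1.0f / distance);   // unit vector from the point toward the light
    float falloff = 1.0f / (atten_constant + atten_linear * distance
                            + atten_quadratic * distance * distance);
    intensity = color * falloff;
}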
The shading equation in the previous section for diffuse shading can be written

    I = A + Σ_i D_i,

where

    A   = c_ambient * c_object,diffuse
    D_i = c_light_i * clamp(L_i · N) * c_object,diffuse

i.e., the computed intensity was the sum of an ambient and a diffuse term.

Now, for Phong shading, you will have

    I = A + Σ_i (D_i + S_i),

where S_i is the specular term for the i-th light source:

    S_i = c_light_i * k_s * clamp(v · r_i)^q

Here, k_s is the specular coefficient, r_i is the ideal reflection vector of light i, v is the viewer direction (direction to the camera), and q is the specular reflection exponent. k_s is the specularColor parameter in the PhongMaterial constructor, and q is the exponent parameter. Refer to the lecture notes for obtaining the ideal reflection vector.
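The diffuse and specular terms above might be combined in PhongMaterial::shade roughly as follows. The parameter list and member names (diffuseColor, specularColor, exponent) are assumptions, and dot() / normalize() stand for whatever vector helpers the framework provides.

#include <algorithm>
#include <cmath>

// Sketch only: one light's contribution; the caller sums over lights and adds the ambient term.
Vec3f PhongMaterial::shade(const Ray& ray, const Hit& hit,
                           const Vec3f& dir_to_light, const Vec3f& light_intensity) const {
    Vec3f n = hit.normal();                                     // assumed unit length
    Vec3f v = -ray.direction();                                 // direction toward the viewer
    float diff = std::max(0.0f, dot(dir_to_light, n));          // clamp(L . N)

    Vec3f r = 2.0f * dot(dir_to_light, n) * n - dir_to_light;   // ideal reflection of L about N
    float spec = powf(std::max(0.0f, dot(v, r)), exponent);     // clamp(v . r)^q

    // Element-wise color products, as in the formulas above.
    return light_intensity * (diff * diffuseColor + spec * specularColor);
}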
Figure 4: r4_diffuse_ball, r4_diffuse+ambient_ball: only diffuse, and both diffuse and ambient shading, respectively
Figure 5: r4_colored_lights: three different directional lights shading a white sphere
Figure 6: r4_exponent_variations: spheres with different specular exponents. The light is coming from somewhere in the top right.
Figure 7: r4_exponent_variations_back: spheres with different specular exponents, back side. Note that you'll have to disable shadows if rendering from the GL window to get this look.
R5 Plane intersection (1p)
Implement the intersect method for Plane.
Figure 8: r4_point_light_circle, r4_point_light_circle_d2: constant and quadratic attenuation, respectively. Note that this scene requires transforms as well as sphere, plane and triangle intersections. The light on the plane will look the same without transforms as well, so pay attention to that.

With the intersect routine, we are looking for the closest intersection along a Ray, parameterized by t. tmin is used to restrict the range of intersection. If an intersection is found such that t > tmin and t is less than the value of the intersection currently stored in the Hit data structure, Hit is updated as necessary. Note that if the new intersection is closer than the previous one, both t and the Material must be modified. It is important that your intersection routine verifies that t >= tmin. tmin depends on the type of camera (see above) and is not modified by the intersection routine.
You can look at Group and Sphere intersection code to see how those are implemented.
Then implement the intersect method for Plane and test with r5_spheres_plane. The r4_point_light_circle scene also includes a plane. Note that the preview approximates the plane with a finite square; it will look different from the ray traced plane.
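For a plane stored as a normal n and an offset d with dot(n, p) = d, the intersect routine might look like the sketch below; the member names and the Hit/Ray accessor names (origin, direction, getT, set) are assumptions about the base-code interface.

#include <cmath>

// Sketch only: closest-hit update for a ray-plane intersection.
bool Plane::intersect(const Ray& ray, Hit& hit, float tmin) const {
    float denom = dot(normal, ray.direction());
    if (fabsf(denom) < 1e-6f)
        return false;                        // ray runs parallel to the plane

    float t = (offset - dot(normal, ray.origin())) / denom;
    if (t < tmin || t >= hit.getT())
        return false;                        // behind tmin, or not closer than the current hit

    hit.set(t, material, normal);            // record the new closest t, material and normal
    return true;
}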
Figure 9: r5_spheres_plane: a familiar scene again with a plane as a floor. Colors, depth, and normals.
R6 Triangle intersection (1.5p)
Implement the intersect method for Triangle. Simple test scenes include the plain cube scenes; more complicated ones (with many, many triangles) are the bunnies. The r4_point_light_circle scene also includes boxes composed of triangles, so you can try it too; it also verifies the shading result.
Use the method of your choice to implement the ray-triangle intersection: general polygon with in-polygon test, barycentric coordinates, etc. We can compute the normal by taking the cross product of two edges, but note that the normal direction for a triangle is ambiguous. We'll use the usual convention that counter-clockwise vertex ordering indicates the outward-facing side. If your renderings look incorrect, just flip the cross product to match the convention.
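One compact option is the barycentric test in Möller-Trumbore form, sketched below; the vertex members a, b, c and the Ray/Hit accessor names are assumptions, and the vertices are assumed to be stored in counter-clockwise order.

#include <cmath>

// Sketch only: barycentric ray-triangle test with closest-hit update.
bool Triangle::intersect(const Ray& ray, Hit& hit, float tmin) const {
    Vec3f e1 = b - a, e2 = c - a;
    Vec3f p = cross(ray.direction(), e2);
    float det = dot(e1, p);
    if (fabsf(det) < 1e-8f) return false;            // ray parallel to the triangle plane

    float inv_det = 1.0f / det;
    Vec3f s = ray.origin() - a;
    float u = dot(s, p) * inv_det;                   // barycentric coordinate along e1
    if (u < 0.0f || u > 1.0f) return false;

    Vec3f q = cross(s, e1);
    float v = dot(ray.direction(), q) * inv_det;     // barycentric coordinate along e2
    if (v < 0.0f || u + v > 1.0f) return false;

    float t = dot(e2, q) * inv_det;
    if (t < tmin || t >= hit.getT()) return false;

    Vec3f n = cross(e1, e2);                         // CCW vertex order => outward-facing normal
    n.normalize();
    hit.set(t, material, n);
    return true;
}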
Figure 10: r6_bunny_mesh_200 and ..._1000: Bunnies consisting of 200 and 1000 triangles, respectively.
Figure 11: r6_cube_orthographic, r6_cube_perspective: a cube (consisting of just triangles) with the two different cameras
R7 Shadows (1.5p)
Next, you will add some global illumination effects to your ray caster. Once you cast secondary rays to account for shadows and reflection (plus refraction for extra credit), you can call your application a ray tracer. For this requirement, extend the implementation of RayTracer::traceRay to account for shadows. A new command line argument -shadows will indicate that shadow rays are to be cast.
To implement cast shadows, send rays from the hit point toward each light source and test whether the line segment joining the intersection point and the light source intersects an object. If so, the hit point is in shadow and you should discard the contribution of that light source. Recall that you must displace the ray origin slightly away from the surface, or equivalently set tmin to some small epsilon. You might also want to add the shadow ray to the debug visualisation vector to make it visible among the other rays.
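Inside traceRay, the per-light shadow test could be structured like this sketch; the names (scene_group, hit_point, dir_to_light, distance_to_light) and the Hit constructor form are placeholders for whatever your code already has at that point.

// Sketch only: skip a light's contribution if something blocks the segment to it.
const float epsilon = 1e-4f;
Ray shadow_ray(hit_point + epsilon * dir_to_light, dir_to_light);
Hit shadow_hit(FLT_MAX);                                // initialized to a very large t
bool blocked = scene_group->intersect(shadow_ray, shadow_hit, 0.0f)
            && shadow_hit.getT() < distance_to_light;   // ignore hits beyond the light
if (!blocked) {
    // ... accumulate this light's diffuse and specular contribution ...
}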
Figure 12: r7_simple_shadow: light coming from above
Figure 13: r7_colored_shadows: three different lights casting somewhat intersecting shadows
R8 Mirror reflection (1.5p)
To add reflection (and refraction) effects, you need to send secondary rays in the mirror (and transmitted) directions, as explained in lecture. The computation is recursive to account for multiple reflections and/or refractions.
In traceRay, implement mirror reflections by sending a ray from the current intersection point in the mirror direction. For this, you should implement the function:
Vec3f mirrorDirection(const Vec3f& normal, const Vec3f& incoming);
Trace the secondary ray with a recursive call to traceRay using a decremented value for the recursion depth. Modulate the returned color with the reflective color of the material at the hit point.
We need a stopping criterion to prevent infinite recursion - the maximum number of bounces the ray will make. This argument is set like so: -bounces 5. When you make a recursive traceRay call, you need to remember to decrement the bounce value.
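The reflection itself is one line of vector math, sketched below with the incoming direction assumed to point toward the surface and both vectors assumed to be unit length; the recursive call and its exact argument list depend on your traceRay signature.

// Sketch only: reflect the (unit) incoming direction about the (unit) normal.
Vec3f mirrorDirection(const Vec3f& normal, const Vec3f& incoming) {
    return incoming - 2.0f * dot(incoming, normal) * normal;
}

// In traceRay (pseudocode): if bounces > 0 and the material is reflective,
//   trace a ray from the hit point in mirrorDirection(normal, ray.direction())
//   with bounces - 1, and add reflectiveColor * returned color to the result.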
The ray visualisation might be very useful for debugging reflections. In the preview application, click on the rendered result and fly around the scene to see how the rays have bounced.
The parameter refr_index in traceRay is the index of refraction for a material, needed for extra credit.
Figure 14: r8_reflective_sphere: shown here are four different levels of reflections (0, 1, 2, and 3 bounces) with weight 0.01
R9 Antialiasing (2p)
Next, you will add simple anti-aliasing to your ray tracer. You will use supersampling and filtering to alleviate jaggies and Moiré patterns.
For each pixel, instead of directly storing the colors computed with RayTracer::traceRay into the Image class, you'll compute lots of color samples (each computed with RayTracer::traceRay) and average them.
You are required to implement simple box filtering with uniform, regular, and jittered sampling. To use a sampler, provide one of the following as additional command line arguments:
-uniform_samples <num_samples>
-regular_samples <num_samples>
-jittered_samples <num_samples>
The box filter has already been implemented. You should implement the sampling in
UniformSampler::getSamplePosition
RegularSampler::getSamplePosition
JitteredSampler::getSamplePosition
In your rendering loop (render in main.cpp), cast multiple rays for each pixel as specified by the <num_samples> argument (the innermost for loop iterates as many times as num_samples specifies). If you are sampling uniformly, the sample rays should be distributed in a uniform grid pattern in the pixel. If you are jittering samples, you should add a random offset (such that the sample stays within the appropriate grid location) to the uniform position. To get the final color for the pixel, simply average the resulting samples.
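For reference, grid and jittered-grid sample offsets within a pixel could be generated as below; these free functions are illustrative stand-ins for the samplers' getSamplePosition methods and assume the sample count is a perfect square.

#include <cmath>
#include <cstdlib>

struct Sample2D { float x, y; };   // offset within a pixel, in [0,1)^2

// Sample i of num_samples on a regular grid (cell centers).
Sample2D gridSample(int i, int num_samples) {
    int n = (int)std::sqrt((double)num_samples);     // samples per axis
    float cell = 1.0f / n;
    return { (i % n + 0.5f) * cell, (i / n + 0.5f) * cell };
}

// The same grid, but with a random offset inside each cell.
Sample2D jitteredSample(int i, int num_samples) {
    int n = (int)std::sqrt((double)num_samples);
    float cell = 1.0f / n;
    float rx = std::rand() / (RAND_MAX + 1.0f);
    float ry = std::rand() / (RAND_MAX + 1.0f);
    return { (i % n + rx) * cell, (i / n + ry) * cell };
}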
Figure 15: r9_sphere_triangle 200x200
Figure 16: r9_sphere_triangle, just 9x9 resolution: none, uniform, regular, and jittered sampling (sample count 9)
4 Extra Credit
Some of these extensions require that you modify the parser to take into account the extra specification required by your technique. Make sure that you create (and turn in) appropriate input scenes to show off your extension.
4.1 Recommended
Implement refraction through transparent objects. See the handout and code comments around where reflection is implemented. For full points, make sure you handle total internal reflection situations. (1-2p)
Add simple fog to your ray tracer by attenuating rays according to their length. Allow the color of the fog to be specified by the user in the scene file. (1p)
Add other types of simple primitives to your ray tracer, and extend the file format and parser if necessary. At least provide implementations for Transform and Box; there are skeletons for them in the base code. (3p)
Make it possible to use arbitrary filters by filling in the addSample function of the class Film. Its function is to center a filter function at each incoming sample, see which pixel centers lie within the filter's support, and add the color, modulated by the filter weight, into all those pixels. Further, accumulate the filter weight in the 4th color channel of the image to make it easy to divide the sum of weights out once the image is done. Demonstrate your filtering approach with tent and Gaussian filters. Be aware that the trivial Filter/Film implementation is not thread-safe. You have to disable OpenMP, or for more points, figure out how to keep your code thread-safe and make it concurrent. A sketch of the splatting idea appears at the end of this list. (1-3p)
Implement stereo cubemap rendering for your ray tracer. Stereo cubemaps can be used to display a 3D view you can freely look around in using a VR headset. Your task is to write code that generates a cubemap texture file that can be rendered using a third-party application, such as vizor.io, or you can create your own stereo renderer for further extra credit. Note that you won't necessarily need a VR headset, as you can view the cubemaps with a non-stereo display as well.
Your stereo cubemap should consist of two cubemaps rendered from slightly offset viewpoints matching the offset between human eyes, to enable the viewer to experience depth perception in the scene. The produced cubemap should be a single PNG file with 12 square tiles, each describing a single face of the cube map. The axis layout of a single cubemap can be seen in the image below. For stereo cubemaps your image should repeat the axes twice, first for the left eye, then the right. There are other layout standards as well, but at least vizor.io uses the one described here. For more information on cubemaps, you can read the Wikipedia article, and you can find example stereo cubemaps from https://render.otoy.com/vr_gallery.php/.
You can use this scene, created with one of the example cubemaps, as your starting point for viewing your cubemaps. You can just upload your own cubemap into the stereo cubemap object, which you can find in the scene tree under the Program tab in the editing window. You should include a link to your own scene in your README file, or include the produced cubemap texture in your submission folder. (4+p)
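For the Film/Filter item above, the splatting logic could look like the sketch below; the member and method names (filter, getSupportRadius, evaluate, pixel, width, height) and the Vec4f accumulation are assumptions, with the filter weight kept in the 4th channel as suggested.

#include <algorithm>
#include <cmath>

// Sketch only: add one sample to every pixel whose center falls inside the filter support.
void Film::addSample(const Vec2f& sample_pos, const Vec3f& color) {
    float r = filter->getSupportRadius();                     // support radius in pixels
    int x0 = (int)std::floor(sample_pos.x - r), x1 = (int)std::ceil(sample_pos.x + r);
    int y0 = (int)std::floor(sample_pos.y - r), y1 = (int)std::ceil(sample_pos.y + r);
    for (int y = std::max(y0, 0); y <= std::min(y1, height - 1); ++y) {
        for (int x = std::max(x0, 0); x <= std::min(x1, width - 1); ++x) {
            // Filter weight for this pixel center relative to the sample position.
            float w = filter->evaluate(x + 0.5f - sample_pos.x, y + 0.5f - sample_pos.y);
            pixel(x, y) += Vec4f(w * color.x, w * color.y, w * color.z, w);
        }
    }
}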
4.2 Transparency
Given that you have a recursive ray tracer, it is now fairly easy to include transparent objects in your scene. The parser already handles material definitions that have an index of refraction and transparency color. Also, there are some scene files that have transparent objects that you can render with the sample solution to test against your implementation.
Enable transparent shadows by attenuating light according to the traversal length through transparent objects. We suggest using an exponential on that length. (1.5p)
Add the Fresnel term to reflection and refraction. (1p)
4.3 Advanced Texturing
So far you've only played around with procedural texturing techniques but there are many more ways to incorporate textures into your scene. For example, you can use a texture map to define the normal for your surface or render an image on your surface.
Image textures: render an image on a triangle mesh based on per-vertex texture coordinates and barycentric interpolation. You need to modify the parser to add textures and coordinates. Some features you might want to support are tiling (normal tiling and with mirroring) and bilinear interpolation of the texture. (2-4p)
Bump and Normal mapping: perturb (bump map) or look up (normal map) the normals for your surface in a texture map. This needs the above texture coordinate computation and derivation of a tangent frame, which is relatively easy. The hardest part is to come up with a good normal map image. Produce a scene demonstrating your work. (2-3p)
Isotropic texture filtering for anti-aliasing using summed-area tables or mip maps. Make sure you compute the appropriate footprint (kernel size). This isn't too hard, but of course, requires texture mapping. (Medium)
Adding anisotropic texture ltering using EWA or FELINE on top of mip-mapping (a little tricky to understand, easy to program). (Easy)
4.4 Advanced Modeling
Your scenes have very simple geometric primitives so far. Add some new Object3D subclasses and the corresponding ray intersection code.
Combine simple primitives into more interesting shapes using constructive solid geometry (CSG) with union and difference operators. Make sure to update the parser. Make sure you do the appropriate things for materials (this should enable different materials for the parts belonging to each primitive). (4-5p)
Implement a torus or higher order implicit surfaces by solving for t with a numerical root finder. (2-3p)
Raytrace implicit surfaces for blobby modeling. Implement Blinn's blobs and their intersection with rays. Use regula falsi solving (binary search), compute the appropriate normal and create an interesting blobby object (debugging can be tricky). Be careful at the beginning of the search; there can be multiple intersections. (4-6p)
4.5 Advanced Shading
Phong shading is a little boring. I mean, come on, they can do it in hardware. Go above and beyond Phong. Check this for cool parameters.
Cook-Torrance or other BRDF (2p).
Bidirectional Texture Functions (BTFs): make your texture lookups depend on the viewing angle. There are datasets available for this, for example here. (3p)
Write a wood shader that uses Perlin Noise. (2p)
Add more interesting lights to your scenes, e.g. a spotlight with angular falloff. (1p)
Replace RGB colors by spectral representations (just tabulate with something like one bin per 10nm). Find interesting light sources and material spectra and show how your spectral representation does better than RGB. (3-4p)
Simulate dispersion (and rainbows). The rainbow is difficult, as is the Newton prism demo. (3-4p)
4.6 Global Illumination and Integration
Photons have a complicated life and travel a lot. Simulate interesting parts of their voyage.
Add area light sources and Monte-Carlo integration of soft shadows. (4-5p)
Add motion blur. This requires a representation of motion. 3 points if only the camera moves (not too difficult), 3 more points if scene objects can have independent motion (more work, more code design). We advise that you add a time variable to the Ray class and update transformation nodes to include a description of linear motion. Then all you need is to transform a ray according to its time value.
Depth of field from a finite aperture. (2-3p)

Photon mapping. (Hard)
Distribution ray tracing of indirect lighting (very slow). Cast tons of random secondary rays to sample the hemisphere around the visible point. It is advised to stop after one bounce. Sample uniformly or according to the cosine term (careful, it's not trivial to sample the hemisphere uniformly). (3-5p)
Irradiance caching (Hard).
Path tracing with importance sampling, path termination with Russian Roulette, etc. (Hard)
Metropolis Light Transport. Probably the toughest of all. Very difficult to debug; took a graduate student multiple months full time. (Very Hard)
Raytracing through a volume. Given a regular grid encoding the density of a participating medium such as fog, step through the grid to simulate attenuation due to fog. Send rays towards the light source and take into account shadowing by other objects as well as attenuation due to the medium. This will give you nice shafts of light. (Hard)
4.7 Interactive Editing
Allow the user to interactively model the scene using direct manipulation. The basic tool you need is a picking procedure to find which object is under the mouse when you click. Some coding is required to get a decent UI. But once you have the mouse click, just trace a ray to find the object. Then use this picker for translating objects, and for scaling and rotation. Allow the user to edit the radius and center of a sphere, and manipulate triangle vertices. All of this is easy once you've figured out a good architecture, but it requires a significant amount of programming. (up to 7p)
4.8 Nonlinear Ray Tracing
We’ve had enough of linearity already! Let’s get rid of the limitation of linear rays.
Mirages and other non-linear ray propagation effects: Given a description of a spatially-varying index of refraction, simulate the non-linear propagation of rays. Trace the ray step by step, pretty much an Euler integration of the corresponding differential equation. Use an analytical or discretized representation of the index of refraction function. Add Perlin Noise to make the index of refraction more interesting. (Hard)
Simulate the geometry of special relativity. You need to assign each object a velocity and to take into account the Lorentz metric. I suggest you recycle your transformation code and adapt it to create a Lorentz node that encodes velocity and applies the appropriate Lorentz transformation to the ray. Then intersection proceeds as usual. Surprisingly, this is not too difficult; that is, once you remember how special relativity works. In case you're wondering, there does exist a symplectic raytracer
http://yokoya.naist.jp/paper/datas/267/skapps_0132.pdf
that simulates light transport near the event horizon of a black hole. (Hard)
4.9 Multithreaded and Distributed Ray Tracing
Raytracing complicated scenes takes a long time. Fortunately, it is easy to parallelize since each camera ray is independent. We already provide an easy OpenMP implementation for distributing the load on local processor cores, but you can take it further.
Create a raytracer running on the GPU. Since replicating all requirement features would be a huge amount of work, you can make your GPU raytracer separate and give it only a smaller amount of functionality. You can use your choice of API - CUDA, OpenCL, GLSL shaders. We make an exception here and allow you to use technology that is not supported on the classroom computers; if necessary, we’ll call you in to demonstrate the code you submitted. (Medium/Hard)
Distribute the render job to multiple computers in a brute force manner. Split the image into one sub-region per machine, send them off to individual machines for rendering and collect the results when done. (Hard)
4.10 Acceleration Techniques
Use a Bounding Volume Hierarchy to accelerate your raytracer. (Hard)
4.11 More anti-aliasing
Add blue-noise or Poisson-disk distributed random sampling and discuss in your README the differences you observe from random sampling and jittered sampling. (2p)
5 Submission
Make sure your code compiles and runs both in Release and Debug modes on Visual Studio 2019, preferably in the VDI environment. Comment out any functionality that is so buggy it would prevent us from seeing the good parts.
Check that your README.txt (which you hopefully have been updating throughout your work) accurately describes the nal state of your code. Fill in whatever else is missing, including any feedback you want to share. We were not kidding when we said we prefer brutally honest feedback.
Package all the code, project and solution files required to build your submission, the README.txt and any screenshots, logs or other files you want to share into a ZIP archive. There's a generate_submission.bat that should auto-generate the zip for you when run. Note: the solution file itself does not contain any code, so make sure the contents of src/ are present in the final submission.
Sanity check: unpack the archive into another folder, and see if you can still open the solution, compile the code and run.
Submit your archive in MyCourses folder "Assignment 5: Ray tracing".