Ray Tracer

These are my homework projects from CS6620, a class I took in the spring of 2005 from Steve Parker at the University of Utah. Each page represents a single homework assignment.


I guess the one real design choice on this assignment was how to distinguish vectors from points. I decided not to. The only place this distinction matters is under translation, so if I am going to translate an object I will just make sure I translate directions and points each in the correct way. I don't see why you would want to go through all the work of maintaining two separate classes; it doesn't seem like that much of a problem to me.
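As a tiny illustration of what that means in practice (hypothetical names, not my actual classes): translation touches points but leaves directions alone.

// Hypothetical single class for both points and directions; only translation
// cares about the difference, so it is handled at the call site.
struct Vec {
    double x, y, z;
};

Vec translatePoint(const Vec& p, const Vec& t) {
    return Vec{ p.x + t.x, p.y + t.y, p.z + t.z };   // points move
}

Vec translateDirection(const Vec& d, const Vec& /*t*/) {
    return d;                                        // directions do not
}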

My renderer can output in PPM and RGBE. PPM is easy to code, and I had an RGBE writer from another renderer, so it was easy to put in. HDR images are sometimes nice for debugging. Neither output format is hooked up to the OpenGL GUI yet; you just have to hardcode them in. I will fix this in the future, I just have to get used to using OpenGL only to display pixels and not to render anything.
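For reference, a PPM writer really is only a few lines. This is a minimal sketch (assuming 8-bit RGB pixels stored top row first), not my actual writer.

#include <cstdio>
#include <vector>

// Minimal binary P6 PPM writer: header, then raw RGB bytes.
void writePPM(const char* filename, int width, int height,
              const std::vector<unsigned char>& rgb) {
    FILE* f = std::fopen(filename, "wb");
    if (!f) return;
    std::fprintf(f, "P6\n%d %d\n255\n", width, height);
    std::fwrite(rgb.data(), 1, rgb.size(), f);
    std::fclose(f);
}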

This is a cropped screenshot of my OpenGL ray-tracing viewer.

Performance Template

{mospagebreak title=2. Less-Simple Ray Tracer}

For this program I created Shader, Lambertian, Surface, SurfaceList, Light, LightList, Camera, and PinholeCamera classes, plus Context, HitRecord, and Scene classes for general rendering infrastructure. When its hit function is called, my SurfaceList class does a linear search over all of the surfaces it holds. Each surface has a shader pointer, and all of my shaders are subclasses of the Shader class. When something is hit, the shading function uses the LightList to color the pixel, pulling information from the Scene, Context, and HitRecord classes. The Scene class holds pointers to the SurfaceList, LightList, etc.
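Here is a rough sketch of what that linear hit search looks like. The class names follow the description above, but the Ray and HitRecord layouts are simplified assumptions, and the Context parameter is omitted.

#include <limits>
#include <vector>

struct Ray { double orig[3], dir[3]; };

struct HitRecord {
    double minT = std::numeric_limits<double>::infinity();  // closest hit so far
    const void* hitSurface = nullptr;
};

class Surface {
public:
    virtual ~Surface() {}
    // Returns true if this surface produced a hit closer than rec.minT.
    virtual bool hit(HitRecord& rec, const Ray& ray) const = 0;
};

class SurfaceList : public Surface {
    std::vector<Surface*> surfaces;
public:
    void add(Surface* s) { surfaces.push_back(s); }

    bool hit(HitRecord& rec, const Ray& ray) const override {
        bool anyHit = false;
        for (const Surface* s : surfaces)       // linear search over every surface
            anyHit = s->hit(rec, ray) || anyHit;
        return anyHit;
    }
};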

This infrastructure has worked quite well for this scene, and I think it will work well for more complex scenes too; I should be able to add more shaders and more surfaces without changing much of the architecture.

Here are my two required images.

Here you can see my "creative" image. I just added a bunch of spheres and another light, and made everything pretty symmetrical. I used my interactive viewer to take all of these images; you can see a screenshot of it below. As you can see, there are no widgets or sliders. Right now the only ways to interact with it are the keyboard (for file writing) and the mouse: holding the left button moves the viewpoint, the middle button zooms in and out, and the right button rotates around the look-at point. I used GLUT, with the GLUT timer function to update the image every 30 ms. I still haven't done a pseudo-random pixel render order; that is on my to-do list.
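The timer part is just a GLUT callback that re-arms itself. A minimal sketch (renderMoreRows() is a hypothetical hook, not my actual function):

#include <GL/glut.h>

// Periodic update: trace some more pixels, post a redisplay, re-arm the timer.
static void timerCallback(int value) {
    // renderMoreRows();                       // hypothetical: trace a few more rows
    glutPostRedisplay();                       // redraw with the latest pixel buffer
    glutTimerFunc(30, timerCallback, value);   // fire again in 30 ms
}

// In main(), after glutDisplayFunc(...) is registered:
//   glutTimerFunc(30, timerCallback, 0);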

Performance Template

{mospagebreak title=3. Height Fields}

Required Image

Creative Image/ Extra Credit

You should be able to recognize this little piece of terrain. It is looking south at the Point of the Mountain; my hometown, Riverton, is in the foreground and to the right. The 'sun' is a disk, which is why its shading is flat, and the 'moon' is a sphere. My little flying machine is made up of a triangle, a sphere, and a box. This particular height field is 1375x1044.

In this assignment I added box, triangle, disk, ring, and heightfield primitives to my renderer. I made disk and ring separate classes, though I don't think it matters either way. I also implemented bilinear patches to use with my height fields, although they could be used independently. I used the box and bilinear-patch intersections that were linked from the class webpage, and barycentric coordinates for my triangle intersection. My scenes are represented by a Scene class that holds a surface list, light list, camera, and everything else needed to render a particular scene. The scene is initialized by a member function that I hardcode all the details into, so it has to change for every new scene.
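For the triangle test, a common barycentric formulation looks like the sketch below (this is the Moller-Trumbore form; I am not claiming it is exactly the code in my renderer).

#include <cmath>

struct V3 { double x, y, z; };
static V3 sub(V3 a, V3 b)   { return { a.x - b.x, a.y - b.y, a.z - b.z }; }
static V3 cross(V3 a, V3 b) { return { a.y*b.z - a.z*b.y, a.z*b.x - a.x*b.z, a.x*b.y - a.y*b.x }; }
static double dot(V3 a, V3 b) { return a.x*b.x + a.y*b.y + a.z*b.z; }

// Returns true and fills t plus barycentric weights u, v (of p1 and p2) on a hit.
bool hitTriangle(V3 orig, V3 dir, V3 p0, V3 p1, V3 p2,
                 double& t, double& u, double& v) {
    V3 e1 = sub(p1, p0), e2 = sub(p2, p0);
    V3 p  = cross(dir, e2);
    double det = dot(e1, p);
    if (std::fabs(det) < 1e-12) return false;   // ray parallel to the triangle
    double inv = 1.0 / det;
    V3 s = sub(orig, p0);
    u = dot(s, p) * inv;
    if (u < 0.0 || u > 1.0) return false;
    V3 q = cross(s, e1);
    v = dot(dir, q) * inv;
    if (v < 0.0 || u + v > 1.0) return false;
    t = dot(e2, q) * inv;
    return t > 1e-9;                            // hit in front of the ray origin
}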

Performance Template

{mospagebreak title=4. Shaders}

Required Image

Creative Image

Extra Credit

L->R: Lambertian, Phong, Cook-Torrance (with varying rms, color, and diffuse/specular/ambient terms).

The extra-credit shader I implemented is a Cook-Torrance shader, implemented straight from the 1982 paper. It worked out pretty well, and I even used the real Fresnel term rather than the Schlick approximation. The rms slope term of the microfacet distribution gives good control over how tight the highlight is. Since Cook-Torrance can look quite diffuse or quite specular, I included both a Lambertian shader and a Phong shader in my comparison images.
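For reference, a sketch of the Cook-Torrance specular term is below. It follows the 1982 paper's Beckmann distribution and geometry term, but uses the Schlick Fresnel approximation for brevity, whereas my shader uses the real Fresnel term.

#include <algorithm>
#include <cmath>

// Cook-Torrance (1982) specular term. The dot products are assumed to come
// from unit-length n, l, v, h vectors on the same side of the surface; m is
// the rms microfacet slope, f0 the reflectance at normal incidence.
double cookTorranceSpecular(double ndotl, double ndotv, double ndoth,
                            double vdoth, double m, double f0) {
    const double PI = 3.14159265358979323846;
    if (ndotl <= 0.0 || ndotv <= 0.0) return 0.0;

    // Beckmann distribution as written in the 1982 paper (no pi factor here).
    double cos2 = ndoth * ndoth;
    double tan2 = (1.0 - cos2) / cos2;
    double D = std::exp(-tan2 / (m * m)) / (m * m * cos2 * cos2);

    // Geometric attenuation (masking and shadowing of microfacets).
    double G = std::min(1.0, std::min(2.0 * ndoth * ndotv / vdoth,
                                      2.0 * ndoth * ndotl / vdoth));

    // Schlick Fresnel, a stand-in for the full Fresnel equations.
    double F = f0 + (1.0 - f0) * std::pow(1.0 - vdoth, 5.0);

    // Rs = (F / pi) * D * G / ((N.L)(N.V)), per Cook-Torrance 1982.
    return (F * D * G) / (PI * ndotl * ndotv);
}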

So I have added four shaders in this assignment: Phong, dielectric, metal, and the Cook-Torrance shader described above, pretty much implemented straight off the class slides. Originally my dielectric shader was adapted from another renderer and even had Beer's law implemented, but I ran into problems, so I started from scratch and followed the design in the class slides. I didn't implement any of the enhancements discussed in class. I may think about implementing the 'inside of' flag, but I'm not sure how much of a speed improvement it will make once I have an efficiency structure in place. I didn't implement the tree pruning because I think I will later turn this renderer into a path tracer, and because once a rendering takes more than half a second or so it is already beyond interactive, so how much more hardly seems to matter.
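As an illustration of the dielectric side, here is a sketch of the refraction step (Snell's law with a total-internal-reflection check); it is a generic version, not necessarily identical to the class-slide design I followed.

#include <cmath>

struct Dir { double x, y, z; };

// d and n are unit length; eta is n_incident / n_transmitted. Returns false on
// total internal reflection, otherwise writes the refracted direction.
bool refract(const Dir& d, const Dir& n, double eta, Dir& refracted) {
    double cosI = -(d.x * n.x + d.y * n.y + d.z * n.z);   // cosine of incident angle
    double k = 1.0 - eta * eta * (1.0 - cosI * cosI);     // cos^2 of transmitted angle
    if (k < 0.0) return false;                            // total internal reflection
    double c = eta * cosI - std::sqrt(k);
    refracted = { eta * d.x + c * n.x,
                  eta * d.y + c * n.y,
                  eta * d.z + c * n.z };
    return true;
}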

Performance Template

{mospagebreak title=5. Anti-Aliasing}

1 sample/pixel

9 stratified samples/pixel, Triangle Filter, 2 pixels wide

9 stratified samples/pixel, Sinc Filter, 2 pixels wide

I didn't hardcode any of this into my ray tracer; I just added a couple of loops to my render loop, plus some temporary code for the sin-function image. I didn't want to build the infrastructure to reuse samples, and if I'm not going to reuse samples I think I will just stick with the box filter, since it is easy to get an implicit box filter without adding any extra rendering architecture. For the 9-samples/pixel images I actually took 49 jittered samples per pixel and tested whether each one fell within the 2-pixel-wide filter. The samples in the corner cells have roughly a 1/4 chance of being included and the side samples a 1/2 chance, which means I averaged 25 (interior) + 10 (of 20 side) + 1 (of 4 corner) = 36 samples/pixel for the 2-pixel-wide filter -- still the 9 samples/pixel density.
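For reference, the jittered sampling itself is just an n x n grid with one random offset per cell. A minimal sketch, not the exact loop in my renderer:

#include <cstdlib>
#include <vector>

struct Sample2D { double x, y; };

// n x n stratified (jittered) samples; offsets are in [0,1)^2 relative to the
// pixel corner, one uniformly random sample per grid cell.
std::vector<Sample2D> jitteredSamples(int n) {
    std::vector<Sample2D> samples;
    samples.reserve(n * n);
    for (int j = 0; j < n; ++j)
        for (int i = 0; i < n; ++i) {
            double jx = std::rand() / (RAND_MAX + 1.0);   // jitter within the cell
            double jy = std::rand() / (RAND_MAX + 1.0);
            samples.push_back({ (i + jx) / n, (j + jy) / n });
        }
    return samples;
}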

My image is just my creative image from the last assignment; with the anti-aliasing, it looks a lot better. What I would be interested in seeing is a rendered image where you can tell the difference between a high-samples/pixel box filter and a high-samples/pixel sinc filter. It is easy to see with this well-behaved sin function; I just want to know that it is going to make a difference for a non-trivial number of standard renderings.

Performance Template

{mospagebreak title=6. Acceleration Structures}

Required Image

Creative Images

I decided to implement a BVH. I was going to finish a grid too, but I had some problems and won't have time to finish it this week. My BVH is a pretty standard binary tree where each node has a bounding box and pointers to two other surfaces. It is built with a standard top-down approach: the primitives are split into two groups along the x, y, or z axis, with the axis cycling at each level of the tree. This recurses until only one or two primitives are left, at which point the tree is built.
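A simplified sketch of that top-down build is below. The BBox and Surface types are stand-ins for my real classes, and this version stops at a single primitive per leaf rather than one or two.

#include <algorithm>
#include <cstddef>
#include <vector>

struct BBox {
    double min[3], max[3];
    double center(int axis) const { return 0.5 * (min[axis] + max[axis]); }
};
struct Surface { BBox bounds; };

struct BVHNode {
    BBox bounds;                 // box enclosing everything below this node
    BVHNode* left = nullptr;     // interior node: two children
    BVHNode* right = nullptr;
    Surface* leaf = nullptr;     // leaf node: a single primitive
};

BVHNode* build(std::vector<Surface*>& prims, size_t begin, size_t end, int axis) {
    BVHNode* node = new BVHNode;
    if (end - begin == 1) {                        // one primitive left: make a leaf
        node->leaf = prims[begin];
        node->bounds = prims[begin]->bounds;
        return node;
    }
    // Split the primitives into two halves by box center on this axis,
    // cycling x -> y -> z at each level of the tree.
    size_t mid = (begin + end) / 2;
    std::nth_element(prims.begin() + begin, prims.begin() + mid, prims.begin() + end,
                     [axis](const Surface* a, const Surface* b) {
                         return a->bounds.center(axis) < b->bounds.center(axis);
                     });
    node->left  = build(prims, begin, mid, (axis + 1) % 3);
    node->right = build(prims, mid, end, (axis + 1) % 3);
    for (int i = 0; i < 3; ++i) {                  // union of the children's boxes
        node->bounds.min[i] = std::min(node->left->bounds.min[i], node->right->bounds.min[i]);
        node->bounds.max[i] = std::max(node->left->bounds.max[i], node->right->bounds.max[i]);
    }
    return node;
}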

My creative image is a model a friend built of his wife's wedding ring; it works out to about 100,000 triangles. I used Maya to split the model into three separate pieces so that I could apply a different shader to each one. However, the surrounding stones ended up using pretty much the same shader as the center stone, because this renderer does not have Beer's law implemented, so I couldn't give the stones different colors. They do use different indices of refraction, not that it is very noticeable. Even with these issues, I still think the results are pretty cool. I'm not sure my friend ensured that all of the surfaces have outward-facing normals, but it looks alright to me.

Performance Template

{mospagebreak title=7. Texture Mapping}

Required Image

Creative Images

I only added two new shader classes for this assignment: Checkerboard and MarblePhong. I also added a Texture class and a few texture subclasses, one that looks up colors from an image and one that returns a solid color. In the end, though, I didn't use the solid-color texture class. Instead, whenever I wanted a shader to be able to pull colors from an image, I added a texture pointer and set its default value to NULL in the constructor; the Phong/Lambertian/etc. shader then branches on whether it is using a texture. I am not sure this branch is faster than the virtual function lookup, but it was at least easier to implement.
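Roughly, the branch looks like the sketch below (illustrative names; my actual shader classes have more going on).

struct Color { double r, g, b; };

class Texture {
public:
    virtual ~Texture() {}
    virtual Color lookup(double u, double v) const = 0;
};

class LambertianShader {
    Color diffuseColor;
    Texture* texture;                        // NULL means "use the solid color"
public:
    LambertianShader(const Color& c, Texture* t = nullptr)
        : diffuseColor(c), texture(t) {}

    Color baseColor(double u, double v) const {
        // Branch instead of a virtual call through a solid-color texture.
        return texture ? texture->lookup(u, v) : diffuseColor;
    }
};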

For my creative image I decided to do displacement mapping. I couldn't find an earth elevation image that would be easy to use for displacing the earth textures we have, so instead I used the night-lights image as a displacement map, figuring that the brightness of an area is a decent indicator of its population density. So that is what this image shows. I think I could have gotten slightly better results if I had first blurred the night-lights image so the data was more spread out and didn't produce such sharp peaks.

Performance Template

{mospagebreak title=8. Volume Rendering}

Required/Extra Credit Image

So it's not right, is it? I can't figure it out. I have been through each piece of code line by line at least 20 times, to no avail. However, I did get my shadow algorithm to work, so hopefully that makes up for my colors being off.

I followed the implementation in the slides almost exactly (obviously not exactly enough). However, instead of putting my shader in a box, I made a new surface just for volume shaders. This surface has a bounding box and a special shadowHit function. The regular hit function just calls my VolumePhong shader after the bounding box has been hit, and since my bounding box returns both the near and far hit points, I don't need to call the intersection routine twice. The shadowHit function in my volume surface class calls a simplified attenuation function in the VolumePhong shader: if the ray is attenuated to a dark enough opacity, shadowHit returns true, otherwise false. The VolumePhong and ColorMap classes were done according to the class slides and the functions already provided on the class webpage.
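The shadow test amounts to marching the shadow ray through the volume and accumulating attenuation until it crosses a cutoff. A rough sketch, with the density lookup left as a placeholder:

#include <cmath>
#include <functional>

// March from tNear to tFar in steps, attenuating by the sampled density.
// densityAlongRay stands in for the real grid lookup at a point on the ray.
bool volumeShadowHit(double tNear, double tFar, double stepSize,
                     const std::function<double(double)>& densityAlongRay,
                     double extinction, double cutoff) {
    double transparency = 1.0;
    for (double t = tNear; t < tFar; t += stepSize) {
        double density = densityAlongRay(t);               // sample the volume at parameter t
        transparency *= std::exp(-extinction * density * stepSize);
        if (1.0 - transparency > cutoff)                    // dark enough: count as a shadow hit
            return true;
    }
    return false;
}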

Performance Template

VolumePhong

{mospagebreak title=9. Instancing}

Creative Image

Extra Credit Image

For this assignment all I had to do was create a new Instance class. It behaves as described in the class slides: a general instance class that stores a 4x4 matrix. My infrastructure does not care about unit-length ray directions, so I didn't have to do anything special there. I just multiply my ray origin by the entire matrix and my ray direction by the upper 3x3 portion of the matrix. Then, once my hit routine has returned the normal and hit point, I multiply the normal by the inverse of that upper 3x3 matrix and the hit point by the inverse of the whole matrix.
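The only real machinery is transforming points (full 4x4, including translation) differently from directions (upper 3x3 only). A sketch with illustrative helper names, not my actual matrix class:

struct P3 { double x, y, z; };

// Points go through the full row-major 4x4 matrix, including translation.
P3 xformPoint(const double m[4][4], P3 p) {
    return { m[0][0]*p.x + m[0][1]*p.y + m[0][2]*p.z + m[0][3],
             m[1][0]*p.x + m[1][1]*p.y + m[1][2]*p.z + m[1][3],
             m[2][0]*p.x + m[2][1]*p.y + m[2][2]*p.z + m[2][3] };
}

// Directions go through the upper 3x3 only, ignoring translation.
P3 xformDirection(const double m[4][4], P3 v) {
    return { m[0][0]*v.x + m[0][1]*v.y + m[0][2]*v.z,
             m[1][0]*v.x + m[1][1]*v.y + m[1][2]*v.z,
             m[2][0]*v.x + m[2][1]*v.y + m[2][2]*v.z };
}

// Instance::hit then transforms the incoming ray with the stored matrix,
// intersects the wrapped surface, and maps the hit point back with the inverse
// of the whole matrix (with the normal mapped back as described above).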

For the extra-credit image I realized that my triangle class is pretty bloated (per-vertex normals, UVs, etc.), and 100 bunnies can't even fit in memory; for the second part, all I could fit was 40. The set-up time on my pre-computed performance template is so slow because I simply read the same ASCII PLY file 100 times, which is quite slow, and I left the per-vertex normal computation in my PLY reader, which slowed things down even more. I was too lazy to write a routine that takes a surface and multiplies it by a matrix, so I had to rely on the matrix that can be passed to my PLY reader.

Performance Template - Instancing

Performance Template - Pre-computed

{mospagebreak title=11. Monte-Carlo Ray Tracing}

Creative Image


Extra Credit Images

My creative image is a pretty standard-looking Monte-Carlo ray-traced image, except that I used two shaders from our other programs: the Phong shader on the texture-mapped globe and our metal shader on half of the checkerboard. The other half of the checkerboard uses my Monte-Carlo metal shader, so the difference between sharp and blurry reflections is easy to see. The other two spheres use Monte-Carlo Lambertian shaders; on the red sphere you can see some color bleeding from the globe. All I did to create this image was add shaders to my infrastructure that recursively call my trace function instead of shading explicitly from point lights. This particular image is 400 samples/pixel.
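The core of a Monte-Carlo Lambertian shader is a cosine-weighted bounce. A sketch of that sampling step (illustrative, not my exact code):

#include <cmath>
#include <cstdlib>

struct Vec3 { double x, y, z; };

// Cosine-weighted direction in a local frame where the normal is +z.
Vec3 cosineSampleHemisphere() {
    const double PI = 3.14159265358979323846;
    double u1 = std::rand() / (RAND_MAX + 1.0);
    double u2 = std::rand() / (RAND_MAX + 1.0);
    double r = std::sqrt(u1);                  // radius on the unit disk
    double phi = 2.0 * PI * u2;
    return { r * std::cos(phi), r * std::sin(phi), std::sqrt(1.0 - u1) };
}

// In the shader: build an orthonormal basis around the hit normal, rotate the
// sampled direction into it, trace a recursive ray in that direction, and
// multiply the returned radiance by the surface albedo (the cosine and pi
// factors cancel against the cosine-weighted pdf).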

My extra-credit images show off a depth-of-field effect, which is easy to create with Monte-Carlo ray tracing. I had already implemented a thin-lens camera model when I implemented my pinhole camera, so I didn't have to add anything for these images. You can see the three different levels of focus in both the triangles and the checkerboard plane. My extra-credit images are 100 samples/pixel.
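For reference, the thin-lens idea is just jittering the ray origin over the lens and aiming at the focal plane. A sketch with illustrative names, not my camera's actual code:

#include <cstdlib>

// Uniform point on a disk of radius lensRadius via rejection sampling
// (simple and adequate here).
void sampleLensOffset(double lensRadius, double& dx, double& dy) {
    do {
        dx = (2.0 * std::rand() / (RAND_MAX + 1.0) - 1.0) * lensRadius;
        dy = (2.0 * std::rand() / (RAND_MAX + 1.0) - 1.0) * lensRadius;
    } while (dx * dx + dy * dy > lensRadius * lensRadius);
}

// The camera then builds the ray as:
//   focusPoint = eye + pinholeDirection * focalDistance;
//   origin     = eye + dx * cameraU + dy * cameraV;
//   direction  = normalize(focusPoint - origin);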

{mospagebreak title=12. Final Project}

Creative Images

The first image is my backdrop for what was going to be a nice-looking rainbow, something I spent 24 hours trying to get working last weekend. The ground is a texture-mapped heightfield: the heightfield is elevation data from the web, and the texture is an aerial photo of the same area. The background is just a cylindrical map that I index into; I used it for shading but not for lighting. There is a directional light where the sun is.

The second image shows a Monte-Carlo glossy surface and environment-map lighting.

The third image shows a full spectral rendering of the same scene; now you can see small rainbows in the diamond. It definitely adds something substantial that was missing before. The color matching isn't perfect because, in order to interface with RGB textures, I simply associated the RGB values with wavelength bins: 400-500 nm for blue, 500-600 nm for green, and 600-700 nm for red. Even with the crudity of this scheme, the color matching is still decent.
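The binning itself is trivial; a sketch of the lookup (the bin boundaries match the description above):

// Treat an RGB texture value as the reflectance of whichever wavelength bin a
// spectral sample falls into (wavelength in nanometers).
double rgbReflectanceForWavelength(double lambdaNm, double r, double g, double b) {
    if (lambdaNm < 500.0) return b;   // 400-500 nm -> blue channel
    if (lambdaNm < 600.0) return g;   // 500-600 nm -> green channel
    return r;                         // 600-700 nm -> red channel
}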

I had added quite a bit to my renderer in order to do a rainbow, but none of it amounted to any decent images, so I decided to show a few images of the current state of the renderer; together, I think they cover everything this renderer can do right now.
