Path Tracer

Here are the projects from CS7650, taught by Peter Shirley at the University of Utah in Summer 2004. This project is not yet complete.


Anti-Aliasing

Program 1 was a simple demonstration of anti-aliasing in ray tracing. The geometry consists of an infinite checkerboard, colored alternately black or white based on a two-dimensional oscillating trigonometric function.
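The writeup doesn't give the exact function, so here is a minimal sketch of one possible checker function, assuming unit-sized squares (the function name and square size are my own, not from the course code):

    #include <cmath>

    // Hypothetical checker function: returns 1.0 (white) or 0.0 (black)
    // depending on which unit square of the plane the point (x, z) falls in.
    // The product of the two sines flips sign every unit interval in each
    // direction, producing a checkerboard pattern.
    double checkerColor(double x, double z) {
        const double pi = 3.14159265358979323846;
        return (std::sin(pi * x) * std::sin(pi * z) > 0.0) ? 1.0 : 0.0;
    }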

Rays were sent out from the eye, all passing through a screen rectangle, and then either hitting the geometry or traveling too far without hitting anything. With only an infinite plane, the hit function can be simplified to work more quickly and accurately. Specifically, any ray from the eye that is not parallel to the plane (that is, not parallel to its tangent and binormal vectors) WILL hit the plane. With a ray a + tb, where a is the position of the eye and b is the direction toward a point on a pixel of our screen, we only need to take solutions where t is positive.

This is really beside the point of our first program, but it is still interesting to think about how exactly you go about intersecting a ray with an infinite plane. The point of the program was to show the incredibly high variance and slow convergence of this kind of stochastic ray tracing.
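As an illustration, here is a minimal sketch of that intersection test. The choice of the plane y = 0 is my assumption; the writeup doesn't specify the plane's orientation:

    #include <optional>

    struct Vec3 { double x, y, z; };

    // Intersect the ray p(t) = a + t*b with the infinite plane y = 0.
    // Returns the hit parameter t, or nothing if the ray is parallel to
    // the plane or the hit lies behind the ray origin (t <= 0).
    std::optional<double> hitPlaneY0(const Vec3& a, const Vec3& b) {
        if (b.y == 0.0) return std::nullopt;   // parallel to the plane
        double t = -a.y / b.y;
        if (t <= 0.0) return std::nullopt;     // plane is behind the eye
        return t;
    }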

We took regular-interval samples, random samples, jittered samples, and cubic B-spline filtered samples, each at 1, 9, 25, 100, and 10,000 samples per pixel. The images were all 512x384 pixels. This means that at our highest sampling frequency we sent out 512 x 384 x 10,000, or almost 2 billion rays. Even with this incredibly simplistic model, the rendering with my implementation still took about 12 minutes to complete at the highest frequencies. That works out to roughly 2.7 million rays/second.

The point of listing all of these huge numbers is just to show you what a brute-force solution ray tracing is. It probably isn't true that 10,000 samples/pixel are entirely necessary to render this infinite checkerboard perfectly; perhaps you can get away with a fourth or a tenth of that. But still, with such a simple model, if you look at the 100 samples/pixel images in my gallery, you will see how very imperfect they are. Ray tracing just needs a lot of samples; that's all there is to it.

 

Regular Samples: Samples taken at regular intervals within a pixel. This kind of sampling can consistently hide high-frequency information when the sampling interval lines up with the period of that information, so that every sample lands at the same phase of the pattern.

Random Samples: Samples taken at uniformly random points within a single pixel. This method does not spread out sample points as evenly as regular sampling: it is very possible for many samples to bunch up and give redundant information, requiring even more samples.

Jittered Samples: Samples taken at regular intervals, but 'jittered' to place them randomly within a small area around each interval. It is like dividing a pixel up into sub-pixels and then sampling at a random point inside each sub-pixel. This is almost the best of both worlds of regular and random sampling, and the only thing I know of that beats it is the much more complicated Poisson-disk sampling, which ensures all samples are a certain distance apart while still being randomly placed within the pixel (though that algorithm is way too complicated for my tastes). The problem with jittering is that, like random sampling, it still creates high-frequency noise from low-frequency information. If you look at my Program 1 images, you will see that the jittered renders at 1 or 9 samples/pixel still create noise in the near part of the plane, which is completely absent in the regular-sampling images. On the other hand, the regular-sampling images have the more annoying problem of creating patterns in the distant part of the plane which certainly do not exist.
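Here is a minimal sketch of jittered sampling over a unit pixel; the n x n sub-pixel layout and type names are my own, not from the course code:

    #include <random>
    #include <vector>

    struct Sample2D { double x, y; };

    // Generate n*n jittered samples inside the unit pixel [0,1)^2:
    // divide the pixel into an n x n grid of sub-pixels and pick one
    // uniformly random point inside each cell.
    std::vector<Sample2D> jitteredSamples(int n, std::mt19937& rng) {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        std::vector<Sample2D> samples;
        samples.reserve(n * n);
        for (int i = 0; i < n; ++i)
            for (int j = 0; j < n; ++j)
                samples.push_back({ (i + uni(rng)) / n,
                                    (j + uni(rng)) / n });
        return samples;
    }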

Cubic B-spline filter: Can be used with any of these sampling methods. A cubic B-spline distribution can be sampled by summing together four randomly generated numbers, each uniform on the range (-0.5, 0.5). (The sum of n such uniforms has the B-spline of degree n-1 as its density, so four are needed for a cubic.) Probabilistically, these sums will produce a result with an expected value of 0, surrounded by a smooth bell-curve probability density function. This has the effect of taking any regular sampling positions and perturbing them slightly, with less and less probability the further away you get from the original positions.
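A minimal sketch of drawing one such filter offset, under the same assumptions:

    #include <random>

    // Draw a 1-D offset whose density is the cubic B-spline centered at 0:
    // the sum of four independent uniforms on [-0.5, 0.5) has exactly that
    // smooth, bell-shaped distribution, with support (-2, 2).
    double cubicBSplineOffset(std::mt19937& rng) {
        std::uniform_real_distribution<double> uni(-0.5, 0.5);
        return uni(rng) + uni(rng) + uni(rng) + uni(rng);
    }

Adding such an offset (in each axis) to a regular sample position perturbs it as described above.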

Program 1 images

 

{mospagebreak title=2. Thin-Lens Camera}

Thin-Lens Camera

Program 2 was an implementation of a simple thin-lens camera. The same screen rectangle was used to create ray directions as in the last program. However, the eye, the origin of the ray, is now perturbed slightly so as to model the depth-of-field effects that are unavoidable when doing anything but pin-hole photography.

A very thin lens is modeled at the eye. Every ray that originates on this thin lens and goes through the image plane at a specific pixel is sampling 3d space for that specific pixel. If your pixels now see the world through these perturbed-origin rays, you can imagine that anything beyond this image rectangle will get more and more blurry the further away it is. Objects close to the eye and in front of our image plane will also appear blurry, as they too are being sampled by rays which do not form a straight line from the center of the eye through the screen.

This is precisely the depth-of-field effect we are after. While it does not physically model an actual camera lens, and lacks some of the effects of real photography, it is good enough for most purposes.

The thin-lens disc is sampled at an even density throughout. This is accomplished by taking two random numbers on [0,1): multiply one by two pi to get the azimuthal angle phi on our thin-lens disc, and take the square root of the other and multiply it by the lens radius to get the distance from the center at which to sample. The second number must be square-rooted because the area of the disc inside radius r grows quadratically with r; without the square root, too many samples would land close to the center of the disc.
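A minimal sketch of that disc sampling, with the lens centered at the origin in the xy-plane (my assumption; names are hypothetical):

    #include <cmath>
    #include <random>

    struct Vec3 { double x, y, z; };

    // Sample a point uniformly (by area) on a disc-shaped lens of the
    // given radius. Taking the square root of the second uniform
    // compensates for the area inside radius r growing like r^2, so the
    // samples do not crowd toward the center.
    Vec3 sampleLens(double radius, std::mt19937& rng) {
        std::uniform_real_distribution<double> uni(0.0, 1.0);
        double phi = 2.0 * 3.14159265358979323846 * uni(rng); // azimuthal angle
        double r   = radius * std::sqrt(uni(rng));            // distance from center
        return { r * std::cos(phi), r * std::sin(phi), 0.0 };
    }

A camera would add this sample to the eye position to get the perturbed ray origin described above.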

Program 2 images

 

{mospagebreak title=3. "Naive" Path Tracing}

Naive Path Tracing


Program 3 was the implementation of a "naive path tracer" with support for mirror, glass, diffuse, and Phong reflectance. As in the other two programs, rays were sent out from the eye (or thin-lens camera), through the image rectangle, and out into the geometry of our 3d world.

A path tracer takes these rays and bounces them off objects in the world until either a certain number of bounces is reached or the ray hits a light. The information from this light is then recursively reflected back through the ray's adventure bouncing in the 3d world, to the screen, where it is recorded at whatever pixel it originated. This is kind of a backwards way of looking at how light transport really works in the real world.

In the real world, light originates at light sources and bounces off surfaces and through surfaces until it reaches our eye. Our eyes then interpret whatever changes have taken place along the light's journey, whether absorption, reflection, refraction, etc. In our naive path tracer, each 3d object that is hit will perform some calculation on the light and change it in some way. (If the world didn't change light at all, we wouldn't be able to see anything but the same light from all directions.)

If a ray from our eye hits an object defined as glass, it will be refracted and reflected. If a ray hits a mirror, it will be reflected perfectly about the surface normal. If a ray hits a diffuse surface, it will be absorbed a little and reflected at some random angle; if enough of these random reflections are done at the surface, we can get a good idea of the illumination at that point. If a ray hits a "Phong" surface, it will be absorbed a little and then reflected at an angle probabilistically close to the perfect angle of reflection. If, after bouncing around, a ray does not hit a light source, it adds no illumination to the pixel from which it originated. If it does, as I said before, the light is recursively attenuated until it returns to the eye.
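A heavily simplified sketch of that recursive structure might look as follows. The Scene, Ray, Hit, and Material types here are hypothetical placeholders, not the course code, and a real tracer would supply the scene intersection and the per-material scatter rules described above:

    #include <random>

    struct Vec3 {
        double x, y, z;
        Vec3 operator*(const Vec3& o) const { return {x * o.x, y * o.y, z * o.z}; }
    };

    struct Ray { Vec3 origin, dir; };

    // Placeholder material interface; mirror, glass, diffuse, and Phong
    // would each implement scatter() differently.
    struct Material {
        virtual bool isEmissive() const = 0;
        virtual Vec3 emitted() const = 0;
        // Returns false if the ray is absorbed; otherwise fills in the
        // scattered ray and the per-channel attenuation.
        virtual bool scatter(const Ray& in, const struct Hit& hit,
                             Vec3& attenuation, Ray& out,
                             std::mt19937& rng) const = 0;
        virtual ~Material() = default;
    };

    struct Hit { Vec3 point, normal; const Material* material; };

    struct Scene {
        // Finds the closest intersection along the ray, if any
        // (declaration only in this sketch).
        bool intersect(const Ray& ray, Hit& hit) const;
    };

    // The recursive core: bounce until we hit a light, get absorbed,
    // escape the scene, or exhaust the bounce budget. Light found deep
    // in the recursion is attenuated at each surface on the way back.
    Vec3 radiance(const Scene& scene, const Ray& ray, int depth,
                  std::mt19937& rng) {
        if (depth <= 0) return {0, 0, 0};            // bounce limit reached
        Hit hit;
        if (!scene.intersect(ray, hit))
            return {0, 0, 0};                        // escaped: no light
        if (hit.material->isEmissive())
            return hit.material->emitted();          // ray reached a light
        Ray scattered;
        Vec3 attenuation;
        if (!hit.material->scatter(ray, hit, attenuation, scattered, rng))
            return {0, 0, 0};                        // ray was absorbed
        return attenuation * radiance(scene, scattered, depth - 1, rng);
    }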

This kind of ray tracer is very simple, and will give an unbiased solution to the light transport of a given scene. The calculations are done from the eye toward the lights because every single ray you send out from the eye is, by construction, a visible one. The same is not true if you were to send rays out from the various light sources in a scene and hope to trace them back to the eye.

Program 3 images
