Scientific Visualization

These are my required homework projects for CS5630, taught at the University of Utah in Fall of 2004 by Ross T. Whitaker.

CS5630 Scientific Visualization - Project 1

This project's main purpose was to become more familiar with VTK and Tcl/Tk, as well as to gain some skill at hacking data formats.

Part 1 : Height Fields

In this part of the assignment we plotted two different data sets as height fields. The first data set was an R2->R1 function, fragmented across two files (coordinates in one, values in the other). All I had to do was read the two files in and combine them into a set of 3D points. These points were then triangulated, and a picture was drawn from the 3D data. This part of the assignment was relatively easy, since it only required a very slight modification of a file from the class webpage. What I ended up with was imageWarp1.tcl and the images below, though I am not sure whether they are correct.
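As a minimal sketch of that read-combine-triangulate step (the file names, file format, and the use of vtkDelaunay2D here are my assumptions, since the actual script is a modified class file):

    package require vtk

    # Read (x, y) coordinates from one file and function values from the other
    set xyFile  [open "samples.txt" r]
    set valFile [open "values.txt" r]
    vtkPoints points
    while {[gets $xyFile xyLine] >= 0 && [gets $valFile v] >= 0} {
        foreach {x y} $xyLine break       ;# split "x y" into two variables
        points InsertNextPoint $x $y $v   ;# the function value becomes the height
    }
    close $xyFile
    close $valFile

    # Triangulate the scattered 3D points and draw the resulting surface
    vtkPolyData profile
      profile SetPoints points
    vtkDelaunay2D triangulation
      triangulation SetInput profile
    vtkPolyDataMapper heightMapper
      heightMapper SetInput [triangulation GetOutput]
    vtkActor heightField
      heightField SetMapper heightMapper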

The second data set was a 2D gray-scale image. The values at each pixel were used to create a third dimension of height data. This one took even less editing of a file, again available on the class webpage. The results are below.

Part 2: Contour Maps

This second part of the assignment was much more challenging. Here, reading in the data files, converting them, and plotting them was the trivial part. The challenge was to program a GUI that selects one of two data sets and interactively changes the values that control the image. Tk is a pretty straightforward software package, and after a little internet searching, I was able to build a pretty good-looking GUI. I decided to fix the 3D view, seeing as the contour maps are both 2D; this simplification made for a much cleaner user interface. One thing I would like to change, but haven't figured out yet, is the initial slider values. I would like them to have viewable values when the application opens, so that the user doesn't have to change the values just to see an image. I'm sure I will figure out how to do this in future projects.
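A minimal sketch of the kind of slider used here (the widget and callback names are made up); for what it's worth, the scale widget's set subcommand is one way to give a slider a visible starting value:

    # A contour-count slider with a non-empty starting value
    scale .numContours -from 1 -to 50 -orient horizontal \
        -label "Number of contours" -command updateContours
    .numContours set 10    ;# show a sensible image as soon as the app opens
    pack .numContours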

Again, all the methods used are pretty standard and straightforward. After a very brief search, I found all of the VTK filters I needed; I just had to build the user interface to interact with them. The result is contour.tcl, and a screenshot of the program can be seen to the right.

Questions

Q What properties does a dataset need to have in order to have contour lines that make sense (and contribute to the visualization)?

A It needs to be continuous and fairly smooth in all of its dimensions. Data that doesn't fit these criteria will produce contour lines that don't "make sense." A noisy data set with only high-frequency content will produce contour lines that really don't tell you anything, since most of that high-frequency information is lost in the contours.

Q Where are a few places that these properties can be found?

A The data sets in this project are great examples, especially Mt Hood. I can't think of a more useful (or more widely used) application of contour lines than cartography. There are many other places where you might find this kind of continuity. Another data set from this project, infra-red radiation, is a good example: if you think of how evenly heat dissipates through most of the things around us, a contour-map representation makes a lot of sense.


Color Maps

Part A - Interactive Color Mapping

In this part of the assignment, I used the Thorax data set created in the last 5630 project (points.vtk), along with the Mt Hood elevation data (stored in a grayscale image), and built a GUI that performs interactive color mapping on this data. The interaction includes changing the color map in either RGB (red, green, blue) or HSV (hue, saturation, value) color space. I chose not to label these sliders individually, as I think it is pretty intuitive how to use them, and leaving the labels out makes my GUI look much nicer. The GUI can also toggle warping the scalar value at each point into a third (z) dimension.

To the left you can see my visualization of Mt Hood, with height field and color map. To create this specific image and color map, the first step was to read in a grayscale image with VTK. The scalar values at each pixel were then extracted, and geometry was created by using these scalar values as z coordinates, giving a third dimension to this originally flat image. The color map was created by taking two values (this particular image was done in RGB color space), stepping between them with a simple for loop, and interpolating the R, G, and B channels linearly. Finally, this newly created color map is applied to the height field. In VTK functions, the process went like this:
-> read file
-> vtkImageMagnitude (extract scalars)
-> vtkImageDataGeometryFilter (put magnitude into geometry)
-> vtkWarpScalar (warp magnitude values by a certain amount)
-> vtkMergeFilter (merge geometry with scalars)
-> vtkElevationFilter (set values for color-assignment range)
-> vtkPolyDataMapper (map color map onto data)
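As a sketch, the same pipeline in Tcl. The reader class, file name, warp amount, and elevation range are assumptions, and a vtkDataSetMapper stands in for the vtkPolyDataMapper above, since vtkElevationFilter outputs a general dataset:

    package require vtk

    vtkPNMReader reader                       ;# hypothetical grayscale reader
      reader SetFileName "hood.pgm"
    vtkImageMagnitude magnitude               ;# extract scalars
      magnitude SetInput [reader GetOutput]
    vtkImageDataGeometryFilter geometry       ;# put magnitude into geometry
      geometry SetInput [magnitude GetOutput]
    vtkWarpScalar warp                        ;# warp by the scalar values
      warp SetInput [geometry GetOutput]
      warp SetScaleFactor 0.3                 ;# assumed warp amount
    vtkMergeFilter merge                      ;# merge geometry with scalars
      merge SetGeometry [warp GetOutput]
      merge SetScalars [magnitude GetOutput]
    vtkElevationFilter elevation              ;# set the color-assignment range
      elevation SetInput [merge GetOutput]
      elevation SetLowPoint 0 0 0
      elevation SetHighPoint 0 0 255          ;# assumed height range
      elevation SetScalarRange 0 255
    vtkDataSetMapper colorMapper              ;# map the color map onto the data
      colorMapper SetInput [elevation GetOutput]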

The visualization to the right differs from the Mt Hood image in that we are now working in HSV color space (this is the default visualization, currently saved by the saveView.tcl file). By limiting the scale to hue changes only, this image shows the change between positive and negative values well, using mostly two colors and blending between them in a distinct, abrupt way. It also differs from the Mt Hood data in that it did not come from an image, but from a set of discrete data samples, which had to be triangulated to form a solid surface.
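A hue-only scale like this is easy to express with vtkLookupTable; a sketch, with the exact ranges being assumptions:

    vtkLookupTable hsvMap
      hsvMap SetNumberOfColors 256
      hsvMap SetHueRange 0.0 0.667        ;# sweep hue from red to blue
      hsvMap SetSaturationRange 1.0 1.0   ;# hold saturation fixed
      hsvMap SetValueRange 1.0 1.0        ;# hold value fixed
      hsvMap Build
    mapper SetLookupTable hsvMap          ;# attach to whatever mapper draws the data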

What do you think works best, or what are the benefits of the different approaches? (with/without height fields)

It seems to me that for certain data, it helps a lot to convert the data into a height field, especially for data that makes sense as one solid surface (such as the Mt Hood data). The height field in combination with a color map shows you elevation information you probably wouldn't be able to visualize as well with just one or the other. The Thorax data doesn't seem to benefit much from the height field, but at the same time, applying it doesn't hinder the data either. I think the height field is usually worth it just for the extra level of interactivity you get by having a third dimension.

Which one is better? What are the differences? (RGB vs HSV)

I find the HSV color model much more intuitive. It seems a lot easier to get a good color scale using the HSV sliders than the RGB sliders; pretty much any combination of extremes in HSV will give you a good color scale. Tweaking the scale to make the image look better is also much easier in HSV. The advantage of RGB seems to be the ease of creating complementary color schemes, like the blue/yellow and magenta/green scales. It all really depends on your data. It seems a lot harder, though, to get a nice discretized-looking scale in RGB, when in HSV all you have to do is make a hue scale.

Part B - Extra Data set

tell us what your images show in it

I found it relatively difficult to find free scientific visualization data on the internet, but ultimately, I came across a grayscale .jpg of ocean elevation data around the South Atlantic and Indian Oceans. I found a Photoshop plug-in and converted this .jpg to a .pgm file. After that, the process was identical to visualizing the Mt Hood data.

I kind of like the idea of having a blue color where the ocean is supposed to be, and a more brown color for the land. I think this yellow-blue color map worked out pretty well in dichotomizing the land and ocean, while still giving a good sense of the depth of the ocean floor in certain areas. I think that if the elevation data for land were not completely flattened out, this visualization would not be nearly as effective. In the end, this visualization is great at identifying the deepest parts of the ocean.

Part C - Bivariate Color Mapping

This part of the assignment involved using a bivariate color map. Using a bivariate color map on a data set lets you attempt to display an extra dimension of information at each colored point. I played around with quite a few bivariate color maps, but couldn't find many that worked well (in the sense of letting you visualize more data than a normal color map). Since VTK does not support bivariate color maps, you have to sort of hack it: you create a very big univariate color map, but treat it as a bivariate map whose rows have been collapsed into one long row. All you need to do is figure out how to index into this one-dimensional array in a two-dimensional way, which is done easily with a pair of nested for loops, one counting rows and the other columns.
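A minimal sketch of that flattening trick, with a made-up choice of colors (the first variable drives red, the second drives blue):

    set rows 16
    set cols 16
    vtkLookupTable bivariateMap
      bivariateMap SetNumberOfTableValues [expr $rows * $cols]
    for {set i 0} {$i < $rows} {incr i} {
        for {set j 0} {$j < $cols} {incr j} {
            set r [expr double($i) / ($rows - 1)]
            set b [expr double($j) / ($cols - 1)]
            # row i, column j of the conceptual 2D map lands at index i*cols + j
            bivariateMap SetTableValue [expr $i * $cols + $j] $r 0.0 $b 1.0
        }
    }

The scalars fed to the mapper then have to be combined into a single index the same way, first*cols + second, before mapping.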

document which one is best

I like the bivariate map I used below because the two colors are easy to tell apart, so when you look at the image you can tell exactly which directions the gradients of Mt Hood's surface are increasing in. This kind of color map, where the direction of the variance in color is easily discernible, seems to work much better than a bivariate color map focused on showing correlations between two sets of data; showing a correlation isn't really necessary in this Mt Hood data set. I think the color map could be improved by mapping this specific data to a greater range of values, perhaps non-linearly.



Isosurfaces

Files Submitted:

isoSurface.tcl - Both programs for this assignment in one, with GUI to switch between modes
REPORT.html - this document
*.png - output images

Two Isosurfaces

Here you can see both the skull and face isosurfaces extracted from the volume data. The skin has an alpha value of 0.55.

Which isosurfaces look the most interesting, and how did you find them?

The isosurfaces that look the most interesting are the ones that show solid, whole structures in the volume data. This is why isosurfaces work so well on this head data set: both the skull and the shape of the head can be separated out of the volume data cleanly. I found these surfaces by adding a slider to my GUI that changes the isovalue currently viewed. This slider can really only be used interactively on the smaller data sets; it is much too slow on the full-sized version of the data. I just slid the slider until I found a point where a nice solid structure was visible.

To create these isosurfaces, I just used vtkContourFilter, then fed its output into a VTK mapper and then an actor.
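A sketch of that pipeline for the skin surface (the volume reader is hypothetical and the isovalue is an assumption; the 0.55 opacity is the alpha value mentioned above):

    vtkContourFilter skin
      skin SetInput [volumeReader GetOutput]   ;# hypothetical volume reader
      skin SetValue 0 500.0                    ;# assumed skin isovalue
    vtkPolyDataMapper skinMapper
      skinMapper SetInput [skin GetOutput]
      skinMapper ScalarVisibilityOff
    vtkActor skinActor
      skinActor SetMapper skinMapper
    [skinActor GetProperty] SetOpacity 0.55    ;# semi-transparent skin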

Univariate Curvature Color Maps

In these next images I have used univariate color maps derived from curvature data to color the isosurfaces. The first two images below use only the magnitude of the principal curvatures (i.e. sqrt(k1*k1 + k2*k2)), while the second two use the mean curvature (i.e. 1/2*(k1 + k2)). In addition to creating the isosurfaces as above, vtkProbeFilter had to be used after the contour filter to apply the color maps to the isosurfaces.
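A sketch of the probing step, reusing the contour filter from the sketch above (the curvature-volume reader, the lookup table, and the scalar range are assumptions): the isosurface geometry is the probe input, and the precomputed curvature volume is the source whose values get interpolated onto the surface.

    vtkProbeFilter curvatureProbe
      curvatureProbe SetInput [skin GetOutput]              ;# isosurface geometry
      curvatureProbe SetSource [curvatureReader GetOutput]  ;# curvature volume
    vtkDataSetMapper curvatureMapper
      curvatureMapper SetInput [curvatureProbe GetOutput]
      curvatureMapper SetLookupTable hsvMap                 ;# a hue color map
      curvatureMapper SetScalarRange 0.0 1.0                ;# assumed range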

Describe how the two values relate to the shape of the iso surface.

With the curvature magnitude images, the color values on the surface are quite easy to decode. As the colors move from cyan to red on the hue scale, we get increasing curvature magnitude: the closer to red a point is, the more "curved" the surface is there.

With the mean curvature values, we can tell much more about the surface at these points. I used a hue scale that does not wrap around. From this data we can establish a relationship between k1 and k2. We still need the physical shape the isosurface gives us, though, because with the mean curvature, distinct physical characteristics can get mapped to the same color; for example, a flat plane and a saddle point with equal-but-opposite curvatures will be colored the same. The mean curvature, with my hue map, really discretizes the various areas of this isosurface, so you can get a quantitative idea of curvature behavior on the surface.

What do you think works best, or what are the benefits of the different approaches?

I think that with this familiar face model, the curvature magnitude images work much better than the mean curvature images. I don't think this would be true for an isosurface of something I am not so familiar with, or haven't seen before. Just looking at the 3D rendering of the face and skull, you can get a good idea of where the saddle points, valleys, peaks, etc. are, and the curvature magnitude color map gives a good idea of where the surface changes dramatically. The mean curvature maps just add too much mess and redundancy to the data. With less familiar data, however, I think the mean curvature color map would be more helpful: it definitely differentiates convex and concave surfaces quite well, something the magnitude color map can't do.

Face Isosurface with Curvature Magnitude Hue Color Map

Skull Isosurface with Curvature Magnitude Hue Color Map

Face Isosurface with Mean Curvature Hue Color Map

Skull Isosurface with Mean Curvature Hue Color Map


Volume Rendering

Color Compositing Transfer Functions


Here you can see a comparison between two renderings: one rendered directly from volume data with a transfer function (above), and one where two sets of isosurfaces have been triangulated from the volume data and then rendered (below).

This volume renderer casts rays into the scene and intersects voxels. The transfer function takes the values at these intersected voxels and assigns each pixel a color and opacity, composited from the transfer function values at every voxel intersected along the ray.

Describe your transfer functions, what relationship do they have to the isovalues shown in your isosurface rendering?

This particular transfer function maps opacity linearly from 0 to 0.05 over voxel values 30-70. It then spikes up to opacity 0.7 at voxel value 100. The color moves linearly from a light brown to white over voxel values 25.0-100.0.

This transfer function attempts to show the skull isosurface from my isosurface rendering by spiking the opacity right around 100, the isovalue of that surface. It also shows some of the mummy wrappings by increasing slowly throughout the range of those values.
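A sketch of this transfer function with VTK's ray caster; the breakpoints come from the description above, while the reader and the exact shade of brown are assumptions:

    vtkPiecewiseFunction opacity
      opacity AddPoint  30.0 0.0     ;# opacity ramps 0-0.05 over values 30-70
      opacity AddPoint  70.0 0.05
      opacity AddPoint 100.0 0.7     ;# spike at the skull isovalue
    vtkColorTransferFunction color
      color AddRGBPoint  25.0 0.8 0.7 0.55   ;# light brown (assumed RGB)
      color AddRGBPoint 100.0 1.0 1.0 1.0    ;# white
    vtkVolumeProperty volumeProperty
      volumeProperty SetScalarOpacity opacity
      volumeProperty SetColor color
    vtkVolumeRayCastCompositeFunction compositeFunction
    vtkVolumeRayCastMapper volumeMapper
      volumeMapper SetInput [mummyReader GetOutput]   ;# hypothetical reader
      volumeMapper SetVolumeRayCastFunction compositeFunction
    vtkVolume mummy
      mummy SetMapper volumeMapper
      mummy SetProperty volumeProperty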

Do you think volume rendering the mummy dataset offers a clear advantage over isosurfacing?

For this particular dataset, I do believe that volume rendering shows a clear advantage over isosurfacing. In the last project, the skull and skin were much better defined, and isosurfacing is good at bringing out such solid, well-defined volumes. For this mummy data, however, the skull is much more fractured and uneven, and the wrappings around the skull are irregular and hard to see a shape in. The volume rendering of the mummy presents a much more solid, less noisy image of the skull; you can get a real feel for its shape, unlike in the isosurface image, where the surface is noisy and fractured.

Maximum Intensity Projection Renderings

To the right we have a comparison between maximum intensity projection (MIP) volume rendering (above) and composite volume rendering (below).

We have already seen composite volume rendering above: it composites the transfer function values for each voxel along a ray. MIP volume rendering instead displays just the maximum intensity encountered along any given ray.
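With vtkVolumeRayCastMapper, switching from compositing to MIP is essentially a one-line change to the sketch above:

    # Swap the composite ray function for a maximum-intensity one
    vtkVolumeRayCastMIPFunction mipFunction
    volumeMapper SetVolumeRayCastFunction mipFunction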

What are some advantages and disadvantages of MIP versus compositing-based volume rendering?

You can see that in the MIP image you lose the sense of a third dimension. It seems as if the mummy data has been flattened, probably because you can see through everything; it looks kind of like an x-ray. However, when you can interact with the MIP rendering, it regains this third dimension.

The MIP image is clearer than the composite volume rendering. If the dense parts of the volume data are what you are interested in, MIP is definitely the algorithm you want to use. Basically, MIP cleans up the rendering by emphasizing this one attribute of the data. However, you somewhat lose the solid structure of things, since you can see through some of the data: the teeth blend together, as does the skull. Again, though, this seems to be remedied by making the rendering interactive.

What you do lose are interesting non-dense characteristics, like the ears and eyeballs. In some instances, these are probably every bit as interesting as the bone structure.




Vector Fields

1. Test Volumes

This first test volume clearly shows a source critical point with glyphs.

This second test volume shows a saddle point with streamlines.

This third test volume shows a spiral sink point with streamlines. The critical point in each of the three test volumes is at (15, 15, 15).

2-4. Challenge Volumes - Critical Points

The first challenge volume has 3 critical points.
The second challenge volume has 5 critical points.

Volume  CP#  Location      Magnitude  Type
0       0    (39, 26, 32)  Low        Center
0       1    (25, 15, 19)  High       Source
0       2    (17, 25, 14)  High       Saddle
1       0    (35, 15, 20)  Low        Center
1       1    (55, 15, 20)  Low        Center
1       2    (45, 45, 20)  Low        Center
1       3    (65, 45, 20)  Low        Center
1       4    (20, 30, 20)  High       Source

5. Critical Point Visualizations

Volume 0, CP2
For this volume I chose streamlines, since it is a saddle point. I found all of these critical points by moving a small cube with an XYZ GUI widget. I used the vtkPointSource class to create groups of random points in a sphere of space; these points, together with the vector data, were then used as seeds for the streamline integration.
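A sketch of the seeding and integration (the vector-field reader, radius, and integration settings are assumptions; the center is CP 2's location from the table):

    vtkPointSource seeds
      seeds SetCenter 17 25 14              ;# CP 2 of volume 0
      seeds SetRadius 3.0                   ;# assumed seeding radius
      seeds SetNumberOfPoints 50
    vtkStreamLine streamer
      streamer SetInput [vectorReader GetOutput]   ;# hypothetical reader
      streamer SetSource [seeds GetOutput]
      streamer SetMaximumPropagationTime 100       ;# assumed
      streamer SetIntegrationStepLength 0.1        ;# assumed
      streamer SetIntegrationDirectionToIntegrateBothDirections
    vtkPolyDataMapper streamMapper
      streamMapper SetInput [streamer GetOutput]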

Volume 0, CP1
This point is a source. I did not use glyphs, so with this visualization, you have to take my word for it. In a larger visualization, you could spot this critical point by the colors around it. The red objects in this image, as in the last, are the critical points. The small dot off in the distance is the same saddle point pictured above.

Volume 1, CP4
For this critical point from the second challenge volume, I decided to use glyphs to make sure you can tell that this is a source critical point and not a sink. The glyphs use the same point source as the streamlines; however, there is no integration here. Presumably, each point is paired up with its nearest vector value, which is used to set the glyph's orientation, length, and color.
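One hedged way to express that pairing in VTK is to probe the vector field at the seed points and glyph the result:

    vtkProbeFilter vectorProbe
      vectorProbe SetInput [seeds GetOutput]        ;# same seed points as above
      vectorProbe SetSource [vectorReader GetOutput]
    vtkArrowSource arrow
    vtkGlyph3D glyphs
      glyphs SetInput [vectorProbe GetOutput]
      glyphs SetSource [arrow GetOutput]
      glyphs SetVectorModeToUseVector       ;# orient along the local vector
      glyphs SetScaleModeToScaleByVector    ;# length from the magnitude
      glyphs SetColorModeToColorByVector    ;# color from the magnitude
      glyphs SetScaleFactor 0.5             ;# assumed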

Volume 1, CP2
Here we have a center node. You can see the box I use to find critical points in this picture. You can also see my hue color map as the streamlines are drawn closer and closer to the critical point, where, in this case, the vector velocity goes to zero.

6. Global Challenge Volume Visualizations

Challenge Volume 0
This first visualization turned out alright, though I'm still not sure how to make it better. Instead of generating streamlines from random points, I used two parallel planes. Together, the two planes capture all three critical points. The lines start at points evenly placed on each respective plane; the blue streamlines originate from the bottom plane, and the yellow ones from the top. You can see the saddle point off to the left, the center point towards the bottom, and the source point almost at the middle of the image. You also get a good idea of the flow of the entire vector volume.
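A sketch of plane seeding (the plane positions, extents, and resolution are illustrative):

    vtkPlaneSource bottomPlane
      bottomPlane SetOrigin  0  0 10
      bottomPlane SetPoint1 60  0 10
      bottomPlane SetPoint2  0 60 10
      bottomPlane SetResolution 8 8                  ;# a 9x9 grid of seeds
    vtkStreamLine bottomStreams
      bottomStreams SetInput [vectorReader GetOutput]
      bottomStreams SetSource [bottomPlane GetOutput]
    # a second vtkPlaneSource at a higher z would seed the yellow lines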

Here I have added isosurfaces to try to get an even more global idea of what is happening. The isosurfaces represent areas where vector magnitudes have dropped below 0.8047. The center point is located within the red globe at the bottom of the image, and you can see the saddle point directly left of it. You also get a good idea of where the flow is quick and where it is slow.

In this third visualization of the first challenge volume, I have removed some of the clutter. You can still see both the saddle and center critical points, but now you can see how the field travels outside of these points. I created this image from only a single plane generating source points for integration.

Challenge Volume 1

For the second challenge volume, I realized that every single critical point lies on the z=20 plane, so it seemed natural to create all the point sources on this specific plane. The visualization for this volume turned out much better than the first. Here you can see the source with the four center points spinning off down the flow.

This was a pretty flat visualization of the sample z=20 plane. I turned up the number of source points and turned down the propagation of the streamlines. What you get is a visualization that stays pretty well within this plane. You can clearly see the center points in red and the source point in green. Looking back, this visualization may have come off better with glyphs, since then you could get a better idea of the vector flow.

This is the third and last visualization of the second challenge volume. I wanted to see what was happening outside the z=20 plane, so I set the propagation quite high and turned down the number of source points. You can see that as the vectors come off the source, they loop around to travel in almost the same direction. It also seems that after the center points, the vector field descends quite a bit.

In general, getting good vector visualizations is a very difficult task. I found that building features into my GUI was probably the easiest way to improve visualizations, since you can see the results of changes interactively. Getting a good idea of which source points will produce good visualizations is also very helpful; the critical points give you a very good guide as to where you should be placing your source points.


Tensor Fields

Strain Tensors

These first two images are simple isosurfaces of the length volume of this strain tensor data set (left) and the vector magnitude volume from the vector field data set (right). Both are taken at the value 0.7725. If there is some kind of correlation between these two images, I can't quite see it. It does seem like the one on the right could fit into the one on the left, but it wouldn't be close to a perfect fit. It seems to me that the "length" of these tensors should be more visually correlated with the vector magnitude. Then again, I'm not exactly sure how these vectors relate to the tensors; I assume they are the principal eigenvectors, but I am not sure.

The next three visualizations employ both tensor glyphs and vector glyphs. The tensor glyphs show the magnitudes of the three eigenvalues relative to each other, as well as the directions of the eigenvectors; the box sizes and orientations encode all of this information. I colored the boxes with vtkMergeFilter, as directed on the class webpage. The vector glyphs are simply oriented in the direction of the vector and scaled proportionally to its magnitude.
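A sketch of the tensor-glyph part (the reader and scale factor are assumptions; ExtractEigenvaluesOn is what makes the boxes follow the eigen-system):

    vtkCubeSource cube
    vtkTensorGlyph tensorGlyphs
      tensorGlyphs SetInput [tensorReader GetOutput]  ;# hypothetical reader
      tensorGlyphs SetSource [cube GetOutput]         ;# boxes as glyph geometry
      tensorGlyphs ExtractEigenvaluesOn  ;# scale/orient boxes by the eigen-system
      tensorGlyphs SetScaleFactor 0.5    ;# assumed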

This particular critical point is a center point.

This second critical point is a source.

This third critical point is a saddle point. You can see the directions of the strain tensors and vectors change as everything around this critical point flows in, and then everything above and below flows out.

Each of these visualizations was created by using vtkPointSource around a critical point. Both the tensor glyphs and the vector glyphs used these points, which is why there is a vector glyph paired with every single tensor glyph.

Here I have compared the streamlines from the last project with "hyperstreamlines" from the tensor data. vtkHyperStreamline would not run on my computer, so I attempted to simulate hyperstreamlines with vtkTubeFilter. Unfortunately, with my simulation, the radius does not change, nor does the orientation twist. However, I think this visualization is a bit more effective than my very cluttered streamline image; you definitely get a general feel for the strain tensor flow.
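A sketch of the simulation: ordinary streamlines wrapped in fixed-radius tubes (the radius and number of sides are assumptions):

    vtkTubeFilter tubes
      tubes SetInput [streamer GetOutput]   ;# the streamlines being volumized
      tubes SetRadius 0.3                   ;# fixed radius: the limitation noted above
      tubes SetNumberOfSides 8
    vtkPolyDataMapper tubeMapper
      tubeMapper SetInput [tubes GetOutput]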

Which tensor visualization technique (isosurfaces of a derived scalar, glyphs, hyperstreamlines) is the most effective, and why?
For this particular strain tensor dataset, I think the hyperstreamlines are the most effective: they convey a global sense of the tensor data quite clearly. The tensor glyphs were not very effective without the vector glyphs, and using glyphs on a global scale did not work at all well. However, for the next volume, glyphs are much more effective than anything else; they seem much better suited to diffusion tensors than strain tensors.

Describe the relationship you see between the strain tensor and the local vector field.
From the critical point images, it seems like the vector field travels through the strain tensors in the direction of the minor eigenvector.

Diffusion Tensors

Here I have visualized the brain diffusion tensor data with vtkTensorGlyph. Instead of using vtkPointSource, as in the last visualizations, I used a VTK threshold filter with the anisotropy volume. This let me display glyphs wherever the anisotropy of the diffusion tensor is high, which should filter out everything but the highly directional matter in the brain, the white matter, where the brain pathways are. The color map is applied according to anisotropy as well. The most anisotropic part is in the center of the brain, where the two hemispheres are connected.

Here I have used the same tensor glyphs as above, but I have included an isosurface of the very isotropic parts of the brain. I wanted to get a good outline of the outside of the grey matter in the brain. I have also made this isosurface transparent in the color mapping.

These three visualizations use all three techniques: tensor glyphs, hyperstreamlines, and isosurfacing. The only addition over the last set of visualizations is my simulated hyperstreamlines. For these, I extracted the major eigenvector from the tensor data to create the streamlines, and then used vtkTubeFilter to volumize them. Again, the drawbacks of this approach are the inability to change the radius with the changing minor eigenvectors, and the inability to see any twisting of the streamlines. However, they do show this very directional area of the brain more clearly than the glyphs alone.

How did you decide where to place the glyphs?
The assignment said we were only interested in the white matter of the brain, which is the anisotropic part. We were also given an anisotropy volume, with scalar values derived from the three eigenvalues. I first used isosurfaces with a slider in my GUI to find a value that showed good separation between white and grey matter; 0.5 looked pretty good. Then I just used a threshold filter on this volume to take out all points below 0.5.
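A sketch, assuming vtkThresholdPoints as the threshold filter, assuming the anisotropy scalars and the tensors live in the same dataset, and reusing the cube source from the strain-tensor sketch:

    vtkThresholdPoints whiteMatter
      whiteMatter SetInput [brainReader GetOutput]  ;# hypothetical reader
      whiteMatter ThresholdByUpper 0.5              ;# keep anisotropy >= 0.5
    vtkTensorGlyph brainGlyphs
      brainGlyphs SetInput [whiteMatter GetOutput]
      brainGlyphs SetSource [cube GetOutput]
      brainGlyphs ExtractEigenvaluesOn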

How did you determine where to start the hyperstreamlines?
I saw the highly anisotropic area in my other visualizations, and then used my little cube with X, Y, and Z sliders to find the approximate coordinates of this area. I then used vtkPointSource to create random points in a sphere around this area. My hyperstreamlines then originated from these points.

Why is tensor visualization hard?
I think the main reason tensor visualization is so hard is that you have so much data. We can barely visualize three-dimensional scalar fields well with volume rendering; with nine times the data, it makes sense that visualizing tensors would be an order of magnitude more difficult. The more data from each tensor you try to display, the more cluttered your visualization becomes; the more you reduce the tensors to fewer dimensions, the less of the tensor you can visualize. You can't know everything that is going on in a tensor data set with a single visualization, but through interactivity and several sets of visualizations, you can accomplish the task pretty well.
