HIPerWall Expands 3D Capabilities with New Software

By Anna Lynn Spitzer

Irvine, Calif., Oct. 17, 2008 -- Researchers at the UC Irvine division of Calit2 have developed a new way to transform enormous medical datasets into rotating, three-dimensional images, vastly increasing the potential of HIPerWall, the institute’s 200-megapixel display wall.

[Image: Three-dimensional volume rendering gives researchers a new view of enormous medical datasets.]

The breakthrough volume-rendering software can display CT scans, confocal laser-scanning microscope images and other tomographic data in 3D, transforming the room-sized visualization wall into what the researchers believe is the world's largest medical display.

The main improvement over existing techniques comes from a distributed implementation of a texture-based, direct-volume-rendering algorithm with multi-dimensional transfer functions, which allow radiologists to arbitrarily change the transparency and color of tissues. "We can make the skin and the brain transparent so that we can see, for instance, a tumor, or we can make blood vessels light up in a certain color," says Joerg Meyer, UCI assistant professor of electrical engineering and computer science. "In the past, user interfaces were mainly designed by computer scientists dealing with abstract numbers. We made the user interface relevant to radiologists and biologists by giving them a tool that automatically finds clusters in the data and suggests tissue-specific transparencies and colors."
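A transfer function of this kind can be pictured as a lookup table that maps each voxel's measured intensity to a color and an opacity, so whole tissue ranges can be hidden or highlighted. The Python sketch below shows the idea in its simplest one-dimensional form; the actual software uses multi-dimensional transfer functions, and the intensity ranges and tissue labels here are invented for illustration.

```python
import numpy as np

# RGBA lookup table: one color + opacity per 8-bit intensity value.
tf = np.zeros((256, 4), dtype=np.float32)
tf[0:80]    = [0.9, 0.7, 0.6, 0.0]   # "skin" range: rendered fully transparent
tf[80:160]  = [0.8, 0.1, 0.1, 0.9]   # "vessel" range: lit up in red
tf[160:256] = [1.0, 1.0, 0.2, 1.0]   # "tumor" range: opaque yellow

voxels = np.random.randint(0, 256, size=(64, 64, 64))  # stand-in for a scan
rgba = tf[voxels]   # per-voxel color and opacity, shape (64, 64, 64, 4)
```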

HIPerWall, a 50-screen tiled display that allows scientists to view and manipulate huge datasets in extremely high definition, has been displaying two-dimensional images and video since 2005. Now it can render 400-megavoxel volumes in full 3D, rotating the images to give scientists high-resolution views from all angles.

Meyer led the effort to design the software that incorporates voxels – volume elements – into the display wall’s capabilities. “What a pixel is in 2D, a voxel is in 3D,” he explains.
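In code, the distinction is simply one of array dimensionality, as a minimal sketch shows (the shapes are arbitrary examples, not HIPerWall's):

```python
import numpy as np

image = np.zeros((1024, 768), dtype=np.uint8)       # 2D: pixels, indexed (x, y)
volume = np.zeros((512, 512, 512), dtype=np.uint8)  # 3D: voxels, indexed (x, y, z)

# One voxel: a single sample inside the scanned body. A 512^3 volume
# already holds about 134 million voxels.
volume[100, 200, 300] = 255
```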

Dividing 3D Space
The task was daunting. The software had to be written so it could run on a distributed computing cluster in which each computer processes and renders a small piece of the total image. This undertaking, which is relatively simple in two dimensions, becomes much more complicated in three.

“When you’re working with a two-dimensional image, you just cut the image into tiles and each computer only needs access to the part of the dataset it will display,” Meyer says. But in three dimensions, the image rotates, and its individual pieces will likely move from one screen to another, requiring the data to move between computers. “You have to be able to re-organize your data and shift it between different screens,” he says.
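The sketch below illustrates the problem with a hypothetical brick of data: after the volume rotates, the brick's center projects onto a different screen of the wall, so its voxels must move to a different render node. The grid matches HIPerWall's 50 screens, assumed here to be 10 wide by 5 high; everything else is invented for illustration.

```python
import numpy as np

GRID_COLS, GRID_ROWS = 10, 5   # a 50-screen wall, assumed 10 wide by 5 high

def screen_for(point):
    """Map a point in normalized wall coordinates [0, 1)^2 to a screen index."""
    col = min(max(int(point[0] * GRID_COLS), 0), GRID_COLS - 1)
    row = min(max(int(point[1] * GRID_ROWS), 0), GRID_ROWS - 1)
    return row * GRID_COLS + col

def rotate_y(p, angle):
    """Rotate a 3D point (volume centered at the origin) about the y axis."""
    c, s = np.cos(angle), np.sin(angle)
    x, y, z = p
    return np.array([c * x + s * z, y, -s * x + c * z])

brick = np.array([0.4, 0.1, 0.0])            # center of one brick of voxel data
before = screen_for(brick[:2] + 0.5)          # simple orthographic projection
after = screen_for(rotate_y(brick, np.pi / 3)[:2] + 0.5)
print(before, after)   # 39 vs. 37: the brick's voxels must move to another node
```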

The answer was found in the meshing of two key technologies. The first, "octree subdivision," is a hierarchical approach to dividing three-dimensional space into predetermined portions. The volume is divided in half along the 'x' axis, creating left and right halves; along the 'y' axis, creating top and bottom halves; and along the 'z' axis, separating front from back. Each of the resulting eight sections is subdivided the same way, and the process continues for each sub-cube created by the previous partitioning. Empty blocks do not need to be stored or moved, leading to significant data reduction.

The result is the development of tiny “bricks” of information that are easier to manage and move from one computer to another than a set of large slices.
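A minimal sketch of the idea in Python, assuming the volume is a cubic numpy array; the function name, brick size and example data are illustrative, not Meyer's implementation:

```python
import numpy as np

def subdivide(volume, origin=(0, 0, 0), min_size=32):
    """Recursively split a cubic volume into eight octants, discarding empty
    ones, and return the non-empty leaf 'bricks' as (origin, size) tuples."""
    if not volume.any():          # empty block: nothing to store or move
        return []
    size = volume.shape[0]
    if size <= min_size:          # small enough: keep as a brick
        return [(origin, size)]
    half = size // 2
    bricks = []
    for dx in (0, half):          # split along x (left/right),
        for dy in (0, half):      # along y (top/bottom),
            for dz in (0, half):  # and along z (front/back): eight sub-cubes
                sub = volume[dx:dx + half, dy:dy + half, dz:dz + half]
                child = (origin[0] + dx, origin[1] + dy, origin[2] + dz)
                bricks += subdivide(sub, child, min_size)
    return bricks

# Example: a 256^3 scan in which only one corner contains tissue.
vol = np.zeros((256, 256, 256), dtype=np.uint8)
vol[:64, :64, :64] = 1
print(len(subdivide(vol)))   # prints 8: only the occupied corner yields bricks
```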

Gradual Rendering
The second and more important key technology supporting the software is known as “wavelet decomposition,” a process that breaks down details in the images, storing them in separate files and rendering the data at different resolutions. This technique keeps the computers from slowing down as they process massive datasets that can easily be as large as or larger than the hard drive on an average desktop computer. Initially, a coarse resolution of the data – a preview image – displays; then high-resolution detail is added gradually from the files.
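One level of such a decomposition can be sketched in a few lines of Python using the simplest (Haar-style) scheme: average each 2x2x2 block of voxels into one coarse voxel, and keep the leftover differences as the detail file. This is a generic illustration of the principle, not the specific wavelet used on HIPerWall.

```python
import numpy as np

def haar_level(volume):
    """One Haar-style level: average each 2x2x2 block into a coarse voxel,
    and keep the residual so full resolution can be restored later."""
    blocks = volume.reshape(volume.shape[0] // 2, 2,
                            volume.shape[1] // 2, 2,
                            volume.shape[2] // 2, 2)
    coarse = blocks.mean(axis=(1, 3, 5))
    upsampled = np.repeat(np.repeat(np.repeat(
        coarse, 2, axis=0), 2, axis=1), 2, axis=2)
    detail = volume - upsampled          # stored separately, loaded on demand
    return coarse, detail

vol = np.random.rand(256, 256, 256).astype(np.float32)
coarse, detail = haar_level(vol)         # repeat on `coarse` for more levels

# Reconstruction is exact (up to float rounding): upsampled coarse + detail.
restored = np.repeat(np.repeat(np.repeat(
    coarse, 2, axis=0), 2, axis=1), 2, axis=2) + detail
assert np.allclose(restored, vol, atol=1e-5)
```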

“If you copy a dataset that large from one hard drive to another it takes 20 minutes. We are not only copying it, we are reading and processing the data, which would normally take about the same amount of time,” Meyer says. “But if you want to display and render a 3D image that rotates, you only have about 100 milliseconds. You can’t wait 20 minutes.”

The low-resolution preview image, he says, provides a first orientation. Once the image has rotated into the required position, the computer stops it and progressively adds data from the hard drive, refining the image one step at a time until full resolution is achieved.
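Putting the pieces together, the refinement loop itself is simple: show the coarsest level immediately, then repeatedly upsample and add the next detail file until the full-resolution volume is restored. The sketch below is self-contained Python under the same Haar-style assumption as above; the render() stub is a hypothetical stand-in for the wall's actual renderer, which the article does not describe.

```python
import numpy as np

def down(v):
    """Halve the resolution by averaging each 2x2x2 block (coarse preview)."""
    return v.reshape(v.shape[0] // 2, 2, v.shape[1] // 2, 2,
                     v.shape[2] // 2, 2).mean(axis=(1, 3, 5))

def up(v):
    """Nearest-neighbor upsample back to the next finer grid."""
    return np.repeat(np.repeat(np.repeat(v, 2, axis=0), 2, axis=1), 2, axis=2)

def render(v):
    """Hypothetical stand-in for the display wall's renderer."""
    print("showing volume at resolution", v.shape)

vol = np.random.rand(256, 256, 256).astype(np.float32)

# Offline: build a three-level pyramid, keeping the detail residual per level.
details, coarse = [], vol
for _ in range(3):
    finer = coarse
    coarse = down(finer)
    details.append(finer - up(coarse))   # the "separate file" for this level

# Online: the coarse preview displays first, within the ~100 ms budget ...
render(coarse)
# ... then, once rotation stops, each detail level is streamed back in.
for detail in reversed(details):
    coarse = up(coarse) + detail
    render(coarse)   # the last pass restores the volume at full resolution
```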

Octree and wavelet decomposition techniques are routinely used individually in other applications, according to Meyer, but they provide a one-of-a-kind software backbone when they’re joined. “The combination of octree and wavelet together is a unique technology which allows us to render large medical data sets in real time. We are very excited about this accomplishment,” he says.

Branching Out
The software is opening new doors in research as well as medicine. When Meyer recently introduced it to other biomedical engineering faculty, it attracted interest from several colleagues. Three UCI professors are currently using it on HIPerWall for their research: one project examines collagen fibers in the cornea to determine their role in corneal deformities; a second studies plaque deposits in mouse blood vessels; and a third scrutinizes the relationship between bone density and the success of dental implants.

Because the technology displays three-dimensional images from stacked cross-sections, Meyer also sees potential applications in civil engineering. “If you have an earthquake model with different layers of soil stacked on top of each other, for us, it’s exactly the same as CT data,” he says.

And since the technology is completely scalable, researchers expect it could be adopted by hospitals, medical offices and other users in the near future.

“It is ready for commercialization, and we foresee a wide range of applications ranging from surgery rooms to command centers,” Meyer says.

The team, which includes graduate students Sebastian Thelen and Li-Chang Cheng, draws on CGLX and other frameworks developed by researchers Falko Kuester and Kai Doerr at Calit2 UCSD, and Stephen Jenks and Sung-Jin Kim at Calit2 UCI. The group intends to refine the interface to make it more user-friendly, and to continue improving the software’s speed, visual quality, color schemes and adaptability.

Meyer, who first presented the approach three years ago at an IEEE conference, is delighted with his results on HIPerWall. “This is something we’ve been working on for several years now and we have finally accomplished our goal. We are very satisfied.”