UC San Diego Unveils World’s Highest-Resolution Scientific Display System

Calit2 Also Releases New Version of CGLX Cluster-Based Visualization Framework

San Diego, CA, July 9, 2008  -- As the size of complex scientific data sets grows exponentially, so does the need for scientists to explore the data visually and collaboratively in ultra-high resolution environments. To that end, the California Institute for Telecommunications and Information Technology (Calit2) has unveiled the highest-resolution display system for scientific visualization in the world at the University of California, San Diego.

HIPerSpace NASA Blue Marble
Visitors from the National Geographic Society get a first look at the 70-tile HIPerSpace wall at Calit2 on the UCSD campus.

The Highly Interactive Parallelized Display Space (HIPerSpace) features nearly 287 million pixels of screen resolution – more than one active pixel for every U.S. citizen, based on the 2000 Census.

The HIPerSpace is more than 10 percent bigger (in terms of pixels) than the second-largest display in the world, constructed recently at the NASA Ames Research Center. That 256-million-pixel system, known as the hyperwall-2, was developed by the NASA Advanced Supercomputing Division at Ames, with support from Colfax International.

The expanded display at Calit2 is 30 percent bigger than the first HIPerSpace wall at UCSD, built in 2006. That system was moved to a larger location in Atkinson Hall, the Calit2 building at UCSD, where it was expanded by 66 million pixels to take advantage of the new space. The system was used officially for the first time on June 16 to demonstrate applications for a delegation from the National Geographic Society.

HIPerSpace display wall at Calit2 UCSD
Some members of Kuester's lab at Calit2 in front of the HIPerSpace wall: (front l-r) Kai-Uwe Doerr, Falko Kuester, Daniel Knoblauch; (back l-r) So Yamaoka, Michael Olsen, former member Iman Sadeghi, Jason Kimball, Kevin Ponto.
“Amazingly it took our team less than a day to tear down the original wall, relocate and expand it,” said Falko Kuester, principal investigator of the HIPerSpace system. “The higher resolution display takes us more than half-way to our ultimate goal of building a half-billion-pixel tiled display system to give researchers an unprecedented ability to look broadly at large data sets while also zooming in to the tiniest details.”

Kuester is the Calit2 Professor of Visualization and Virtual Reality, and associate professor in the Jacobs School of Engineering’s departments of Structural Engineering as well as Computer Science and Engineering. He also leads the Graphics, Visualization and Virtual Reality Lab (GRAVITY), which is developing the HIPerSpace technology.

Calit2’s expanded HIPerSpace is an ultra-scale visualization environment developed on a multi-tile paradigm. The system features 70 high-resolution Dell 30” displays, arranged in fourteen columns of five displays each. Each 'tile' has a resolution of 2,560 by 1,600 pixels – bringing the combined, visible resolution to 35,840 by 8,000 pixels, or more than 286.7 million pixels in all. “By using larger, high-resolution tiles, we also have minimized the amount of space taken up by the frames, or bezels, of each display,” said Kuester. “Bezels will eventually disappear, but until then, we can reduce their distraction by keeping the highest possible ratio of screen area to each tile’s bezel.” Including the pixels hidden behind the bezels of each display, which give the "French door" appearance, the effective total image size is 348 million pixels.
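
The stated totals follow directly from the tile geometry; the short calculation below simply rechecks them. The per-tile resolution and the 14-by-5 layout come from the paragraph above, while the 348-million-pixel effective figure is taken as given rather than derived.

    # Back-of-the-envelope check of the HIPerSpace pixel counts quoted above.
    COLS, ROWS = 14, 5                 # 70 Dell 30" tiles: 14 across, 5 high
    TILE_W, TILE_H = 2560, 1600        # native resolution of each tile

    visible_w, visible_h = COLS * TILE_W, ROWS * TILE_H
    visible_pixels = visible_w * visible_h
    print(f"visible: {visible_w:,} x {visible_h:,} = {visible_pixels:,}")   # 286,720,000

    # The 348-million-pixel "effective" figure also counts pixels logically hidden
    # behind the bezels (the "French door" effect); the difference is the bezel share.
    effective_pixels = 348_000_000     # quoted in the article, not derived here
    print(f"hidden behind bezels: ~{effective_pixels - visible_pixels:,}")  # ~61,280,000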

HIPerSpace wall at Calit2

             
Highest-Resolution, Multi-Tile Displays
(all numbers in pixels)

       HIPerSpace:      286,720,000   (Calit2, UC San Diego)
       hyperwall-2:     256,000,000   (NASA Ames)
       HIPerWall:       204,800,000   (Calit2, UC Irvine)
       Varrier:         124,800,000   (Calit2, UC San Diego)
       LambdaVision:    105,600,000   (UIC Electronic Visualization Lab)
       OzIPortal:        81,920,000   (University of Melbourne)

At 31.8 feet wide and 7.5 feet tall (9.7m x 2.3m), the HIPerSpace is already being used by a wide range of research groups at UC San Diego that need to view their largest data sets while also drilling down to the smallest elements on the same screen. A team from the Center of Interdisciplinary Science for Art, Architecture and Archaeology (CISA3) went to Florence to laser-scan the main hall of the Palazzo Vecchio, and the center's researchers at Calit2 can now manipulate the resulting computer model, which depicts all 2.5 billion data points, and explore the space in real time. Other researchers use the wall to model the impact of seismic activity on structures, visualize climate-change predictions and study the structure of the human brain, to name a few such applications.

In order to run simulations and explore data interactively, the developers of the HIPerSpace have built a large computer and graphics processing cluster into the environment. The wall is powered by 18 Dell XPS 710/720 computers with Intel quad-core central processing units (CPUs) and dual nVIDIA FX5600 graphics processing units (GPUs). A head node and six streaming nodes complete the hardware pool, for a total of 100 processor cores and 38 GPUs. On its own, the HIPerSpace system offers roughly 20 teraflops of peak processing power and 10 terabytes of storage, but its access to computing and storage capacity increases dramatically because the wall is an integral part of the National Science Foundation-funded OptIPuter infrastructure on, and beyond, the UCSD campus. That infrastructure includes the so-called "OptIPortal" tiled display systems (some with as few as four tiles) that are the primary end-points for scientists using it.
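
One plausible way to account for the core and GPU totals in the previous paragraph is sketched below. The assumptions that every node contributes a quad-core CPU and that the two GPUs beyond the 36 on the display nodes sit in the head node are the example's, not the article's.

    # One plausible accounting for the "100 cores / 38 GPUs" totals quoted above.
    display_nodes, streaming_nodes, head_nodes = 18, 6, 1

    cores_per_node = 4                         # quad-core CPUs (stated for the XPS nodes,
                                               # assumed for the head and streaming nodes)
    total_cores = (display_nodes + streaming_nodes + head_nodes) * cores_per_node
    print(total_cores)                         # 100

    gpus = display_nodes * 2                   # dual FX5600s per display node = 36
    gpus += 2                                  # assumption: two more GPUs on the head node
    print(gpus)                                # 38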

“The HIPerSpace is the largest OptIPortal in the world,” said Calit2 Director Larry Smarr, a pioneer of supercomputing applications and principal investigator on the OptIPuter project. “The wall is connected by high-performance optical networking to the remote OptIPortals worldwide, as well as all of the compute and storage resources in the OptIPuter infrastructure, creating the basis for an OptIPlanet Collaboratory."

HIPerSpace at Calit2


HIPerSpace: By the Numbers

Number of tiles: 70
(fully supported in networked configuration)
Display resolution: 35,840 x 8,000 pixels, 286,720,000 pixels total
Number of display nodes: 18
Number of streaming nodes: 6
Control and development nodes: 3
Combined HIPerSpace-HIPerWall connectivity: 491,520,000 pixels in distributed configuration

Hardware
18 Dell XPS710 w/ nVIDIA Quadro FX5600s
72 Dell 3007WFP-HC, 30” Displays
2 Dell 2004WFP, 24” Displays
6 Shuttle SG31G2
2 24-port SMC switches with 10Gb uplink

Operating System
ROCKS/Linux

Middleware
CGLX

“We have full access to the OptIPuter resources, which drastically increase the CPUs, GPUs and storage at our disposal,” added Kuester. “Nodes are interconnected via a dedicated gigabit subnet and tied into the OptIPuter fabric with a 10 Gigabits-per-second [Gbps] uplink.”

In addition to 10Gbps connectivity to resources at nine locations on the UCSD campus, including Calit2 and the San Diego Supercomputer Center (SDSC), the OptIPuter provides the HIPerSpace system with up to 2Gbps in dedicated fiber connectivity with its precursor HIPerWall at Calit2 on the UC Irvine campus (and its roughly 205 million pixels). As a result, scientists can gather simultaneously in front of the walls in San Diego and Irvine and explore, analyze and collaborate in unison while viewing real-time, rendered graphics of large data sets, video streams and telepresence videoconferencing across nearly half a billion pixels.

HIPerSpace is serving as a visual analytics research space with applications in Earth systems science, chemistry, astrophysics, medicine, forensics, art and archaeology, while enabling fundamental work in computer graphics, visualization, networking, data compression, streaming and human-computer interaction.

In particular, HIPerSpace is a research testbed for visualization frameworks needed for massive resolution digital wallpaper displays of the near future that will leverage bezel-free tiles and provide uninterrupted visual content.

Release of CGLX Version 1.2.1

The most notable of these frameworks is the Cross-Platform Cluster Graphics Library (CGLX), which introduces a new approach to high-performance, hardware-accelerated visualization on ultra-high-resolution display systems. It provides a cluster management framework, a development API, and a selected set of cluster-ready applications. Coinciding with the launch of the expanded HIPerSpace system, Calit2 today announced the official release of CGLX version 1.2.1, available for download at http://vis.ucsd.edu/~cglx. "There is no reason why you need to start from scratch every time you want to program an application for a visualization cluster," said CGLX developer Kai-Uwe Doerr, project scientist in Kuester's lab. "CGLX was developed to enable everybody to write real-time graphics applications for visualization clusters. The framework takes care of networking, event handling, access to hardware-accelerated rendering, and some other things. Users can focus on writing their applications as if they were writing them for a single desktop.”
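
CGLX itself exposes a C/OpenGL-style API, which is not reproduced here. Purely to illustrate the idea Doerr describes, the hypothetical sketch below shows how a framework of this kind can map one global scene onto the screen-space rectangle owned by each render node, so that application code never has to know how many tiles sit behind it; all of the names in the sketch are invented for illustration.

    # Hypothetical illustration (not the CGLX API): every render node draws the same
    # global scene, restricted to the wall region owned by its own tile.

    TILE_W, TILE_H = 2560, 1600          # one Dell 30" tile
    COLS, ROWS = 14, 5                   # HIPerSpace layout
    WALL_W, WALL_H = COLS * TILE_W, ROWS * TILE_H

    def tile_window(col, row):
        """Screen-space rectangle (x, y, w, h) that the node at (col, row) owns."""
        return col * TILE_W, row * TILE_H, TILE_W, TILE_H

    def tile_view(col, row):
        """The same rectangle in [0, 1] wall coordinates.  A cluster framework feeds
        this to each node's projection setup, while the application keeps issuing
        drawing commands for the full scene and never sees the tiling."""
        x, y, w, h = tile_window(col, row)
        return x / WALL_W, y / WALL_H, w / WALL_W, h / WALL_H

    # Example: the node driving the tile in column 3, row 1.
    print(tile_view(3, 1))               # (0.214..., 0.2, 0.0714..., 0.2)

The point is only that this per-tile bookkeeping lives inside the framework rather than in the application, which is what lets a program written for a single workstation fill all 70 tiles.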

CGLX on HIPerSpace
"Spraying" CGLX onto the HIPerSpace wall
With the emergence of OptIPortal technology, ultra-high resolution multi-tile display environments are no longer limited to a few select research facilities with highly specialized research teams supporting them. As a result, an intuitive yet powerful development framework is needed that supports fundamental research while enabling experts as well as novice users to utilize these systems.  From a high-level view, CGLX creates a distributed, parallel graphics context and manages its state and events transparently – allowing the user to focus on content and context rather than how render nodes and displays are combined to show the final visual.  CGLX enables OpenGL programs, developed for a single workstation, to be executed on a large-scale tiled visualization grid with minimal or no changes to the original code. The distributed nature of the framework supports and encourages the development of programs to generate visual analytics infrastructures, which enable researchers to collaboratively view, interrogate, correlate and manipulate data in real time with visual resolutions well beyond a single workstation. Key features of the framework include:
  - Cross-platform, hardware-accelerated rendering (UNIX and Mac OS X support);
  - Synchronized, multilayer OpenGL context support;
  - Distributed event management; and
  - Scalable multi-display support.

Applications using CGLX include a real-time viewer for gigapixel images and image collections, video playback, video streaming, and visualization of multi-dimensional models. The CGLX framework is already used by nearly all 90 megapixel-plus OptIPortals worldwide, and it is available for Linux (Fedora, Red Hat, SUSE), Rocks Cluster Systems (bundled in the hiperroll), and Mac OS X (Leopard and Tiger, for PPC and Intel). CGLX is so flexible that it can even be scaled down to run on a commodity laptop. "With CGLX," explained Falko Kuester, "researchers can finally focus on solving demanding visualization and data analysis challenges on next-generation visual analytics cyberinfrastructure."

Spitzer Survey
More than 800,000 frames from the Spitzer Space Telescope were stitched together to make this portrait of dust and stars radiating in the inner Milky Way. An application developed for the HIPerSpace wall allows Calit2 to display this and other large data sets locally while connecting to remote storage clusters.
Two researchers in Kuester’s lab – Kevin Ponto and So Yamaoka – are developing visual analytics techniques to display gigapixel imagery at interactive (real-time) speeds on ultra-high resolution displays, notably the HIPerSpace wall. In a forthcoming publication, Ponto and Yamaoka demonstrate an application they developed on top of CGLX for use on the HIPerSpace wall. It uses OptIPuter networking to connect to remote storage clusters hosting target data sets, including the Spitzer Space Telescope Survey (for which each image of the inner Milky Way is 24,752 by 13,520 pixels), and NASA’s Blue Marble visualizations of the Earth at monthly intervals (86,400 by 43,200 pixels each).

“These ultra-scale visualization techniques load data adaptively and progressively from network-attached storage, requiring only a small local memory footprint on each display node, while avoiding data replication,” explained graduate student Ponto. “All data is effectively loaded on demand in accordance with the locally available display resources.” Added fellow Computer Science and Engineering Ph.D. student Yamaoka: “A render node driving a single four-megapixel display, for example, will only fetch the data needed to fill that display at any given point in time. If the viewing position is updated, the needed data is again fetched, on demand.”
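
Ponto and Yamaoka's actual implementation is not reproduced here; the hypothetical sketch below only illustrates the general on-demand pattern the two researchers describe, in which a render node fetches from remote storage just the image tiles that intersect its own viewport and fetches again as the view changes. The tile size, function names and cache size are assumptions made for the example.

    from functools import lru_cache

    TILE = 512   # assumed side length, in pixels, of one image tile on remote storage

    @lru_cache(maxsize=1024)                 # small local cache, no full replication
    def fetch_tile(level, tx, ty):
        """Placeholder for a network fetch from network-attached storage."""
        return f"tile(level={level}, x={tx}, y={ty})"

    def tiles_for_viewport(x, y, w, h, level=0):
        """Fetch only the tiles that intersect this node's viewport at one
        resolution level; called again whenever the viewing position changes."""
        first_tx, first_ty = x // TILE, y // TILE
        last_tx, last_ty = (x + w - 1) // TILE, (y + h - 1) // TILE
        return [fetch_tile(level, tx, ty)
                for ty in range(first_ty, last_ty + 1)
                for tx in range(first_tx, last_tx + 1)]

    # A node driving a single 2,560 x 1,600 display fetches only what it can show.
    visible = tiles_for_viewport(x=10_240, y=4_800, w=2560, h=1600)
    print(len(visible), "tiles fetched for this node")   # 20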

Related Links

GRAVITY Lab at UC San Diego 
HIPerSpace 
CGLX
NASA Spitzer Space Telescope Survey 
NASA Blue Marble 
Intel 
nVIDIA
Dell 

Media Contacts

Doug Ramsey, 858-822-5825, dramsey@ucsd.edu