By Tim Curns, Assistant Editor, HPCwire
12.01.03 --
HPCwire: What are your impressions of SC2003?
Smarr: I was at Supercomputing '00 and most years before then, but I hadn't been to SC01 or SC02. I'm very impressed with the scale of the exhibit and the balance that has come to the conference. They've clearly taken high performance grids very seriously. This is the conference to go to if you want to find out about high performance computing, high performance networking, high performance storage, and high performance visualization -- and the integration of all of that. So I'm very pleased to see that. It's very frustrating to have to go to "component" conferences, because you only get a piece of the story there. But the Supercomputing conference is taking a systems integration approach, so we can begin to understand how to modify each of the components so that they become optimized as part of the whole, rather than being the best parts unto themselves. As a result, I think that the Supercomputing conference is going to have a major impact on industry directions.
HPCwire: How has your life changed since moving to Southern California?
Smarr: Since I resigned from NCSA in February 2000 and went out to UC San Diego in August of 2000, I've been going down a very different path than I was on directing a large, federally funded supercomputer center. But oddly enough, I've ended up back at the Supercomputing conference after my journey into the wilderness. People this week have been asking me how that happened. What I realized after I left NCSA was that if you look back over the history of the NSF supercomputer centers, you see that they not only made great contributions to computational science, but that historically their impact on the entire Internet information infrastructure was also quite profound. The linking together of the five NSF supercomputer centers in 1985 to create the NSF backbone, adopting the ARPAnet technology of TCP/IP, led directly to the creation of today's global Internet. The growth of the Web was greatly accelerated by NCSA Mosaic, and the CAVE emerged out of the partnership between the Electronic Visualization Lab and NCSA.
So I thought, why don't we abstract that principle and look at the future of the Internet itself? That's what I decided would be the theme of the new institute that I'm directing, the California Institute for Telecommunications and Information Technology (Calit²). It is one of four California Institutes for Science and Innovation with the mission of enabling interdisciplinary collaboration. For instance, my institute is a partnership between UCSD and UC Irvine, but it has also partnered with the Information Sciences Institute at USC, with SDSU, and then, through federal grants, with the Electronic Visualization Lab at the University of Illinois at Chicago and Joe Mambretti's lab at Northwestern. We have about 100 to 200 faculty who come together from all the research areas necessary to understand how the Internet moves forward technologically, as well as the application areas that will be transformed by this new Internet.
Two major areas we see rapidly developing in the Internet are: 1) the all-optical core of the Internet -- looking at wavelength division multiplexing and how very high speed, multi-gigabit, dedicated optical links can couple to Linux clusters in ways that give us very high performance for interacting with large data objects at a distance. That's what led to the NSF-funded OptIPuter project, which is anchored between Calit² in Southern California and the University of Illinois at Chicago; and 2) the movement of the Internet throughout the physical world via wireless technology. Partnered with Calit² is the Center for Wireless Communications at UCSD, which has about 20 faculty looking deeply into the future of wireless. And [UCSD Jacobs School of Engineering computer science and engineering professor] Andrew Chien is developing a new Center for Networked Systems that is also coupled with Calit². So it brings together faculty, staff, and students studying everything from novel devices and new materials all the way up to virtual reality and applications such as Mark Ellisman's Biomedical Informatics Research Network (BIRN).
So structurally, the supercomputer centers are set up by a large federal grant. The federal core grant is the central funding mechanism, and then around that you aggregate some state funding and some industrial funding. Here, the core was a State of California capital grant for two new buildings to house Calit², to which we had to get a 2:1 match from industry and from federal grants. That involved many faculty members working on joint proposals, working in teams with industry, and so forth. So we do get federal funding, but it's more the individual faculty members who get the federal funding. Our institute helps facilitate the formation of these interdisciplinary teams to go after what the government wants to see -- not just single-investigator grants, but these larger-scale efforts as well. BIRN at the NIH would be a good example of that team approach.
HPCwire: What was the most impressive thing you've seen at SC2003?
Smarr: I think, without question, the most impressive thing I've seen was Phil Papadopoulos' demo with Sun Microsystems. Their goal was to start with a set of 128 PC nodes, to construct a 128-node Linux cluster on the show floor, using Rocks software to integrate it together, and then to be running applications in less than two hours. In fact, it only took one hour and fifteen minutes. That blew my mind! I think it demonstrates that we have another phase coming in bringing HPC and the Grid to individual research labs in all of our universities. Right now, it's true that if you go to almost any campus and walk into the computational chemistry labs, computational engineering labs, and computational astronomy labs, you'll find that they all have Linux clusters -- but they're kind of thrown together by chemists and physicists, rather than computer scientists. We really don't have time for that; we've got to get on with doing the science. I think the next step will be a standards-based recipe for scalable compute, storage, or visualization Linux clusters, utilizing standards like Rocks. This will mean that the end user can, literally in hours, add more compute, storage, or visualization power to their laboratory. I sort of thought of this as an abstract goal before, but Papadopoulos' demo with Sun made me a believer.
So my institute will be working very closely with Phil and SDSC on developing this recipe approach, prototyping it in support of the BIRN project, for instance. Ultimately, the Grid isn't going to be used unless we have a "receptor" for the Grid in each of our research campus laboratories. That means we've got to have multi-terabytes of storage, teraflops of computing power, and tens of millions of pixels of visualization display space driven by graphics clusters. Until we get that, you won't see the Grid in take-off mode. It's all about the end user. If end users can't take advantage of these federated repositories from all these different scientific experiments in their laboratories, they won't use them. I really saw the future here with that demonstration.
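[Editor's note: The "recipe" idea Smarr describes -- a lab declaring how much compute, storage, and visualization capacity it wants and letting standard software turn that into a working cluster -- can be illustrated with a minimal sketch. The example below is purely hypothetical: the NodeSpec type and build_plan function are invented for illustration and are not the Rocks toolkit's actual interface. It simply shows what a declarative cluster recipe, expanded into a node-by-node build plan, might look like.]

# Hypothetical sketch of a declarative cluster "recipe" (not the Rocks API).
from dataclasses import dataclass

@dataclass
class NodeSpec:
    role: str          # "compute", "storage", or "viz"
    count: int         # how many nodes of this role to build
    appliance: str     # name of the software image to install on each node

def build_plan(specs):
    """Expand a recipe into an ordered list of (hostname, appliance) pairs."""
    plan = []
    for spec in specs:
        for i in range(spec.count):
            plan.append((f"{spec.role}-{i:03d}", spec.appliance))
    return plan

if __name__ == "__main__":
    # A lab-scale recipe: add compute, storage, and display power declaratively.
    recipe = [
        NodeSpec(role="compute", count=128, appliance="compute-appliance"),
        NodeSpec(role="storage", count=8,   appliance="storage-appliance"),
        NodeSpec(role="viz",     count=4,   appliance="viz-appliance"),
    ]
    for hostname, appliance in build_plan(recipe):
        print(f"install {appliance} on {hostname}")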
HPCwire: Speaking of the future, you did an interview with HPCwire's Alan Beck in 1998 where he asked you to describe your vision of HPC circa 2003 -- and circa 2023. How close did your predictions come? And how would you now change your visions for 2023? (http://www.dast.nlanr.net/Articles/981113SmarrInterview.html)
Smarr: Well, it seems as though I had it roughly right, but the Intel IA-64 processor hasn't come to dominate the world yet. There's much more of a battle for the 64-bit space than I had anticipated. The other thing is that shared memory in a distributed computing environment has very little support right now. I would have expected more by now. Perhaps it's just because people have learned to make do with distributed memory. But I said, "researchers will analyze these simulations using Grid-coupled tele-immersive environments," and that's exactly what we're seeing here. I think I was right on with that. And this digital fabric that I talked about is in fact just what we're seeing as well. Also, I said that broadband wireless would become much more widespread -- "where there's air, there's data." Well, at this conference, I have not just 802.11 wireless, but cellular Internet wireless as well. So it's basically true.
HPCwire: So what do you now see happening in 2023?
Smarr: Well, as I said to Alan, the difficulty with projecting forward 20 years is that during those 20 years, we're going to see what I call "the perfect storm." This is the collision and interaction of three separate exponential drivers. Prediction is possible if you have just one exponential component of the technology world, say, Moore's Law. But what we are going to see is the interaction of biology (which is going through exponential growth in understanding how to deal with individual molecules and their coding and communication systems) with IT and telecom, such as wireless and optics. Then that will all interact with the nanotechnology world. When you get below the hundred-nanometer scale, biological molecules, nanotechnology, and information-carrying devices like viruses are all the same size. So why not just put them together? So, over the next 20 years, we're going to have a vastly higher premium put on interdisciplinary teams than we've seen before, because nobody can possibly be an expert in all of these areas simultaneously. Only a team pulled together with the best people in each of these areas can possibly make the system integration impact that I expect to see from this "perfect storm."
Copyright 1993-2003 HPCwire http://www.hpcwire.com/