Workshop Convenes Best Minds in Data Storage to Break Computing Bottlenecks

By Tiffany Fox, (858) 246-0353, tfox@ucsd.edu

San Diego, Calif., April 17, 2012 — To the uninitiated, UC San Diego’s annual “Non-Volatile Memories Workshop” sounds like some kind of group therapy session for those recovering from past emotional traumas.

Participants in NVM workshop
More than 200 academics and industry representatives attended the third annual Non-Volatile Memories Workshop.

But for those in the technological know, non-volatile memories (NVM) are crucial components of modern computing systems, components that make it possible to store increasingly large amounts of information in smaller spaces, at faster data transfer speeds and (if the industry has its way) at lower cost to the consumer. 

All of this is contingent, however, on busting research and development bottlenecks that keep the latest and greatest advances in NVM from entering the marketplace. The third annual Non-Volatile Memories Workshop was a prime opportunity for more than 200 academics and industry representatives from this esoteric but influential field to present their research and stretch this continually evolving technology to the limit.

What is NVM, anyway?

NVM, in its most basic sense, is what makes it possible for you to turn off your computer (or unplug your flash drive) and have it still ‘remember’ the last draft of the novel you’ve been working on. By contrast, most computer memory consists of a volatile form of random-access memory (RAM). RAM is to blame when, the second your computer shuts down, it ‘forgets’ it had Photoshop or Google Chrome loaded, which means you have to load them up again when you reboot. 
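To make the distinction concrete, here is a minimal sketch in Python (the file name and draft text are invented for this example): data held only in a program’s variables lives in volatile RAM and disappears when the machine powers down, while data written out to an SSD, hard disk or flash drive can be read back after a restart.

    # Minimal illustration of volatile vs. non-volatile storage.
    # 'draft.txt' is a hypothetical file name used only for this example.

    draft = "Chapter 1: It was a dark and stormy night..."  # lives in RAM (volatile)

    # Persist the draft to non-volatile storage (SSD, hard disk, flash drive).
    with open("draft.txt", "w") as f:
        f.write(draft)

    # After a reboot the in-memory variable is gone, but the file can be re-read.
    with open("draft.txt") as f:
        recovered = f.read()
    print(recovered == draft)  # True: the storage device 'remembered' the draft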

What makes the newest forms of NVM especially appealing – in particular solid-state drives (SSDs) built from flash memory – is that they continue to improve in terms of speed.

Paul Siegel is a professor of Electrical and Computer Engineering at UC San Diego who works on error-correction coding techniques (algorithms that modify the information before storing it in order to help detect and fix errors that might occur when the information is retrieved from the memory). He is also the outgoing director of the Center for Magnetic Recording Research, which sponsored the workshop.
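The idea behind error-correction coding is easiest to see with a toy example. The sketch below, written in Python for this article rather than drawn from Siegel’s own work, uses a simple repetition code: each bit is stored three times, and a majority vote on readback corrects any single corrupted copy. Real NVM controllers use far more efficient codes, but the principle of adding redundancy before writing is the same.

    # A toy error-correcting code: redundancy is added before data is stored so
    # that errors on readback can be detected and fixed. This is a textbook 3x
    # repetition code, not a code actually used inside commercial drives.

    def encode(bits):
        # Store each bit three times.
        return [b for b in bits for _ in range(3)]

    def decode(stored):
        # A majority vote over each group of three recovers the original bit
        # even if one copy in the group was corrupted.
        return [int(sum(stored[i:i + 3]) >= 2) for i in range(0, len(stored), 3)]

    data = [1, 0, 1, 1]
    stored = encode(data)
    stored[4] ^= 1            # simulate a single bit error in the memory
    assert decode(stored) == data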

Noted Siegel: “NVMs like magnetic tape and disk drives have been used since the late 1950s and have seen dramatic improvements in their capacity to store data, but they’ve fallen short with respect to speed, or how much time it takes to fetch a piece of information, pull it back into the processor and do something with it. 

“I think the excitement people have with regard to some of these newer NVMs is that they offer the kind of rapid access that leads to a rethinking of the whole computer system architecture.”

Novel system architectures are the domain of Steve Swanson, professor of Computer Science and Engineering at UCSD and, along with Siegel, co-director of the Non-volatile Systems Laboratory (NVSL), which is housed in the UCSD Department of Computer Science and Engineering and the campus division of the California Institute for Telecommunications and Information Technology (Calit2).

“These memory technologies,” said Swanson, “have the potential to revolutionize how computers store and access data, but we don’t fully understand how to make the best use of them. This workshop focuses attention on the remaining technical hurdles that stand in the way of realizing their potential. It gets everyone in the same room talking about these problems so we can address them quickly and efficiently.” 

The impact of NVM: Differences on display

UC San Diego Professors Steve Swanson (at left) and Paul Siegel (center) co-organized the workshop. 

Exploring the possibilities of NVM — and calling attention to its limitations — was the focus of this year’s workshop, which was held last month and attended by representatives from UCSD, Princeton, Texas A&M, Georgia Tech, Israel’s Technion and several other academic institutions, as well as by industry participants Intel, Microsoft Research, NEC Labs and Rambus.

Industry sponsors Samsung, STEC, Western Digital, Fusion-io, HP, IBM, LSI, Marvell, and Microsoft provided financial support for the workshop, and the National Science Foundation provided funding that was used for more than 40 student travel grants. 

In addition to a technical tutorial and two keynote presentations, attendees heard lectures grouped into four main categories: Applications, Devices, Architecture, and Error-Correction Coding.

In his keynote presentation on day two of the three-day workshop, Intel Senior Fellow and Director of Storage Technologies Rick Coulson provided numerous examples of the impact NVM, and particularly SSDs, are having on computer systems – most viscerally by way of a video demo that contrasted the speed and efficiency of an Intel X25-M SSD with that of a standard hard-disk drive (HDD).

There was no denying that the SSD was orders of magnitude faster. It loaded programs within seconds, whereas the HDD took so long it seemed practically antique. Part of the reason for this, Coulson explained, is that HDDs have not kept pace with advances in CPU (central processing unit) performance.

The CPU is analogous to the computer’s ‘brain’: it carries out the instructions of a computer program and performs basic operations. Over the past 13 years, Coulson said, CPU performance has improved by a factor of 175, while IOPS (input/output operations per second) have improved by a factor of only 1.3 in HDDs. In other words, while our computers’ brains have gotten faster and more powerful, they can’t perform at their maximum capacity because their internal storage devices are like old, rusty filing cabinets that are cumbersome to open and sort through.
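Coulson’s figures make it easy to see why storage now dominates. As a rough, hypothetical illustration (the 50/50 workload split below is an assumption made up for this article, not a number from his talk), consider a task that once spent half its time computing and half waiting on the disk:

    # Back-of-the-envelope illustration using the factors Coulson cited
    # (175x CPU improvement vs. 1.3x HDD IOPS improvement over 13 years).
    # The 50/50 starting split is an assumption chosen only for this example.

    cpu_speedup, io_speedup = 175.0, 1.3
    cpu_time, io_time = 0.5, 0.5      # hypothetical task: half CPU, half storage

    new_cpu = cpu_time / cpu_speedup  # ~0.003 of the original runtime
    new_io = io_time / io_speedup     # ~0.385 of the original runtime

    print(f"Storage now accounts for {new_io / (new_cpu + new_io):.0%} of the runtime")
    # -> roughly 99%: the drive, not the processor, sets the pace.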

“What this means is that one part of the system turns into a bottleneck because it’s not improving fast enough,” Coulson continued. “System performance is dominated by performance in storage devices. Boot times and application load times are affected. Using multiple HDDs to overcome this gap is not economically or physically viable. You can see how this is a big problem.”

Cost: The buzzkill

So if SSDs are persistent and also faster than HDDs, why aren’t all of our computers equipped with them? 

In a word: Cost. Siegel explained that the most robust forms of NVM are prohibitively expensive for a number of reasons, including the need to recoup technology development costs, manufacturing costs, variability in fabricated chip quality, patent-related barriers to entry and market economics. That’s created another development bottleneck for scientists working on the vanguard of NVM research. 

Added Siegel: “A one-terabyte solid-state Flash drive, for example, might cost upwards of $3,000, whereas a one-terabyte hard-disk drive is well under $100. The decreasing ‘cost per gig’ (of storage) has been the parameter that has largely accounted for the success of hard disk drives, but where hard-disk drives fall short is in speed.”
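In cost-per-gigabyte terms, the gap Siegel describes works out roughly as follows (a quick calculation based on the approximate 2012 figures he quoted, not on a price survey):

    # Cost-per-gigabyte comparison using the approximate figures from the article.
    ssd_cost, hdd_cost = 3000.0, 100.0    # dollars for one terabyte (roughly)
    gigabytes = 1000

    print(f"SSD: ~${ssd_cost / gigabytes:.2f}/GB, HDD: ~${hdd_cost / gigabytes:.2f}/GB")
    # -> SSD: ~$3.00/GB, HDD: ~$0.10/GB, a price gap of roughly 30-fold or more.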

NVMW poster session
The NVM Workshop also included a poster session.

And the reason for that, he said, is primarily a matter of how hard-disk drives are designed. 

“The way it works with a disk drive is the controller learns from the system that it has to get a certain file – say it’s a few kilobytes of data on track 52 in data sector X. The drive has to physically move an actuator arm with a motor that will position the head over the correct track and read until it gets to the point it needs. It’s the same process when writing data, except it replaces information on a certain track.”

With SSDs like Flash devices, on the other hand, information is stored in an array of NAND flash cells, which are essentially transistors on a silicon chip.

“Everything is done purely electronically rather than mechanically,” noted Siegel. “The process simply uses an electronic mechanism for sensing the information that is stored in the array of cells. There’s no need for a rotating disk or a mechanical arm. One of the big advantages of Flash memory is that it doesn’t have to move anything physically. That saves you a lot of time in accessing your data, and you also don’t have to worry about a sudden bump causing a flood of errors or a catastrophic head crash.”

“Flash also doesn’t require as much energy to do its job,” he pointed out. “That actuator arm is energy hungry.”
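Siegel’s mechanical-versus-electronic contrast can be put in rough numbers. The sketch below uses illustrative latency figures (typical published orders of magnitude, not measurements from the workshop demo) to show why skipping the seek and rotation steps matters so much:

    # Rough latency model for a single small read, with illustrative figures.

    def hdd_read_ms(seek_ms=9.0, rpm=7200, transfer_ms=0.1):
        # Mechanical drive: move the actuator arm, wait for the platter to
        # rotate (half a revolution on average), then read the data.
        rotational_ms = 0.5 * 60_000 / rpm
        return seek_ms + rotational_ms + transfer_ms

    def ssd_read_ms(access_ms=0.1):
        # Flash SSD: purely electronic sensing of the NAND cells, no moving parts.
        return access_ms

    print(f"HDD: ~{hdd_read_ms():.1f} ms, SSD: ~{ssd_read_ms():.1f} ms per small read")
    # -> roughly 13 ms vs. 0.1 ms: about two orders of magnitude apart.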

Added Coulson: “If you don’t have an SSD, you don’t have a sense of the impact it makes. It even goes beyond performance. SSDs are more rugged, have more form-factor freedom, use less power. This results in a thinner, faster, lighter notebook, and even cell phones with SSDs. These things just aren’t possible with rotating storage.”

NVM: Not just fun and games

If SSDs ever do become ubiquitous among users of personal computers, the trend will likely begin with gamers. Coulson noted that computer games like “Assassin’s Creed” and “Blacklight: Retribution” are increasingly optimizing around SSDs.

“It doesn’t make any sense to play a game faster than real-time,” he conceded, “but what you are trying to get is better video quality where you don’t have stutters or hitches.” When played on an HDD-equipped computer, for example, the speed at which a horse could gallop in “Assassin’s Creed” depended on how quickly the HDD could deliver data, “which was so slow you could walk as fast as the horse could gallop,” remarked Coulson. But with an SSD, the horse could gallop at the intended full speed.

There’s also revenue potential in SSDs for game creators. Zombie Studios’ “Blacklight: Retribution,” for example, allows those playing on an SSD-equipped computer to purchase “speed legs,” which allow them to move 20 percent faster than HDD players.

“Today’s SSDs can support better realism than ever,” added Coulson. “Future storage technologies may have an order of magnitude lower latency, which will make it possible to build massive 3D streaming worlds where you can zoom as close or as far away as you want with a high level of detail. Artists will be able to build what they want without worrying about level size budgets. Fundamentally new games will be made possible.”

Yet it’s not just fun and games that would benefit from increased use of SSDs: These forms of NVM are also necessary for certain types of data-intensive computing and data visualizations, such as those performed on a routine basis by the supercomputer known as Gordon. Built and operated by the San Diego Supercomputer Center, Gordon is the world’s first supercomputer to use flash memory as its primary storage device.

Taking NVM to the next level

All the glowing talk about NVM and SSDs is enough to make one conclude that they’re the panacea for every problem facing the computer industry. But Siegel was quick to point out another bottleneck holding researchers back: reliability and endurance. 

“You can think of the lifetime of SSDs as being a measure of how many times you can write new information to them,” he explained. “But if you want to change anything in the array, data has to be cleared out of the array, stored somewhere else, and then put back in with changes. This causes degradation, and because of this degradation, the error rate of SSDs increases.” 

A Flash ‘thumb’ drive, for example, can only support about 5,000 erase cycles per memory block, which is relatively low for a storage device that may be rewritten constantly. And when flash drives are integrated into high-volume, transaction-oriented enterprise storage systems, such as those used by the world’s leading technology companies, this lack of endurance poses a serious problem.
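Those endurance numbers translate directly into device lifetime. As a rough, hypothetical calculation (the 5,000-cycle figure echoes the article; the drive size and workload below are assumptions chosen only for illustration):

    # Toy endurance estimate of the kind that worries enterprise storage designers.

    capacity_gb = 256                 # hypothetical flash device
    erase_cycles = 5_000              # erase cycles each block can tolerate
    writes_per_day_gb = 500           # hypothetical heavy transactional workload

    total_writable_gb = capacity_gb * erase_cycles   # ideal case: perfect wear-leveling
    lifetime_days = total_writable_gb / writes_per_day_gb
    print(f"~{lifetime_days / 365:.1f} years before the cells wear out")
    # -> about 7 years here, and considerably less once the extra internal
    #    copying Siegel describes (write amplification) is factored in.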

Amit Berman
Amit Berman, a researcher at the Technion – Israel Institute of Technology, said the NVM workshop helps him and his colleagues determine which new ideas in the field will eventually be the 'right' ones.

Siegel noted that researchers have developed ways to rewrite data without as much degradation (so-called “write-once-memory” codes are one promising example), but their efforts to improve the overall reliability are being stymied by yet another hold-up: The industry’s tight proprietary control over the exact details of how the bit patterns of data are arranged within a device.
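The rewriting codes Siegel refers to can be sketched in a few lines. Below is the classic Rivest–Shamir two-write write-once-memory code from the research literature, offered as an illustration of the general idea rather than as the scheme any particular vendor uses: three flash cells, which can only be changed from 0 to 1 between erasures, hold a two-bit value through two successive writes without an erase.

    # Sketch of the classic Rivest-Shamir write-once-memory (WOM) code: three
    # cells that can only go from 0 to 1 store a 2-bit value twice before any
    # erase is needed. Illustrative only; not a production flash code.

    FIRST  = {"00": "000", "01": "100", "10": "010", "11": "001"}
    SECOND = {d: "".join("1" if b == "0" else "0" for b in c) for d, c in FIRST.items()}

    def decode(cells):
        table = FIRST if cells.count("1") <= 1 else SECOND
        return next(d for d, c in table.items() if c == cells)

    def write_first(data):
        return FIRST[data]

    def write_second(cells, data):
        if decode(cells) == data:          # same value: nothing to change
            return cells
        new = SECOND[data]                 # complement pattern of FIRST[data]
        assert all(not (o == "1" and n == "0") for o, n in zip(cells, new)), \
            "WOM constraint violated: a cell would have to be reset"
        return new

    cells = write_first("10")              # first write, no erase
    cells = write_second(cells, "01")      # second write, still no erase
    assert decode(cells) == "01"

Designing codes like this for real devices, however, requires exactly the kind of physical detail that manufacturers guard.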

“In effect,” said Siegel, “we’re often left with having to do reverse engineering in order to understand the physical characteristics of the device, how bits are mapped to voltage levels in a flash cell, and how the cells are organized on the chip. Details like this are vital if you want to design better error-correcting codes for NVMs.

“I think those in industry recognize it’s to their advantage to work with us, but this is a very competitive business,” added Siegel, who worked for IBM for 15 years before joining the faculty at UCSD. “No one wants to share their industry secrets, not even with academia. That’s why we get such good attendance at this workshop from industry. They’re interested in seeing what we’re doing at the universities and they hope to direct our research into areas that are quite pertinent. 

“And of course if you’re in academia and your objective is to do something that advances the latest technology, it’s nice to know what the industry people think are the biggest problems.”

But some at the workshop argued that industry itself – along with the political and economic variables that so often obstruct technological advances – is often the biggest bottleneck of all.

“Academia is usually 20 years ahead of industry, and sometimes more,” said Amit Berman, a researcher at the Technion. “Industry is at the mercy of investors and the willingness of the markets to adapt to new technologies, and that’s why industry tends to be more conservative. They’re more likely to integrate small improvements rather than revolutionary advances because they can avoid running into systems issues that other manufacturers would then have to address. Small improvements are also easier for the consumer to digest.

“We could design a processor that’s 100 times faster than the ones currently on the market,” he continued, “but no one would buy it because their computers would also need faster memory and motherboards to exploit its capabilities. Likewise, you probably wouldn’t buy 10 terabytes of flash memory because you’d need a computer able to exploit it.

“To be adopted in the marketplace, technologies have to be in the right place, at the right time. What we learn at the workshop can help us come up with some of the new ideas that will eventually be the ‘right’ ones.”

Media Contacts

 Tiffany Fox, (858) 246-0353, tfox@ucsd.edu

Related Links

Non-Volatile Memories Workshop 2012

Non-Volatile Systems Laboratory

Center for Magnetic Recording Research