Thursday, January 8, 2009

Billion-Point Computing for Computers
UC Davis News and Information (01/08/09) Greensfelder, Liese

Researchers at the University of California, Davis (UC Davis) and Lawrence Livermore National Laboratory have developed an algorithm that will enable scientists to extract features and patterns from extremely large data sets. The algorithm has already been used to analyze and create images of flame surfaces, search for clusters and voids in a virtual universe experiment, and identify and track pockets of fluid in a simulated mixing of two fluids, which generated more than a billion data points on a three-dimensional grid. "What we've developed is a workable system of handling any data in any dimension," says UC Davis computer scientist Attila Gyulassy, who led the five-year development effort. "We expect this algorithm will become an integral part of a scientist's toolbox to answer questions about data." As scientific simulations have become increasingly complex, the data they generate has grown exponentially, making that data increasingly difficult to analyze. One mathematical tool for extracting and visualizing useful features in data sets, called the Morse-Smale complex, has existed for nearly 40 years. The Morse-Smale complex partitions data sets by similarity of features and encodes those features in mathematical terms, but using it for practical applications is extremely difficult, Gyulassy says. The new algorithm divides a data set into parcels of cells and analyzes each parcel separately using the Morse-Smale complex. The results are then merged together, and as new parcels are created from merged parcels, they are analyzed and merged again. With each step, data that does not need to be stored in memory can be discarded, significantly reducing the computing power needed to run the calculations.
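
The divide / analyze / merge / discard pattern described above can be sketched in a few lines of Python. The snippet below is only an illustration, not the researchers' code: the per-parcel "analysis" is a trivial stand-in (min, max, cell count) rather than an actual Morse-Smale complex computation, and the parcel size and function names are invented for the example.

# Toy sketch of the parcel-and-merge flow described in the summary above.
# NOT the UC Davis / LLNL implementation: the real algorithm computes a
# Morse-Smale complex per parcel and stitches neighboring complexes together
# along shared boundaries; here each parcel is reduced to a trivial summary
# so the overall flow (divide, analyze, merge, discard) can be shown end to end.
import numpy as np

PARCEL = 32  # assumed parcel edge length, chosen only for illustration


def analyze_parcel(block):
    """Stand-in for the per-parcel analysis: summarize one block of cells."""
    return {"min": float(block.min()), "max": float(block.max()), "cells": block.size}


def merge(a, b):
    """Stand-in for merging two neighboring parcels' results.

    In the real algorithm this step combines the parcels' Morse-Smale
    complexes and removes features that are no longer valid in the merged
    region; here the summaries combine trivially.
    """
    return {"min": min(a["min"], b["min"]),
            "max": max(a["max"], b["max"]),
            "cells": a["cells"] + b["cells"]}


def analyze_field(field, parcel=PARCEL):
    """Divide a 3-D scalar field into parcels, analyze each, then merge pairwise."""
    # 1. Divide and analyze: each parcel is processed independently, and its
    #    raw cells can be dropped from memory once the summary exists.
    summaries = []
    nx, ny, nz = field.shape
    for x in range(0, nx, parcel):
        for y in range(0, ny, parcel):
            for z in range(0, nz, parcel):
                block = field[x:x + parcel, y:y + parcel, z:z + parcel]
                summaries.append(analyze_parcel(block))

    # 2. Merge: combine results pairwise, level by level, so only the
    #    current level's summaries need to be held in memory at once.
    while len(summaries) > 1:
        next_level = [merge(summaries[i], summaries[i + 1])
                      for i in range(0, len(summaries) - 1, 2)]
        if len(summaries) % 2:          # carry an unpaired summary forward
            next_level.append(summaries[-1])
        summaries = next_level
    return summaries[0]


if __name__ == "__main__":
    rng = np.random.default_rng(0)
    field = rng.standard_normal((128, 128, 128))  # small synthetic 3-D grid
    print(analyze_field(field))

The pairwise merge mirrors the summary's point about memory: once two parcels have been combined, only the merged result needs to be kept, which is what lets the approach scale to grids with over a billion points.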

View Full Article
