University of California, San Diego (08/20/09) Kane, Daniel
Researchers at the University of California, San Diego and Purdue University have developed the Lossy Difference Aggregator, a method for diagnosing delays in data center networks as small as tens of millionths of a second. Delays in data center networks can result in multimillion-dollar losses for investment banks that run automated stock-trading systems, and similar delays can hold up parallel processing in high-performance cluster computing applications. The Lossy Difference Aggregator can diagnose fine-grained delays of as little as tens of microseconds, and packet loss as infrequent as one in a million, at every router within a data center network. The researchers say their method could be used to modernize router designs with almost no cost in terms of router hardware and no performance penalty. The data centers that run automated stock-trading systems are large, and the performance of the routers within them is difficult to monitor. Delays in these routers, called latencies, can add microseconds to a network's response time and potentially cost millions of dollars.

The traditional way of measuring latency is to track when each packet arrives at and leaves a router. Instead of tracking every packet, the new system randomly splits incoming packets into groups and adds up the arrival and departure times for each group. As long as the number of losses is smaller than the number of groups, at least one group is likely to remain loss-free and provide an accurate estimate. For each loss-free group, subtracting the sum of arrival times from the sum of departure times and dividing by the number of packets gives the estimated average delay. By implementing this system on every router, a data center manager could quickly identify slow routers. The research was presented at the recent ACM SIGCOMM 2009 conference. Purdue University professor Ramana Kompella says the next step will be to build a hardware implementation.
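To make the group-and-subtract idea concrete, here is a minimal Python sketch of that scheme. It assumes a fixed number of groups (buckets), a shared hash that maps each packet to the same bucket at both the sending and receiving side, and per-bucket timestamp sums and packet counts; buckets whose counts disagree were hit by loss and are discarded. The names (LDA, NUM_BUCKETS, bucket_of) and the simulation at the bottom are illustrative assumptions, not the authors' hardware design, which keeps these counters in router line-card memory.

```python
# Simplified sketch of the Lossy Difference Aggregator idea (assumed names, not
# the published hardware design): packets are hashed into buckets, each side
# keeps a timestamp sum and a packet count per bucket, and only buckets with
# matching counts (i.e., untouched by loss) contribute to the delay estimate.
import random

NUM_BUCKETS = 8  # more buckets tolerate more lost packets


def bucket_of(packet_id: int) -> int:
    # Sender and receiver must map a packet to the same bucket, e.g. by
    # hashing an invariant packet field with a shared hash function.
    return hash(packet_id) % NUM_BUCKETS


class LDA:
    """Per-bucket timestamp sums and packet counts kept at one end of a link."""

    def __init__(self) -> None:
        self.time_sum = [0.0] * NUM_BUCKETS
        self.count = [0] * NUM_BUCKETS

    def record(self, packet_id: int, timestamp: float) -> None:
        b = bucket_of(packet_id)
        self.time_sum[b] += timestamp
        self.count[b] += 1


def average_delay(sender: LDA, receiver: LDA) -> float:
    """Estimate mean delay using only buckets with matching packet counts.

    If a bucket lost a packet, one side is missing that packet's timestamp,
    so the difference of its sums is meaningless and the bucket is skipped.
    """
    delay_sum, packets = 0.0, 0
    for b in range(NUM_BUCKETS):
        if sender.count[b] == receiver.count[b] and sender.count[b] > 0:
            delay_sum += receiver.time_sum[b] - sender.time_sum[b]
            packets += sender.count[b]
    if packets == 0:
        raise ValueError("every bucket was hit by loss; no estimate possible")
    return delay_sum / packets


if __name__ == "__main__":
    # Toy simulation: a constant 50-microsecond delay and one-in-a-million loss.
    send, recv = LDA(), LDA()
    for pid in range(1_000_000):
        t = pid * 1e-6
        send.record(pid, t)
        if random.random() > 1e-6:      # the rare lost packet never arrives
            recv.record(pid, t + 50e-6)
    print(f"estimated average delay: {average_delay(send, recv):.6f} s")
```

In this sketch the loss-free buckets together see nearly all of the traffic, so the estimate converges on the true 50-microsecond delay while the storage cost stays at a handful of counters per router port.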