Showing posts with label performance.

Friday, March 30, 2012

Blog: Engineers Rebuild HTTP as a Faster Web Foundation

Engineers Rebuild HTTP as a Faster Web Foundation
CNet (03/30/12) Stephen Shankland

At the recent meeting of the Internet Engineering Task Force, the working group overseeing the Hypertext Transfer Protocol (HTTP) formally opened a discussion about how to make the technology faster. The discussion included Google's SPDY technology and Microsoft's HTTP Speed+Mobility technology. Google's proposal requires encryption, while Microsoft's makes it optional. Despite this and other subtle differences, there are many similarities between the two systems. "There's a lot of overlap [because] there's a lot of agreement about what needs to be fixed," says Greenbytes' Julian Reschke. SPDY already is built into Google Chrome and Amazon Silk, and Firefox is planning to adopt it soon. In addition, Google, Amazon, and Twitter are using SPDY on their servers. "If we do choose SPDY as a starting point, that doesn't mean it won't change," says HTTP Working Group chairman Mark Nottingham. SPDY is based on sending multiple streams of data over a single network connection, and it can assign high or low priorities to the Web page resources being requested from a server. The Google and Microsoft proposals also differ in syntax, but SPDY's developers are flexible on the choice of compression technology, says SPDY co-creator Mike Belshe.
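
The core ideas in SPDY described above, multiplexing several logical streams over one network connection and letting the client assign priorities to requested resources, can be illustrated with a toy scheduler. The sketch below is a hypothetical model, not SPDY's actual framing or priority scheme: frames from higher-priority streams are sent first, and all streams share a single connection.

```typescript
// Toy model of SPDY-style multiplexing: several prioritized streams share a
// single connection, and the sender interleaves their frames. This is an
// illustration only, not SPDY's real framing or priority scheme.

interface Frame {
  streamId: number;
  priority: number; // lower number = higher priority (assumption for this sketch)
  payload: string;
}

class MultiplexedConnection {
  private queue: Frame[] = [];

  // Each resource request becomes a stream whose payload is split into frames.
  request(streamId: number, priority: number, resource: string): void {
    const chunkSize = 8;
    for (let i = 0; i < resource.length; i += chunkSize) {
      this.queue.push({ streamId, priority, payload: resource.slice(i, i + chunkSize) });
    }
  }

  // Flush interleaves frames, always preferring higher-priority streams,
  // instead of opening one connection per resource as classic HTTP/1.x does.
  flush(): void {
    this.queue.sort((a, b) => a.priority - b.priority);
    for (const frame of this.queue) {
      console.log(`stream ${frame.streamId} (prio ${frame.priority}): ${frame.payload}`);
    }
    this.queue = [];
  }
}

const conn = new MultiplexedConnection();
conn.request(1, 0, "<html>critical page markup</html>"); // high priority
conn.request(3, 2, "/* large stylesheet ... */");        // lower priority
conn.flush();
```

In this toy run the critical markup is sent first even though both resources share one connection, which is the behavior the prioritization feature is meant to enable.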

Wednesday, November 3, 2010

Blog: New Google Tool Makes Websites Twice as Fast

New Google Tool Makes Websites Twice as Fast
Technology Review (11/03/10) Erica Naone

Google has released mod_pagespeed, free software for Apache servers that could make many Web sites load twice as fast. Once installed, the software automatically determines ways to optimize a Web site's performance. "We think making the whole Web faster is critical to Google's success," says Google's Richard Rabbat. The tool could be especially useful to small Web site operators and anyone who uses a content management system to run a Web site, since they often lack the technical savvy and time needed to make their own speed improvements to Web server software. During testing, mod_pagespeed was able to make some Web sites load three times faster, depending on how much optimization had already been done. The program builds on Google's existing Page Speed tool, which measures the speed at which Web sites load and offers suggestions on how to make them load faster.

View Full Article
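
mod_pagespeed works by rewriting the pages Apache serves before they reach the browser. As a rough, hypothetical illustration of the kind of output rewriting such a tool performs (this is not one of mod_pagespeed's actual filters), the sketch below strips HTML comments and collapses whitespace so fewer bytes travel over the wire.

```typescript
// Hypothetical illustration of an output-rewriting optimization, loosely in
// the spirit of tools like mod_pagespeed: remove HTML comments and collapse
// runs of whitespace before the page is served.

function minifyHtml(html: string): string {
  return html
    .replace(/<!--[\s\S]*?-->/g, "") // drop comments
    .replace(/\s+/g, " ")            // collapse whitespace runs
    .trim();
}

const original = `
  <html>
    <!-- header starts here -->
    <body>
      <h1>Hello</h1>
    </body>
  </html>
`;

const minified = minifyHtml(original);
console.log(`before: ${original.length} bytes, after: ${minified.length} bytes`);
```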

Wednesday, September 8, 2010

Blog: Escher-Like Internet Map Could Speed Online Traffic

Escher-Like Internet Map Could Speed Online Traffic
New Scientist (09/08/10) Jacob Aron

Researchers at the University of Barcelona have developed a map of the Internet that could help eliminate network glitches. Barcelona researcher Marian Boguna fit the entire Internet into a disc using hyperbolic geometry. Each square on the map is a section of the Internet managed by a single body, such as a national government or a service provider. The most well-connected systems are close to the middle, while the least connected are at the edges. The researchers say the new map could provide coordinates for every system on the Internet, which could speed up routing traffic. Although the map shows just the number of connections between each autonomous system, the geography of the hyperbolic Internet map often reflects that of the real world. For example, a number of western European nations are clustered in one sector. The researchers also used simulations to demonstrate that a map of the Internet based on actual geographic relationships between systems trapped much more traffic within the network than the hyperbolic map.

View Full Article
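
The routing benefit the researchers describe comes from giving every autonomous system coordinates in the hyperbolic disc: a router can then forward a packet greedily to whichever neighbor is closest, in hyperbolic distance, to the destination, without consulting a global routing table. The sketch below illustrates that idea under those assumptions; the node names and toy topology are invented, and the distance formula is the standard one for polar coordinates in the hyperbolic plane.

```typescript
// Greedy geometric routing over hyperbolic coordinates (illustrative sketch).
// Each node knows only its own coordinates, its neighbors, and the
// destination's coordinates; it forwards to the neighbor closest to the
// destination in hyperbolic distance.

interface NetNode {
  name: string;
  r: number;     // radial coordinate (well-connected hubs sit near the center)
  theta: number; // angular coordinate (related networks cluster together)
  neighbors: string[];
}

// Hyperbolic-plane distance for polar coordinates (r, theta):
// cosh(d) = cosh(r1)cosh(r2) - sinh(r1)sinh(r2)cos(dTheta)
function hyperbolicDistance(a: NetNode, b: NetNode): number {
  const dTheta = Math.PI - Math.abs(Math.PI - Math.abs(a.theta - b.theta));
  return Math.acosh(
    Math.cosh(a.r) * Math.cosh(b.r) -
    Math.sinh(a.r) * Math.sinh(b.r) * Math.cos(dTheta)
  );
}

function greedyRoute(nodes: Map<string, NetNode>, src: string, dst: string): string[] {
  const path = [src];
  let current = nodes.get(src)!;
  const target = nodes.get(dst)!;
  while (current.name !== dst) {
    // Pick the neighbor closest to the destination.
    const next = current.neighbors
      .map(n => nodes.get(n)!)
      .reduce((best, n) =>
        hyperbolicDistance(n, target) < hyperbolicDistance(best, target) ? n : best);
    if (hyperbolicDistance(next, target) >= hyperbolicDistance(current, target)) {
      break; // greedy forwarding got stuck; give up in this sketch
    }
    current = next;
    path.push(current.name);
  }
  return path;
}

// Invented toy topology: one central hub and a few peripheral networks.
const nodes = new Map<string, NetNode>([
  ["hub", { name: "hub", r: 0.5, theta: 0.0, neighbors: ["asA", "asB", "asC"] }],
  ["asA", { name: "asA", r: 3.0, theta: 0.2, neighbors: ["hub"] }],
  ["asB", { name: "asB", r: 3.0, theta: 2.0, neighbors: ["hub", "asC"] }],
  ["asC", { name: "asC", r: 3.0, theta: 2.3, neighbors: ["hub", "asB"] }],
]);

console.log(greedyRoute(nodes, "asA", "asC")); // [ 'asA', 'hub', 'asC' ]
```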

Friday, August 20, 2010

Blog: W3C Launches Web Performance Working Group

W3C Launches Web Performance Working Group
Government Computer News (08/20/10) Mackie, Kurt

A new World Wide Web Consortium (W3C) working group will focus on improving the measurement of Web application performance times. Representatives from Microsoft and Google will co-chair the Web Performance Working Group, which will initially create a common application programming interface (API) for measuring Web page loading and Web app performance. The companies have been working independently on the problem, but will now pool their efforts. Microsoft has implemented W3C's Web Timing draft spec in the third Platform Preview of Internet Explorer 9. Google has implemented the Web Timing spec in the WebKit rendering engine, which powers its Chrome browser, and says the performance metrics are now accessible to developers in the Chrome 6 browser. Both companies use vendor-specific prefixes for their implementations of the Web Timing spec. "With two early implementations available, it shouldn't take long to finalize an interoperable API and remove the vendor prefixes," says Microsoft's Jason Weber.

View Full Article
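
The API being standardized exposes the browser's own timestamps for the stages of a page load, so developers can measure what server-side logs cannot see. Below is a minimal sketch of reading those numbers in a browser; the prefixed fallback names (msPerformance, webkitPerformance) are my recollection of the early vendor-prefixed implementations the article alludes to and should be treated as illustrative.

```typescript
// Minimal sketch of reading Web Timing (Navigation Timing) metrics in a
// browser. The msPerformance / webkitPerformance fallbacks illustrate the
// vendor-prefixed objects early implementations exposed (approximate,
// historical names); modern browsers expose window.performance directly.

const perf: any =
  (window as any).performance ||
  (window as any).msPerformance ||
  (window as any).webkitPerformance;

if (perf && perf.timing) {
  const t = perf.timing; // millisecond timestamps recorded by the browser
  console.log({
    networkTime: t.responseEnd - t.fetchStart,         // fetch plus response
    domProcessing: t.domComplete - t.domLoading,       // parse and build the DOM
    totalPageLoad: t.loadEventEnd - t.navigationStart, // full page load
  });
}
```

Run this after the page's load event fires; loadEventEnd is zero until then.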

Thursday, August 20, 2009

Blog: Millionths of a Second Can Cost Millions of Dollars: A New Way to Track Network Delays

Millionths of a Second Can Cost Millions of Dollars: A New Way to Track Network Delays
University of California, San Diego (08/20/09) Kane, Daniel

Researchers at the University of California, San Diego and Purdue University have developed the Lossy Difference Aggregator, a method for diagnosing delays in data center networks at a resolution of tens of microseconds. Delays in data center networks can result in multimillion-dollar losses for investment banks that run automatic stock-trading systems, and similar delays can hold up parallel processing in high-performance cluster computing applications. The Lossy Difference Aggregator can diagnose such fine-grained delays, and packet loss as infrequent as one in a million, at every router within a data center network. The researchers say their method could be incorporated into modern router designs at almost no cost in router hardware and with no performance penalty. The data center networks that run automated stock-trading systems are large and difficult to monitor, and delays in their routers, called latencies, can add microseconds to a transaction and potentially cost millions of dollars. The traditional way of measuring latency is to track when each packet arrives at and leaves a router. Instead of tracking every packet, the new system randomly splits incoming packets into groups and adds up the arrival and departure times for each group. As long as the number of lost packets is smaller than the number of groups, at least one group will provide an accurate estimate. For each group with no loss, subtracting the sum of arrival times from the sum of departure times and dividing by the number of packets gives the estimated average delay. By implementing this system in every router, a data center manager could quickly identify slow routers. The research was presented at the recent ACM SIGCOMM 2009 conference. Purdue University professor Ramana Kompella says the next step is to build a hardware implementation.
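
The description above is detailed enough to sketch the estimator in simulation: packets are consistently assigned to groups on both sides of the measured segment, each group keeps only a timestamp sum and a packet count, groups whose counts disagree (meaning they lost a packet) are thrown away, and the surviving groups yield the average delay. The code below is an illustrative simulation of that idea with invented traffic, timing, and loss numbers, not the authors' hardware design.

```typescript
// Illustrative simulation of the Lossy Difference Aggregator idea: packets
// are assigned to groups; the ingress and egress sides of a router segment
// each keep only a per-group timestamp sum and packet count; groups that
// lost a packet (counts disagree) are discarded; the rest give the average delay.

interface GroupState { sum: number; count: number; }

const NUM_GROUPS = 64;
const newGroups = () =>
  Array.from({ length: NUM_GROUPS }, (): GroupState => ({ sum: 0, count: 0 }));
const ingress = newGroups();
const egress = newGroups();

const trueDelay = 50e-6; // 50 microseconds of simulated delay through the segment
const lossRate = 1e-6;   // roughly one lost packet in a million, as in the article

for (let pkt = 0; pkt < 1_000_000; pkt++) {
  const group = pkt % NUM_GROUPS; // stand-in for a hash both sides compute identically
  const arrival = pkt * 1e-6;     // simulated arrival timestamp, in seconds
  ingress[group].sum += arrival;
  ingress[group].count += 1;

  if (Math.random() < lossRate) continue; // packet lost inside the segment

  const departure = arrival + trueDelay + (Math.random() - 0.5) * 10e-6; // add jitter
  egress[group].sum += departure;
  egress[group].count += 1;
}

// Use only groups where both sides counted the same number of packets.
let delaySum = 0;
let packets = 0;
for (let g = 0; g < NUM_GROUPS; g++) {
  if (ingress[g].count === egress[g].count && ingress[g].count > 0) {
    delaySum += egress[g].sum - ingress[g].sum;
    packets += ingress[g].count;
  }
}

console.log(`estimated average delay: ${((delaySum / packets) * 1e6).toFixed(1)} microseconds`);
```

The appeal of the approach is the tiny state it needs: two numbers per group on each side, rather than a timestamp per packet.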

Friday, March 20, 2009

Blog: Multicore Chips Pose Next Big Challenge for Industry

Multicore Chips Pose Next Big Challenge for Industry
IDG News Service (03/20/09) Shah, Agam

Increasing the number of processing cores has become the main way of improving the performance of server and PC chips, but any added benefits will be significantly reduced if the industry is unable to overcome hardware and programming challenges, according to participants at the recent Multicore Expo. Most modern software is written for single-core chips and will need to be rewritten or updated to capitalize on the increasing number of cores that chip manufacturers are adding to their products, says analyst Linley Gwennap. Off-the-shelf applications can run faster on central processing units with up to four processor cores, but beyond that performance levels stall, and may even decrease as additional cores are added, Gwennap says. Chip manufacturers and system builders are working to educate software developers and provide them with better tools for multicore programming. Intel and Microsoft have provided $20 million to open two research centers at U.S. universities dedicated to multicore programming. Gwennap says the lack of multicore programming tools for mainstream developers may be the industry's biggest obstacle. Nevertheless, some software vendors are developing parallel code for simple tasks, such as image and video processing, Gwennap says. For example, Adobe has rewritten Photoshop so the program can assign duties to specific x86 cores, improving performance three- to four-fold, he says.

View Full Article
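
Gwennap's observation that off-the-shelf applications stop gaining much beyond about four cores is essentially Amdahl's law: whatever fraction of a program stays serial caps the benefit of additional cores. The worked example below uses an assumed 25 percent serial fraction purely for illustration; the figure is not from the article.

```typescript
// Amdahl's law: speedup(n) = 1 / (serialFraction + (1 - serialFraction) / n).
// Shows why an application with a sizable serial portion gains little from
// piling on cores. The 25% serial fraction is an assumed example.

function amdahlSpeedup(serialFraction: number, cores: number): number {
  return 1 / (serialFraction + (1 - serialFraction) / cores);
}

const serialFraction = 0.25;
for (const cores of [1, 2, 4, 8, 16, 32]) {
  console.log(`${cores} cores -> ${amdahlSpeedup(serialFraction, cores).toFixed(2)}x`);
}
// 1 -> 1.00x, 2 -> 1.60x, 4 -> 2.29x, 8 -> 2.91x, 16 -> 3.37x, 32 -> 3.66x:
// most of the gain arrives by four cores, echoing Gwennap's observation.
```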

Monday, February 9, 2009

Blog: How to make your website really, really fast

How to make your website really, really fast

Posted by Andrew Mager; February 9th, 2009 @ 2:48 pm

Google's Steve Souders, who knows how to make a website speed through a browser, shares 14 tips for improving the efficiency and response time of any site. The best part: None of these techniques are that hard to implement.

READ FULL STORY

Tuesday, January 13, 2009

Blog: More Chip Cores Can Mean Slower Supercomputing, Sandia Simulation Shows

More Chip Cores Can Mean Slower Supercomputing, Sandia Simulation Shows
Sandia National Laboratories (01/13/09) Singer, Neal

Simulations at Sandia National Laboratories have shown that increasing the number of processor cores on individual chips may actually worsen the performance of many complex applications. The Sandia researchers simulated key algorithms for deriving knowledge from large data sets, which revealed a significant increase in speed when moving from two cores to four, an insignificant increase from four to eight, and a decrease in speed when using more than eight cores. The researchers found that 16 cores performed barely as well as two, and adding cores beyond 16 caused a sharp decline in performance. The drop is caused by a lack of memory bandwidth and by contention among the cores for the memory bus available to each processor. The lack of immediate access to individualized memory caches slows the process down once the number of cores exceeds eight, according to the simulation of high-performance computing by Sandia researchers Richard Murphy, Arun Rodrigues, and Megan Vance. "The bottleneck now is getting the data off the chip to or from memory or the network," Rodrigues says. The challenge of boosting chip performance while limiting power consumption and excessive heat continues to vex researchers. Sandia and Oak Ridge National Laboratory researchers are attempting to solve the problem using message-passing programs. Their joint effort, the Institute for Advanced Architectures, is working toward exaflop computing and may help solve the multicore problem.

View Full Article
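
The effect the Sandia simulations found can be pictured with a toy model: every core wants data from memory, but all of them share one memory bus, and each extra core adds contention on that bus. The numbers below are invented solely to reproduce the shape of the reported curve (gains up to a handful of cores, then a plateau and a decline); they are not Sandia's data.

```typescript
// Toy model of memory-bandwidth-limited scaling: cores share a fixed memory
// bus, and each additional core adds contention overhead. The constants are
// invented to illustrate the shape of the effect, not Sandia's simulation.

const perCoreDemand = 4;        // GB/s each core could consume if unconstrained
const busBandwidth = 20;        // GB/s shared by all cores
const contentionPenalty = 0.03; // fraction of useful bandwidth lost per extra core

function effectiveThroughput(cores: number): number {
  const demanded = cores * perCoreDemand;
  const usable = busBandwidth * Math.max(0, 1 - contentionPenalty * (cores - 1));
  return Math.min(demanded, usable);
}

for (const cores of [1, 2, 4, 8, 16, 32]) {
  console.log(`${cores} cores -> ${effectiveThroughput(cores).toFixed(1)} GB/s of useful data`);
}
// 1 -> 4.0, 2 -> 8.0, 4 -> 16.0, 8 -> 15.8, 16 -> 11.0, 32 -> 1.4:
// once demand exceeds the shared bus, adding cores only adds contention.
```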
