Showing posts with label optimization.

Friday, March 30, 2012

Blog: Engineers Rebuild HTTP as a Faster Web Foundation

Engineers Rebuild HTTP as a Faster Web Foundation
CNet (03/30/12) Stephen Shankland

At the recent meeting of the Internet Engineering Task Force, the working group overseeing the Hypertext Transfer Protocol (HTTP) formally opened a discussion about how to make the technology faster. The discussion included Google's SPDY technology and Microsoft's HTTP Speed+Mobility proposal. Google's proposal requires encryption, while Microsoft's makes it optional. Despite this and other subtle differences, the two proposals have much in common. "There's a lot of overlap [because] there's a lot of agreement about what needs to be fixed," says Greenbytes' Julian Reschke. SPDY is already built into Google Chrome and Amazon Silk, and Firefox plans to adopt it soon. In addition, Google, Amazon, and Twitter are using SPDY on their servers. "If we do choose SPDY as a starting point, that doesn't mean it won't change," says HTTP Working Group chairman Mark Nottingham. SPDY works by sending multiple streams of data over a single network connection, and it can assign high or low priorities to the Web page resources being requested from a server. One difference between the Google and Microsoft proposals is syntax, but SPDY's developers are flexible on the choice of compression technology, says SPDY co-creator Mike Belshe.
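SPDY's central trick, interleaving several prioritized streams over one connection, can be sketched with a toy frame scheduler. This is only an illustration of the idea, not the SPDY wire format; the frame size, frame fields, and priority scale below are assumptions made up for the example.

```python
# Toy sketch of SPDY-style multiplexing: several prioritized streams are cut
# into frames and interleaved over one connection. This is NOT the SPDY wire
# format; frame size, fields, and the priority scale are assumptions.
import heapq
from dataclasses import dataclass, field
from itertools import count

FRAME_SIZE = 1024   # assumed frame size for the example

@dataclass(order=True)
class Frame:
    priority: int                       # lower number = more urgent (assumption)
    seq: int                            # tie-breaker keeps FIFO order per priority
    stream_id: int = field(compare=False)
    payload: bytes = field(compare=False)

def multiplex(streams):
    """streams: iterable of (stream_id, priority, data); yields frames in send order."""
    heap, seq = [], count()
    for stream_id, priority, data in streams:
        for i in range(0, len(data), FRAME_SIZE):
            heapq.heappush(heap, Frame(priority, next(seq), stream_id,
                                       data[i:i + FRAME_SIZE]))
    while heap:
        yield heapq.heappop(heap)       # all frames share one logical connection

# High-priority CSS (stream 1) is sent ahead of a large, low-priority image (stream 3).
order = [f.stream_id for f in multiplex([(1, 0, b"body { color: red }"),
                                         (3, 7, b"<png bytes>" * 5000)])]
print(order[:5])   # stream 1's frame comes out before stream 3's frames
```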

Monday, January 31, 2011

Blog: Testable System Administration


Testable System Administration

by Mark Burgess | January 31, 2011
Topic: System Administration

Models of indeterminism are changing IT management.

The methods of system administration have changed little in the past 20 years. While core IT technologies have improved in a multitude of ways, for many if not most organizations system administration is still based on production-line build logistics (aka provisioning) and reactive incident handling—an industrial-age method using brute-force mechanization to amplify a manual process. ...

... Experienced system practitioners know deep down that they cannot think of system administration as a simple process of reversible transactions to be administered by hand; yet it is easy to see how the belief stems from classical teachings. ... At least half of computer science stems from the culture of discrete modeling. ... To put it quaintly, “systems” are raised in laboratory captivity under ideal conditions, and released into a wild of diverse and challenging circumstances. Today, system administration still assumes, for the most part, that the world is simple and deterministic, but that could not be further from the truth.

... In a test-driven approach, system state is regulated by continual reappraisal at a microscopic level, like having a groundskeeper watch continuously over an estate, plucking the weeds or applying a lick of paint where needed. Such an approach required the conceptual leap to a computable notion of maintenance. Maintenance can be defined by referring to a policy or model for an ideal system state. If such a model could somehow be described in terms of predictable, actionable repairs, in spite of environmental indeterminism, then automating maintenance would become a simple reality. ...

The term compliance is often used today for correctness of state with respect to a model. If a system deviates from its model, then with proper automation it self-repairs [2,4], somewhat like an autopilot that brings systems back on course. ... [T]hink about IT ... in terms of goals rather than “building projects” ...
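Burgess's notion of computable maintenance, continually comparing actual state against a model and repairing only the deviations, might be sketched roughly as follows. The file paths, permission model, and check interval are illustrative assumptions, not taken from the article.

```python
# A minimal sketch of model-based self-repair in the spirit described above:
# desired state is declared, actual state is continually re-checked, and only
# deviations trigger (idempotent) repairs. Paths and interval are illustrative.
import os
import time

DESIRED_STATE = {
    "/tmp/demo/app.conf": 0o644,   # path -> required permission bits
    "/tmp/demo/secrets": 0o600,
}

def compliant(path, mode):
    return os.path.exists(path) and (os.stat(path).st_mode & 0o777) == mode

def repair(path, mode):
    os.makedirs(os.path.dirname(path), exist_ok=True)
    if not os.path.exists(path):
        open(path, "a").close()        # create the missing file
    os.chmod(path, mode)               # converge permissions toward the model

def maintenance_loop(interval=30):
    while True:
        for path, mode in DESIRED_STATE.items():
            if not compliant(path, mode):
                repair(path, mode)     # self-repair only where state has drifted
        time.sleep(interval)

if __name__ == "__main__":
    maintenance_loop()
```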
View Full Article

Monday, August 3, 2009

Blog: NCSA Researchers Receive Patent for System that Finds Holes in Knowledge Bases

NCSA Researchers Receive Patent for System that Finds Holes in Knowledge Bases
University of Illinois at Urbana-Champaign (08/03/09) Dixon, Vince

Researchers at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign, have received a patent for a method of determining the completeness of a knowledge base by mapping the corpus and locating weak links and gaps between important concepts. NCSA research programmer Alan Craig and former NCSA staffer Kalev Leetaru were building databases using automatic Web crawling and needed a way of knowing when to stop adding to the collection. "So this is a method to sort of help figure that out and also direct that system to go looking for more specific pieces of information," says Craig. Using any collection of information, the technique graphs the data, analyzes conceptual distances within the graph, and identifies parts of the corpus that are missing important documents. The system then suggests what concepts may best fill those gaps, creating a link between two related concepts that might otherwise not have been found. Leetaru says this system helps users complete knowledge bases with information they are initially unaware of. Leetaru says the applications for this method are limitless, as the corpus does not have to be computer-based and the method can be applied to any situation involving a collection of data that users are not sure is complete.
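As a rough illustration (not the patented NCSA method), one way to look for such gaps is to treat the corpus as a concept graph and flag pairs of concepts whose graph distance is unusually large as places where connecting documents may be missing. The example concepts and the distance threshold below are assumptions.

```python
# Illustrative sketch: model the corpus as a concept graph and flag concept
# pairs that are far apart as candidate gaps where documents may be missing.
import itertools
import networkx as nx

G = nx.Graph()
# An edge means "these concepts co-occur in at least one collected document".
G.add_edges_from([
    ("machine learning", "optimization"),
    ("optimization", "linear algebra"),
    ("linear algebra", "numerical methods"),
    ("machine learning", "statistics"),
    ("databases", "indexing"),          # weakly connected cluster
    ("indexing", "optimization"),
])

GAP_THRESHOLD = 3   # assumed: distances of 3+ hops suggest a weak link

def candidate_gaps(graph, threshold=GAP_THRESHOLD):
    dist = dict(nx.all_pairs_shortest_path_length(graph))
    for a, b in itertools.combinations(graph.nodes, 2):
        d = dist[a].get(b)
        if d is None or d >= threshold:
            yield a, b, d               # suggest collecting documents linking a and b

for a, b, d in candidate_gaps(G):
    print(f"possible gap between '{a}' and '{b}' (distance {d})")
```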

View Full Article

Thursday, January 8, 2009

Blog: Billion-Point Computing for Computers

Billion-Point Computing for Computers
UC Davis News and Information (01/08/09) Greensfelder, Liese

Researchers at the University of California, Davis (UC Davis) and Lawrence Livermore National Laboratory have developed an algorithm that will enable scientists to extract features and patterns from extremely large data sets. The algorithm has already been used to analyze and create images of flame surfaces, search for clusters and voids in a virtual universe experiment, and identify and track pockets of fluid in a simulated mixing of two fluids, which generated more than a billion data points on a three-dimensional grid. "What we've developed is a workable system of handling any data in any dimension," says UC Davis computer scientist Attila Gyulassy, who led the five-year development effort. "We expect this algorithm will become an integral part of a scientist's toolbox to answer questions about data." As scientific simulations have become increasingly complex, the data generated by these experiments has grown exponentially, making analyzing the data more challenging. One mathematical tool to extract and visualize useful features in data sets, called the Morse-Smale complex, has existed for nearly 40 years. The Morse-Smale complex partitions sets by similarity of features and encodes them into mathematical terms, but using it for practical applications is extremely difficult, Gyulassy says. The new algorithm divides data sets into parcels of cells and analyzes each parcel separately using the Morse-Smale complex. The results are then merged together, and as new parcels are created from merged parcels, they are analyzed and merged again. With each step, data that does not need to be stored in memory can be discarded, significantly reducing the computational power needed to run the calculations.
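The parcel-then-merge pattern described above can be sketched generically. The per-parcel "analysis" below (collecting local extrema of a 1-D signal) is only a stand-in for the actual Morse-Smale computation; the point it illustrates is that raw data can be discarded once each parcel is summarized, and that summaries are merged level by level.

```python
# Sketch of the divide-analyze-merge pattern; the per-parcel analysis is a
# stand-in (local extrema of a 1-D signal), not the Morse-Smale complex itself.
import numpy as np

def analyze_parcel(values, offset):
    """Return (kind, index) pairs for local minima/maxima inside one parcel."""
    extrema = []
    for i in range(1, len(values) - 1):
        if values[i] < values[i - 1] and values[i] < values[i + 1]:
            extrema.append(("min", offset + i))
        elif values[i] > values[i - 1] and values[i] > values[i + 1]:
            extrema.append(("max", offset + i))
    return extrema

def merge(summary_a, summary_b):
    """Combine two parcel summaries; the raw values are no longer needed here."""
    return summary_a + summary_b

def hierarchical_analysis(data, parcel_size=100_000):
    # Analyze each parcel independently (parcel boundaries are ignored in this
    # toy), then merge summaries pairwise, level by level, discarding raw data.
    summaries = [analyze_parcel(data[i:i + parcel_size], i)
                 for i in range(0, len(data), parcel_size)]
    while len(summaries) > 1:
        summaries = [merge(summaries[i], summaries[i + 1])
                     if i + 1 < len(summaries) else summaries[i]
                     for i in range(0, len(summaries), 2)]
    return summaries[0]

data = np.sin(np.linspace(0, 200 * np.pi, 500_000))   # stand-in for simulation output
print(len(hierarchical_analysis(data)), "local extrema found")
```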

View Full Article

Monday, February 11, 2008

Web: Tool Predicts Election Results and Stock Prices

Web Tool Predicts Election Results and Stock Prices
New Scientist (02/11/08) No. 2642, P. 30; Palmer, Jason
Massachusetts Institute of Technology's Peter Gloor has developed Condor, software that monitors activity on the Web to predict election results and stock prices. Condor has successfully predicted the results of an Italian political party's internal election as well as stock market fluctuations. Condor starts by taking a search term, such as the name of a political candidate or a company, and running it through a Google search. Condor then takes the URLs of the top 10 results and plugs them into the Google search field, prefaced with the term "link:". Google returns the sites that link to the 10 original sites, which Condor then reinserts into Google. Condor then maps the links between all the sites it has found, even if they do not contain the original search term, and finds the shortest way to get from one site to another through the links they contain. The more often a site lies on these shortest paths between other sites, the higher its "betweenness" score. Condor averages the betweenness scores of all the sites to produce an overall score for the original search term. The score provides some indication of popularity. In December 2006, Gloor entered a range of film titles from that year and found that of the 10 with the highest betweenness scores, five won Oscars, four were nominated, and only one did not receive an award. Gloor is working on altering Condor so that it searches only blogs or chat sites.
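Assuming the link graph has already been gathered (the Google "link:" queries are not reproduced here), the betweenness step Condor relies on could be sketched like this; the example sites are hypothetical.

```python
# Sketch of the betweenness scoring idea, assuming the link graph is given.
import networkx as nx

def condor_style_score(link_graph):
    """Average shortest-path betweenness over all sites found for a search term."""
    scores = nx.betweenness_centrality(link_graph)
    return sum(scores.values()) / len(scores)

# Hypothetical link graph: an edge means "site A links to site B".
G = nx.DiGraph([
    ("candidate-smith.example", "newsdaily.example"),
    ("newsdaily.example", "pollwatch.example"),
    ("pollwatch.example", "candidate-smith.example"),
    ("blogger-a.example", "candidate-smith.example"),
    ("blogger-b.example", "newsdaily.example"),
])
print("score for the original search term:", condor_style_score(G))
```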
Click Here to View Full Article

Monday, January 7, 2008

Research: New Threshold for Network Use; Limited Path Percolation

New Threshold for Network Use
Government Computer News (01/07/08) Vol. 27, No. 1; Jackson, Joab
Traditional percolation theory holds that a network is considered functional as long as one workable path is available, but in a recent paper in Physical Review Letters researchers offered a new variant of percolation theory, dubbed Limited Path Percolation, that takes into account how long it would take a message to reach its destination. The longer it takes, the less useful the path is, says study co-author Eduardo Lopez, a researcher at the Energy Department's Los Alamos National Laboratory. "If I'm routing something and it has to go a longer route, due to localized failures, then what are the limits of this?" Lopez says. The Limited Path Percolation variant considers all of the surviving nodes, as well as how much longer a message would take to traverse them. The researchers argue that the network becomes less valuable as paths lengthen, and suggest that the usable threshold depends on how tolerant users are of delays. "The interesting point is not when the percolation threshold is reached, but rather when the network stops becoming efficient," says study co-author Roni Parshani, a graduate student at Israel's Bar-Ilan University.
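The limited-path idea can be sketched as a pairwise check: after failures, two nodes still count as connected only if some surviving path is no more than a chosen stretch factor longer than the original shortest path. The exact thresholds derived in the paper are not reproduced here, and the toy grid and stretch factor below are assumptions.

```python
# Sketch: count node pairs that remain connected without too much of a detour.
import networkx as nx

def usable_fraction(original, failed_nodes, stretch=1.5):
    damaged = original.copy()
    damaged.remove_nodes_from(failed_nodes)
    orig_len = dict(nx.all_pairs_shortest_path_length(original))
    new_len = dict(nx.all_pairs_shortest_path_length(damaged))
    pairs = usable = 0
    for u in original:
        for v in original:
            if u >= v:
                continue
            pairs += 1
            # Still connected AND the surviving path is not too much longer.
            if u in new_len and v in new_len[u] and \
                    new_len[u][v] <= stretch * orig_len[u][v]:
                usable += 1
    return usable / pairs

G = nx.grid_2d_graph(10, 10)           # toy network
print(usable_fraction(G, failed_nodes=[(5, y) for y in range(9)]))
```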
Click Here to View Full Article

Monday, November 19, 2007

Research: Simplicity Sought for Complex Computing [Wolfram]

Simplicity Sought for Complex Computing
Chicago Tribune (11/19/07) Van, Jon
Stephen Wolfram says people building complex computers and writing complicated software may achieve more by studying nature. Wolfram says his company is exploring the "computational universe" to find simpler solutions to complex problems that are currently handled by complex software. "Nature has a secret it uses to make this complicated stuff," Wolfram says. "Traditionally, we're not taking advantage of that secret. We create things that go around things nature is doing." Wolfram believes that nature has created a molecule that could be used as a computer if people ever manage to isolate and program the molecule. University of Chicago Department of Computer Science Chairman Stuart Kurtz says a lot of computer scientists are fascinated by finding simple systems capable of producing complex results. For example, a University of Southern California professor has proposed using recombinant DNA for computing. While DNA computers are largely theoretical, computer scientists take them quite seriously, Kurtz says. "People are used to the idea that making computers is hard," Wolfram says. "But we're saying you can make computers out of small numbers of components, with very simple rules."
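The theme of very simple rules producing complex behavior is classically illustrated by Wolfram's elementary cellular automata. The Rule 30 sketch below is a standard example rather than something discussed in the article itself.

```python
# Rule 30 elementary cellular automaton: each cell's next state is
# left XOR (center OR right). Simple rule, visibly complex output.
def rule30_step(cells):
    n = len(cells)
    return [cells[(i - 1) % n] ^ (cells[i] | cells[(i + 1) % n]) for i in range(n)]

width, steps = 63, 30
row = [0] * width
row[width // 2] = 1                      # single live cell in the middle
for _ in range(steps):
    print("".join("#" if c else " " for c in row))
    row = rule30_step(row)
```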
Click Here to View Full Article

Thursday, November 1, 2007

Research: 'Suicide Nodes' Defend Networks From Within

'Suicide Nodes' Defend Networks From Within
New Scientist (11/01/07) Marks, Paul
University of Cambridge researchers have developed a computer defense system that mimics how bees sacrifice themselves for the greater good of the hive. The approach starts by giving all the devices on a network, or nodes, the ability to destroy themselves and take down any nearby malevolent devices with them. This self-sacrifice provision provides a defense against malicious nodes attacking clean nodes. "Bee stingers are a relatively strong defense mechanism for protecting a hive, but whenever the bee stings, it dies," says University of Cambridge security engineer Tyler Moore. "Our suicide mechanism is similar in that it enables simple devices to protect a network by removing malicious devices--but at the cost of its own participation." The technique, called "suicide revocation," allows a single node to quickly decide whether a nearby node's behavior is malevolent and to shut down the bad node, but at the cost of deactivating itself. The node also sends an encrypted message announcing that it and the malevolent node have been shut down. The purpose of the suicide system is to protect networks as they become increasingly distributed and less centralized. Similar systems allow nodes to "blackball" malicious nodes by taking a collective vote before ostracizing them, but the process is slow and malicious nodes can outvote legitimate ones. "Nodes must remove themselves in addition to cheating ones to make punishment expensive," says Moore. "Otherwise, bad nodes could remove many good nodes by falsely accusing them of misbehavior."
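A toy model of the suicide-revocation decision might look like the following. Message signing/encryption and the actual malice-detection logic are omitted, and the network representation is an assumption rather than the Cambridge design.

```python
# Toy model: a revocation names both the accused node and the accuser, and any
# node that accepts it removes the pair. Signing/encryption is omitted here.
from dataclasses import dataclass

@dataclass(frozen=True)
class SuicideRevocation:
    accuser: str
    accused: str

class Network:
    def __init__(self, nodes):
        self.active = set(nodes)

    def broadcast(self, msg: SuicideRevocation):
        # Punishment is expensive by construction: the accuser always goes too,
        # so a bad node cannot cheaply knock out good nodes with false claims.
        if msg.accuser in self.active:
            self.active.discard(msg.accused)
            self.active.discard(msg.accuser)

net = Network(["a", "b", "c", "mallory"])
net.broadcast(SuicideRevocation(accuser="b", accused="mallory"))
print(sorted(net.active))   # ['a', 'c'] -- both 'b' and 'mallory' are gone
```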
Click Here to View Full Article
