Tuesday, August 31, 2010

Blog: Solving an Earth-Sized Jigsaw Puzzle

Solving an Earth-Sized Jigsaw Puzzle
University of Texas at Austin (08/31/10) Aaron Dubrow

Researchers at the California Institute of Technology and the University of Texas at Austin have developed geodynamics software that enables them to accurately simulate tectonic plate motion. The simulations were made possible by access to petascale high-performance computing systems that form part of the U.S. National Science Foundation's (NSF's) TeraGrid as well as other government facilities. Three years ago, NSF awarded the researchers a Petascale Applications grant to study the mantle simulation problem, and the effort yielded a suite of algorithms that tackle the challenges of the global mantle dynamics problem and can utilize computing resources at the highest scale. A central element of the software performs adaptive mesh refinement (AMR), which concentrates the simulation on the most relevant parts of the model by channeling higher resolution and computing power to those areas. The researchers say the AMR algorithms could be important tools for a broad spectrum of multiscale challenges, such as the simulation of Antarctic ice sheet dynamics. The AMR library has been embedded in the deal.II open source finite element code, which will allow scientists to employ parallel AMR methods in applications characterized by a wide range of time and length scales, such as problems in materials processing or astrophysics.
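
The article describes AMR only in outline. As a purely illustrative aside (not the researchers' parallel AMR library), the refinement idea can be sketched in a few lines: split only those mesh cells where an error indicator says the solution varies sharply, so resolution ends up concentrated where it matters.

```python
# Toy illustration of adaptive mesh refinement (AMR) in one dimension.
# This is NOT the researchers' parallel AMR library; it only sketches the
# core idea: put resolution where an error indicator says it is needed.

def refine(cells, field, threshold):
    """Split every cell whose field jump exceeds `threshold` into two cells."""
    new_cells = []
    for (a, b) in cells:
        mid = 0.5 * (a + b)
        # Simple error indicator: jump of the field across the cell.
        if abs(field(b) - field(a)) > threshold:
            new_cells.extend([(a, mid), (mid, b)])   # refine: higher resolution here
        else:
            new_cells.append((a, b))                 # keep the coarse cell
    return new_cells

def sharp_front(x):
    """Stand-in for a mantle field with a localized feature (e.g., a slab)."""
    return 1.0 if x > 0.7 else 0.0

mesh = [(i / 10, (i + 1) / 10) for i in range(10)]   # coarse uniform mesh
for _ in range(4):                                    # a few refinement sweeps
    mesh = refine(mesh, sharp_front, threshold=0.5)

print(len(mesh), "cells; smallest cell width:", min(b - a for a, b in mesh))
```

After four sweeps only the cells around the sharp feature have been subdivided, which is the behavior the production codes exploit at global scale.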

View Full Article

Thursday, August 26, 2010

Blog: New View of Tectonic Plates

New View of Tectonic Plates
California Institute of Technology (08/26/10) Svitil, Kathy

Researchers at the California Institute of Technology (Caltech) and the University of Texas at Austin (UT Austin) have developed algorithms that can simultaneously model the earth's mantle flow, analyze large-scale tectonic plate motions, and simulate the behavior of individual fault zones. The researchers say that together the algorithms produce an unprecedented view of plate tectonics and the forces that drive it. The research "illustrates the interplay between making important advances in science and pushing the envelope of computational science," says Caltech's Michael Gurnis. The researchers used a technique called adaptive mesh refinement to create the algorithms. The algorithms "allow for adaptivity in a way that scales to the hundreds of thousands of processor cores of the largest supercomputers available today," says UT Austin's Carsten Burstedde. The algorithms helped the researchers simulate global mantle flow and how it manifests as plate tectonics and the motion of individual faults. The investigators found that anomalously rapid motion of microplates emerged from the simulations. Another surprising result of the research was that much of the energy dissipation occurs deep within the earth's interior.

View Full Article

Blog: MIT Builds Swimming, Oil-Eating Robots

MIT Builds Swimming, Oil-Eating Robots
Computerworld (08/26/10) Gaudin, Sharon

Massachusetts Institute of Technology (MIT) researchers have developed a nanotechnology-based robot that can autonomously navigate across an ocean's surface and clean up an oil spill. The researchers say that a fleet of 5,000 of the robots, collectively called Seaswarm, could clean up a spill the size of the recent one in the Gulf of Mexico in about a month. "Unlike traditional skimmers, Seaswarm is based on a system of small, autonomous units that behave like a swarm and 'digest' the oil locally while working around the clock without human intervention," says MIT's Carlo Ratti. Seaswarm is designed to use a conveyor belt covered with a thin nanowire mesh that absorbs oil. The nanomesh repels water while absorbing 20 times its own weight in oil. A team of robots would use wireless communications and the Global Positioning System to move across the ocean without bunching up or leaving some areas uncleaned.

View Full Article

Tuesday, August 24, 2010

Blog: Sizing Samples

Sizing Samples
MIT News (08/24/10) Hardesty, Larry

Numerous scientific fields use computers to infer patterns in data, and Massachusetts Institute of Technology researchers led by graduate student Vincent Tan have taken a first step toward determining how much data is enough to support reliable pattern inference by envisioning data sets as graphs. In the researchers' framework, the nodes of the graph represent data and the edges represent correlations between them; from this point of view, a computer tasked with pattern recognition is given a set of nodes and asked to infer the weights of the edges between them. The researchers have shown that graphs configured like chains and stars represent, respectively, the best- and worst-case scenarios for computers charged with pattern recognition. Tan says that for tree-structured graphs with shapes other than stars or chains, the "strength of the connectivity between the variables matters." Carnegie Mellon University professor John Lafferty notes that a tree-structured approximation of data is more computationally efficient.
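
The summary does not give the researchers' actual method, but the graph view can be made concrete with a small sketch: sample data from a chain-structured model, estimate pairwise correlations, and recover a tree by taking a maximum-weight spanning tree over the correlation strengths (a Chow-Liu-style heuristic). The data model, sample size, and estimator below are illustrative assumptions, not details from the MIT paper.

```python
# Hedged sketch: recovering a chain-structured dependency graph from samples.
# The data model and estimator here are illustrative assumptions, not the
# MIT group's actual method.
import numpy as np

rng = np.random.default_rng(0)
n_vars, n_samples = 5, 2000

# Generate a chain X0 -> X1 -> ... -> X4: each variable is a noisy copy
# of its predecessor, so only neighboring variables are strongly correlated.
x = np.zeros((n_samples, n_vars))
x[:, 0] = rng.normal(size=n_samples)
for i in range(1, n_vars):
    x[:, i] = 0.8 * x[:, i - 1] + rng.normal(size=n_samples)

corr = np.corrcoef(x, rowvar=False)               # estimated edge "weights"

# Maximum-weight spanning tree over |correlation| (Kruskal with union-find),
# in the spirit of a Chow-Liu tree approximation.
parent = list(range(n_vars))
def find(a):
    while parent[a] != a:
        parent[a] = parent[parent[a]]
        a = parent[a]
    return a

edges = sorted(((abs(corr[i, j]), i, j)
                for i in range(n_vars) for j in range(i + 1, n_vars)),
               reverse=True)
tree = []
for w, i, j in edges:
    ri, rj = find(i), find(j)
    if ri != rj:                                   # adding this edge keeps it a tree
        parent[ri] = rj
        tree.append((i, j))

print("Recovered tree edges:", sorted(tree))       # expect the chain (0,1)...(3,4)
```

With enough samples the recovered tree matches the true chain; how many samples "enough" is, as a function of the graph's shape, is the question the MIT work addresses.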

View Full Article

Friday, August 20, 2010

Blog: W3C Launches Web Performance Working Group

W3C Launches Web Performance Working Group
Government Computer News (08/20/10) Mackie, Kurt

A new World Wide Web Consortium (W3C) working group will focus on improving the measurement of Web application performance times. Representatives from Microsoft and Google will co-chair the Web Performance Working Group, which will initially create a common application programming interface (API) for measuring Web page loading and Web app performance. The companies have been working independently on the problem, but will now pool their efforts. Microsoft has implemented W3C's Web Timing draft spec in the third Platform Preview of Internet Explorer 9. Google has implemented the Web Timing spec in the WebKit rendering engine, which powers its Chrome browser, and says performance metrics are now accessible to developers in the Chrome 6 browser. The companies use vendor-specific prefixes for their implementations of the Web Timing spec. "With two early implementations available, it shouldn't take long to finalize an interoperable API and remove the vendor prefixes," says Microsoft's Jason Weber.

View Full Article

Blog: The biggest winner in health reform

The biggest winner in health reform

By Dana Blankenhorn; Aug 20, 2010

The biggest winners out of health reform may come from an industry you never heard of.

Predictive modeling.

The idea has been around for 20 years. Compare the risks of patients, or the costs of clinics, in an effort to lower both. Collect the data, analyze it, then act on the analysis.

Verisk Health is in this business. Much of its work in the past was done with health insurance companies, and chief medical officer Nathan Gunn admitted that in the old fee-for-service era it wasn't working.

Tiering physicians into high-cost and low-cost networks did not work. Cutting off patients based on risk assessments proved to be political poison.

But capitation, the change from a fee-for-service to a fee-per-patient model, changes everything, he said. Capitation is the health reform the insurance industry wanted, and got, in the bill that passed Congress earlier this year.
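
The post stays at the business level, but the workflow it sketches (collect the data, analyze it, act on the analysis) is essentially risk scoring. Below is a purely illustrative sketch of that loop on synthetic data; the features and model are made up and have no connection to Verisk Health's actual products.

```python
# Purely illustrative risk-scoring sketch (synthetic data, hand-rolled
# logistic regression); it has no connection to Verisk Health's models.
import numpy as np

rng = np.random.default_rng(1)
n = 1000
# Made-up features: age (scaled), number of chronic conditions, ER visits.
X = np.column_stack([rng.normal(size=n), rng.poisson(1.5, n), rng.poisson(0.5, n)])
true_w = np.array([0.6, 0.9, 1.2])
y = (rng.random(n) < 1 / (1 + np.exp(-(X @ true_w - 2.0)))).astype(float)

# "Collect the data, analyze it": fit a logistic model by gradient descent.
w, b = np.zeros(3), 0.0
for _ in range(2000):
    p = 1 / (1 + np.exp(-(X @ w + b)))
    grad_w, grad_b = X.T @ (p - y) / n, np.mean(p - y)
    w -= 0.1 * grad_w
    b -= 0.1 * grad_b

# "Act on the analysis": flag the highest-risk patients for outreach.
risk = 1 / (1 + np.exp(-(X @ w + b)))
print("Top-decile risk threshold:", np.quantile(risk, 0.9))
```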

http://www.smartplanet.com/technology/blog/rethinking-healthcare/the-biggest-winner-in-health-reform/1576/

Thursday, August 19, 2010

Blog: Research Paves the Way for Unselfish Computerized Agents

Research Paves the Way for Unselfish Computerized Agents
University of Southampton (ECS) (08/19/10) Lewis, Joyce

Adam Davies, a Ph.D. student at the University of Southampton's School of Electronics and Computer Science, has developed a computer simulation that could describe how selfish agents learn to cooperate in nature. Davies' adaptive system encourages agents to stop behaving selfishly and to choose actions that favor the common good instead. The approach uses Hebbian learning, a learning process that occurs in the brain, to develop agents that behave as creatures of habit and can learn to make decisions that maximize global utility. "Our research looks at the effect of a strategy for increasing total utility in systems of selfish agents," Davies says. "With this strategy in place, selfish agents make decisions based on their learnt preferences rather than their true utility."
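
The article gives no model details, but the "creatures of habit" idea can be sketched: agents reinforce whichever action they just took (a Hebbian-style update), so over time their learnt preferences, rather than their instantaneous selfish payoff, drive their choices. Everything below (actions, update rule, constants) is an illustrative assumption, not Davies' actual simulation.

```python
# Toy sketch of "creatures of habit": agents reinforce the action they take
# (a Hebbian-style update), so learnt preferences, not instantaneous selfish
# payoff, eventually drive choices. Illustrative assumptions throughout; this
# is not Adam Davies' actual simulation.
import random

N_AGENTS, N_ROUNDS = 20, 500
LEARNING_RATE = 0.05

# Two actions: 0 is individually tempting, 1 is better for the group.
prefs = [[1.0, 1.0] for _ in range(N_AGENTS)]   # learnt preference weights

def choose(pref):
    total = pref[0] + pref[1]
    return 0 if random.random() < pref[0] / total else 1

for _ in range(N_ROUNDS):
    actions = [choose(p) for p in prefs]
    cooperators = actions.count(1)
    for agent, act in enumerate(actions):
        # Hebbian-style update: reinforce the action just taken, scaled here
        # by the shared outcome (fraction of cooperators) rather than by the
        # agent's private payoff.
        prefs[agent][act] += LEARNING_RATE * (cooperators / N_AGENTS)

habitual = sum(p[1] > p[0] for p in prefs)
print(f"{habitual}/{N_AGENTS} agents now habitually pick the cooperative action")
```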

View Full Article

Blog: What Does 'P vs. NP' Mean for the Rest of Us?

What Does 'P vs. NP' Mean for the Rest of Us?
Technology Review (08/19/10) Pavlus, John

Although the current consensus is that Hewlett-Packard Labs research scientist Vinay Deolalikar's approach is fundamentally flawed, a verified P versus NP proof would conclusively determine what kinds of problems are and are not computer-solvable. Existing NP-class problems carry significant implications for pattern matching and optimization across a broad range of subjects, including the arrangement of transistors on a silicon chip, the development of accurate financial prediction models, and the analysis of cellular protein-folding behavior. The P versus NP problem asks whether these two classes are identical; if they were, every NP problem would harbor a hidden shortcut allowing computers to quickly find perfect solutions. Proving P is not equal to NP would validate widely held assumptions, such as the inability to efficiently factor the massive composite numbers that form the foundation of modern cryptography. Still, "even if you proved that P does not equal NP ... it would have to radically expand our understanding of those capabilities, and make many new things possible with computers, in addition to all the clever workarounds we've already found," says Georgia Tech's Richard Lipton.
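
To make the verify-versus-solve distinction behind P versus NP concrete, here is a small illustration (not from the article) using subset sum, a classic NP problem: checking a proposed answer takes time proportional to its length, while the only known general way to find one is to search an exponentially large space of candidates.

```python
# Illustration of the verify-vs-search gap behind P vs. NP, using subset sum:
# "does some subset of `nums` add up to `target`?" Verifying a claimed subset
# is fast; finding one by brute force can take time exponential in len(nums).
from itertools import combinations

def verify(nums, target, subset):
    """Polynomial-time check of a proposed solution (the easy, 'NP' part)."""
    return all(x in nums for x in subset) and sum(subset) == target

def brute_force_search(nums, target):
    """Exhaustive search over all 2**len(nums) subsets (the hard part)."""
    for r in range(len(nums) + 1):
        for subset in combinations(nums, r):
            if sum(subset) == target:
                return list(subset)
    return None

nums = [3, 34, 4, 12, 5, 2]
target = 9
solution = brute_force_search(nums, target)    # may examine up to 2**6 subsets
print("found:", solution, "| verifies:", verify(nums, target, solution))
```

If P equaled NP, some clever algorithm would find such solutions about as quickly as they can be checked; Deolalikar's claim is that no such algorithm can exist.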

View Full Article

Wednesday, August 18, 2010

Blog: Researcher Cracks ReCAPTCHA

Researcher Cracks ReCAPTCHA
Dark Reading (08/18/10) Higgins, Kelly Jackson

Independent researcher Chad Houck recently released algorithms that can crack Google's reCAPTCHA program, even after the company's recent improvements to the security tool. Houck's method combines his own algorithms, including one that undoes the ribboning distortion reCAPTCHA uses to hide words from software, with optical character recognition and a dictionary attack. Houck says the weakness of the reCAPTCHA program lies in its design. "Every time someone types the verification word correctly, [the program] assumes they also typed the digitization word correctly," he says. Google strengthened the verification words in the program both before and after Houck's paper was published, according to a Google spokesperson. "We've found reCAPTCHA to be far more resilient while also striking a good balance with human usability, and we've received very positive feedback from customers," the spokesperson says. However, Houck says he has solved Google's latest tweaks and claims that "all of their security features are flawed."

View Full Article

Monday, August 16, 2010

Blog: Step 1: Post Elusive Proof. Step 2: Watch Fireworks

Step 1: Post Elusive Proof. Step 2: Watch Fireworks
New York Times (08/16/10) Markoff, John

A proposed proof of the P versus NP problem posted by Hewlett-Packard mathematician Vinay Deolalikar posits that P does not equal NP, which implies that there are problems whose solutions can be quickly recognized as correct but cannot be computed efficiently. This is a key underlying precept of modern cryptography. The posting also is significant for the Internet-based collaboration it inspired among complexity theorists and others who gathered to discuss the proof. The theorists used blogs and wikis to conduct real-time discussion and analysis of the proof, a kind of collaboration that has emerged only relatively recently in the math and computer science communities. New York University professor Clay Shirky contends in a recently published book, "Cognitive Surplus: Creativity and Generosity in a Connected Age," that the development of these new collaborative tools is fostering a second scientific revolution by ushering in a new paradigm of peer review. "It's not just, 'Hey, everybody, look at this,' but rather a new set of norms is emerging about what it means to do mathematics, assuming coordinated participation," he says.

View Full Article

Friday, August 13, 2010

Blog: Experts Warn of a Weak Link in the Security of Web Sites

Experts Warn of a Weak Link in the Security of Web Sites
New York Times (08/13/10) Helft, Miguel

The system of certificate authorities that Web sites rely on to guarantee their authenticity is a growing security concern, experts say. As the number of third-party authorities has grown, it has become increasingly difficult to trust those who issue the certificates. "It is becoming one of the weaker links that we have to worry about," says the Electronic Frontier Foundation's (EFF's) Peter Eckersley. There are more than 650 organizations that can issue certificates that will be accepted by Internet Explorer or Firefox, according to the EFF. One of the weak links is Etisalat, a wireless carrier in the United Arab Emirates that was involved in a dispute with BlackBerry's maker, Research In Motion, over encryption. Etisalat could issue fake certificates to itself for scores of Web sites, and "use those certificates to conduct virtually undetectable surveillance and attacks against those sites," Eckersley says. Other researchers also are concerned about the proliferation of certificate authorities. "It is a bad enough problem that it should be receiving a lot more attention and we should be trying to fix it," says Princeton University's Stephen Schultze.
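
The trust model the article describes is visible from the client side: a browser (or the script below) accepts a site's certificate as long as it chains to any one of the hundreds of roots it ships with. Here is a small sketch of inspecting who actually vouched for a site; the host name is only an example.

```python
# Inspect which certificate authority vouched for a site. A TLS client will
# accept the connection if the chain ends at ANY trusted root, which is why
# a single misbehaving authority among hundreds weakens the whole system.
# The host below is only an example.
import socket
import ssl

host = "www.example.com"
context = ssl.create_default_context()       # uses the platform's trusted roots

with socket.create_connection((host, 443), timeout=10) as sock:
    with context.wrap_socket(sock, server_hostname=host) as tls:
        cert = tls.getpeercert()              # already validated against some root

issuer = dict(item for pair in cert["issuer"] for item in pair)
subject = dict(item for pair in cert["subject"] for item in pair)
print("subject:", subject.get("commonName"))
print("issued by:", issuer.get("organizationName"), "/", issuer.get("commonName"))
```

Nothing in this handshake tells the user whether the issuing authority, whichever of the 650-plus it happens to be, deserved that trust.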

View Full Article

Thursday, August 12, 2010

Blog: Riders on a Swarm

Riders on a Swarm
Economist (08/12/10)

Free University of Brussels (FUB) researchers are developing artificial intelligence systems based on ant behavior. In 1992, FUB researcher Marco Dorigo and his team developed Ant Colony Optimization (ACO), an algorithm that analyzes problems by simulating a group of ants wandering over an area and laying down pheromones. ACO has since grown into a wide family of algorithms that have been applied to a variety of problems. For example, European distributors use a program called AntRoute, which takes about 15 minutes to produce a delivery plan for 1,200 trucks. The researchers also have developed AntNet, a routing protocol in which packets of information jump from node to node, leaving a trace that signals the "quality" of their trip as they go. The particle swarm optimization (PSO) algorithm is used for continuous problems that have a potentially infinite number of solutions. There are now about 650 tested PSO applications, including image and video analysis, antenna design, and medical diagnostic systems. Dorigo is currently working on the Swarmanoid project, which aims to develop a swarm of cheap, small robots that cooperate using swarm intelligence.
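
Neither algorithm's details appear in the article, but particle swarm optimization is compact enough to sketch: candidate solutions ("particles") move through a continuous search space, each pulled toward the best position it has found so far and toward the swarm's overall best. The objective function and coefficients below are illustrative choices, not drawn from any of the applications mentioned.

```python
# Minimal particle swarm optimization (PSO) sketch for a continuous problem.
# Objective function, swarm size, and coefficients are illustrative choices.
import random

def objective(x, y):
    return (x - 3) ** 2 + (y + 1) ** 2              # minimum at (3, -1)

N_PARTICLES, N_STEPS = 30, 200
W, C1, C2 = 0.7, 1.5, 1.5                           # inertia, self pull, swarm pull

pos = [[random.uniform(-10, 10), random.uniform(-10, 10)] for _ in range(N_PARTICLES)]
vel = [[0.0, 0.0] for _ in range(N_PARTICLES)]
best_pos = [p[:] for p in pos]                      # each particle's best so far
best_val = [objective(*p) for p in pos]
g_best = best_pos[best_val.index(min(best_val))][:] # swarm's best so far

for _ in range(N_STEPS):
    for i in range(N_PARTICLES):
        for d in range(2):
            r1, r2 = random.random(), random.random()
            vel[i][d] = (W * vel[i][d]
                         + C1 * r1 * (best_pos[i][d] - pos[i][d])
                         + C2 * r2 * (g_best[d] - pos[i][d]))
            pos[i][d] += vel[i][d]
        val = objective(*pos[i])
        if val < best_val[i]:
            best_val[i], best_pos[i] = val, pos[i][:]
            if val < objective(*g_best):
                g_best = pos[i][:]

print("best solution found:", g_best)               # should approach (3, -1)
```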

View Full Article

Tuesday, August 10, 2010

Blog: Teraflop Troubles: The Power of Graphics Processing Units May Threaten the World's Password Security System

Teraflop Troubles: The Power of Graphics Processing Units May Threaten the World’s Password Security System
Georgia Tech Research Institute (08/10/10) Englehardt, Kirk J.; Toon, John

Georgia Tech Research Institute (GTRI) computer scientists are studying whether desktop computers with graphics processing units (GPUs) have become so powerful that they compromise password protection. "Right now we can confidently say that a seven-character password is hopelessly inadequate--and as GPU power continues to go up every year, the threat will increase," says GTRI's Richard Boyd. Modern GPUs are so fast because they are designed as parallel computers. When given a problem, GPUs divide the task among multiple processing units and tackle different parts of the problem simultaneously. Software programs designed to break passwords are freely available on the Internet, and combined with the availability of GPUs they make the threat to password security imminent, the researchers say. GTRI's Joshua L. Davis says the best password is an entire sentence that includes numbers or symbols, because it is both long and complex yet still easy to remember.
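
Some back-of-the-envelope arithmetic (not from the article) shows why seven characters fall short: with roughly 95 printable ASCII characters and a cracker testing on the order of a billion guesses per second, the entire seven-character space can be exhausted in under a day, and each additional character multiplies the work by 95. The guess rate below is an assumed round number, not a GTRI measurement.

```python
# Back-of-the-envelope password keyspace arithmetic. The guesses-per-second
# figure is an assumed round number, not a measurement from GTRI.
ALPHABET = 95                  # printable ASCII characters
GUESSES_PER_SECOND = 1e9       # assumed throughput for one GPU-based cracker

for length in range(6, 11):
    keyspace = ALPHABET ** length
    seconds = keyspace / GUESSES_PER_SECOND
    print(f"{length} chars: {keyspace:.2e} candidates, "
          f"~{seconds / 86400:.1f} days to exhaust")
```

Under these assumptions a seven-character password survives less than a day of searching, while a ten-character one holds out for centuries, which is the logic behind Davis' sentence-length advice.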

View Full Article

Monday, August 9, 2010

Blog: In a Video Game, Tackling the Complexities of Protein Folding

In a Video Game, Tackling the Complexities of Protein Folding
New York Times (08/09/10) Markoff, John

Foldit is a free online game from University of Washington researchers in which thousands of volunteer gamers bested a computer program in determining how proteins fold into their three-dimensional shapes. The researchers say the success of the Foldit gamers reflects how nonscientists can collaborate to devise new algorithms and strategies that are distinct from conventional software solutions to the challenge of protein folding. Foldit starts with a series of tutorials in which the player controls protein-like structures on a computer display. As structures are tweaked in the game, a score is estimated based on how well the protein is folded. Gamers are provided with a set of controls that enable them to manipulate the backbone and the amino acid side chains of a specific protein into a more efficient configuration. The researchers note that the Foldit players outperformed the software tools in areas that include pattern recognition.

View Full Article

Blog: HP Researcher Claims to Crack Compsci Complexity Conundrum

HP Researcher Claims to Crack Compsci Complexity Conundrum
IDG News Service (08/09/10) Jackson, Joab

Hewlett-Packard researcher Vinay Deolalikar claims to have solved the computer science problem widely known as P versus NP. In an email to a group of math professors, Deolalikar said he was announcing proof that polynomial time (P) is not equal to nondeterministic polynomial time (NP), which may mean certain problems can only be solved by brute-force searching, if solutions can be found at all. Deolalikar said he pieced together principles from multiple areas within mathematics. "The major effort in constructing this proof was uncovering a chain of conceptual links between various fields and viewing them through a common lens," Deolalikar wrote. So far, no one familiar with the problem has confirmed that Deolalikar has solved it, given the amount of checking that still needs to be done on his solution. The Clay Mathematics Institute has promised to pay $1 million to the person who solves the problem. The P versus NP problem involves "determining whether questions exist whose answer can be quickly checked, but which require an impossibly long time to solve by any direct procedure," according to the institute.

View Full Article

Blog: New Paradigm for Scientific Publication and Peer Review

New Paradigm for Scientific Publication and Peer Review
ICT Results (08/09/10)

A European research project aims to replace scientific papers and peer reviews with a process inspired by social networking. The LiquidPublication project seeks to revolutionize how scientists share their work and evaluate contributions from others. The current scientific publication paradigm leads to wasted time, a heavy load for peer reviewers, and too many papers that recycle already published research or dribble out results a bit at a time, says project leader Fabio Casati. The researchers are developing a new way to share scientific knowledge, which they call liquid publication. The method takes advantage of the Web's ability to speed communication, store, search, and retrieve data, and foster communities of interest, in order to replace traditional peer reviews and paper publications. "If we can make scientists' work even 10 percent more efficient, it will give a great benefit to the community," Casati says. He says liquid publication could reduce the number of papers that merely report incremental new results; instead, the researchers want incremental changes to be clearly identified by versions. They also suggest replacing peer reviews with the assessment implicitly given by the relevant community as it edits and reads liquid journals.

View Full Article

Friday, August 6, 2010

Blog: Virtual Walkers Lead the Way for Robots

Virtual Walkers Lead the Way for Robots
New Scientist (08/06/10) Campbell, MacGregor

Researchers are studying ways to use simulated physics and evolution to give robots and virtual characters more realistic gaits. Simulated evolution, a process developed by NaturalMotion, begins with a population of virtual skeletons controlled by a network of virtual nerves. Each skeleton has a slightly different network, affecting its ability to walk. Those that can walk furthest are declared "most fit" and are used to spawn the next generation, in which a subset of the nerves is slightly altered. Over several generations the skeletons automatically evolve into better walkers. Meanwhile, University of British Columbia researcher Michiel van de Panne and University of Toronto researcher Martin de Lasa have developed overarching controllers, instead of animating a character by controlling each joint independently. The controllers create rules that specify how the character should behave, and the individual joints move to obey them. In the researchers' model, once the path of the swinging foot is specified by the controller, the angles between the various joints in the leg and hip are automatically calculated. The researchers hope to apply their work to humanoid robots.
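
The article gives no implementation details, but the select-and-mutate loop it describes is simple to sketch: score a population of controller parameter vectors with a fitness function (here a stand-in for "distance walked"), keep the best, and spawn the next generation by adding small mutations. The fitness function and constants are placeholders, not NaturalMotion's physics simulation.

```python
# Sketch of the mutate-and-select loop behind "simulated evolution" of walkers.
# The fitness function is a stand-in for a physics simulation that would
# measure how far a skeleton walks; all constants are illustrative.
import random

N_PARAMS, POP_SIZE, N_GENERATIONS, MUTATION = 8, 50, 100, 0.1

def distance_walked(controller):
    """Placeholder fitness: a real system would run a physics simulation."""
    target = [0.5] * N_PARAMS                      # pretend ideal gait parameters
    return -sum((c - t) ** 2 for c, t in zip(controller, target))

population = [[random.uniform(-1, 1) for _ in range(N_PARAMS)]
              for _ in range(POP_SIZE)]

for gen in range(N_GENERATIONS):
    population.sort(key=distance_walked, reverse=True)
    parents = population[:POP_SIZE // 5]           # the "most fit" walkers survive
    population = [
        [p + random.gauss(0, MUTATION) for p in random.choice(parents)]
        for _ in range(POP_SIZE)
    ]

best = max(population, key=distance_walked)
print("best fitness after evolution:", round(distance_walked(best), 4))
```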

View Full Article

Thursday, August 5, 2010

Blog: Speech Recognition Systems Must Get Smarter, Professor Says

Speech Recognition Systems Must Get Smarter, Professor Says
IDG News Service (08/05/10) Jackson, Joab

Most modern computerized speech-recognition systems can understand what a human says up to 98 percent of the time, yet people still get frustrated using automated phone help-desk systems, says University of Rochester professor James Allen. He says the key to making speech-recognition systems less frustrating to use is giving them a deeper understanding of language and making them more interactive. Allen has been researching ways to make these systems more life-like in the way they interact with humans. The goal is to be able to "talk to a machine the same way we can talk to a person," he says. A program designed by Allen and his team, called Plow, can learn how to carry out common tasks on a computer. "This is a system that allows you to essentially use dialog to train your system how to do things for you," he says. Another program designed by Allen and his research team, called Cardiac, mimics the questions a nurse asks a patient with heart disease. The system determines what information was provided and what is still needed. However, Allen says better two-way communication between users and computers is still needed.

View Full Article

Monday, August 2, 2010

Blog: Team Releases Tools for Secure Cloud Computing

Team Releases Tools for Secure Cloud Computing
UT Dallas News (08/02/10) Moore, David

University of Texas at Dallas (UTD) researchers have released software tools designed to facilitate secure cloud computing. "In order to use electricity, we do not maintain electricity generators at home, instead we get the electricity on demand from the grid when we need it," says UTD Cyber Security Research Center director Bhavani Thuraisingham. She says the cloud computing model works on a similar principle. Research shows that the biggest hurdle to broad adoption of cloud computing is concern about the security of sensitive data, so security has been one of the UTD team's focal points. "In building a cloud, we are using a number of open source tools, including Apache's Hadoop distributed file system, Google's MapReduce, and the University of Cambridge's Xen virtual machine monitor," Thuraisingham says. She says the tools provide the infrastructure for security features. UTD's tools provide secure query processing capabilities and prevent unauthorized access to sensitive data. The system's framework consists of a network layer, an infrastructure layer, a storage layer, and a data layer.
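
The article names the building blocks (Hadoop's distributed file system, MapReduce, Xen) without detail. The MapReduce programming model itself is easy to illustrate in miniature: a map step emits key-value pairs, a shuffle groups them by key, and a reduce step aggregates each group. The word-count example below is generic, not UTD's secure query-processing layer.

```python
# Miniature, single-process illustration of the MapReduce model (map, shuffle,
# reduce) that systems like Hadoop run across many machines. This is generic
# word counting, not UTD's secure query-processing tools.
from collections import defaultdict

def map_phase(document):
    """Emit (key, value) pairs: one (word, 1) per word."""
    return [(word.lower(), 1) for word in document.split()]

def shuffle(pairs):
    """Group values by key, as the framework would between map and reduce."""
    groups = defaultdict(list)
    for key, value in pairs:
        groups[key].append(value)
    return groups

def reduce_phase(key, values):
    """Aggregate each group: here, sum the counts."""
    return key, sum(values)

documents = ["secure cloud computing", "cloud query processing", "secure query"]
mapped = [pair for doc in documents for pair in map_phase(doc)]
reduced = dict(reduce_phase(k, v) for k, v in shuffle(mapped).items())
print(reduced)   # {'secure': 2, 'cloud': 2, 'computing': 1, 'query': 2, 'processing': 1}
```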

View Full Article
