Monday, February 28, 2011

Blog: Remapping Computer Circuitry to Avert Impending Bottlenecks

Remapping Computer Circuitry to Avert Impending Bottlenecks
New York Times (02/28/11) John Markoff

Hewlett-Packard (HP) researchers have proposed a fundamental redesign of the modern computer that combines memory and computing power to significantly reduce energy consumption. The new computing paradigm would be based on memory chips called nanostores, which are hybrid, three-dimensional systems whose lower-level circuits are based on memristors. The nanostore chips have a multi-story design, with silicon-based computing circuits that use minimal energy. HP's Parthasarathy Ranganathan says that in about seven years nanostore chips could hold up to one trillion bits of memory and contain 128 processors. He says the key is their ability to move data using very little energy. "What's going to be the killer app 10 years from now?" Ranganathan asks. "It's fairly clear it's going to be about data; that's not rocket science. In the future every piece of storage on the planet will come with a built-in computer." Other technologies also are being developed to make computing more energy efficient. Researchers at Harvard University and the Mitre Corporation recently developed nanoprocessor tiles made from germanium-silicon wires, and IBM researchers have been studying phase-change memories, which use an electric current to switch a material between a crystalline and an amorphous state.

View Full Article

Thursday, February 24, 2011

Blog: Automaton, Know Thyself: Robots Become Self-Aware

Automaton, Know Thyself: Robots Become Self-Aware
Scientific American (02/24/11) Charles Q. Choi

Cornell University's Hod Lipson is developing robots that can reflect on their own thoughts by equipping them with two controllers. One controller was rewarded for chasing dots of blue light while avoiding red dots, and a second controller modeled how the first behaved and how successful it was. This technique, a form of metacognition, enabled the robot to adapt after about 10 physical experiments, compared with the thousands of experiments needed by traditional evolutionary robotics methods. "This could lead to a way to identify dangerous situations, learning from them without having to physically go through them--that's something that's been missing in robotics," says the University of Vermont's Josh Bongard. Lipson also is studying how robots can model what others are thinking by programming one robot to watch another move randomly toward a light. The observer developed the ability to predict the other's movements so well that it could lay a trap for it on the ground. "This research might also shed new light on the very difficult topic of our self-awareness from a new angle--how it works, why, and how it developed," Lipson says. One application for self-aware systems could be bridge maintenance, with sensors constantly monitoring vibrations in the framework to build a self-image of the bridge.
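
As a rough illustration of the two-controller arrangement described above, the toy sketch below (hypothetical code, not Lipson's) pairs a primary controller with a second model that learns to predict the reward the first will earn, so candidate moves can be evaluated mentally after only a handful of physical trials.

    # Toy sketch of a robot with a second, "self-model" controller (hypothetical,
    # not the Cornell code): the self-model learns to predict the reward the
    # primary controller will receive, so candidate moves can be tested mentally.
    import random

    def reward(position, blue, red):
        # Higher reward for being near the blue light, penalty for being near red.
        return -abs(position - blue) + abs(position - red)

    class SelfModel:
        """Learns a crude estimate of expected reward for each candidate move."""
        def __init__(self):
            self.estimates = {-1: 0.0, 0: 0.0, +1: 0.0}
        def update(self, move, observed_reward, rate=0.2):
            self.estimates[move] += rate * (observed_reward - self.estimates[move])
        def best_move(self):
            return max(self.estimates, key=self.estimates.get)

    blue, red = 10, -10
    position = 0
    model = SelfModel()

    # A handful of physical trials train the self-model...
    for _ in range(10):
        move = random.choice([-1, 0, +1])
        model.update(move, reward(position + move, blue, red))

    # ...after which the robot "reflects" instead of experimenting physically.
    print("Move chosen by the self-model:", model.best_move())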

View Full Article

Tuesday, February 22, 2011

Blog: Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era

Toward Computers That Fit on a Pen Tip: New Technologies Usher in the Millimeter-Scale Computing Era
University of Michigan News Service (02/22/11) Nicole Casal Moore

University of Michigan researchers, led by professors Dennis Sylvester, David Blaauw, and David Wentzloff, recently presented papers at the International Solid-State Circuits Conference describing a prototype implantable eye-pressure monitor for glaucoma patients and a compact radio that does not need to be tuned to find a signal. The radio could be used to track pollution, monitor structural integrity, or perform surveillance. The research uses millimeter-scale technologies to create devices for ubiquitous computing environments. The glaucoma monitor is slightly larger than one cubic millimeter and contains an ultra-low-power microprocessor, a pressure sensor, memory, a thin-film battery, a solar cell, and a wireless radio transmitter that sends data to an external reading device. "This is the first true millimeter-scale complete computing system," Sylvester says. Wentzloff and doctoral student Kuo-Ken Huang have developed a tiny radio with an on-chip antenna that keeps its own time and serves as its own frequency reference, enabling the system to communicate precisely with other devices. "By designing a circuit to monitor the signal on the antenna and measure how close it is to the antenna's natural resonance, we can lock the transmitted signal to the antenna's resonant frequency," Wentzloff says.
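
As a loose software analogy for the frequency-locking behavior Wentzloff describes (the real design is an analog on-chip circuit; the resonance curve and numbers below are invented), the sketch probes the antenna's response around the current transmit frequency and steps toward the peak.

    # Illustrative software analogue of locking a transmitter to an antenna's
    # resonant frequency (the real design is analog hardware; the response
    # model and numbers here are invented).

    RESONANCE_GHZ = 0.915  # assumed "true" resonant frequency of the antenna

    def antenna_response(freq_ghz):
        # Toy resonance curve: response peaks at the resonant frequency.
        return 1.0 / (1.0 + 1e4 * (freq_ghz - RESONANCE_GHZ) ** 2)

    freq = 0.900           # initial transmit frequency, slightly off resonance
    step = 0.001
    for _ in range(200):
        # Probe the response just above and below the current frequency and
        # move in whichever direction the antenna responds more strongly.
        if antenna_response(freq + step) > antenna_response(freq - step):
            freq += step
        else:
            freq -= step
    print(f"Locked transmit frequency: {freq:.3f} GHz")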

View Full Article

Thursday, February 17, 2011

Blog: Babies process language in a grownup way

Babies process language in a grownup way, according to new research by Dean Jeff Elman and UC San Diego Health Sciences colleagues.

Blog: Computer Wins on Jeopardy!: Trivial, It's Not

Computer Wins on Jeopardy!: Trivial, It's Not
New York Times (02/17/11) John Markoff

IBM's Watson supercomputer concluded its third and final televised round of Jeopardy! on Wednesday in triumph, defeating the human players and winning the three-day tournament. Watson finished the three rounds with $77,147, while the two other contestants won $24,000 and $21,600. Watson proved very proficient at buzzing in quickly--a reflection of its confidence in its answers--and its victory was a vindication for computer science and the notion of building a thinking machine. The supercomputer excels at parsing language; for example, it responded to "A recent best seller by Muriel Barbery is called 'This of the Hedgehog'" with "What is Elegance?" IBM plans to announce a collaborative venture with Columbia University and the University of Maryland to develop a doctor's assistant service based on the Watson technology, which will permit physicians to ask questions of a cybernetic assistant. Another collaboration, with Nuance Communications, will strive to add voice recognition to the assistant, possibly making the service available in as little as 18 months. IBM executives also are discussing the development of a version of Watson that can engage with consumers on topics such as buying decisions and technical support.
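
The buzzing behavior described above amounts to a confidence gate; the sketch below is a purely illustrative stand-in (the candidate answers, scores, and threshold are invented, and this is not IBM's DeepQA code).

    # Illustrative confidence-gated buzzing, loosely modeled on the behavior
    # described above (candidate answers, scores, and threshold are invented).

    def decide_to_buzz(candidate_answers, threshold=0.7):
        """Buzz only if the best-scoring candidate clears the confidence bar."""
        best_answer, confidence = max(candidate_answers, key=lambda pair: pair[1])
        if confidence >= threshold:
            return f"What is {best_answer}?"
        return None  # stay silent rather than risk a wrong answer

    clue_candidates = [("Elegance", 0.92), ("Solitude", 0.31), ("Silence", 0.12)]
    print(decide_to_buzz(clue_candidates))   # -> "What is Elegance?"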

View Full Article

Tuesday, February 15, 2011

Blog: Rivest Unlocks Cryptography's Past, Looks Toward Future

Rivest Unlocks Cryptography's Past, Looks Toward Future
MIT News (02/15/11) David L. Chandler

Massachusetts Institute of Technology professor Ronald Rivest recently gave a speech discussing the history of the RSA cryptographic system, which is currently used to secure most financial transactions and communications over the Internet. The system, which Rivest helped develop with colleagues Adi Shamir and Len Adleman in 1977, relies on the fact that it is very hard to determine the prime factors of a large number. However, Rivest notes that it has not been shown mathematically that such factorization is necessarily difficult. "Factoring could turn out to be easy, maybe someone here will find the method," he says. If that happens, Rivest says several other current methods for secure encryption could be quickly adopted. He notes that RSA has led to spinoff technologies, such as the use of digital signatures to authenticate the identity of Web sites. Rivest says future cryptographic technologies could lead to applications in secure micropayment and voting systems. He believes the study of cryptography is fascinating because it unites a wide variety of disciplines. "It's like the Middle East of research, because everything goes through it," Rivest says.
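
As a toy illustration of the principle Rivest describes, the textbook RSA example below uses deliberately tiny primes; real deployments use primes hundreds of digits long, which is what makes factoring the modulus impractical.

    # Textbook RSA with deliberately tiny primes, to illustrate the factoring
    # assumption Rivest describes (real keys use primes hundreds of digits long).
    # Requires Python 3.8+ for pow(e, -1, phi).

    p, q = 61, 53                      # secret primes
    n = p * q                          # public modulus: easy to publish,
                                       # hard to factor when p and q are huge
    phi = (p - 1) * (q - 1)
    e = 17                             # public exponent, coprime to phi
    d = pow(e, -1, phi)                # private exponent (modular inverse of e)

    message = 42
    ciphertext = pow(message, e, n)    # encrypt with the public key (e, n)
    recovered = pow(ciphertext, d, n)  # decrypt with the private key (d, n)
    assert recovered == message
    print(n, ciphertext, recovered)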

View Full Article

Blog: A Fight to Win the Future: Computers vs. Humans

A Fight to Win the Future: Computers vs. Humans
New York Times (02/15/11) John Markoff

In 1963, Stanford University's John McCarthy launched the Stanford Artificial Intelligence Laboratory, dedicated to artificial intelligence (AI) research, while Douglas Engelbart formed the Augmentation Research Center, focused on intelligence augmentation (IA). Ever since, the decades-long competition between AI and IA has produced new computing technologies that are transforming the world. IBM's Watson supercomputer is the most recent example of how far AI technology has come. Watson is designed to advance the techniques used to process human language, and the progression of natural language processing is leading to a new age of automation that could transform areas of the economy that have not yet been significantly influenced by technology. IBM executives hope to commercialize Watson in question-answering systems for business, education, and medicine. In the future, almost any job that involves answering questions or conducting transactions over the phone might be made obsolete by Watson-like systems. Intelligence augmentation technology also is greatly influencing human society, with Google being the most prominent example of how IA uses software to harness the collective intelligence of humans and make that information available in a digital library.

View Full Article

Friday, February 11, 2011

Blog: The Cyberweapon That Could Take Down the Internet

The Cyberweapon That Could Take Down the Internet
New Scientist (02/11/11) Jacob Aron

University of Minnesota researchers have developed a cyberweapon that turns the structure of the Internet against itself, but ultimately could be used to make the Internet more secure. Minnesota's Max Schuchard and colleagues built on the ZMW attack, which disrupts the connection between two routers by interfering with the Border Gateway Protocol (BGP) to make it appear that the link is offline. Their method uses a large botnet to map the connections between routers, identify a heavily used link, and launch a ZMW attack to take it down. As the network routes traffic around the disrupted link, the attack is launched again against a different connection, spreading the disruption through the entire Internet until every router in the world is receiving more updates than it can handle. "Once this attack got launched, it wouldn't be solved by technical means, but by network operators actually talking to each other," Schuchard says. However, the researchers predict that this type of attack is unlikely ever to be launched by malicious hackers, because mapping the network is technically complex and the botnet required would be so large that it would more likely be rented out for profit. Although simulations show that current BGP defenses cannot protect against the attack, one solution could be to send BGP updates over a separate network.
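
The sketch below gives a highly simplified picture of the cascade the researchers describe, with an invented four-link topology and made-up traffic figures (it is not their simulator): each disruption displaces traffic onto the remaining links and generates a burst of routing updates, and repeating the attack on the new busiest link keeps the update load growing.

    # Highly simplified illustration of the cascading-disruption idea described
    # above (invented topology and numbers; not the researchers' simulator).
    # Disrupting the busiest link forces traffic onto the remaining links and
    # generates BGP-style updates at every router; repeating the attack on the
    # new busiest link multiplies the update load.

    links = {"A-B": 40, "B-C": 30, "C-D": 20, "A-D": 10}  # link -> traffic units
    routers = 4
    pending_updates = 0

    for wave in range(1, 4):
        target = max(links, key=links.get)        # busiest remaining link
        displaced = links.pop(target)             # ZMW-style disruption takes it down
        pending_updates += routers * displaced    # every router processes updates
        for link in links:                        # traffic reroutes over what's left
            links[link] += displaced / len(links)
        print(f"wave {wave}: attacked {target}, pending updates = {pending_updates:.0f}")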

View Full Article - May Require Free Registration

Thursday, February 10, 2011

Blog: Powerful New Ways to Electronically Mine Published Research May Lead to New Scientific Breakthroughs

Powerful New Ways to Electronically Mine Published Research May Lead to New Scientific Breakthroughs
University of Chicago (02/10/11) William Harms

University of Chicago researchers are exploring how metaknowledge can be used to better understand science's social context and the biases that can affect research findings. "The computational production and consumption of metaknowledge will allow researchers and policymakers to leverage more scientific knowledge--explicit, implicit, contextual--in their efforts to advance science," say Chicago researchers James Evans and Jacob Foster. Metaknowledge researchers are using natural language processing technologies, such as machine reading, information extraction, and automatic summarization, to find previously hidden meaning in data. For example, Google researchers used computational content analysis to detect emerging influenza epidemics by tracking relevant Google searches, a process that was faster than the methods used by public health officials. Metaknowledge also can expose the implicit assumptions, known as ghost theories, that underlie scientific conclusions even when scientists are unaware of them. Scientific ideas can become entrenched when studies continue to reproduce conclusions previously established by well-known scholars, a trend that metaknowledge can uncover, according to the researchers.
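
As a minimal sketch of the kind of computational content analysis described in the Google example, the toy code below flags weeks in which made-up query counts jump well above their recent baseline (the data and threshold rule are invented).

    # Minimal sketch of detecting an outbreak signal from query volume, in the
    # spirit of the Google example above (weekly counts and threshold invented).
    from statistics import mean, stdev

    weekly_flu_queries = [120, 115, 130, 125, 128, 122, 260, 310]  # made-up data

    baseline = weekly_flu_queries[:6]
    threshold = mean(baseline) + 3 * stdev(baseline)

    for week, count in enumerate(weekly_flu_queries, start=1):
        if count > threshold:
            print(f"Week {week}: query volume {count} exceeds threshold "
                  f"{threshold:.0f} -- possible epidemic signal")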

View Full Article

Tuesday, February 8, 2011

Blog: Fresh Advice on Building Safer Software

Fresh Advice on Building Safer Software
Government Computer News (02/08/11) William Jackson

The Software Assurance Forum for Excellence in Code (SAFECode) recently released the second edition of "Fundamental Practices for Secure Software Development: A Guide to the Most Effective Secure Development Practices in Use Today," a set of guidelines based on real-world tools that reflects advances in software security. "The second edition of the paper aims to disseminate the new knowledge SAFECode has gathered and provide new tools and improved guidance for those implementing the paper's recommended practices," says SAFECode executive director Paul Kurtz. The new edition provides more information on each practice and uses Common Weakness Enumeration (CWE) references to identify the software weaknesses each practice addresses. "By mapping our recommended practices to CWE, we wish to provide a more detailed illustration of the security issues these practices aim to resolve and a more precise starting point for interested parties to learn more," the paper says. The guidelines are designed to serve as a foundation of practices, already employed by member companies, that have demonstrated their effectiveness.
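
To give a concrete sense of what a CWE-mapped practice looks like, the generic example below (not taken from the SAFECode paper) shows one common weakness, CWE-89 (SQL injection), and the parameterized-query practice that addresses it.

    # Illustrative example of one CWE-mapped practice: CWE-89 (SQL injection)
    # addressed by parameterized queries. Generic sketch, not from the paper.
    import sqlite3

    conn = sqlite3.connect(":memory:")
    conn.execute("CREATE TABLE users (name TEXT, role TEXT)")
    conn.execute("INSERT INTO users VALUES ('alice', 'admin')")

    user_input = "alice' OR '1'='1"   # attacker-controlled value

    # Vulnerable pattern: string concatenation lets the input alter the query.
    # rows = conn.execute("SELECT role FROM users WHERE name = '" + user_input + "'")

    # Recommended practice: bind the value as a parameter so it is treated as data.
    rows = conn.execute("SELECT role FROM users WHERE name = ?", (user_input,))
    print(rows.fetchall())   # [] -- the injection attempt matches nothing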

View Full Article

Friday, February 4, 2011

Blog: Effective Search Terms Yield the Right Information

Effective Search Terms Yield the Right Information
University of Gothenburg (Sweden) (02/04/11)

Information retrieval is a multidisciplinary subject that needs greater contributions from linguists to improve the effectiveness of searches, says the University of Gothenburg's Karin Friberg Heppin. Much of the work in the field involves the development of search algorithms and engines, but Friberg Heppin says asking for information in the right way also can make a difference. She has written a doctoral thesis on natural language processing that shows the importance of looking at the terms people type into a search box, using a database of medical texts written in Swedish to examine what makes search terms effective or ineffective. Friberg Heppin says the language used can determine how useful the retrieved documents are to a particular person, noting that the word "flu" would surface documents of interest to patients, while "influenza" would be a better choice for doctors. "Users usually know what kind of information they are looking for, but they don't know what question to ask," she says. "The problem these days is not for the search engine to locate the right documents, but to make the most relevant texts end up towards the top of the list."
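
A toy example of how term choice steers which documents surface, with invented documents and a crude term-count score (this is not Friberg Heppin's system): querying "flu" favors patient-oriented text, while "influenza" favors clinical text.

    # Toy illustration of how the choice of search term steers results
    # (documents and scoring are invented; not Friberg Heppin's system).

    documents = {
        "patient leaflet": "flu symptoms rest fluids flu season advice",
        "clinical review": "influenza vaccination influenza strains antiviral therapy",
    }

    def score(query, text):
        # Crude relevance: how often the query term appears in the document.
        return text.split().count(query)

    for query in ("flu", "influenza"):
        ranked = sorted(documents, key=lambda doc: score(query, documents[doc]),
                        reverse=True)
        print(query, "->", ranked[0])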

View Full Article

Blog: DARPA Seeks Security Expertise From a Nontraditional Source: the Hacker Community

DARPA Seeks Security Expertise From a Nontraditional Source: the Hacker Community
NextGov.com (02/04/11) Dawn Lim

The U.S. Defense Advanced Research Projects Agency (DARPA) recently launched the Cyber Fast Track program, which will reward security research done quickly and inexpensively, criteria designed to attract nontraditional developers such as hobbyists, startups, and hackers. "Since the early '80s there has been some contingent of cyber researchers and hobbyists operating in low-budget settings," says DARPA's Peiter Zatko, who notes that limited resources force these groups to be extremely creative. DARPA also wants to apply the Cyber Fast Track process to other areas of defense. Zatko says the agency is looking for unconventional solutions to cybersecurity problems because the current strategy of layering costly defensive security applications onto large IT infrastructures is not sustainable. DARPA found that the defensive applications it examined contained about 10 million lines of code, while 9,000 malware samples it analyzed used only about 125 lines of code each. Counterintuitively, piling on more lines of code makes a system more vulnerable to attack: an IBM metric suggests that every 1,000 lines of code can introduce as many as five bugs. "You're spending all this effort layering on all this extra security, and it turns out that's introducing more vulnerabilities," Zatko says.
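
Combining the two figures quoted above gives a rough sense of the asymmetry; the back-of-the-envelope arithmetic below simply multiplies them out.

    # Back-of-the-envelope arithmetic combining the two figures quoted above.
    defensive_loc = 10_000_000     # lines of code in the defensive software studied
    malware_loc = 125              # typical lines of code in a malware sample
    bugs_per_kloc = 5              # IBM's rough metric: up to 5 bugs per 1,000 lines

    potential_bugs = defensive_loc / 1000 * bugs_per_kloc
    print(f"Up to {potential_bugs:,.0f} potential bugs in the defensive code "
          f"versus roughly {malware_loc} lines in a typical piece of malware.")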

View Full Article
