Thursday, March 31, 2011

Blog: Targeted Results; determining a mathematical graph's maximal independent set

Targeted Results
MIT News (03/31/11) Larry Hardesty

Massachusetts Institute of Technology (MIT) and Tel Aviv University researchers recently presented a mathematical framework for finding localized solutions to complex calculations at the Innovations in Computer Science conference at Tsinghua University. The researchers say the framework could be used to solve classic computer science problems involving mathematical abstractions known as graphs. Graphs can represent any type of data, but it is often useful to determine a graph's maximal independent set: the set of vertices that remains after deleting vertices until no edges are left, so that no two surviving vertices are connected and no deleted vertex can be added back without restoring an edge. Graphs can have more than one maximal independent set. The researchers developed an algorithm to efficiently determine which vertices are and are not included in at least one of the graph's maximal independent sets. Although the research is theoretical, the problem of calculating independent sets cuts across a variety of disciplines, including artificial intelligence, bioinformatics, scheduling, and networking. "There have been lots of related concepts that have been sort of floating around," but the MIT and Tel Aviv researchers "have formalized it in an interesting way and, I think, the correct way," says Sandia National Labs' Seshadhri Comandur.
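
As a minimal illustration of the concept (a standard greedy construction, not the researchers' localized algorithm), the following Python sketch builds one maximal independent set by repeatedly picking a vertex and deleting its neighbors:

    # Greedy maximal independent set: pick vertices and delete their
    # neighborhoods until no vertices remain to consider.
    def greedy_mis(adjacency):
        """adjacency: dict mapping each vertex to the set of its neighbors."""
        remaining = set(adjacency)
        independent = set()
        while remaining:
            v = min(remaining)               # any deterministic choice works
            independent.add(v)
            remaining -= adjacency[v] | {v}  # drop v and everything it touches
        return independent

    # A 4-cycle 0-1-2-3-0 has two maximal independent sets, {0, 2} and {1, 3},
    # so here every vertex belongs to at least one of them.
    square = {0: {1, 3}, 1: {0, 2}, 2: {1, 3}, 3: {0, 2}}
    print(greedy_mis(square))  # {0, 2}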

View Full Article

Blog: New Tool Makes Programs More Efficient Without Sacrificing Safety Functions

New Tool Makes Programs More Efficient Without Sacrificing Safety Functions
NCSU News (03/31/11) Matt Shipman

North Carolina State University (NCSU) researchers have developed software that helps programs run more efficiently on multicore chips without sacrificing safety features. The tool will help programmers who might otherwise leave out safety features because of how much they slow down a program, says NCSU professor James Tuck. "Leaving out those features can mean that you don't identify a problem as soon as you could or should, which can be important--particularly if it's a problem that puts your system at risk from attack," Tuck says. In the past, safety features were embedded directly into a program's code and ran on the same core as the program itself, slowing it down. The researchers' tool takes advantage of multicore chips by running the safety features on a separate core in the same chip, which enables the main program to run at close-to-normal speed. "Utilizing our software tool, we were able to incorporate safety metafunctions, while only slowing the program down by approximately 25 percent," Tuck says.
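
The offloading idea can be mimicked in ordinary Python by running checks in a second process that occupies its own core; this is a conceptual sketch only, not the NCSU tool, and the alloc/free event scheme is invented for the example:

    # Conceptual sketch: the main program streams events to a monitor process
    # on another core instead of running every safety check inline.
    from multiprocessing import Process, Queue

    def checker(events):
        """Consume events on a separate core and flag suspicious ones."""
        allocated = set()
        while True:
            op, addr = events.get()
            if op == "stop":
                break
            if op == "alloc":
                allocated.add(addr)
            elif op == "free" and addr not in allocated:
                print(f"warning: free of untracked address {addr:#x}")

    if __name__ == "__main__":
        events = Queue()
        monitor = Process(target=checker, args=(events,))
        monitor.start()
        # The main program runs at near-full speed, merely enqueueing events.
        events.put(("alloc", 0x1000))
        events.put(("free", 0x2000))  # never allocated, so the checker flags it
        events.put(("stop", None))
        monitor.join()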

View Full Article

Wednesday, March 23, 2011

Blog: Fruit Flies Could Hold Key to Future Internet

Fruit Flies Could Hold Key to Future Internet
Internet Evolution (03/23/11) Michael Kassner

Developing truly effective distributed computing algorithms for parallel processors is a major challenge for computer scientists. Carnegie Mellon University (CMU) researchers are tackling the problem by studying fruit flies. Fruit flies are very good at solving maximal independent set problems, which select a subset of leader nodes such that every other node in the network connects to one of them, giving the network structure, says CMU professor Ziv Bar-Joseph. Fruit flies solve a similar problem naturally during brain development, through a process called sensory organ precursor selection. Since the flies solve the problem without any cell knowing how many neighbors it has--information that standard distributed algorithms typically require--determining how they achieve this could lead to robust and efficient computational methods, Bar-Joseph says. Sensor networks also could benefit from the new algorithm, according to Bar-Joseph. He says that "our fruit fly-derived algorithm is more efficient than any known method," and could become the method of choice for sensor-network applications.
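
A rough simulation of the fly-inspired scheme can be written in a few lines of Python; the fixed firing probability below is a simplification (the published algorithm varies it across phases), so treat this as an illustration rather than the authors' algorithm:

    # Fly-style distributed MIS: each undecided node "fires" at random; a node
    # that fires while none of its neighbors fire joins the set, and its
    # neighbors withdraw. With enough rounds this almost always terminates.
    import random

    def fly_mis(adjacency, p=0.3, rounds=200):
        undecided = set(adjacency)
        in_set = set()
        for _ in range(rounds):
            if not undecided:
                break
            fired = {v for v in undecided if random.random() < p}
            for v in fired:
                if not (fired & adjacency[v]):   # fired alone in its neighborhood
                    in_set.add(v)
                    undecided -= adjacency[v] | {v}
        return in_set

    ring = {i: {(i - 1) % 6, (i + 1) % 6} for i in range(6)}
    print(fly_mis(ring))  # an independent set, e.g. {0, 2, 4}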

View Full Article

Tuesday, March 22, 2011

Blog: Move Over, Einstein: Machines Will Take It From Here

Move Over, Einstein: Machines Will Take It From Here
New Scientist (03/22/11) Justin Mullins

Cornell University Ph.D. student Michael Schmidt and professor Hod Lipson have developed a research technique that reverses the usual scientific method: rather than starting with a hypothesis to test, the researchers first perform experiments and feed the results into a computer, which discovers the laws of nature on its own. The success of this method hinges on evolutionary computing, in which robots or computers are given a goal, generate candidate programs, and combine the most promising ones until the objective is achieved. Through evolutionary computing, machines can execute tasks that they have not been programmed to do, and the technique is already being employed on problems ranging from aircraft design to train timetabling. Schmidt and Lipson have automated their evolutionary algorithm, Eureqa, and released it for free online. One key to the algorithm's success is its ability to look for invariance in the equations it produces. Lipson thinks that Eureqa will enable the extraction of laws of nature from data at currently unheard-of rates, and that this type of machine learning will become routine.
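
Symbolic regression of the kind Eureqa performs can be caricatured in Python; the sketch below uses plain random search over tiny expression trees where Eureqa uses an evolving population with crossover and invariant tests, so it is strictly a toy:

    # Toy symbolic regression: generate random expression trees in x and keep
    # whichever best fits the observed data.
    import random

    OPS = {"+": lambda a, b: a + b, "-": lambda a, b: a - b,
           "*": lambda a, b: a * b}

    def random_tree(depth=3):
        if depth == 0 or random.random() < 0.3:
            return "x" if random.random() < 0.7 else random.randint(1, 3)
        return (random.choice(list(OPS)), random_tree(depth - 1),
                random_tree(depth - 1))

    def evaluate(tree, x):
        if tree == "x":
            return x
        if isinstance(tree, int):
            return tree
        op, left, right = tree
        return OPS[op](evaluate(left, x), evaluate(right, x))

    def error(tree, data):
        return sum((evaluate(tree, x) - y) ** 2 for x, y in data)

    # "Experimental" data from the hidden law y = x**2 + 1.
    data = [(x, x * x + 1) for x in range(-3, 4)]
    best = min((random_tree() for _ in range(20000)),
               key=lambda t: error(t, data))
    print(best, error(best, data))  # often recovers ('+', ('*', 'x', 'x'), 1)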

View Full Article - May Require Free Registration

Wednesday, March 9, 2011

Blog: Professor Gets Computing's 'Nobel'

Professor Gets Computing's 'Nobel'
Boston Globe (03/09/11) Calvin Hennick

Harvard University professor Leslie G. Valiant, an artificial intelligence pioneer, has been awarded ACM's 2010 A.M. Turing Award. Valiant's research was the basis for applications including email spam filters, speech recognition software, and IBM's Watson computer system. "This connection with the achievements of the previous winners, and of Turing himself, is more than anyone in my field can reasonably expect," Valiant says. Cornell University professor Jon Kleinberg praised Valiant's adventurous research and noted that machine learning, Valiant's specialty, is the foundation for tools that computers use to solve problems on their own. "He takes on questions that are fundamental, but very hard to attack, like how do intelligent agents learn? Or how does the brain compute?" Kleinberg says. Valiant's 1984 paper, "A Theory of the Learnable," was a landmark in the computer science field because it led to the processes that enable computers to decide which emails could be discarded and which Web search results are the most relevant, says Microsoft Research New England's Jennifer Chayes.
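
The 1984 paper introduced what is now called PAC ("probably approximately correct") learning. In modern notation (not the paper's original formulation), a concept class is learnable if an algorithm, given m samples drawn from a distribution D, outputs a hypothesis h that is probably (confidence 1 - delta) approximately (error at most epsilon) correct:

    \[
    \Pr_{S \sim D^{m}}\bigl[\operatorname{err}_D(h_S) \le \varepsilon\bigr]
    \;\ge\; 1 - \delta,
    \qquad m \ \text{polynomial in } 1/\varepsilon \text{ and } 1/\delta .
    \]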

View Full Article

Blog: Researchers Show How a Car's Electronics Can Be Taken Over Remotely

Researchers Show How a Car's Electronics Can Be Taken Over Remotely
New York Times (03/09/11) John Markoff

Researchers at the University of California, San Diego (UCSD) and the University of Washington have shown that computer hackers could gain remote access to a car and take over the vehicle's basic functions, including control of the engine. The hackers accessed the car through the vehicle's built-in cellular connections and Bluetooth wireless technology, enabling them to track the car's location, eavesdrop on the cabin, and steal vehicle data. "This report explores how hard it is to compromise a car's computers without having any direct physical access to the car," says UCSD professor Stefan Savage. Services such as General Motors' OnStar system, Toyota's Safety Connect, Lexus' Enform, Ford's Sync, BMW's Assist, and Mercedes-Benz's Mbrace all use a cellular connection embedded in the vehicle to provide a variety of automated and call center support services to a driver. These cellular channels "can be accessed over arbitrary distance [due to the wide coverage of cellular data infrastructure] in a largely anonymous fashion, typically have relatively high bandwidth, are two-way channels [supporting interactive control and data exfiltration], and are individually addressable," Savage says.

View Full Article

Monday, March 7, 2011

Blog: Identifying 'Anonymous' Email Authors

Identifying 'Anonymous' Email Authors
Concordia University (Canada) (03/07/11) Chris Atack

Concordia University researchers led by professor Benjamin Fung have developed a technique to determine the authors of anonymous emails with a high degree of accuracy. Malicious anonymous emails can "transmit threats or child pornography, facilitate communications between criminals, or carry viruses," Fung says. Although authorities can use the Internet protocol address to locate the building where the email originated, they have no way to differentiate between several suspects. Fung's method uses speech recognition and data-mining techniques to identify an individual author. First the researchers identify the patterns found in emails written by the suspect, and then they filter out any patterns that also are found in the emails of other suspects, leaving only the patterns that are unique to the original suspect. "Using this method, we can even determine with a high degree of accuracy who wrote a given email, and infer the gender, nationality, and education level of the author," Fung says. Testing showed that the system can identify an individual author of an anonymous email with up to 90 percent accuracy.
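
The identify-then-filter step can be sketched with word bigrams standing in for the much richer stylistic features a real system would use; the function names and sample data below are invented for illustration:

    # Write-print sketch: patterns present in one suspect's mail but absent
    # from every other suspect's mail form that suspect's fingerprint.
    def ngrams(text, n=2):
        words = text.lower().split()
        return set(zip(*(words[i:] for i in range(n))))

    def writeprints(suspects):
        """suspects: dict name -> list of known emails."""
        pats = {name: set().union(*(ngrams(m) for m in mails))
                for name, mails in suspects.items()}
        return {name: p - set().union(*(q for other, q in pats.items()
                                        if other != name))
                for name, p in pats.items()}

    def attribute(anonymous, suspects):
        anon = ngrams(anonymous)
        prints = writeprints(suspects)
        return max(prints, key=lambda name: len(prints[name] & anon))

    suspects = {"alice": ["i will sort it out shortly", "sort it out please"],
                "bob":   ["kindly revert at the earliest", "kindly revert soon"]}
    print(attribute("please sort it out today", suspects))  # -> alice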

View Full Article

Blog: Software Progress Beats Moore's Law

Software Progress Beats Moore's Law
New York Times (03/07/11) Steve Lohr

Computing performance gains attributable to better algorithms have outpaced the gains attributable to faster processors, contradicting the conventional wisdom that hardware drives computing progress, concludes a White House-commissioned report. The report cites research, including a study of progress over a 15-year span on a benchmark production-planning task, which found that the speed of completing the calculations improved by a factor of 43 million. "The ingenuity that computer scientists have put into algorithms has yielded performance improvements that make even the exponential gains of Moore's Law look trivial," says University of Washington professor Edward Lazowska. He says this progress can be seen in artificial intelligence fields such as language comprehension, speech recognition, and computer vision.
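
The headline number is usually broken down (in summaries of the underlying report) as roughly a thousandfold gain from faster hardware and a 43,000-fold gain from better algorithms over the same span; assuming that split, the combined factor is simple arithmetic:

    # Rough decomposition of the benchmark's 15-year speedup, assuming the
    # commonly cited hardware/algorithm split.
    hardware_gain = 1_000
    algorithm_gain = 43_000
    print(hardware_gain * algorithm_gain)  # 43,000,000 -- the combined factor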

View Full Article

Saturday, March 5, 2011

Blog: Google Schools Its Algorithm

Google Schools Its Algorithm
New York Times (03/05/11) Steve Lohr

Researchers at Google and elsewhere are working at the forefront of computer intelligence in their efforts to program machines to understand human language. Improvements to statistical algorithms are possible thanks to an ever-growing corpus of language on the Web and the development of faster computers, but parsing and categorizing language remains a sizable challenge for machines. Google's Web site-ranking algorithm relies heavily on connecting search terms to noun phrases in a Web page, along with a site's popularity and how frequently other sites link to it. Google recently announced a major retooling of its ranking formula to downgrade low-quality sites that are mainly set up to siphon traffic from Google's search engine. "As we improve the language understanding of the algorithm, all the cheap tricks that people do will be recognized as cheap tricks instead of tricks that work," says Google fellow Amit Singhal. Computer scientists say the future of search engines such as Google lies in leveraging innovations in machine learning and language processing to transform them into answering machines.
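
A toy scoring function in the shape the article describes (in no way Google's actual formula, whose details are unpublished) shows why combining text relevance with link-based popularity blunts keyword stuffing:

    # Hypothetical ranking sketch: term-match relevance scaled by a
    # popularity signal derived from inbound links.
    import math

    def score(query, page_text, inbound_links):
        terms = query.lower().split()
        words = page_text.lower().split()
        relevance = sum(words.count(t) for t in terms) / (len(words) or 1)
        popularity = math.log1p(inbound_links)  # diminishing returns on links
        return relevance * popularity

    pages = {"useful-guide": ("how to prune apple trees in spring", 120),
             "content-farm": ("apple apple apple trees trees cheap", 3)}
    for name, (text, links) in pages.items():
        print(name, round(score("prune apple trees", text, links), 3))
    # The well-linked guide outranks the keyword-stuffed page.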

View Full Article

Friday, March 4, 2011

Blog: Armies of Expensive Lawyers, Replaced by Cheaper Software [the Enron Corpus]

Armies of Expensive Lawyers, Replaced by Cheaper Software
New York Times (03/04/11) John Markoff

The automation of high-level jobs is becoming more common due to progress in computer science and linguistics. Recent advances in artificial intelligence have enabled software to inexpensively analyze documents in a fraction of the time that it used to take highly trained experts to complete the same work. Some programs can even extract relevant concepts and specific terms, and identify patterns in huge volumes of information. "We're at the beginning of a 10-year period where we're going to transition from computers that can't understand language to a point where computers can understand quite a bit about language," says Carnegie Mellon University's Tom Mitchell. The most basic linguistic approaches use specific search words to find and sort relevant documents, while more sophisticated programs filter documents through large webs of word and phrase definitions. For example, one company has developed software designed to visualize chains of events and search for digital anomalies. Another company's software uses language analysis to study documents and find concepts rather than key words. These and other tools draw on the Enron Corpus, an email database of more than five million messages from the Enron prosecution that was made public for scientific and technological research by University of Massachusetts, Amherst computer scientist Andrew McCallum.
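
The contrast between keyword filtering and concept-style search can be sketched in a few lines of Python; the documents and the co-occurrence scoring are invented for illustration and stand in for far richer commercial systems:

    # Keyword search returns only literal matches; the crude "concept" search
    # also credits words that co-occur with the query term in the corpus.
    from collections import Counter

    docs = ["the deal closed offshore to hide losses",
            "quarterly losses were moved offshore",
            "lunch meeting moved to tuesday"]

    def keyword_search(term, docs):
        return [d for d in docs if term in d.split()]

    def concept_search(term, docs):
        related = Counter()
        for d in docs:
            words = d.split()
            if term in words:
                related.update(w for w in words if w != term)
        return sorted(docs, reverse=True,
                      key=lambda d: sum(related[w] for w in d.split()))

    print(keyword_search("offshore", docs))  # literal matches only
    print(concept_search("offshore", docs))  # loss-related mail ranks first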

View Full Article

Thursday, March 3, 2011

Blog: Memristor Processor Solves Mazes

Memristor Processor Solves Mazes
Technology Review (03/03/11)

Yuriy Pershin at the University of South Carolina and Massimiliano Di Ventra at the University of California, San Diego have developed a memristor processor that solves mazes. The researchers apply a voltage across the entrance and exit of the maze and simply wait. "The current flows only along those memristors that connect the entrance and exit points," say Pershin and Di Ventra. The states of those memristors change, making them easy to identify--and each chain of changed memristors traces a potential solution, found far more quickly than conventional maze-solving strategies, which effectively work in series. "The maze is solved in a massively parallel way, since all memristors in the network participate simultaneously in the calculation," Pershin and Di Ventra say. So far they have tested the approach with a memristor simulator, and they believe that implementing the device in silicon will become easier over time. In the new approach, the network's structure and layout take part in the calculation, not just the individual memristors.
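
The parallel solve can be mimicked numerically by treating each corridor as a unit resistor and solving Kirchhoff's equations for the whole network at once; this is a back-of-envelope analogue in Python, not the authors' memristor simulator:

    # Maze as a resistor network: fix a voltage across entrance and exit,
    # solve for node voltages, and read off which edges carry current.
    import numpy as np

    edges = [(0, 1), (1, 2), (2, 3), (3, 4), (1, 5)]  # corridor 1-5 is a dead end
    n, source, sink = 6, 0, 4

    # Graph Laplacian: L[i][i] = degree of i, L[i][j] = -1 per edge (i, j).
    L = np.zeros((n, n))
    for a, b in edges:
        L[a, a] += 1; L[b, b] += 1
        L[a, b] -= 1; L[b, a] -= 1

    # Fix V[source] = 1 and V[sink] = 0, then solve for the free nodes.
    free = [i for i in range(n) if i not in (source, sink)]
    rhs = -L[np.ix_(free, [source])].ravel()  # contribution of the 1 V source
    V = np.zeros(n)
    V[source] = 1.0
    V[free] = np.linalg.solve(L[np.ix_(free, free)], rhs)

    for a, b in edges:
        current = V[a] - V[b]  # unit resistance, so current = voltage drop
        if abs(current) > 1e-9:
            print(f"edge {a}-{b} carries current {current:+.3f}")
    # Only the 0-1-2-3-4 corridor lights up; the dead end stays dark.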

View Full Article
