
Tuesday, April 10, 2012

Blog: Cooperating Mini-Brains Show How Intelligence Evolved

Cooperating Mini-Brains Show How Intelligence Evolved
Live Science (04/10/12) Stephanie Pappas

Trinity College Dublin researchers recently developed computer simulation experiments to determine how human brains evolved intelligence. The researchers created artificial neural networks to serve as mini-brains. The networks were given challenging cooperative tasks and the brains were forced to work together, evolving the virtual equivalent of increased brainpower over generations. "It is the transition to a cooperative group that can lead to maximum selection for intelligence," says Trinity's Luke McNally. The neural networks were programmed to evolve, producing random mutations that can introduce extra nodes into the network. The researchers assigned two games for the networks to play, one that tests how temptation can affect group goals, and one that tests how teamwork can benefit the group. The researchers then created 10 experiments in which 50,000 generations of neural networks played the games. Intelligence was measured by the number of nodes added in each network as the players evolved over time. The researchers found that the networks evolved strategies similar to those seen when humans play the games with other humans. "What this indicates is that in species ancestral to humans, it could have been the transition to more cooperative societies that drove the evolution of our brains," McNally says.
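
The basic machinery is easy to sketch, though only as a caricature of the Trinity simulations: tiny neural networks whose hidden layers can grow by random mutation play an iterated Prisoner's Dilemma-style game, and higher scorers reproduce. (In this sketch structural mutation can only add nodes, so it shows the evolutionary loop rather than the study's comparison across selection regimes; all parameters are invented.)

# Toy sketch of cooperation-driven "brain" evolution (illustrative only; not the
# Trinity College Dublin simulations). Small neural nets whose hidden layer can
# grow by mutation play the iterated Prisoner's Dilemma; higher scorers reproduce.
import random

PAYOFF = {("C", "C"): (3, 3), ("C", "D"): (0, 5),
          ("D", "C"): (5, 0), ("D", "D"): (1, 1)}

class Net:
    """2 inputs (own and opponent's last move) -> variable hidden layer -> cooperate/defect."""
    def __init__(self, hidden=1):
        self.hidden = hidden
        self.w_in = [[random.uniform(-1, 1), random.uniform(-1, 1)] for _ in range(hidden)]
        self.w_out = [random.uniform(-1, 1) for _ in range(hidden)]

    def act(self, own_last, opp_last):
        x = (1.0 if own_last == "C" else -1.0, 1.0 if opp_last == "C" else -1.0)
        h = [max(0.0, w[0] * x[0] + w[1] * x[1]) for w in self.w_in]
        return "C" if sum(wo * hi for wo, hi in zip(self.w_out, h)) > 0 else "D"

    def mutate(self):
        child = Net(self.hidden)
        child.w_in = [[w + random.gauss(0, 0.1) for w in row] for row in self.w_in]
        child.w_out = [w + random.gauss(0, 0.1) for w in self.w_out]
        if random.random() < 0.05:                    # rare structural mutation: add a node
            child.hidden += 1
            child.w_in.append([random.uniform(-1, 1), random.uniform(-1, 1)])
            child.w_out.append(random.uniform(-1, 1))
        return child

def play(a, b, rounds=20):
    score_a = score_b = 0
    last_a, last_b = "C", "C"
    for _ in range(rounds):
        move_a, move_b = a.act(last_a, last_b), b.act(last_b, last_a)
        pay_a, pay_b = PAYOFF[(move_a, move_b)]
        score_a, score_b = score_a + pay_a, score_b + pay_b
        last_a, last_b = move_a, move_b
    return score_a, score_b

def evolve(pop_size=50, generations=200):
    pop = [Net() for _ in range(pop_size)]
    for _ in range(generations):
        scores = [0] * pop_size
        for i in range(pop_size):
            j = random.randrange(pop_size)            # random pairing within the group
            si, sj = play(pop[i], pop[j])
            scores[i] += si
            scores[j] += sj
        ranked = sorted(range(pop_size), key=lambda k: -scores[k])
        parents = [pop[k] for k in ranked[:pop_size // 2]]
        pop = [random.choice(parents).mutate() for _ in range(pop_size)]
    return sum(net.hidden for net in pop) / pop_size  # mean "brain size"

print("mean hidden nodes after evolution:", evolve())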

Monday, April 2, 2012

Blog: UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence
University of Massachusetts Amherst (04/02/12) Janet Lathrop

University of Massachusetts Amherst researchers are translating "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment in the same way human brains do. The model "is a mathematical formulation of the brain’s neural networks with their adaptive abilities," says UMass Amherst computer scientist Hava Siegelmann. When installed in a new environment, the Super-Turing model produces an exponentially greater set of behaviors than a classical computer based on the original Turing model. The researchers say the new Super-Turing machine will be flexible, adaptable, and economical. "The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain," Siegelmann says.

Thursday, March 15, 2012

Blog: ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence

ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence
PC Magazine (03/15/12) Michael J. Miller

ACM announced that University of California, Los Angeles professor Judea Pearl is this year's winner of the A.M. Turing Award for his work on artificial intelligence. The award, considered the highest honor in computer science, recognizes Pearl for devising a framework for reasoning with imperfect data that has changed the strategy for real-world problem solving. ACM executive director John White says Pearl was singled out for work that "was instrumental in moving machine-based reasoning from the rules-bound expert systems of the 1980s to a calculus that incorporates uncertainty and probabilistic models." Pearl developed techniques for reaching the best possible conclusion even when the available information is uncertain. Internet pioneer Vinton Cerf says Pearl's research "is applicable to an extremely wide range of applications in which only partial information is available to draw upon to reach conclusions." He also says the successful business models of companies that search the Internet owe a debt to Pearl's work. Pearl created the framework of Bayesian networks, which provides a compact method for representing probability distributions. That framework has played a substantial role in reshaping approaches to machine learning, which now relies heavily on probabilistic and statistical inference and underlies most recognition, fault-diagnosis, and machine-translation systems.
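
A Bayesian network encodes a joint distribution as a product of small conditional-probability tables, one per variable. A minimal Python sketch of the textbook rain/sprinkler/wet-grass network (the numbers are illustrative, not taken from Pearl's work) shows how evidence updates a belief:

# Tiny Bayesian network: Rain -> WetGrass <- Sprinkler (illustrative probabilities).
# The joint distribution factorizes as P(R, S, W) = P(R) * P(S) * P(W | R, S).
P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.9,
    (False, True): 0.8, (False, False): 0.05,
}

def joint(r, s, w):
    pw = P_wet[(r, s)] if w else 1 - P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[s] * pw

# Inference by enumeration: P(Rain = True | WetGrass = True).
numerator = sum(joint(True, s, True) for s in (True, False))
denominator = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
print("P(rain | wet grass) =", numerator / denominator)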

Monday, March 12, 2012

Blog: Scientists Tap the Genius of Babies and Youngsters to Make Computers Smarter

Scientists Tap the Genius of Babies and Youngsters to Make Computers Smarter
UC Berkeley News Center (03/12/12) Yasmin Anwar

University of California, Berkeley researchers are studying how babies, toddlers, and preschoolers learn in order to program computers to think more like humans. The researchers say computational models based on the brainpower of young children could give a major boost to artificial intelligence research. "Children are the greatest learning machines in the universe," says Berkeley's Alison Gopnik. "Imagine if computers could learn as much and as quickly as they do." The researchers have found that children test hypotheses, detect statistical patterns, and form conclusions while constantly adapting to changes. “Young children are capable of solving problems that still pose a challenge for computers, such as learning languages and figuring out causal relationships,” says Berkeley's Tom Griffiths. The researchers say computers programmed with children's cognitive abilities could interact more intelligently and responsively with humans in applications such as computer tutoring programs and phone-answering robots. They are planning to launch a multidisciplinary center at the campus' Institute of Human Development to pursue their research. The researchers note that the exploratory and probabilistic reasoning demonstrated by young children could make computers smarter and more adaptable.

Friday, February 17, 2012

Blog: IBM Says Future Computers Will Be Constant Learners

IBM Says Future Computers Will Be Constant Learners
IDG News Service (02/17/12) Joab Jackson

Tomorrow's computers will constantly improve their understanding of the data they work with, which will help them provide users with more appropriate information, predicts IBM fellow David Ferrucci, who led the development of IBM's Watson artificial intelligence technology. Computers in the future "will not necessarily require us to sit down and explicitly program them, but through continuous interaction with humans they will start to understand the kind of data and the kind of computation we need," according to Ferrucci. He says the key to the Watson technology is that it queries both itself and its users for feedback on its answers. "As you use the system, it will follow up with you and ask you questions that will help improve its confidence of its answer," Ferrucci notes. IBM is now working with Columbia University researchers to adapt Watson so it can offer medical diagnosis and treatment. Watson could serve as a diagnostic assistant and offer treatment plans, says Columbia professor Herbert Chase. Watson also could find clinical trials for the patient to participate in. "Watson has bridged the information gap, and its potential for improving health care and reducing costs is immense," Chase says.

Tuesday, January 24, 2012

Blog: The Mathematics of Taste

The Mathematics of Taste
MIT News (01/24/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers used genetic programming, in which mathematical models compete with each other to fit the available data and then cross-pollinate to produce more accurate models, to analyze taste-test data. Swiss flavor company Givaudan asked researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) to help interpret the results of taste tests in which 69 subjects assessed 36 different combinations of seven basic flavors. For each subject, the researchers randomly generated a mathematical function that predicted scores according to the concentrations of different flavors. After all of the functions were assessed, the best ones were recombined to produce a new generation of functions, and the whole process was repeated about 30 times. To establish the model's accuracy, the CSAIL researchers developed another model to validate their approach. Taste preference "is a pretty brilliant area in which to apply the evolutionary methods--and it looks as though they're working, also, so that's exciting," says Hampshire College professor Lee Spector.
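
Genetic programming itself is straightforward to sketch: candidate models are random expression trees, fitness is how well a tree fits the data, and the best trees recombine by grafting pieces of one tree into another. The toy below (illustrative only; it fits a made-up curve, not Givaudan's taste-test data) runs the compete-and-recombine loop about 30 times, echoing the process described above:

# Minimal genetic-programming sketch: expression trees compete to fit (x, y) data,
# and the best trees recombine. Illustrative only -- not the CSAIL/Givaudan models.
import operator
import random

OPS = [(operator.add, "+"), (operator.sub, "-"), (operator.mul, "*")]

def random_tree(depth=3):
    """Build a random expression tree over x and constants."""
    if depth == 0 or random.random() < 0.3:
        return random.choice(["x", random.uniform(-2.0, 2.0)])
    return (random.choice(OPS), random_tree(depth - 1), random_tree(depth - 1))

def evaluate(tree, x):
    if tree == "x":
        return x
    if isinstance(tree, float):
        return tree
    (fn, _), left, right = tree
    return fn(evaluate(left, x), evaluate(right, x))

def mse(tree, data):
    return sum((evaluate(tree, x) - y) ** 2 for x, y in data) / len(data)

def crossover(a, b):
    """Crudely graft a piece of tree b into tree a."""
    if not isinstance(a, tuple):
        return b
    op, left, right = a
    piece = random.choice([b[1], b[2]]) if isinstance(b, tuple) else b
    return (op, piece, right) if random.random() < 0.5 else (op, left, piece)

def prune(tree, depth=4):
    """Keep trees shallow so evaluation stays cheap and numerically tame."""
    if not isinstance(tree, tuple):
        return tree
    if depth == 0:
        return random.uniform(-2.0, 2.0)
    op, left, right = tree
    return (op, prune(left, depth - 1), prune(right, depth - 1))

data = [(x / 10.0, (x / 10.0) ** 2 + 1.0) for x in range(-20, 21)]  # target: x^2 + 1
pop = [random_tree() for _ in range(200)]
for generation in range(30):
    pop.sort(key=lambda t: mse(t, data))              # fittest trees first
    survivors = pop[:50]
    children = [prune(crossover(random.choice(survivors), random.choice(survivors)))
                for _ in range(150)]
    pop = survivors + children
best = min(pop, key=lambda t: mse(t, data))
print("best mean-squared error:", mse(best, data))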

Monday, December 5, 2011

Blog: Creating Artificial Intelligence Based on the Real Thing

Creating Artificial Intelligence Based on the Real Thing
New York Times (12/05/11) Steve Lohr

Researchers from Cornell University, Columbia University, the University of Wisconsin, the University of California, Merced, and IBM are developing computing technology modeled on biological systems. The project recently received $21 million in funding from the U.S. Defense Advanced Research Projects Agency (DARPA), which helped lead to the development of prototype neurosynaptic microprocessors that function more like neurons and synapses than conventional semiconductors. The prototype chip has 256 neuron-like nodes, surrounded by more than 262,000 synaptic memory modules. A computer running the prototype chip has learned how to play the video game Pong and to identify the numbers one through 10 written by a human on a digital pad. The project aims to find designs, concepts, and techniques that might be borrowed from biology to push the limits of computing. The research is "the quest to engineer the mind by reverse-engineering the brain," says IBM's Dharmendra S. Modha. DARPA wants the project to produce technology that is self-organizing, able to learn rather than merely respond to programming commands, and able to run on very little power. "It seems that we can build a computing architecture that is quite general-purpose and could be used for a large class of applications," says Cornell professor Rajit Manohar.

Thursday, November 17, 2011

Blog: Smart Swarms of Bacteria Inspire Robotics Researchers

Smart Swarms of Bacteria Inspire Robotics Researchers
American Friends of Tel Aviv University (11/17/11)

Tel Aviv University (TAU) researchers have developed a computational model that describes how bacteria move in a swarm, a discovery they say could be applied to computers, artificial intelligence, and robotics. The model shows how bacteria collectively gather information about their environment and find an optimal plan for growth. The research could enable scientists to design smart robots that can form intelligent swarms, help in the development of medical micro-robots, or decode social networks to find information on consumer preferences. "When an individual bacterium finds a more beneficial path, it pays less attention to the signals from the other cells, [and] since each of the cells adopts the same strategy, the group as a whole is able to find an optimal trajectory in an extremely complex terrain," says TAU Ph.D. student Adi Shklarsh. The model shows how a swarm can perform optimally with only simple computational abilities and short-term memory, Shklarsh says. He notes that understanding the secrets of bacteria swarms can provide crucial hints toward the design of robots that are programmed to perform adjustable interactions without needing as much data or memory.
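
The adaptive-attention idea can be illustrated with a toy agent model (not the TAU model itself; the field, radii, and weights below are invented): each agent blends its own heading with the average heading of nearby agents, and listens to the group less when its own path is improving.

# Toy swarm sketch: agents blend their own heading with the average heading of
# nearby agents, paying less attention to neighbors when their own path is
# improving. Illustrative only -- not the Tel Aviv University model.
import math
import random

def food(x, y):
    """Nutrient field with a single peak at (5, 5)."""
    return math.exp(-((x - 5) ** 2 + (y - 5) ** 2) / 20.0)

class Agent:
    def __init__(self):
        self.x, self.y = random.uniform(-5, 0), random.uniform(-5, 0)
        self.heading = random.uniform(0, 2 * math.pi)
        self.last_food = food(self.x, self.y)

    def step(self, swarm):
        here = food(self.x, self.y)
        improving = here > self.last_food
        self.last_food = here
        # Average heading of agents within interaction radius 2.
        near = [a.heading for a in swarm
                if a is not self and (a.x - self.x) ** 2 + (a.y - self.y) ** 2 < 4.0]
        if near:
            group = math.atan2(sum(math.sin(h) for h in near),
                               sum(math.cos(h) for h in near))
        else:
            group = self.heading
        # The better the agent's own path, the less it listens to the group.
        w_group = 0.1 if improving else 0.6
        self.heading = math.atan2(
            (1 - w_group) * math.sin(self.heading) + w_group * math.sin(group),
            (1 - w_group) * math.cos(self.heading) + w_group * math.cos(group))
        self.heading += random.gauss(0, 0.3)          # exploration noise
        self.x += 0.1 * math.cos(self.heading)
        self.y += 0.1 * math.sin(self.heading)

swarm = [Agent() for _ in range(50)]
for _ in range(500):
    for agent in swarm:
        agent.step(swarm)
print("mean nutrient level reached:", sum(food(a.x, a.y) for a in swarm) / len(swarm))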

Tuesday, November 15, 2011

Blog: Mimicking the Brain, in Silicon

Mimicking the Brain, in Silicon
MIT News (11/15/11) Anne Trafton

Massachusetts Institute of Technology (MIT) researchers have designed a computer chip that mimics how the brain's neurons adapt in response to new information. The chip uses about 400 transistors to simulate the activity of a single brain synapse, helping neuroscientists learn more about how the brain works, according to MIT researcher Chi-Sang Poon. The researchers designed the chip so that the transistors could emulate the activity of different ion channels. Although most chips operate in a binary system, the new chip functions in an analog fashion. "We now have a way to capture each and every ionic process that's going on in a neuron," Poon says. The new chip represents a "significant advance in the efforts to incorporate what we know about the biology of neurons and synaptic plasticity onto [complementary metal-oxide-semiconductor] chips," says University of California, Los Angeles professor Dean Buonomano. The researchers plan to use the chip to develop systems that model specific neural functions, such as the visual processing system. The chips also could be used to interface with biological systems.

Friday, November 11, 2011

Blog: Stanford Joins BrainGate Team Developing Brain-Computer Interface to Aid People With Paralysis

Stanford Joins BrainGate Team Developing Brain-Computer Interface to Aid People With Paralysis
Stanford University (11/11/11) Tanya Lewis

Stanford University researchers have joined the BrainGate research project, which is investigating the feasibility of people with paralysis using a technology that interfaces directly with the brain to control computer cursors, robotic arms, and other assistive devices. The project is based on technology developed by researchers at Brown and Harvard universities, Massachusetts General Hospital, and the Providence Veterans Affairs Medical Center. BrainGate is a hardware/software-based system that senses electrical signals in the brain that control movement. Computer algorithms translate the signals into instructions that enable users with paralysis to control external devices. "One of the biggest contributions that Stanford can offer is our expertise in algorithms to decode what the brain is doing and turn it into action," says Stanford's Jaimie Henderson. He is working with Stanford professor Krishna Shenoy, who is focusing on understanding how the brain controls movement and translating that knowledge into neural prosthetic systems controlled by software. "The BrainGate program has been a model of innovation and teamwork as it has taken the first giant steps toward turning potentially life-changing technology into a reality," Shenoy says. The researchers recently showed that the system allowed a patient to control a computer cursor more than 1,000 days after implantation.

Tuesday, October 25, 2011

Blog: How Revolutionary Tools Cracked a 1700s Code

How Revolutionary Tools Cracked a 1700s Code
New York Times (10/25/11) John Markoff

A cipher dating back to the 18th century that was considered uncrackable was finally decrypted by a team of Swedish and U.S. linguists by using statistics-based translation methods. After a false start, the team determined that the Copiale Cipher was a homophonic cipher and attempted to decode all the symbols in German, as the manuscript was originally discovered in Germany. Their first step was finding regularly occurring symbols that might stand for the common German pair "ch." Once a potential "c" and "h" were found, the researchers used patterns in German to decode the cipher one step at a time. Language translation techniques such as expected word frequency were used to guess a symbol's equivalent in German. However, there are other, more impenetrable ciphers that have thwarted even the translators of the Copiale Cipher. The Voynich manuscript has been categorized as the most frustrating of such ciphers, but one member of the team that cracked the Copiale manuscript, the University of Southern California's Kevin Knight, co-published an analysis of the Voynich document pointing to evidence that it contains patterns that match the structure of natural language.
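
The opening move of such an attack is plain frequency matching: rank the cipher symbols by how often they occur and pair them with the most common letters of the candidate language, then refine the guesses with n-gram statistics and expert knowledge. A toy first pass (made-up symbols and an approximate German frequency order, not the team's actual pipeline):

# Toy first pass at a substitution/homophonic cipher: align symbol frequencies
# with approximate German letter frequencies. Illustrative only -- real attacks
# like the Copiale work refine this with n-gram statistics and expert knowledge.
from collections import Counter

# Approximate ordering of German letters from most to least frequent.
GERMAN_BY_FREQ = "enisratdhulcgmobwfkzvpjyxq"

def initial_guess(cipher_symbols):
    """Map each cipher symbol to a German letter, most frequent to most frequent."""
    ranked = [sym for sym, _ in Counter(cipher_symbols).most_common()]
    mapping = {}
    for i, sym in enumerate(ranked):
        # Homophonic ciphers use several symbols per letter, so surplus symbols
        # wrap around onto the common letters again.
        mapping[sym] = GERMAN_BY_FREQ[i % len(GERMAN_BY_FREQ)]
    return mapping

# Example with made-up glyphs standing in for the manuscript's symbols.
cipher = ["#", "@", "#", "%", "#", "@", "&", "#", "%", "@"]
guess = initial_guess(cipher)
print(guess)                                  # e.g. {'#': 'e', '@': 'n', ...}
print("".join(guess[s] for s in cipher))      # a first, rough decipherment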

Friday, September 23, 2011

Blog: New Mathematical Model to Enable Web Searches for Meaning

New Mathematical Model to Enable Web Searches for Meaning
University of Hertfordshire (09/23/11) Paige Upchurch

University of Hertfordshire computer scientist Daoud Clarke has developed a mathematical model based on a theory of meaning that could revolutionize artificial intelligence technologies and enable Web searches to interpret the meaning of queries. The model is based on the idea that the meaning of words and phrases is determined by the context in which they occur. "This is an old idea, with its origin in the philosophy of Wittgenstein, and was later taken up by linguists, but this is the first time that someone has used it to construct a comprehensive theory of meaning," Clarke says. The model provides a way to represent words and phrases as sequences of numbers, known as vectors. "Our theory tells you what the vector for a phrase should look like in terms of the vectors for the individual words that make up the phrase," Clarke says. "Representing meanings of words using vectors allows fuzzy relationships between words to be expressed as the distance or angle between the vectors." He says the model could be applied to new types of artificial intelligence, such as determining the exact nature of a particular Web query.
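
The vector idea is easy to make concrete with a generic distributional-semantics sketch (a standard technique and an invented corpus, not Clarke's specific theory): build a co-occurrence vector for each word and compare words by the cosine of the angle between their vectors.

# Distributional-semantics sketch: words as co-occurrence vectors, similarity as
# the cosine of the angle between them. Generic illustration, not Clarke's model.
import math
from collections import Counter, defaultdict

corpus = [
    "the cat sat on the mat",
    "the dog sat on the rug",
    "stocks rose on the exchange",
    "the dog chased the cat",
]

# Count how often each word co-occurs with every other word in the same sentence.
vectors = defaultdict(Counter)
for sentence in corpus:
    words = sentence.split()
    for w in words:
        for context in words:
            if context != w:
                vectors[w][context] += 1

def cosine(u, v):
    keys = set(u) | set(v)
    dot = sum(u[k] * v[k] for k in keys)
    norm = math.sqrt(sum(x * x for x in u.values())) * math.sqrt(sum(x * x for x in v.values()))
    return dot / norm if norm else 0.0

print("cat ~ dog:   ", round(cosine(vectors["cat"], vectors["dog"]), 3))
print("cat ~ stocks:", round(cosine(vectors["cat"], vectors["stocks"]), 3))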

Friday, August 12, 2011

Blog: Robot 'Mission Impossible' Wins Video Prize

Robot 'Mission Impossible' Wins Video Prize
New Scientist (08/12/11) Melissae Fellet

Free University of Brussels researchers have developed Swarmanoid, a team of flying, rolling, and climbing robots that can work together to find and grab a book from a high shelf. The robot team includes flying eye-bots, rolling foot-bots, and hand-bots that can fire a grappling hook-like device up to the ceiling and climb the bookshelf. Footage of the team in action recently won the video competition at the Conference on Artificial Intelligence. The robotic team currently consists of 30 foot-bots, 10 eye-bots, and eight hand-bots. The eye-bots explore the rooms, searching for the target. After an eye-bot sees the target, it signals the foot-bots, which roll to the site, carrying the hand-bots. The hand-bots then launch the grappling hooks to the ceiling and climb the bookshelves. All of the bots have light-emitting diodes that flash different colors, enabling them to communicate with each other. Constant communication enables Swarmanoid to adjust its actions on the fly, compensating for broken bots by reassigning tasks throughout the team.

Wednesday, August 10, 2011

Blog: Researcher Teaches Computers to Detect Spam More Accurately

Researcher Teaches Computers to Detect Spam More Accurately
IDG News Service (08/10/11) Nicolas Zeitler

Georgia Tech researcher Nina Balcan recently received a Microsoft Research Faculty Fellowship for her work in developing machine learning methods that can be used to create personalized automatic programs for deciding whether an email is spam or not. Balcan's research also can be used to solve other data-mining problems. With supervised learning, the user teaches the computer by labeling which emails are spam and which are not, an approach Balcan describes as very inefficient. Active learning instead enables the computer to analyze huge collections of unlabeled emails and generate only a few questions for the user. Active learning could potentially deliver better results than supervised learning, Balcan says. However, active learning methods are highly sensitive to noise, making this potentially difficult to achieve. Balcan plans to develop an understanding of when, why, and how different kinds of learning protocols help. "My research connects machine learning, game theory, economics, and optimization," she says.
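
The core trick of active learning, querying the user only about the examples the current model is least sure of, can be sketched with generic uncertainty sampling (an illustration on synthetic data using scikit-learn, not Balcan's algorithms):

# Uncertainty-sampling sketch of active learning: start with a few labeled
# examples, then repeatedly ask the "user" to label only the example the
# classifier is least sure about. Generic illustration, not Balcan's methods.
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

X, y = make_classification(n_samples=500, n_features=20, random_state=0)

# Seed the labeled pool with five examples of each class.
labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
unlabeled = [i for i in range(len(X)) if i not in labeled]

model = LogisticRegression(max_iter=1000)
for _ in range(40):                                   # 40 questions to the user
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[unlabeled])[:, 1]
    # Least confident = predicted probability closest to 0.5.
    query = unlabeled[int(np.argmin(np.abs(probs - 0.5)))]
    labeled.append(query)                             # the user supplies this label
    unlabeled.remove(query)

model.fit(X[labeled], y[labeled])
print("accuracy after only 50 labels:", model.score(X, y))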

Blog: How Computational Complexity Will Revolutionize Philosophy

How Computational Complexity Will Revolutionize Philosophy
Technology Review (08/10/11)

Massachusetts Institute of Technology computer scientist Scott Aaronson argues that computational complexity theory will have a transformative effect on philosophical thinking about a broad spectrum of topics such as the challenge of artificial intelligence (AI). The theory focuses on how the resources required to solve a problem scale with some measure of the problem size, and how problems typically scale either reasonably slowly or unreasonably rapidly. Aaronson raises the issue of AI and whether computers can ever become capable of human-like thinking. He contends that computability theory cannot provide a fundamental impediment to computers passing the Turing test. A more productive strategy is to consider the problem's computational complexity, Aaronson says. He cites the possibility of a computer that records all the human-to-human conversations it hears, accruing a database over time with which it can make conversation by looking up human answers to questions it is presented with. Aaronson says that although this strategy works, it demands computational resources that expand exponentially with the length of the conversation. This, in turn, leads to a new way of thinking about the AI problem, and by this reasoning, the difference between humans and machines is basically one of computational complexity.
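
The blow-up is easy to make concrete with a back-of-the-envelope count (illustrative numbers, not figures from Aaronson's essay). With a vocabulary of V words and conversations of at most n words, the lookup-table conversationalist must store a response for every possible history:

\[
  \#\text{histories} \;=\; \sum_{k=0}^{n} V^{k} \;=\; \frac{V^{n+1}-1}{V-1} \;=\; O(V^{n}),
  \qquad
  V = 10^{4},\; n = 20 \;\Longrightarrow\; V^{n} = 10^{80},
\]

roughly the number of atoms in the observable universe. Each additional word of context multiplies the table size by V, which is the exponential growth in computational resources that Aaronson highlights.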

Wednesday, July 20, 2011

Blog: Caltech Researchers Create the First Artificial Neural Network Out of DNA

Caltech Researchers Create the First Artificial Neural Network Out of DNA
California Institute of Technology (07/20/11) Marcus Woo

California Institute of Technology (Caltech) researchers have developed an artificial neural network out of DNA, creating a circuit of interacting molecules that can recall memories based on incomplete information. The network, which consists of four artificial neurons made from 112 distinct strands of DNA, plays a mind-reading game in which it identifies a mystery scientist from the answers to yes-or-no questions, such as whether the scientist is British. The network communicates its answers using fluorescent signals and was able to correctly identify the scientist in 100 percent of the 27 trials the researchers conducted. The DNA-based neural network can take an incomplete pattern and determine what it represents. The researchers say that biochemical systems with artificial intelligence could have applications in medicine, chemistry, and biological research. They based the network on a simple model of a neuron, known as a linear threshold function. "It has been an extremely productive model for exploring how the collective behavior of many simple computational elements can lead to brain-like behaviors, such as associative recall and pattern completion," says Caltech professor Erik Winfree.
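
The linear threshold function they mention is the classic artificial neuron: a unit fires when a weighted sum of its inputs crosses a threshold, and a few such units wired together can do associative recall. A conventional software analogue of the idea (a tiny Hopfield-style network with invented patterns, not the DNA implementation):

# Four linear-threshold neurons doing associative recall from an incomplete cue,
# a conventional software analogue of the model behind the DNA network -- not
# the Caltech biochemical implementation itself.
import numpy as np

# Two stored "memories" over four +1/-1 neurons (invented patterns).
patterns = np.array([[1, -1, 1, -1],
                     [1, 1, -1, -1]], dtype=float)

# Hebbian weights: neurons active together get positive connections (zero diagonal).
W = sum(np.outer(p, p) for p in patterns)
np.fill_diagonal(W, 0)

def recall(cue, sweeps=5):
    """Repeatedly apply the linear-threshold update until the state settles."""
    s = np.array(cue, dtype=float)
    for _ in range(sweeps):
        for i in range(len(s)):
            total = W[i] @ s              # weighted sum of the neuron's inputs
            if total > 0:
                s[i] = 1.0
            elif total < 0:
                s[i] = -1.0               # a tie leaves the neuron unchanged
    return s

# Incomplete cue: only the first half of the first memory is known.
print(recall([1, -1, 0, 0]))              # settles on [ 1. -1.  1. -1.]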

Saturday, July 16, 2011

Blog: Internet's Memory Effects Quantified in Computer Study

Internet's Memory Effects Quantified in Computer Study
BBC News (07/16/11) Jason Palmer

Recent experiments have shown that computers and the Internet are changing the nature of human memory: when people were presented with difficult questions, they began to think of computers. If the participants knew that the facts would be available on a computer later, they had poor recall of the answers but enhanced recall of where they were stored, according to the study, which describes the Internet as a form of transactive memory. Transactive memory "is an idea that there are external memory sources--really storage places that exist in other people," says Columbia University's Betsy Sparrow. The researchers used a modified Stroop test to study how people thought about difficult questions and whether they relied on computers for the answers. The researchers then provided a stream of facts to participants; half were told to file them away on a computer, and the other half were told the facts would be erased. Those who knew the information would not be available later performed significantly better than those who filed the information away. However, those who expected the information to be available were very good at remembering in which folder they had stored it.

Friday, July 15, 2011

Blog: Machines to Compare Notes Online?

Machines to Compare Notes Online?
AlphaGalileo (07/15/11)

Autonomous machines, networks, and robots should publish their own suggestions for upgrading the technology on the Internet, says the University of Southampton's Sandor Veres. Giving machines and systems a greater degree of self-control will be the best way to improve them in the future, but humans will be more likely to guide and trust them if their dialogue is transparent, Veres says. An autonomously operating technical system would need to model a changing environment; learn various skills through feedback interaction with that environment; symbolically recognize events and actions in order to perform logic-based computation; explain the reasons for its own actions to humans; and efficiently take on rules, goals, values, and skills from human users. Veres says the natural language programming sEnglish system could be used to achieve the last three technical features. "The adoption of a 'publications for machines' approach can bring great practical benefits by making the business of building autonomous systems viable in some critical areas where a high degree of intelligence is needed and safety is paramount," Veres says.

Blog: Swarms of Locusts Use Social Networking to Communicate

Swarms of Locusts Use Social Networking to Communicate
Institute of Physics (07/15/11)

The swarming behavior of locusts is created by the same social networks that humans adopt, according to a study by researchers from the Max Planck Institute for Physics of Complex Systems and a U.S.-based scientist supported by the National Science Foundation. The researchers applied previous findings on opinion formation in social networks to an earlier study of 120 locust nymphs marching in a ring-shaped arena in the lab. Using a computer model that simulated the social network among locusts, the team found that the key component to reproducing the movements observed in the lab is the social interactions that occur when locusts, walking in one direction, convince others to follow them. Locusts create the equivalent of our human social networks, according to the researchers. "We concluded that the mechanism through which locusts agree on a direction to move together ... is the same we sometimes use to decide where to live or where to go out," says researcher Gerd Zschaler. "We let ourselves be convinced by those in our social network, often by those going in the opposite direction."
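
The mechanism amounts to opinion dynamics: each locust marches in one of two directions around the ring and tends to adopt the direction favored by the individuals it encounters. A stripped-down sketch (invented parameters, not the Max Planck model):

# Stripped-down opinion-dynamics sketch of locust alignment in a ring arena:
# each insect marches clockwise (+1) or counter-clockwise (-1) and tends to
# adopt the majority direction of a few randomly met neighbors. Illustrative
# only -- not the Max Planck group's actual model.
import random

N_LOCUSTS = 120
directions = [random.choice([1, -1]) for _ in range(N_LOCUSTS)]

for step in range(200):
    for i in range(N_LOCUSTS):
        neighbors = random.sample(range(N_LOCUSTS), 5)        # whoever it bumps into
        consensus = sum(directions[j] for j in neighbors)
        if random.random() < 0.02:
            directions[i] = random.choice([1, -1])            # occasional spontaneous switch
        elif consensus != 0:
            directions[i] = 1 if consensus > 0 else -1        # follow the local majority
    if step % 50 == 0:
        alignment = abs(sum(directions)) / N_LOCUSTS
        print(f"step {step:3d}: alignment = {alignment:.2f}")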

Tuesday, July 12, 2011

Blog: Computer Learns Language By Playing Games

Computer Learns Language By Playing Games
MIT News (07/12/11) Larry Hardesty

Massachusetts Institute of Technology professor Regina Barzilay has adapted a system she originally developed to generate Windows software-installation scripts from postings on a Microsoft help site so that it can learn to play the computer game Civilization. The goal of the project was to demonstrate that computer systems that learn the meanings of words through exploratory interaction with their environments have much potential and deserve further research. The system begins with no prior knowledge about the task or the language in which the instructions are written, making its initial behavior almost completely random. As the system takes various actions, different words appear on the screen. The system finds those words in the instructions and develops hypotheses about what they mean, based on the surrounding text. Hypotheses that consistently lead to good results are relied on more heavily, while hypotheses that prove unsuccessful are discarded. In the case of the computer game, the system won 72 percent more often than a version of the same system that did not use the written instructions, and 27 percent more frequently than an artificial intelligence-based system.
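
The learning loop can be caricatured as reinforcement over word-action hypotheses: the system guesses which game action a word in the instructions refers to, and pairings that precede good outcomes are strengthened. A toy sketch (invented words and actions; Barzilay's system learns far richer linguistic structure than this):

# Toy version of learning word meanings from game feedback: the agent keeps a
# weight for every (word, action) pairing and boosts pairings whose chosen
# actions were followed by reward. Illustrative only -- the MIT system learns
# far richer linguistic structure than this.
import random
from collections import defaultdict

ACTIONS = ["build_city", "attack", "irrigate", "research"]
# Hidden "true" meanings, used only to simulate the game's reward signal.
TRUE_MEANING = {"settle": "build_city", "fight": "attack",
                "farm": "irrigate", "study": "research"}

weights = defaultdict(float)          # (word, action) -> how promising the pairing looks

def choose_action(word, epsilon=0.2):
    if random.random() < epsilon:     # occasionally explore a random action
        return random.choice(ACTIONS)
    return max(ACTIONS, key=lambda a: weights[(word, a)])

for _ in range(2000):
    word = random.choice(list(TRUE_MEANING))
    action = choose_action(word)
    reward = 1.0 if action == TRUE_MEANING[word] else 0.0   # did the game go well?
    weights[(word, action)] += 0.1 * (reward - weights[(word, action)])

for word in TRUE_MEANING:
    best = max(ACTIONS, key=lambda a: weights[(word, a)])
    print(f"{word!r} -> {best}")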
