Showing posts with label CSE. Show all posts

Wednesday, April 25, 2012

Blog: Algorithmic Incentives

Algorithmic Incentives
MIT News (04/25/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) professor Silvio Micali and graduate student Pablo Azar have developed a type of mathematical game called a rational proof, which varies interactive proofs by giving them an economic component. Rational proofs could have implications for cryptography, but they also could suggest new ways to structure incentives in contracts. Research on both interactive proofs and rational proofs falls under the designation of computational-complexity theory, which classifies computational problems according to how hard they are to solve. Whereas interactive proofs can require millions of rounds of questioning, rational proofs let the verifier settle matters in a single round. With rational proofs, "we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn't think of in the past," says Weizmann Institute of Science professor Moni Naor. Rational-proof systems that describe simple interactions also could have applications in crowdsourcing, Micali says. He notes that research on rational proofs is just getting started. "Right now, we've developed it for problems that are very, very hard," Micali says. "But how about problems that are very, very simple?"
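The economic component rests on a classic mechanism-design idea: pay the prover with a strictly proper scoring rule, so that a reward-maximizing prover's best strategy is to report its true belief. The sketch below is a toy illustration of that incentive idea using the Brier score, not the Azar-Micali protocol itself; the probabilities and the grid of candidate reports are invented for illustration.

```python
# Toy illustration of the incentive idea behind rational proofs: the
# verifier pays the prover with a strictly proper scoring rule (here the
# Brier score), so a rational, reward-maximizing prover does best by
# reporting its true belief about the answer.

def brier_reward(reported_p, outcome):
    """Reward for reporting probability `reported_p` that outcome == 1.
    Strictly proper: expected reward peaks when reported_p equals the truth."""
    return 1.0 - (outcome - reported_p) ** 2

def expected_reward(reported_p, true_p):
    # Average the reward over the two possible outcomes.
    return (true_p * brier_reward(reported_p, 1)
            + (1 - true_p) * brier_reward(reported_p, 0))

true_p = 0.7                      # the prover's actual belief (invented)
rewards = {r / 10: expected_reward(r / 10, true_p) for r in range(11)}
best_report = max(rewards, key=rewards.get)
assert best_report == 0.7         # truthful reporting maximizes expected reward
```

A dishonest report (say 0.5 or 1.0) strictly lowers the prover's expected payment, which is what aligns the prover's economic self-interest with giving the verifier correct information.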

Wednesday, April 18, 2012

Blog: Finding ET May Require Giant Robotic Leap

Finding ET May Require Giant Robotic Leap
Penn State Live (04/18/12) Andrea Elyse Messer

Autonomous, self-replicating robots, known as exobots, are the best way to explore the universe, find and identify extraterrestrial life, and clean up space debris, says Pennsylvania State University professor John D. Mathews. "The basic premise is that human space exploration must be highly efficient, cost effective, and autonomous, as placing humans beyond low Earth orbit is fraught with political, economic, and technical difficulties," Mathews says. Developing and deploying self-replicating robots and advanced communications systems is the only way humans can effectively explore the asteroid belt and beyond, he maintains. The initial robots could be manufactured on the moon, taking advantage of the moon's resources and low gravity, both of which would reduce costs. The robots must be able to identify their exact location and the location of the other exobots, which would enable them to communicate using an infrared laser beam carrying data. Initially, the exobots would clear existing debris and monitor the more than 1,200 near-Earth asteroids that could be dangerous. In the future, Mathews says, a network of exobots could spread throughout the solar system and into the galaxy, using the resources they find there to continue their mission.

Blog: New Julia Language Seeks to Be the C for Scientists

New Julia Language Seeks to Be the C for Scientists
InfoWorld (04/18/12) Paul Krill

Massachusetts Institute of Technology (MIT) researchers have developed Julia, a programming language designed for building technical applications. Julia already has been used for image analysis and linear algebra research. MIT developer Stefan Karpinski notes that Julia is a dynamic language, which he says makes it easier to program because it has a very simple programming model. "One of our goals explicitly is to have sufficiently good performance in Julia that you'd never have to drop down into C," Karpinski adds. Julia also is designed for cloud computing and parallelism, according to the Julia Web page. The programming language provides a simpler model for building large parallel applications via a global distributed address space, Karpinski says. Julia also could be good at handling predictive analysis, modeling problems, and graph analysis problems. "Julia's LLVM-based just-in-time compiler, combined with the language's design, allows it to approach and often match the performance of C and C++," according to the Julia Web page.

Friday, April 13, 2012

Blog: Beyond Turing's Machines

Beyond Turing's Machines
Science (04/13/12) Vol. 336, No. 6078, P. 163 Andrew Hodges

Alan Turing's most profound achievement is arguably the principle of a universal machine that makes logic rather than arithmetic the computer's driving force, writes the University of Oxford's Andrew Hodges. Turing also defined the concept of computability, and suggested that mathematical steps that do not follow rules, and are thus not computable, could be identified with mental intuition. His 1950 treatise presented a basic argument that if the brain's action is computable, then it can be deployed on a computer or universal machine. Turing later suggested that modeling of the human brain might be impossible because of the nature of quantum mechanics, and his view of what is computable has not changed despite the advent of quantum computing. Many thought-experiment models investigate the implications of going beyond the constraints of the computable, and some require that machine elements operate with unlimited speed or permit unrestricted accuracy of measurement. Others more deeply explore the physical world's nature, with a focus on how mental operations relate to the physical brain and the need to rethink quantum mechanics because uncomputable physics is basic to physical law. Hodges says this way of thinking is part of Turing's legacy even though it superficially runs counter to his vision.

Tuesday, April 10, 2012

Blog: Cooperating Mini-Brains Show How Intelligence Evolved

Cooperating Mini-Brains Show How Intelligence Evolved
Live Science (04/10/12) Stephanie Pappas

Trinity College Dublin researchers recently developed computer simulation experiments to determine how human brains evolved intelligence. The researchers created artificial neural networks to serve as mini-brains. The networks were given challenging cooperative tasks and the brains were forced to work together, evolving the virtual equivalent of increased brainpower over generations. "It is the transition to a cooperative group that can lead to maximum selection for intelligence," says Trinity's Luke McNally. The neural networks were programmed to evolve, producing random mutations that can introduce extra nodes into the network. The researchers assigned two games for the networks to play, one that tests how temptation can affect group goals, and one that tests how teamwork can benefit the group. The researchers then created 10 experiments in which 50,000 generations of neural networks played the games. Intelligence was measured by the number of nodes added in each network as the players evolved over time. The researchers found that the networks evolved strategies similar to those seen when humans play the games with other humans. "What this indicates is that in species ancestral to humans, it could have been the transition to more cooperative societies that drove the evolution of our brains," McNally says.
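The evolutionary loop described above can be sketched in a much-simplified form: a population of probabilistic strategies plays an iterated prisoner's dilemma, payoffs drive selection, and random mutation perturbs each strategy. The payoff matrix, population size, and mutation rate below are illustrative assumptions, not parameters from the Trinity study, and a single cooperation probability stands in for the authors' evolving neural networks.

```python
import random

# Much-simplified sketch of the evolutionary setup described above: each
# "brain" is reduced to a single probability of cooperating, strategies
# play an iterated prisoner's dilemma, high scorers survive, and children
# are mutated copies of survivors. (Payoffs and parameters are invented.)

random.seed(0)
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}  # (me, other) -> my payoff

def play(p_a, p_b, rounds=10):
    """Score for player A when A cooperates with prob p_a, B with p_b."""
    score = 0
    for _ in range(rounds):
        a = 1 if random.random() < p_a else 0   # 1 = cooperate, 0 = defect
        b = 1 if random.random() < p_b else 0
        score += PAYOFF[(a, b)]
    return score

pop = [random.random() for _ in range(30)]       # initial random strategies
for generation in range(50):
    scores = [sum(play(p, q) for q in pop if q is not p) for p in pop]
    ranked = [p for _, p in sorted(zip(scores, pop), reverse=True)]
    survivors = ranked[:15]                       # truncation selection
    children = [min(1.0, max(0.0, p + random.gauss(0, 0.05)))  # mutation
                for p in survivors]
    pop = survivors + children

assert len(pop) == 30
assert all(0.0 <= p <= 1.0 for p in pop)
```

The study's networks additionally gained nodes through mutation, so that selection in cooperative settings could favor structurally more complex (more "intelligent") networks; this sketch keeps only the selection-and-mutation skeleton of that process.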

Monday, April 9, 2012

Blog: Transactional Memory: An Idea Ahead of Its Time

Transactional Memory: An Idea Ahead of Its Time
Brown University (04/09/12) Richard Lewis

Brown University researchers were studying theoretical transactional memory, a technique that attempts to seamlessly and concurrently handle shared revisions to information, about 20 years ago. Now those theories have become a reality. Intel recently announced that transactional memory will be included in its mainstream Haswell hardware architecture by next year, and IBM has adopted transactional memory in the Blue Gene/Q supercomputer. The problem that transactional memory aimed to solve is that core processors were changing in fundamental ways, says Brown professor Maurice Herlihy. Herlihy developed a system of requests and permissions in which operations are begun and logged, but wholesale changes, or transactions, are not made before the system checks to be sure no other thread has proposed changes to the pending transaction as well. If no other changes have been requested, the transaction is consummated, but if there is another change request, the transaction is aborted and the threads start anew. Intel says its transactional memory is "hardware [that] can determine dynamically whether threads need to serialize through lock-protected critical sections, and perform serialization only when required."
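The check-before-commit scheme described above can be sketched as optimistic software transactional memory: reads are logged with version numbers, writes are buffered, and a commit succeeds only if nothing the transaction read has changed in the meantime; otherwise it aborts and retries. This is a toy Python illustration of the idea, not Intel's TSX or IBM's Blue Gene/Q hardware.

```python
import threading

# Toy software-transactional-memory sketch: reads log versions, writes
# are buffered, and commit validates the read set before publishing.

class TVar:
    """A transactional variable with a version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0

class Transaction:
    def __init__(self):
        self.reads, self.writes = {}, {}

    def read(self, tvar):
        if tvar in self.writes:              # read-your-own-writes
            return self.writes[tvar]
        self.reads[tvar] = tvar.version      # log version at first read
        return tvar.value

    def write(self, tvar, value):
        self.writes[tvar] = value            # buffer; don't touch memory yet

_commit_lock = threading.Lock()              # serializes commits only

def atomically(fn):
    while True:                              # abort -> retry loop
        tx = Transaction()
        result = fn(tx)
        with _commit_lock:
            # Validate: has anything we read changed since we read it?
            if all(tvar.version == v for tvar, v in tx.reads.items()):
                for tvar, value in tx.writes.items():
                    tvar.value, tvar.version = value, tvar.version + 1
                return result                # commit succeeded
        # Validation failed: another thread interfered; start anew.

account = TVar(100)

def deposit(tx):
    tx.write(account, tx.read(account) + 50)

atomically(deposit)
assert account.value == 150
```

The hardware versions do this validation in the cache-coherence machinery instead of in software, which is what lets them "perform serialization only when required."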

Saturday, April 7, 2012

Blog: Bits of Reality

Bits of Reality
Science News (04/07/12) Vol. 181, No. 7, P. 26 Tom Siegfried

Information derived from quantum computing systems could reveal subtle insights about the intersection between mathematics and the physical world. "We hope to be able to verify that these extraordinary computational resources in quantum systems really are part of the way nature behaves," says California Institute of Technology physicist John Preskill. "We could do so by solving a problem that we think is hard classically ... with a quantum computer, where we can easily verify with a classical computer that the quantum computer got the right answer." To solve certain hard problems that standard supercomputers cannot accommodate, such as finding the prime factors of very large numbers, quantum computers must process bits of quantum information. Quantum machines would only be workable for problems that could be posed as an algorithm amenable to the way quantum weirdness can eliminate wrong answers, allowing only the right answer to prevail. In 2011, the Perimeter Institute for Theoretical Physics' Giulio Chiribella and colleagues demonstrated how to derive quantum mechanics from a set of five axioms plus one postulate, all rooted in information theory terms. The foundation of their system is axioms such as causality, the notion that signals from the future cannot impact the present.

Tuesday, April 3, 2012

Blog: Programming Computers to Help Computer Programmers

Programming Computers to Help Computer Programmers
Rice University (04/03/12) Jade Boyd

Computer scientists from Rice University will participate in a project to create intelligent software agents that help people write code faster and with fewer errors. The Rice team will focus on robotic applications and how to verify that synthetic, computer-generated code is safe and effective, as part of the effort to develop automated program-synthesis tools for a variety of uses. "Programming is now done by experts only, and this needs to change if we are to use robots as helpers for humans," says Rice professor Lydia Kavraki. She also stresses that safety is critical. "You can only have robots help humans in a task--any task, whether mundane, dangerous, precise, or expensive--if you can guarantee that the behavior of the robot is going to be the expected one." The U.S. National Science Foundation is providing a $10 million grant to fund the five-year initiative, which is based at the University of Pennsylvania. Computer scientists at Rice and Penn have proposed a grand challenge robotic scenario of providing hospital staff with an automated program-synthesis tool for programming mobile robots to go from room to room, turn off lights, distribute medications, and remove medical waste.

Monday, April 2, 2012

Blog: To Convince People, Come at Them From Different Angles

To Convince People, Come at Them From Different Angles
Cornell Chronicle (04/02/12) Bill Steele

Cornell research on Facebook users' behavior demonstrates that people base decisions on the variety of social contexts from which appeals arrive rather than on the sheer number of requests received. Social scientists previously envisioned the spread of ideas as similar to the spread of disease, but Cornell professor Jon Kleinberg says social contagion seems to be distinct from that model. "Each of us is sitting at a crossroads between the social circles we inhabit," he observes. "When a message comes at you from several directions, it may be more effective." The researchers worked with a database of 54 million email invitations from Facebook users inviting others to join the social network and analyzed the friendship links among inviters. The probability of a person joining increased with the number of different, unconnected social contexts represented. An analysis of the Facebook neighborhoods of 10 million new members seven days after joining identified clumps of friends linked to one another but not as much to people in other clumps. A follow-up check three months later found that people with more diverse clumps among their friends were more likely to be engaged. The researchers suggest that mathematical models of how ideas proliferate across networks may require tweaking to account for neighborhood diversity.
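Counting "different, unconnected social contexts" amounts to counting connected components in the friendship subgraph induced by the inviters. A minimal union-find sketch of that measurement, with invented names and edges:

```python
# Sketch of the "distinct social contexts" measurement: given the set of
# people who sent invitations and the friendship links among them, the
# number of contexts is the number of connected components in that
# induced subgraph. (Names and edges below are made up.)

def contexts(inviters, friendships):
    """Count connected components among `inviters` using friendship edges."""
    parent = {p: p for p in inviters}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]   # path halving
            x = parent[x]
        return x

    for a, b in friendships:
        if a in parent and b in parent:     # ignore edges to non-inviters
            parent[find(a)] = find(b)       # union the two components
    return len({find(p) for p in inviters})

inviters = {"ann", "bob", "cat", "dan", "eve"}
friendships = [("ann", "bob"), ("cat", "dan")]   # eve knows none of them
assert contexts(inviters, friendships) == 3      # {ann,bob}, {cat,dan}, {eve}
```

Under the study's finding, three invitations from these three separate contexts would be more persuasive than three invitations from a single clique of mutual friends.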

Blog: Self-Sculpting Sand

Self-Sculpting Sand
MIT News (04/02/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers are developing a type of reconfigurable robotic system called smart sand. The individual sand grains pass messages back and forth and selectively attach to each other to form a three-dimensional object. MIT professor Daniela Rus says the biggest challenge in developing the smart sand algorithm is that the individual grains have very few computational resources. The grains first pass messages to each other to determine which have missing neighbors. Those with missing neighbors are either on the perimeter of the pile or the perimeter of the embedded shape. Once the grains surrounding the embedded shape identify themselves, they pass messages to other grains a fixed distance away. When the perimeter of the duplicate is established, the grains outside it can disconnect from their neighbors. The researchers built cubes, or “smart pebbles,” to test their algorithm. The cubes have four faces studded with electropermanent magnets, materials that can be magnetized or demagnetized with a single magnetic pulse. The grains use the magnets to connect to each other, to communicate, and to share power. Each grain also is equipped with a microprocessor that can store 32 kilobytes of code and has two kilobytes of working memory.
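The first step of the algorithm, grains discovering whether they have missing neighbors and therefore lie on a perimeter, can be sketched in a centralized form. The real system does this with distributed message passing among resource-constrained grains; the grid model and shapes below are invented for illustration.

```python
# Much-simplified, centralized sketch of the smart-sand perimeter step:
# a grain with any missing neighbor lies on a perimeter, either of the
# pile or of the embedded shape to be duplicated.

def perimeter_grains(grains):
    """grains: set of (x, y) cells occupied by a grain."""
    on_perimeter = set()
    for (x, y) in grains:
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        if any(n not in grains for n in neighbors):
            on_perimeter.add((x, y))
    return on_perimeter

# A 4x4 block of grains with a 2x2 "hole" acting as the embedded shape.
grains = ({(x, y) for x in range(4) for y in range(4)}
          - {(1, 1), (1, 2), (2, 1), (2, 2)})
perim = perimeter_grains(grains)
assert (0, 0) in perim      # outer corner borders empty space
assert (1, 0) in perim      # inner ring cell borders the hole
assert perim == grains      # in this one-cell-thick ring, every grain qualifies
```

In the actual system, the perimeter grains around the embedded shape then propagate messages a fixed distance outward to mark the boundary of the scaled duplicate, and everything outside that boundary detaches.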

Blog: UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence
University of Massachusetts Amherst (04/02/12) Janet Lathrop

University of Massachusetts Amherst researchers are translating "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment the same way human brains do. The model "is a mathematical formulation of the brain's neural networks with their adaptive abilities," says Amherst computer scientist Hava Siegelmann. When installed in a new environment, the Super-Turing model produces an exponentially greater set of behaviors than a classical computer or the original Turing model. The researchers say the new Super-Turing machine will be flexible, adaptable, and economical. "The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain," Siegelmann says.

Friday, March 30, 2012

Blog: Honeycombs of Magnets Could Lead to New Type of Computer Processing

Honeycombs of Magnets Could Lead to New Type of Computer Processing
Imperial College London (03/30/12) Simon Levey

Imperial College London researchers say they have developed a new material using nano-sized magnets that could lead to unique types of electronic devices with much greater processing capacity than current technologies. The researchers have shown that a honeycomb pattern of nano-sized magnets introduces competition between neighboring magnets and reduces the problems caused by these interactions by 66 percent. The researchers also found that large arrays of these nano-magnets can be used to store computable information. The research suggests that a cluster of many magnetic domains could be able to solve a complex computational problem in a single calculation. "Our philosophy is to harness the magnetic interactions, making them work in our favor," says Imperial College London researcher Will Branford. Previous studies have shown that external magnetic fields can cause the magnetic domain of each bar to change state, which affects the interaction between that bar and its two neighboring bars in the honeycomb. It is this pattern of magnetic states that could represent computer data, according to Branford. "This is something we can take advantage of to compute complex problems because many different outcomes are possible, and we can differentiate between them electronically," he says.

Blog: Sanjeev Arora Named Winner of 2011 ACM-Infosys Award

Sanjeev Arora Named Winner of 2011 ACM-Infosys Award
CCC Blog (03/30/12) Erwin Gianchandani

Princeton University professor Sanjeev Arora has received the 2011 ACM-Infosys Foundation Award in Computing Sciences for his contributions to computational complexity, algorithms, and optimization. "Arora's research revolutionized the approach to essentially unsolvable problems that have long bedeviled the computing field, the so-called NP-complete problems," according to an ACM-Infosys press release. Arora is an ACM Fellow and won the Gödel Prize in 2001 and 2010, as well as the ACM Doctoral Dissertation Award in 1995. Arora also is the founding director of Princeton's Center for Computational Intractability, which addresses the phenomenon that many problems seem inherently impossible to solve on current computational models. "With his new tools and techniques, Arora has developed a fundamentally new way of thinking about how to solve problems," says ACM President Alain Chesnais. "In particular, his work on the PCP theorem is considered the most important development in computational complexity theory in the last 30 years. He also perceived the practical applications of his work, which has moved computational theory into the realm of real world uses." The ACM-Infosys Foundation Award recognizes personal contributions by young scientists and system developers to a contemporary innovation and includes a $175,000 prize.

Wednesday, March 28, 2012

Blog: Google Launches Go Programming Language 1.0

Google Launches Go Programming Language 1.0
eWeek (03/28/12) Darryl K. Taft

Google has released version 1.0 of its Go programming language, which was initially introduced as an experimental language in 2009. Google has described Go as an attempt to combine the development speed of working in a dynamic language such as Python with the performance and safety of a compiled language such as C or C++. "We're announcing Go version 1, or Go 1 for short, which defines a language and a set of core libraries to provide a stable foundation for creating reliable products, projects, and publications," says Google's Andrew Gerrand. He notes that Go 1 is the first release of Go available as supported binary distributions, for Linux, FreeBSD, Mac OS X, and Windows. Stability for users was the driving motivation for Go 1, and much of the work needed to bring programs up to the Go 1 standard can be automated with the go fix tool. A complete list of changes to the language and the standard library, documented in the Go 1 release notes, will be an essential reference for programmers who are migrating code from earlier versions of Go. There also is a new release of the Google App Engine SDK.

Tuesday, March 27, 2012

Blog: Algorithm Spells the End for Professional Musical Instrument Tuners

Algorithm Spells the End for Professional Musical Instrument Tuners
Technology Review (03/27/12)

University of Würzburg researcher Haye Hinrichsen says he has developed an algorithm that makes it possible for electronic tuners to match the performance of the best human tuners. Hinrichsen's algorithm involves a process known as entropy minimization. First, the method tunes the instrument by equal temperament and then divides the audio spectrum into bins with a resolution that matches the human ear. It then measures the entropy in the system, applies a small random change to the frequency of a note, measures the entropy again, and keeps the change only if the entropy has decreased. Hinrichsen says the algorithm is comparable to the work of a professional tuner. He notes that the software can be added to the features of relatively inexpensive electronic tuners. "The implementation of the method is very easy," Hinrichsen says.
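The tuning loop can be sketched as follows: build a coarse power spectrum from the notes' harmonics, then repeatedly nudge one note's frequency at random and keep the change only if the spectral entropy decreases. The bin width, harmonic count, and step size below are illustrative assumptions, not the values from Hinrichsen's method.

```python
import math
import random

# Hedged sketch of entropy-minimization tuning: lower spectral entropy
# means the notes' harmonics pile into fewer, sharper peaks, i.e. the
# partials of different strings coincide better.

random.seed(1)

def spectrum_entropy(freqs, n_harmonics=4, bin_cents=10):
    """Shannon entropy of a coarse, cent-scale histogram of all harmonics."""
    bins = {}
    for f in freqs:
        for h in range(1, n_harmonics + 1):
            # place harmonic h of note f in a logarithmic (cent) bin
            b = round(1200 * math.log2(f * h / 440.0) / bin_cents)
            bins[b] = bins.get(b, 0) + 1
    total = sum(bins.values())
    return -sum((c / total) * math.log(c / total) for c in bins.values())

# Start from equal temperament: 13 semitones up from A4.
freqs = [440.0 * 2 ** (i / 12) for i in range(13)]
entropy = spectrum_entropy(freqs)
for _ in range(2000):
    i = random.randrange(len(freqs))
    trial = list(freqs)
    trial[i] *= 2 ** (random.uniform(-1, 1) / 1200)   # nudge by up to 1 cent
    trial_entropy = spectrum_entropy(trial)
    if trial_entropy < entropy:                        # keep only improvements
        freqs, entropy = trial, trial_entropy

assert entropy <= spectrum_entropy([440.0 * 2 ** (i / 12) for i in range(13)])
```

A real implementation would use measured spectra, which include the string inharmonicity that makes human-style "stretched" tuning necessary; this sketch only shows the accept-if-entropy-drops search loop.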

Monday, March 26, 2012

Blog: Robots to Organise Themselves Like a Swarm of Insects

Robots to Organise Themselves Like a Swarm of Insects
The Engineer (United Kingdom) (03/26/12)

A swarm of insects is the inspiration for a warehouse transport system that makes use of autonomous robotic vehicles. Researchers at the Fraunhofer Institute for Material Flow and Logistics (IML) have developed autonomous Multishuttle Moves vehicles to organize themselves like insects. The team is testing 50 shuttles at a replica warehouse. When an order is received, the shuttles communicate with one another via a wireless Internet connection and the closest free vehicle takes over and completes the task. "We rely on agent-based software and use ant algorithms based on the work of [swarm robotics expert] Marco Dorigo," says IML's Thomas Albrecht. The vehicles move around using a hybrid sensor concept based on radio signals, distance and acceleration sensors, and laser sensors to calculate the shortest route to any destination and avoid collisions. Albrecht says the system is more flexible and scalable because it can be easily adapted for smaller or larger areas based on changes in demand. "In the future, transport systems should be able to perform all of these tasks autonomously, from removal from storage at the shelf to delivery to a picking station," says IML professor Michael ten Hompel.
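The closest-free-vehicle rule described above can be sketched as a simple decentralized allocation function; the warehouse layout, distance metric, and shuttle states below are invented for illustration, and Manhattan distance stands in for the real routing cost computed from the shuttles' sensors.

```python
# Sketch of the allocation rule: when an order arrives, each free shuttle
# computes its cost to reach the pickup point, and the cheapest free
# shuttle claims the task. (All values below are invented.)

def claim_task(shuttles, order_pos):
    """shuttles: dict name -> (position, busy_flag); returns the claimant."""
    free = {name: pos for name, (pos, busy) in shuttles.items() if not busy}
    if not free:
        return None     # no shuttle available; the order must wait
    # Manhattan distance stands in for the real shortest-route cost.
    return min(free, key=lambda n: abs(free[n][0] - order_pos[0])
                                   + abs(free[n][1] - order_pos[1]))

shuttles = {
    "s1": ((0, 0), False),
    "s2": ((5, 5), False),
    "s3": ((1, 1), True),    # busy, cannot claim
}
assert claim_task(shuttles, (1, 2)) == "s1"   # nearest free shuttle wins
```

In the Fraunhofer system this negotiation happens among the shuttles themselves over wireless links, with ant-algorithm-style routing rather than a central dispatcher.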

Thursday, March 15, 2012

Blog: ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence

ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence
PC Magazine (03/15/12) Michael J. Miller

ACM announced that University of California, Los Angeles professor Judea Pearl is this year's winner of the A.M. Turing Award for his work on artificial intelligence. The award, considered the highest honor in computer science, recognizes Pearl for devising a framework for reasoning with imperfect data that has changed the strategy for real-world problem solving. ACM executive director John White says Pearl was singled out for work that "was instrumental in moving machine-based reasoning from the rules-bound expert systems of the 1980s to a calculus that incorporates uncertainty and probabilistic models." Pearl worked out techniques for attempting to reach the best conclusion, even when there is a level of uncertainty. Internet pioneer Vinton Cerf says Pearl's research "is applicable to an extremely wide range of applications in which only partial information is available to draw upon to reach conclusions." He also says the successful business models of companies that search the Internet owe a debt to Pearl's work. Pearl generated the framework for Bayesian networks, which provides a compact method for representing probability distributions. This framework has played a substantial role in reshaping approaches to machine learning, which currently has a heavy reliance on probabilistic and statistical inference, and which underlies most recognition, fault diagnosis, and machine-translation systems.
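The compactness Pearl's framework provides can be shown with a toy chain network: instead of storing the full joint table over all variables, a Bayesian network stores only local conditional tables and recovers any joint or marginal probability by multiplying along the graph. The network below (Cloudy -> Rain -> WetGrass) and its probabilities are invented for illustration.

```python
# Toy Bayesian network sketch: the joint distribution factors along the
# graph as P(c, r, w) = P(c) * P(r | c) * P(w | r), so three small local
# tables replace one 2^3-entry joint table. (Probabilities are invented.)

P_cloudy = {True: 0.5, False: 0.5}
P_rain_given_cloudy = {True: {True: 0.8, False: 0.2},
                       False: {True: 0.1, False: 0.9}}
P_wet_given_rain = {True: {True: 0.9, False: 0.1},
                    False: {True: 0.2, False: 0.8}}

def joint(c, r, w):
    # chain-rule factorization encoded by the network's edges
    return P_cloudy[c] * P_rain_given_cloudy[c][r] * P_wet_given_rain[r][w]

# Marginalize out the unobserved variables to get P(WetGrass = True).
p_wet = sum(joint(c, r, True) for c in (True, False) for r in (True, False))
assert abs(p_wet - 0.515) < 1e-9
```

For a chain of n binary variables the local tables grow linearly in n while the explicit joint table grows as 2^n, which is the representational saving that made probabilistic reasoning tractable at scale.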

Wednesday, March 14, 2012

Blog: Mario Is Hard, and That's Mathematically Official

Mario Is Hard, and That's Mathematically Official
New Scientist (03/14/12) Jacob Aron

Massachusetts Institute of Technology (MIT) researchers recently analyzed the computational complexity of video games and found that many of them belong to a class of mathematical problems called NP-hard. The implication is that for a given game level, it can be very tough to determine whether it is possible for a player to reach the end. The results suggest that some hard problems could be solved by playing a game. The researchers, led by MIT's Erik Demaine, converted each game into a Boolean satisfiability problem, which asks whether the variables in a collection of logical statements can be chosen to make all the statements true, or whether the statements inevitably contradict each other. For each game, the team built sections of a level that force players to choose one of two paths, which corresponds to assigning values to variables in the Boolean satisfiability problem. If those choices permit the completion of a level, that is equivalent to making all of the statements in the Boolean problem true; if they make completion impossible, that corresponds to a contradiction. Many of the games proved to be NP-hard, which means that deciding whether a player can complete them is at least as difficult as the hardest problems in NP.
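The target of the reduction, Boolean satisfiability, can be illustrated with a tiny brute-force checker. Clauses are in conjunctive normal form, with a positive integer for a variable and a negative integer for its negation; the example formulas are invented.

```python
from itertools import product

# Brute-force Boolean satisfiability: try every assignment and ask
# whether some choice of variables makes every clause true. In the MIT
# reduction, each two-path level section plays the role of one variable,
# and completing the level corresponds to satisfying every clause.

def satisfiable(clauses, n_vars):
    for bits in product([False, True], repeat=n_vars):
        # literal k > 0 means variable k; literal -k means "not variable k"
        assign = lambda lit: bits[abs(lit) - 1] ^ (lit < 0)
        if all(any(assign(lit) for lit in clause) for clause in clauses):
            return True
    return False

# (x1 or x2) and (not x1 or x2) and (x1 or not x2): satisfied by x1 = x2 = True
assert satisfiable([[1, 2], [-1, 2], [1, -2]], 2)

# x1 and not x1: a contradiction, i.e. a level that cannot be completed
assert not satisfiable([[1], [-1]], 1)
```

The brute-force loop takes 2^n tries in the worst case; NP-hardness means no known algorithm does fundamentally better on all inputs, which is why deciding completability of the constructed levels is believed to be intractable.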

Blog: Researchers Send 'Wireless' Message Using Elusive Particles

Researchers Send 'Wireless' Message Using Elusive Particles
University of Rochester News (03/14/12) Peter Iglinski

Researchers at the University of Rochester and North Carolina State University (NCSU) say they have sent a message using a beam of neutrinos. "Using neutrinos, it would be possible to communicate between any two points on Earth without using satellites or cables," says NCSU professor Dan Stancil. Because neutrinos pass through almost any material they encounter, a neutrino-based system could send messages straight through the planet. The researchers used one of the world's most powerful particle accelerators and MINERvA, a multi-ton detector located about 100 meters underground. The researchers note that significant work still needs to be done before the technology can be incorporated into a readily usable form. The message was translated into binary code, with the 1's corresponding to a group of neutrinos being fired and the 0's corresponding to no neutrinos being fired. The neutrinos were fired in large groups because they are so evasive that only about one in 10 billion neutrinos is detected. "Neutrinos have been an amazing tool to help us learn about the workings of both the nucleus and the universe, but neutrino communication has a long way to go before it will be as effective," says MINERvA's Deborah Harris.

Monday, March 12, 2012

Blog: Scientists Tap the Genius of Babies and Youngsters to Make Computers Smarter

Scientists Tap the Genius of Babies and Youngsters to Make Computers Smarter
UC Berkeley News Center (03/12/12) Yasmin Anwar

University of California, Berkeley researchers are studying how babies, toddlers, and preschoolers learn in order to program computers to think more like humans. The researchers say computational models based on the brainpower of young children could give a major boost to artificial intelligence research. "Children are the greatest learning machines in the universe," says Berkeley's Alison Gopnik. "Imagine if computers could learn as much and as quickly as they do." The researchers have found that children test hypotheses, detect statistical patterns, and form conclusions while constantly adapting to changes. “Young children are capable of solving problems that still pose a challenge for computers, such as learning languages and figuring out causal relationships,” says Berkeley's Tom Griffiths. The researchers say computers programmed with children's cognitive abilities could interact more intelligently and responsively with humans in applications such as computer tutoring programs and phone-answering robots. They are planning to launch a multidisciplinary center at the campus' Institute of Human Development to pursue their research. The researchers note that the exploratory and probabilistic reasoning demonstrated by young children could make computers smarter and more adaptable.
