Wednesday, April 25, 2012

Blog: Algorithmic Incentives

Algorithmic Incentives
MIT News (04/25/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) professor Silvio Micali and graduate student Pablo Azar have developed a type of mathematical game called a rational proof, which varies interactive proofs by giving them an economic component. Rational proofs could have implications for cryptography, but they also could suggest new ways to structure incentives in contracts. Research on both interactive proofs and rational proofs falls under the designation of computational-complexity theory, which classifies computational problems according to how hard they are to solve. Whereas interactive proofs can require millions of rounds of questioning, a rational proof lets the verifier settle the same question in a single round. With rational proofs, "we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn't think of in the past," says Weizmann Institute of Science professor Moni Naor. Rational-proof systems that describe simple interactions also could have applications in crowdsourcing, Micali says. He notes that research on rational proofs is just getting started. "Right now, we've developed it for problems that are very, very hard," Micali says. "But how about problems that are very, very simple?"
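
The economic mechanism can be made concrete with a proper scoring rule, the device Azar and Micali's construction builds on: the verifier pays the prover according to Brier's scoring rule, so a rational prover maximizes its expected payment only by reporting the truth. Below is a minimal sketch in Python under that assumption; the toy problem (estimating what fraction of random assignments satisfy some predicate) and all function names are invented for illustration, not taken from the paper.

```python
import random

def brier_payment(reported_p: float, outcome: int) -> float:
    """Brier's (quadratic) scoring rule. It is strictly proper: the
    prover's expected payment is maximized only when reported_p equals
    the true probability that outcome == 1."""
    hit = reported_p if outcome else (1 - reported_p)
    return 2 * hit - (reported_p ** 2 + (1 - reported_p) ** 2) + 1

def verify_round(reported_p: float, sample_outcome) -> float:
    """One-round rational proof: the verifier draws a single cheap
    random sample and pays the prover according to the scoring rule."""
    return brier_payment(reported_p, sample_outcome())

# Toy instance: the prover claims 30% of assignments are satisfying.
true_p = 0.3                                     # hidden ground truth
sample = lambda: int(random.random() < true_p)   # verifier spot-checks one assignment

trials = 100_000
honest = sum(verify_round(0.3, sample) for _ in range(trials)) / trials
liar = sum(verify_round(0.8, sample) for _ in range(trials)) / trials
print(f"expected payment -- honest: {honest:.2f}, liar: {liar:.2f}")  # ~1.58 vs ~1.08
```

Because the scoring rule is strictly proper, lying strictly lowers the prover's expected payment, which is exactly the game-theoretic rationality Naor describes.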

Wednesday, April 18, 2012

Blog: Finding ET May Require Giant Robotic Leap

Finding ET May Require Giant Robotic Leap
Penn State Live (04/18/12) Andrea Elyse Messer

Autonomous, self-replicating robots, known as exobots, are the best way to explore the universe, find and identify extraterrestrial life, and clean up space debris, says Pennsylvania State University professor John D. Mathews. "The basic premise is that human space exploration must be highly efficient, cost effective, and autonomous, as placing humans beyond low Earth orbit is fraught with political, economic, and technical difficulties," Mathews says. Developing and deploying self-replicating robots and advanced communications systems is the only way humans can effectively explore the asteroid belt and beyond, he maintains. The initial robots could be manufactured on the moon, taking advantage of its resources and low gravity, both of which would reduce costs. The robots must be able to identify their exact location and the locations of the other exobots, which would enable them to communicate using an infrared laser beam carrying data. Initially, the exobots would clear existing debris and monitor the more than 1,200 near-Earth asteroids identified as potentially dangerous. In the future, Mathews says, a network of exobots could spread throughout the solar system and into the galaxy, using the resources they find there to continue their mission.

Blog: New Julia Language Seeks to Be the C for Scientists

New Julia Language Seeks to Be the C for Scientists
InfoWorld (04/18/12) Paul Krill

Massachusetts Institute of Technology (MIT) researchers have developed Julia, a programming language designed for building technical applications. Julia has already been used for image analysis and linear algebra research. MIT developer Stefan Karpinski notes that Julia is a dynamic language, which he says makes it easier to program because it has a very simple programming model. "One of our goals explicitly is to have sufficiently good performance in Julia that you'd never have to drop down into C," Karpinski adds. Julia also is designed for cloud computing and parallelism, according to the Julia Web page. The language provides a simpler model for building large parallel applications via a global distributed address space, Karpinski says. Julia also could be well suited to predictive analytics, modeling problems, and graph analysis. "Julia's LLVM-based just-in-time compiler, combined with the language's design, allows it to approach and often match the performance of C and C++," according to the Julia Web page.

Monday, April 16, 2012

Blog: Fast Data hits the Big Data fast lane

Fast Data hits the Big Data fast lane

By Andrew Brust | April 16, 2012, 6:00am PDT
Summary: Fast Data, long used in large enterprises for highly specialized needs, has become more affordable and available to the mainstream, just when corporations need it most.
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a principal analyst at Ovum.

By Tony Baer

Of the 3 “V’s” of Big Data – volume, variety, velocity (we’d add “Value” as the 4th V) – velocity has been the unsung ‘V.’ With the spotlight on Hadoop, the popular image of Big Data is large petabyte data stores of unstructured data (which are the first two V’s). While Big Data has been thought of as large stores of data at rest, it can also be about data in motion.
“Fast Data” refers to processes that require lower latencies than would otherwise be possible with optimized disk-based storage. Fast Data is not a single technology, but a spectrum of approaches that process data that might or might not be stored. It could encompass event processing, in-memory databases, or hybrid data stores that optimize cache with disk.
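
To make the "data in motion" end of that spectrum concrete, here is a minimal sketch in Python of an in-memory sliding-window aggregator, a basic building block of event processing. It is an illustration only; the event shape and window size are assumptions, not any particular product's API. The point is that each event updates the aggregate on arrival, with no disk round-trip in the hot path.

```python
from collections import deque
import time

class SlidingWindowAverage:
    """Rolling average over the last `window_seconds` of events,
    held entirely in memory -- no disk I/O on the event path."""

    def __init__(self, window_seconds):
        self.window = window_seconds
        self.events = deque()   # (timestamp, value) pairs, oldest first
        self.total = 0.0

    def on_event(self, value, now=None):
        now = time.time() if now is None else now
        self.events.append((now, value))
        self.total += value
        # Evict events that have aged out of the window.
        while self.events and self.events[0][0] < now - self.window:
            _, old = self.events.popleft()
            self.total -= old
        return self.total / len(self.events)   # current rolling average

# Example: a stream of latency measurements averaged over 60 seconds.
stream = SlidingWindowAverage(window_seconds=60.0)
for t, ms in [(0, 12.0), (10, 15.0), (70, 9.0)]:
    print(f"t={t:3}s  rolling avg = {stream.on_event(ms, now=float(t)):.1f} ms")
```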

Friday, April 13, 2012

Blog: Beyond Turing's Machines

Beyond Turing's Machines
Science (04/13/12) Vol. 336, No. 6078, P. 163 Andrew Hodges

Alan Turing's most profound achievement is arguably the principle of a universal machine that makes logic rather than arithmetic the computer's driving force, writes the University of Oxford's Andrew Hodges. Turing also defined the concept of computability, and suggested that mathematical steps that do not follow rules, and are thus not computable, could be identified with mental intuition. His 1950 treatise presented a basic argument that if the brain's action is computable, then it can be deployed on a computer or universal machine. Turing later suggested that modeling of the human brain might be impossible because of the nature of quantum mechanics, and his definition of what is computable has survived the advent of quantum computing. Many thought-experiment models investigate the implications of going beyond the constraints of the computable; some require machine elements that operate with unlimited speed or permit unrestricted accuracy of measurement. Others probe the nature of the physical world more deeply, asking how mental operations relate to the physical brain and whether quantum mechanics must be rethought because uncomputable physics is basic to physical law. Hodges says this way of thinking is part of Turing's legacy even though it superficially runs counter to his vision.
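
The "logic rather than arithmetic" point is easy to see in code: a Turing machine is nothing but a table of symbolic rules, and one fixed interpreter can run any such table. The sketch below, in Python, is a toy illustration of that universality; the rule table (a unary incrementer) is invented for the example, not drawn from Hodges's essay.

```python
def run_turing_machine(rules, tape, state="start", halt="halt", max_steps=10_000):
    """Interpret any rule table of the form
    (state, symbol) -> (new_state, new_symbol, move), with move = -1 or +1.
    The interpreter never changes; only the table does -- the essence of
    Turing's universal machine."""
    cells = dict(enumerate(tape))   # sparse tape; unvisited cells are blank "_"
    head = 0
    for _ in range(max_steps):
        if state == halt:
            break
        state, cells[head], move = rules[(state, cells.get(head, "_"))]
        head += move
    return "".join(cells[i] for i in sorted(cells))

# Toy rule table: scan right past the 1s, then write one more 1 (unary +1).
increment = {
    ("start", "1"): ("start", "1", +1),
    ("start", "_"): ("halt", "1", +1),
}
print(run_turing_machine(increment, "111"))   # -> "1111"
```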

Tuesday, April 10, 2012

Blog: Cooperating Mini-Brains Show How Intelligence Evolved

Cooperating Mini-Brains Show How Intelligence Evolved
Live Science (04/10/12) Stephanie Pappas

Trinity College Dublin researchers recently developed computer simulation experiments to determine how human brains evolved intelligence. The researchers created artificial neural networks to serve as mini-brains. The networks were given challenging cooperative tasks that forced them to work together, evolving the virtual equivalent of increased brainpower over generations. "It is the transition to a cooperative group that can lead to maximum selection for intelligence," says Trinity's Luke McNally. The neural networks were programmed to evolve, producing random mutations that can introduce extra nodes into a network. The researchers assigned two games for the networks to play: one that tests how temptation can undermine group goals, and one that tests how teamwork can benefit the group. The researchers then ran 10 experiments in which 50,000 generations of neural networks played the games. Intelligence was measured by the number of nodes added to each network as the players evolved over time. The researchers found that the networks evolved strategies similar to those seen when humans play the games with each other. "What this indicates is that in species ancestral to humans, it could have been the transition to more cooperative societies that drove the evolution of our brains," McNally says.
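
A heavily simplified sketch of that experimental loop appears below, in Python. The payoff matrix is a Prisoner's Dilemma (an assumption standing in for the "temptation" game), and the network encoding, mutation rates, and selection scheme are all invented for illustration; the study's actual networks and games are richer. The one structural point the sketch preserves is that mutation can add nodes, so "brain size" is free to grow under selection.

```python
import math
import random

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class Brain:
    """A toy mini-brain: each weight is one 'node' voting on whether to
    cooperate, given the opponent's previous move."""
    def __init__(self, weights=None):
        self.weights = weights if weights is not None else [random.uniform(-1, 1) for _ in range(2)]

    def act(self, opp_last):
        x = 1.0 if opp_last == "C" else -1.0
        signal = sum(w * x for w in self.weights) / len(self.weights)
        return "C" if random.random() < 1 / (1 + math.exp(-3 * signal)) else "D"

    def mutate(self):
        child = Brain(list(self.weights))
        child.weights[random.randrange(len(child.weights))] += random.gauss(0, 0.3)
        if random.random() < 0.01:                       # rare structural mutation:
            child.weights.append(random.uniform(-1, 1))  # add an extra node
        return child

def play(a, b, rounds=20):
    score_a = score_b = 0
    last_a, last_b = "C", "C"
    for _ in range(rounds):
        move_a, move_b = a.act(last_b), b.act(last_a)
        score_a += PAYOFF[(move_a, move_b)]
        score_b += PAYOFF[(move_b, move_a)]
        last_a, last_b = move_a, move_b
    return score_a, score_b

pop = [Brain() for _ in range(30)]
for generation in range(200):
    scores = [0] * len(pop)
    for i in range(len(pop)):                 # round-robin tournament
        for j in range(i + 1, len(pop)):
            s_i, s_j = play(pop[i], pop[j])
            scores[i] += s_i
            scores[j] += s_j
    ranked = [b for _, b in sorted(zip(scores, pop), key=lambda p: -p[0])]
    pop = [parent.mutate() for parent in ranked[:15] for _ in range(2)]

print("mean nodes per brain:", sum(len(b.weights) for b in pop) / len(pop))
```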

Monday, April 9, 2012

Blog: Transactional Memory: An Idea Ahead of Its Time

Transactional Memory: An Idea Ahead of Its Time
Brown University (04/09/12) Richard Lewis

About 20 years ago, Brown University researchers were studying theoretical transactional memory, a technique that attempts to handle concurrent revisions to shared data seamlessly. Now those theories have become a reality. Intel recently announced that transactional memory will be included in its mainstream Haswell hardware architecture by next year, and IBM has adopted transactional memory in the Blue Gene/Q supercomputer. The problem transactional memory aimed to solve arose because core processors were changing in fundamental ways, says Brown professor Maurice Herlihy. Herlihy developed a system of requests and permissions in which operations are begun and logged, but wholesale changes, or transactions, are not made before the system checks that no other thread has proposed changes to the same pending data. If no other changes have been requested, the transaction is consummated; if there is a competing change request, the transaction is aborted and the threads start anew. Intel says its transactional memory is "hardware [that] can determine dynamically whether threads need to serialize through lock-protected critical sections, and perform serialization only when required."
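
Herlihy's request-and-permission scheme is, at bottom, optimistic concurrency: do the work against a private log, then commit only if nothing you read has changed underneath you. Hardware transactional memory does this in the cache layer; the Python sketch below is a software caricature of the same commit-or-abort-and-retry cycle, with every class and function name invented for illustration.

```python
import threading

class TMWord:
    """One shared memory word, tagged with a version counter."""
    def __init__(self, value=0):
        self.value, self.version = value, 0

_commit_lock = threading.Lock()   # models the atomic commit point

def atomically(transaction):
    """Run transaction(read, write) until it commits. Reads are logged
    with their versions; writes are buffered. At commit time, if any
    word we read has since changed, abort and start anew."""
    while True:
        read_log, write_buf = {}, {}
        def read(word):
            if word in write_buf:
                return write_buf[word]        # read our own pending write
            if word not in read_log:
                read_log[word] = word.version
            return word.value
        def write(word, value):
            write_buf[word] = value
        result = transaction(read, write)
        with _commit_lock:
            if all(w.version == v for w, v in read_log.items()):
                for w, value in write_buf.items():   # consummate wholesale
                    w.value, w.version = value, w.version + 1
                return result
        # conflict detected: another thread changed what we read -- retry

# Example: concurrent transfers between two accounts, with no explicit
# locks in the application code.
a, b = TMWord(100), TMWord(0)
def transfer(amount):
    def txn(read, write):
        write(a, read(a) - amount)
        write(b, read(b) + amount)
    atomically(txn)

threads = [threading.Thread(target=transfer, args=(10,)) for _ in range(5)]
for t in threads:
    t.start()
for t in threads:
    t.join()
print(a.value, b.value)   # always 50 50, however the threads interleave
```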

Saturday, April 7, 2012

Blog: Bits of Reality

Bits of Reality
Science News (04/07/12) Vol. 181, No. 7, P. 26 Tom Siegfried

Information derived from quantum computing systems could reveal subtle insights about the intersection between mathematics and the physical world. "We hope to be able to verify that these extraordinary computational resources in quantum systems really are part of the way nature behaves," says California Institute of Technology physicist John Preskill. "We could do so by solving a problem that we think is hard classically ... with a quantum computer, where we can easily verify with a classical computer that the quantum computer got the right answer." To solve certain hard problems that standard supercomputers cannot accommodate, such as finding the prime factors of very large numbers, quantum computers must process qubits, the bits of quantum information. Quantum machines would only be workable for problems that can be posed as an algorithm amenable to the way quantum weirdness eliminates wrong answers, allowing only the right answer to prevail. In 2011, the Perimeter Institute for Theoretical Physics' Giulio Chiribella and colleagues demonstrated how to derive quantum mechanics from a set of five axioms plus one postulate, all rooted in information theory terms. The foundation of their system is axioms such as causality, the notion that signals from the future cannot impact the present.
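
Preskill's verification point is easiest to see with factoring: finding the prime factors of a large number may be intractable for any classical supercomputer, but checking a claimed answer is a single multiplication. A minimal sketch in Python (the numbers here are small illustrative primes, not a real challenge instance):

```python
def verify_factoring(n, claimed_factors):
    """Classical check of a quantum computer's claimed answer: multiply
    the claimed factors back together. (Checking that each factor is
    prime is also classically cheap.)"""
    product = 1
    for f in claimed_factors:
        if f < 2:
            return False
        product *= f
    return product == n

# The hard direction -- recovering 104729 and 1299709 from n -- is what
# the quantum machine would be asked to do; the easy direction is this check.
n = 104_729 * 1_299_709
print(verify_factoring(n, [104_729, 1_299_709]))   # True
print(verify_factoring(n, [104_729, 1_299_711]))   # False
```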

Blog: Berkeley Group Digs In to Challenge of Making Sense of All That Data

Berkeley Group Digs In to Challenge of Making Sense of All That Data
New York Times (04/07/12) Jeanne Carstensen

The U.S. National Science Foundation recently awarded $10 million to the University of California, Berkeley's Algorithms, Machines, and People (AMP) Expedition, a research team that takes an interdisciplinary approach to advancing big data analysis. Researchers at the AMP Expedition, in collaboration with researchers at the University of California, San Francisco, are developing a set of open source tools for big data analysis. "We'll judge our success by whether we build a new paradigm of data," says AMP Expedition director Michael Franklin. "It's easier to collect data, and harder to make sense of it." The grant is part of the Obama administration's "Big Data Research and Development Initiative," which will eventually distribute a total of $200 million. AMP Expedition faculty member Ken Goldberg has developed Opinion Space, a tool for online discussion and brainstorming that uses algorithms and data visualization to help gather meaningful ideas from a large number of participants. Goldberg notes that part of the team's research focus is analyzing how people interact with big data. "We recognize that humans do play an important part in the system," he says.

Tuesday, April 3, 2012

Blog: Programming Computers to Help Computer Programmers

Programming Computers to Help Computer Programmers
Rice University (04/03/12) Jade Boyd

Computer scientists from Rice University will participate in a project to create intelligent software agents that help people write code faster and with fewer errors. The Rice team will focus on robotic applications and how to verify that synthetic, computer-generated code is safe and effective, as part of the effort to develop automated program-synthesis tools for a variety of uses. "Programming is now done by experts only, and this needs to change if we are to use robots as helpers for humans," says Rice professor Lydia Kavraki. She also stresses that safety is critical. "You can only have robots help humans in a task--any task, whether mundane, dangerous, precise, or expensive--if you can guarantee that the behavior of the robot is going to be the expected one." The U.S. National Science Foundation is providing a $10 million grant to fund the five-year initiative, which is based at the University of Pennsylvania. Computer scientists at Rice and Penn have proposed a grand challenge robotic scenario of providing hospital staff with an automated program-synthesis tool for programming mobile robots to go from room to room, turn off lights, distribute medications, and remove medical waste.

Monday, April 2, 2012

Blog: To Convince People, Come at Them From Different Angles

To Convince People, Come at Them From Different Angles
Cornell Chronicle (04/02/12) Bill Steele

Cornell research on Facebook users' behavior demonstrates that people base the decision to join on the variety of social contexts from which invitations arrive rather than on the sheer number of requests received. Social scientists previously envisioned the spread of ideas as similar to the spread of disease, but Cornell professor Jon Kleinberg says social contagion seems to be distinct from that model. "Each of us is sitting at a crossroads between the social circles we inhabit," he observes. "When a message comes at you from several directions, it may be more effective." The researchers worked with a database of 54 million email invitations from Facebook users inviting others to join the social network and analyzed the friendship links among the inviters. The probability of a person joining increased with the number of different, unconnected social contexts represented. An analysis of the Facebook neighborhoods of 10 million new members seven days after joining identified clumps of friends linked to one another but not as much to people in other clumps. A follow-up check three months later found that people with more diverse clumps among their friends were more likely to remain engaged. The researchers suggest that mathematical models of how ideas proliferate across networks may require tweaking to account for neighborhood diversity.
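
The quantity driving the effect is not how many invitations arrive but how many distinct social contexts they come from, which can be measured as the number of connected components in the friendship graph among one's inviters. A small sketch in Python (the graph and names are invented for illustration):

```python
def social_contexts(inviters, friendships):
    """Count connected components among the people who sent invitations:
    inviters who know each other belong to one social context, so they
    count once, not twice."""
    adjacency = {person: set() for person in inviters}
    for a, b in friendships:
        if a in adjacency and b in adjacency:
            adjacency[a].add(b)
            adjacency[b].add(a)
    seen, contexts = set(), 0
    for person in inviters:
        if person in seen:
            continue
        contexts += 1                  # found a new, unconnected circle
        stack = [person]
        while stack:                   # flood-fill the whole component
            p = stack.pop()
            if p not in seen:
                seen.add(p)
                stack.extend(adjacency[p] - seen)
    return contexts

# Four invitations, but only two distinct contexts (work and college),
# so the model predicts less influence than four unconnected inviters.
inviters = ["ana", "bo", "carol", "dev"]
friendships = [("ana", "bo"), ("carol", "dev")]
print(social_contexts(inviters, friendships))   # -> 2
```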

Blog: Self-Sculpting Sand

Self-Sculpting Sand
MIT News (04/02/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers are developing a type of reconfigurable robotic system called smart sand. The individual sand grains pass messages back and forth and selectively attach to each other to form a three-dimensional object. MIT professor Daniela Rus says the biggest challenge in developing the smart sand algorithm is that the individual grains have very few computational resources. The grains first pass messages to each other to determine which have missing neighbors. Those with missing neighbors are either on the perimeter of the pile or the perimeter of the embedded shape. Once the grains surrounding the embedded shape identify themselves, they pass messages to other grains a fixed distance away. When the perimeter of the duplicate is established, the grains outside it can disconnect from their neighbors. The researchers built cubes, or “smart pebbles,” to test their algorithm. The cubes have four faces studded with electropermanent magnets, materials that can be magnetized or demagnetized with a single magnetic pulse. The grains use the magnets to connect to each other, to communicate, and to share power. Each grain also is equipped with a microprocessor that can store 32 kilobytes of code and has two kilobytes of working memory.
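
The first step of the algorithm, finding grains with missing neighbors, is simple to express: on a grid, a grain lies on a perimeter exactly when one of its four neighbor cells is empty. A minimal sketch in Python of that step alone, with the grid encoding invented for illustration:

```python
def find_perimeter(grains):
    """A grain with any empty neighbor cell lies on a perimeter --
    either the outside of the pile or the boundary of an embedded
    cavity. `grains` is a set of occupied (x, y) cells."""
    offsets = [(0, 1), (0, -1), (1, 0), (-1, 0)]
    return {
        (x, y)
        for (x, y) in grains
        if any((x + dx, y + dy) not in grains for dx, dy in offsets)
    }

# A 5x5 block of grains with one grain removed (an embedded "hole").
grains = {(x, y) for x in range(5) for y in range(5)} - {(2, 2)}
perimeter = find_perimeter(grains)
print((0, 0) in perimeter)   # True: on the outer edge
print((2, 1) in perimeter)   # True: borders the missing grain
print((1, 1) in perimeter)   # False: fully surrounded interior grain
```

In the real system each grain reaches this conclusion locally, by exchanging messages with its physical neighbors rather than consulting a global set, but the perimeter condition is the same.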

Blog: UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence
University of Massachusetts Amherst (04/02/12) Janet Lathrop

University of Massachusetts Amherst researchers are translating "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment the same way human brains do. The model "is a mathematical formulation of the brain's neural networks with their adaptive abilities," says Amherst computer scientist Hava Siegelmann. Installed in a new environment, the Super-Turing model produces an exponentially greater set of behaviors than a classical computer or the original Turing model. The researchers say the new Super-Turing machine will be flexible, adaptable, and economical. "The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain," Siegelmann says.
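
The distinctive claim, that the stimulus changes the machine at every step, can be caricatured with a recurrent network whose weights are updated by each input as it computes. The sketch below, in Python, is a toy rendering of that idea only; the Hebbian-style update rule, sizes, and constants are invented for illustration and are not Siegelmann's model.

```python
import random

def step(state, weights, stimulus, rate=0.05):
    """One computational step of a toy adaptive recurrent network.
    Unlike a fixed program, the 'machine' itself (the weight matrix)
    is modified by the stimulus at every step."""
    n = len(state)
    new_state = [
        max(0.0, min(1.0, sum(weights[i][j] * state[j] for j in range(n)) + stimulus[i]))
        for i in range(n)
    ]
    for i in range(n):            # plasticity: weights change with activity
        for j in range(n):
            weights[i][j] += rate * new_state[i] * state[j]
    return new_state

random.seed(1)
n = 4
state = [random.random() for _ in range(n)]
weights = [[random.uniform(-0.5, 0.5) for _ in range(n)] for _ in range(n)]
for t in range(3):
    stimulus = [0.1 * t] * n
    state = step(state, weights, stimulus)
    print(f"step {t}: state = {[round(s, 2) for s in state]}")
```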
