Tech Notes

Wednesday, April 25, 2012

Blog: Algorithmic Incentives

Algorithmic Incentives
MIT News (04/25/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) professor Silvio Micali and graduate student Pablo Azar have developed a type of mathematical game called a rational proof, which varies interactive proofs by giving them an economic component. Rational proofs could have implications for cryptography, and they also could suggest new ways to structure incentives in contracts. Research on both interactive proofs and rational proofs falls under computational-complexity theory, which classifies computational problems according to how hard they are to solve. Whereas interactive proofs can require millions of rounds of questioning between verifier and prover, a rational proof can settle the same question in a single round. With rational proofs, "we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn't think of in the past," says Weizmann Institute of Science professor Moni Naor. Rational-proof systems that describe simple interactions also could have applications in crowdsourcing, Micali says. He notes that research on rational proofs is just getting started. "Right now, we've developed it for problems that are very, very hard," Micali says. "But how about problems that are very, very simple?"
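The economic component works by paying the prover according to how informative its answers turn out to be. Below is a minimal sketch of that incentive idea, assuming a standard proper scoring rule (the Brier score); it illustrates the principle only and is not the authors' construction. Under a proper scoring rule, a rational prover maximizes its expected reward only by reporting what it actually believes, which is why a single round can suffice.

```python
# Illustrative sketch (not the authors' construction): a proper scoring
# rule makes truthful reporting the prover's best strategy.

def brier_reward(q: float, outcome: int) -> float:
    """Payment for reporting probability q when the event turns out to be
    `outcome` (1 or 0); higher when q matches the realized outcome."""
    return 1.0 - (outcome - q) ** 2

def expected_reward(q: float, p_true: float) -> float:
    """Expected payment when the true probability is p_true and the
    prover reports q."""
    return p_true * brier_reward(q, 1) + (1 - p_true) * brier_reward(q, 0)

if __name__ == "__main__":
    p_true = 0.7  # hypothetical true probability known only to the prover
    for q in (0.3, 0.5, 0.7, 0.9):
        print(f"report q={q}: expected reward = {expected_reward(q, p_true):.4f}")
    # The expected reward peaks at q = 0.7: misreporting lowers the
    # prover's expected payment, so honesty is the rational strategy.
```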

Wednesday, April 18, 2012

Blog: Finding ET May Require Giant Robotic Leap

Finding ET May Require Giant Robotic Leap
Penn State Live (04/18/12) Andrea Elyse Messer

Autonomous, self-replicating robots, known as exobots, are the best way to explore the universe, find and identify extraterrestrial life, and clean up space debris, says Pennsylvania State University professor John D. Mathews. "The basic premise is that human space exploration must be highly efficient, cost effective, and autonomous, as placing humans beyond low Earth orbit is fraught with political, economic, and technical difficulties," Mathews says. Developing and deploying self-replicating robots and advanced communications systems is the only way humans can effectively explore the asteroid belt and beyond, he maintains. The initial robots could be manufactured on the moon, taking advantage of the moon's resources and low gravity, both of which would reduce costs. The robots must be able to identify their exact location and the locations of the other exobots, which would enable them to communicate using an infrared laser beam carrying data. Initially, the exobots would clear existing debris and monitor the more than 1,200 potentially dangerous near-Earth asteroids. Mathews says that in the future a network of exobots could spread throughout the solar system and into the galaxy, using the resources they find there to continue their mission.

Blog: New Julia Language Seeks to Be the C for Scientists

New Julia Language Seeks to Be the C for Scientists
InfoWorld (04/18/12) Paul Krill

Massachusetts Institute of Technology (MIT) researchers have developed Julia, a programming language designed for building technical applications. Julia has already been used for image analysis and linear algebra research. MIT developer Stefan Karpinski notes that Julia is a dynamic language, which he says makes programming easier because of its very simple programming model. "One of our goals explicitly is to have sufficiently good performance in Julia that you'd never have to drop down into C," Karpinski adds. Julia also is designed for cloud computing and parallelism, according to the Julia Web page, and it provides a simpler model for building large parallel applications via a global distributed address space, Karpinski says. Julia also could be well suited to predictive analysis, modeling problems, and graph analysis problems. "Julia's LLVM-based just-in-time compiler, combined with the language's design, allows it to approach and often match the performance of C and C++," according to the Julia Web page.
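"Dropping down into C" refers to the two-language pattern common with dynamic languages, where performance-critical code gets rewritten as a native extension. The sketch below shows the idea from the Python side, calling the C math library through ctypes (library lookup as shown works on Unix-like systems); Julia's pitch is that its JIT compiles the high-level code directly, making this detour unnecessary.

```python
# A minimal sketch of "dropping down into C" from a dynamic language,
# the escape hatch Julia aims to make unnecessary: calling the C math
# library directly via ctypes.
import ctypes
import ctypes.util
import math

libm_path = ctypes.util.find_library("m")  # locate the C math library (Unix-like systems)
libm = ctypes.CDLL(libm_path)
libm.sqrt.restype = ctypes.c_double       # declare the C signature:
libm.sqrt.argtypes = [ctypes.c_double]    # double sqrt(double)

# Same answer from the C routine and the interpreted language's wrapper;
# the point is that hot Python code often has to take the C route, whereas
# Julia's LLVM JIT compiles the high-level loop itself.
print(libm.sqrt(2.0), math.sqrt(2.0))
```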

Monday, April 16, 2012

Blog: Fast Data hits the Big Data fast lane

Fast Data hits the Big Data fast lane

By Andrew Brust | April 16, 2012, 6:00am PDT
Summary: Fast Data, once used in large enterprises only for highly specialized needs, has become more affordable and available to the mainstream, just when corporations absolutely need it.
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a principal analyst at Ovum.

By Tony Baer

Of the 3 “V’s” of Big Data – volume, variety, velocity (we’d add “Value” as the 4th V) – velocity has been the unsung ‘V.’ With the spotlight on Hadoop, the popular image of Big Data is large petabyte data stores of unstructured data (which are the first two V’s). While Big Data has been thought of as large stores of data at rest, it can also be about data in motion.
“Fast Data” refers to processes that require lower latencies than would otherwise be possible with optimized disk-based storage. Fast Data is not a single technology, but a spectrum of approaches that process data that might or might not be stored. It could encompass event processing, in-memory databases, or hybrid data stores that optimize cache with disk.
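As a concrete illustration of data in motion, here is a minimal sketch (ours, not from the article) of one approach named above, event processing: events are aggregated over a short in-memory window as they arrive, and nothing has to land on disk first.

```python
# Illustrative sketch of the "data in motion" idea behind Fast Data:
# process events as they arrive, keeping only a short in-memory window
# instead of storing the stream in an optimized disk-based store.
from collections import deque
from dataclasses import dataclass

@dataclass
class Event:
    timestamp: float  # seconds
    value: float

class SlidingWindowAverage:
    """In-memory sliding-window aggregate over an event stream."""

    def __init__(self, window_seconds: float):
        self.window = window_seconds
        self.events: deque[Event] = deque()
        self.total = 0.0

    def observe(self, event: Event) -> float:
        # Add the new event, evict events older than the window;
        # nothing is ever written to durable storage.
        self.events.append(event)
        self.total += event.value
        while event.timestamp - self.events[0].timestamp > self.window:
            self.total -= self.events.popleft().value
        return self.total / len(self.events)

if __name__ == "__main__":
    agg = SlidingWindowAverage(window_seconds=5.0)
    for t, v in [(0, 10.0), (1, 12.0), (4, 8.0), (7, 20.0)]:
        print(f"t={t}: rolling avg = {agg.observe(Event(t, v)):.2f}")
```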

Friday, April 13, 2012

Blog: Beyond Turing's Machines

Beyond Turing's Machines
Science (04/13/12) Vol. 336, No. 6078, P. 163; Andrew Hodges

Alan Turing's most profound achievement is arguably the principle of a universal machine, which makes logic rather than arithmetic the computer's driving force, writes the University of Oxford's Andrew Hodges. Turing also defined the concept of computability, and suggested that mathematical steps that do not follow rules, and are thus not computable, could be identified with mental intuition. His 1950 paper presented a basic argument that if the brain's action is computable, then it can be implemented on a computer or universal machine. Turing later suggested that modeling the human brain might be impossible because of the nature of quantum mechanics, although the advent of quantum computing has not changed the definition of what is computable. Many thought-experiment models investigate the implications of going beyond the constraints of the computable; some require machine elements that operate with unlimited speed or permit unrestricted accuracy of measurement. Others explore the physical world's nature more deeply, focusing on how mental operations relate to the physical brain and on whether quantum mechanics must be rethought because uncomputable physics is basic to physical law. Hodges says this way of thinking is part of Turing's legacy even though it superficially runs counter to his vision.
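To make the universal-machine idea concrete: a single program can execute any machine supplied to it as a table of rules, and its driving force is pure symbol manipulation rather than arithmetic. Below is a minimal, illustrative interpreter; the sample machine (a binary incrementer) is our own example, not from the article.

```python
# An illustrative sketch of Turing's core idea: one "universal" program
# that runs any machine given as a transition table. Its driving force
# is logic (symbols rewritten under rules), not arithmetic.

def run_turing_machine(table, tape, state="start", head=0, blank="_"):
    """table maps (state, symbol) -> (new_state, new_symbol, move)."""
    cells = dict(enumerate(tape))  # sparse tape, unbounded in both directions
    while state != "halt":
        symbol = cells.get(head, blank)
        state, cells[head], move = table[(state, symbol)]
        head += 1 if move == "R" else -1
    lo, hi = min(cells), max(cells)
    return "".join(cells.get(i, blank) for i in range(lo, hi + 1))

# Sample machine: increment a binary number, head starting on its
# rightmost bit. Carry leftward over 1s, write the final 1, halt.
INCREMENT = {
    ("start", "0"): ("halt", "1", "R"),
    ("start", "1"): ("start", "0", "L"),
    ("start", "_"): ("halt", "1", "R"),
}

print(run_turing_machine(INCREMENT, "1011", head=3))  # prints 1100
```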

Tuesday, April 10, 2012

Blog: Cooperating Mini-Brains Show How Intelligence Evolved

Cooperating Mini-Brains Show How Intelligence Evolved
Live Science (04/10/12) Stephanie Pappas

Trinity College Dublin researchers recently developed computer simulations to study how human brains evolved intelligence. The researchers created artificial neural networks to serve as mini-brains. The networks were given challenging cooperative tasks that forced them to work together, and they evolved the virtual equivalent of increased brainpower over generations. "It is the transition to a cooperative group that can lead to maximum selection for intelligence," says Trinity's Luke McNally. The neural networks were programmed to evolve, with random mutations that can introduce extra nodes into a network. The researchers assigned two games for the networks to play: one that tests how temptation can undermine group goals, and one that tests how teamwork can benefit the group. The researchers then ran 10 experiments in which 50,000 generations of neural networks played the games, measuring intelligence as the number of nodes added to each network as the players evolved. The networks evolved strategies similar to those seen when humans play the games with other humans. "What this indicates is that in species ancestral to humans, it could have been the transition to more cooperative societies that drove the evolution of our brains," McNally says.
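The following toy sketch shows the overall shape of such an experiment; it is our illustration under simplified assumptions, not the researchers' code. Agents are tiny networks, a mutation occasionally adds a hidden node, pairs play an iterated game with a temptation to defect (the payoffs are generic and Prisoner's-Dilemma-style), and higher scorers seed the next generation, so mean network size can be tracked across generations.

```python
# Toy neuroevolution sketch (illustrative only): network agents play a
# cooperative game, mutations can add nodes, selection favors payoff.
import math
import random

random.seed(1)  # reproducible run

class Agent:
    """A minimal network: the two players' last moves feed a hidden
    layer whose size can grow by mutation."""

    def __init__(self, hidden: int = 1):
        self.hidden = hidden
        self.w_in = [[random.gauss(0, 1) for _ in range(2)] for _ in range(hidden)]
        self.w_out = [random.gauss(0, 1) for _ in range(hidden)]

    def act(self, own_last: int, other_last: int) -> int:
        h = [math.tanh(w[0] * own_last + w[1] * other_last) for w in self.w_in]
        s = sum(wo * hi for wo, hi in zip(self.w_out, h))
        return 1 if s > 0 else 0  # 1 = cooperate, 0 = defect

    def mutate(self) -> "Agent":
        # Rarely add a node; offspring get fresh random weights.
        return Agent(self.hidden + (1 if random.random() < 0.05 else 0))

# Generic temptation payoffs: mutual cooperation beats mutual defection,
# but defecting against a cooperator pays most.
PAYOFF = {(1, 1): 3, (1, 0): 0, (0, 1): 5, (0, 0): 1}

def play(a: Agent, b: Agent, rounds: int = 10) -> tuple[int, int]:
    ma, mb, sa, sb = 1, 1, 0, 0
    for _ in range(rounds):
        na, nb = a.act(ma, mb), b.act(mb, ma)
        sa, sb = sa + PAYOFF[(na, nb)], sb + PAYOFF[(nb, na)]
        ma, mb = na, nb
    return sa, sb

def generation(pop: list) -> list:
    random.shuffle(pop)
    scored = []
    for a, b in zip(pop[::2], pop[1::2]):
        sa, sb = play(a, b)
        scored += [(sa, a), (sb, b)]
    scored.sort(key=lambda t: t[0], reverse=True)
    survivors = [ag for _, ag in scored[: len(scored) // 2]]
    return survivors + [ag.mutate() for ag in survivors]

if __name__ == "__main__":
    pop = [Agent() for _ in range(50)]
    for _ in range(200):
        pop = generation(pop)
    print("mean hidden nodes after 200 generations:",
          sum(a.hidden for a in pop) / len(pop))
```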

Monday, April 9, 2012

Blog: Transactional Memory: An Idea Ahead of Its Time

Transactional Memory: An Idea Ahead of Its Time
Brown University (04/09/12) Richard Lewis

About 20 years ago, Brown University researchers were studying transactional memory, a then-theoretical technology that attempts to handle concurrent revisions to shared information seamlessly. Now those theories have become a reality. Intel recently announced that transactional memory will be included in its mainstream Haswell hardware architecture by next year, and IBM has adopted transactional memory in the Blue Gene/Q supercomputer. The problem transactional memory aimed to solve arose because processors were changing in fundamental ways, says Brown professor Maurice Herlihy. Herlihy developed a system of requests and permissions in which operations are begun and logged, but wholesale changes, or transactions, are not made before the system checks that no other thread has made changes affecting the pending transaction. If no conflicting changes have been requested, the transaction is consummated; if there is a conflicting change, the transaction is aborted and the threads start anew. Intel says its transactional memory is "hardware [that] can determine dynamically whether threads need to serialize through lock-protected critical sections, and perform serialization only when required."
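The log-validate-commit cycle described above can be sketched in software. The following is a minimal, illustrative software-transactional-memory loop under our own simplified design (real hardware TM, such as Intel's, tracks conflicts at the cache-line level): reads and writes are buffered, versions are validated at commit time, and a conflicting transaction aborts and retries.

```python
# Minimal software sketch of the commit/abort discipline: buffer reads
# and writes, validate versions at commit, retry on conflict.
import threading

class TVar:
    """A transactional variable with a version counter."""
    def __init__(self, value):
        self.value = value
        self.version = 0

_commit_lock = threading.Lock()  # serializes only the brief commit step

def atomically(fn):
    """Run fn(read, write) as a transaction, retrying on conflict."""
    while True:
        read_set = {}   # var -> version seen when first read
        write_set = {}  # var -> pending new value

        def read(var):
            if var in write_set:
                return write_set[var]
            read_set.setdefault(var, var.version)
            return var.value

        def write(var, value):
            read_set.setdefault(var, var.version)
            write_set[var] = value

        result = fn(read, write)

        with _commit_lock:
            # Validate: abort if anything we touched changed underneath us.
            if all(var.version == seen for var, seen in read_set.items()):
                for var, value in write_set.items():
                    var.value = value
                    var.version += 1
                return result
        # Another thread committed a conflicting change first: retry.

# Usage: concurrent transfers with no locks in the business logic.
a, b = TVar(100), TVar(0)

def transfer(amount):
    def txn(read, write):
        write(a, read(a) - amount)
        write(b, read(b) + amount)
    atomically(txn)

threads = [threading.Thread(target=transfer, args=(10,)) for _ in range(10)]
for t in threads: t.start()
for t in threads: t.join()
print(a.value, b.value)  # 0 100
```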
