
Wednesday, April 25, 2012

Blog: Algorithmic Incentives

Algorithmic Incentives
MIT News (04/25/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) professor Silvio Micali and graduate student Pablo Azar have developed a type of mathematical game called a rational proof, a variation on interactive proofs that adds an economic component. Rational proofs could have implications for cryptography, but they also could suggest new ways to structure incentives in contracts. Research on both interactive proofs and rational proofs falls under computational-complexity theory, which classifies computational problems according to how hard they are to solve. Whereas interactive proofs can require millions of rounds of questioning, a rational proof can be established in a single round. With rational proofs, "we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn't think of in the past," says Weizmann Institute of Science professor Moni Naor. Rational-proof systems that describe simple interactions also could have applications in crowdsourcing, Micali says. He notes that research on rational proofs is just getting started. "Right now, we've developed it for problems that are very, very hard," Micali says. "But how about problems that are very, very simple?"
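
The economic component can be illustrated with a proper scoring rule: a minimal sketch (not Micali and Azar's actual protocol) in which a verifier pays a prover according to the Brier score of the probability the prover announces, so that honest reporting maximizes the prover's expected payment. Function names and numbers here are illustrative assumptions.

```python
# Minimal sketch of a reward scheme based on a proper scoring rule
# (illustrative only; not the Azar-Micali protocol itself).
# The prover announces a probability p that some statement is true;
# the verifier later observes the outcome and pays the Brier score.

def brier_reward(announced_p: float, outcome: bool) -> float:
    """Payment to the prover: 1 - (outcome - p)^2."""
    y = 1.0 if outcome else 0.0
    return 1.0 - (y - announced_p) ** 2

def expected_reward(announced_p: float, true_p: float) -> float:
    """Expected payment when the statement is true with probability true_p."""
    return (true_p * brier_reward(announced_p, True)
            + (1 - true_p) * brier_reward(announced_p, False))

if __name__ == "__main__":
    true_p = 0.7
    for p in (0.3, 0.5, 0.7, 0.9):
        print(p, round(expected_reward(p, true_p), 3))
    # The expected reward peaks at p == true_p, so a rational prover
    # maximizes its payment by reporting honestly.
```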

Wednesday, April 18, 2012

Blog: Finding ET May Require Giant Robotic Leap

Finding ET May Require Giant Robotic Leap
Penn State Live (04/18/12) Andrea Elyse Messer

Autonomous, self-replicating robots, known as exobots, are the best way to explore the universe, find and identify extraterrestrial life, and clean up space debris, says Pennsylvania State University professor John D. Mathews. "The basic premise is that human space exploration must be highly efficient, cost effective, and autonomous, as placing humans beyond low Earth orbit is fraught with political, economic, and technical difficulties," Mathews says. Developing and deploying self-replicating robots and advanced communications systems is the only way humans can effectively explore the asteroid belt and beyond, he maintains. The initial robots could be manufactured on the moon, taking advantage of its resources and low gravity, both of which would reduce costs. The robots must be able to identify their exact location and the location of the other exobots, which would enable them to communicate using an infrared laser beam carrying data. Initially, the exobots would clear existing debris and monitor the more than 1,200 near-Earth asteroids that could be dangerous. In the future, Mathews says, a network of exobots could spread throughout the solar system and into the galaxy, using the resources they find along the way to continue their mission.

Monday, April 16, 2012

Blog: Fast Data hits the Big Data fast lane

Fast Data hits the Big Data fast lane

By Andrew Brust | April 16, 2012, 6:00am PDT
Summary: Fast Data, used in large enterprises for highly specialized needs, has become more affordable and available to the mainstream, just when corporations absolutely need it.
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a principal analyst at Ovum.

By Tony Baer

Of the 3 “V’s” of Big Data – volume, variety, velocity (we’d add “Value” as the 4th V) – velocity has been the unsung ‘V.’ With the spotlight on Hadoop, the popular image of Big Data is large petabyte data stores of unstructured data (which are the first two V’s). While Big Data has been thought of as large stores of data at rest, it can also be about data in motion.
“Fast Data” refers to processes that require lower latencies than would otherwise be possible with optimized disk-based storage. Fast Data is not a single technology, but a spectrum of approaches that process data that might or might not be stored. It could encompass event processing, in-memory databases, or hybrid data stores that optimize cache with disk.

Friday, April 13, 2012

Blog: Beyond Turing's Machines

Beyond Turing's Machines
Science (04/13/12) Vol. 336, No. 6078, P. 163 Andrew Hodges

Alan Turing's most profound achievement is arguably the principle of a universal machine that makes logic rather than arithmetic the computer's driving force, writes the University of Oxford's Andrew Hodges. Turing also defined the concept of computability, and suggested that mathematical steps that do not follow rules, and are thus not computable, could be identified with mental intuition. His 1950 treatise presented a basic argument that if the brain's action is computable, then it can be carried out on a computer or universal machine. Turing later suggested that modeling of the human brain might be impossible because of the nature of quantum mechanics, and the advent of quantum computing has not changed his definition of what is computable. Many thought-experiment models investigate the implications of going beyond the constraints of the computable; some require that machine elements operate with unlimited speed or permit unrestricted accuracy of measurement. Others probe the nature of the physical world more deeply, focusing on how mental operations relate to the physical brain and on whether quantum mechanics must be rethought because uncomputable physics is basic to physical law. Hodges says this way of thinking is part of Turing's legacy even though it superficially runs counter to his vision.

Tuesday, April 10, 2012

Blog: Cooperating Mini-Brains Show How Intelligence Evolved

Cooperating Mini-Brains Show How Intelligence Evolved
Live Science (04/10/12) Stephanie Pappas

Trinity College Dublin researchers recently developed computer simulation experiments to determine how human brains evolved intelligence. The researchers created artificial neural networks to serve as mini-brains. The networks were given challenging cooperative tasks and the brains were forced to work together, evolving the virtual equivalent of increased brainpower over generations. "It is the transition to a cooperative group that can lead to maximum selection for intelligence," says Trinity's Luke McNally. The neural networks were programmed to evolve, producing random mutations that can introduce extra nodes into the network. The researchers assigned two games for the networks to play, one that tests how temptation can affect group goals, and one that tests how teamwork can benefit the group. The researchers then created 10 experiments in which 50,000 generations of neural networks played the games. Intelligence was measured by the number of nodes added in each network as the players evolved over time. The researchers found that the networks evolved strategies similar to those seen when humans play the games with other humans. "What this indicates is that in species ancestral to humans, it could have been the transition to more cooperative societies that drove the evolution of our brains," McNally says.
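
A rough sketch of the kind of simulation described (not the Trinity College code): a population of agents with a mutable "node count" plays an iterated prisoner's dilemma, payoff determines reproduction, and rare mutations add nodes, which here unlock the ability to remember and reciprocate. The payoff values and mutation rate are illustrative assumptions.

```python
import random

# Toy sketch (not the Trinity College simulation): agents with a mutable
# "node count" play an iterated prisoner's dilemma; payoff determines how
# many offspring an agent leaves, and rare mutations add nodes.

PAYOFF = {("C", "C"): 3, ("C", "D"): 0, ("D", "C"): 5, ("D", "D"): 1}

class Agent:
    def __init__(self, nodes=1):
        self.nodes = nodes  # stand-in for network size / brainpower

    def move(self, opponent_last):
        # With 2+ nodes an agent can play tit-for-tat; otherwise it defects.
        if self.nodes >= 2:
            return opponent_last if opponent_last is not None else "C"
        return "D"

def play(a, b, rounds=10):
    last_a = last_b = None
    score_a = score_b = 0
    for _ in range(rounds):
        ma, mb = a.move(last_b), b.move(last_a)
        score_a += PAYOFF[(ma, mb)]
        score_b += PAYOFF[(mb, ma)]
        last_a, last_b = ma, mb
    return score_a, score_b

def evolve(pop_size=50, generations=200, mutation_rate=0.01):
    pop = [Agent() for _ in range(pop_size)]
    for _ in range(generations):
        scores = [0.0] * pop_size
        for i in range(pop_size):
            j = random.randrange(pop_size)
            sa, sb = play(pop[i], pop[j])
            scores[i] += sa
            scores[j] += sb
        # Fitness-proportional reproduction with rare node-adding mutations.
        pop = [Agent(random.choices(pop, weights=scores)[0].nodes
                     + (1 if random.random() < mutation_rate else 0))
               for _ in range(pop_size)]
    return sum(a.nodes for a in pop) / pop_size

if __name__ == "__main__":
    print("mean node count after evolution:", evolve())
```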

Monday, April 9, 2012

Blog: Transactional Memory: An Idea Ahead of Its Time

Transactional Memory: An Idea Ahead of Its Time
Brown University (04/09/12) Richard Lewis

Brown University researchers began studying transactional memory, a technology that attempts to seamlessly and concurrently handle shared revisions to information, about 20 years ago. Now those theories have become a reality. Intel recently announced that transactional memory will be included in its mainstream Haswell hardware architecture by next year, and IBM has adopted transactional memory in the Blue Gene/Q supercomputer. The problem transactional memory aims to solve is that core processors were changing in fundamental ways, says Brown professor Maurice Herlihy. Herlihy developed a system of requests and permissions in which operations are begun and logged, but wholesale changes, or transactions, are not made before the system checks that no other thread has proposed changes to the pending transaction as well. If no other changes have been requested, the transaction is consummated; if there is another change request, the transaction is aborted and the threads start anew. Intel says its transactional memory is "hardware [that] can determine dynamically whether threads need to serialize through lock-protected critical sections, and perform serialization only when required."
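
A hedged sketch of the optimistic pattern described (begin, log, validate, then commit or abort). The class and method names are illustrative assumptions, not Intel's or IBM's interface.

```python
import threading

# Toy software-transactional-memory sketch (illustrative names only):
# a transaction reads versioned cells, buffers its writes, and at commit
# time re-checks the versions it read; if another thread has changed
# them, the transaction aborts and is retried.

class Cell:
    def __init__(self, value):
        self.value = value
        self.version = 0

class Transaction:
    def __init__(self):
        self.reads = {}    # cell -> version observed
        self.writes = {}   # cell -> new value

    def read(self, cell):
        if cell in self.writes:
            return self.writes[cell]
        self.reads[cell] = cell.version
        return cell.value

    def write(self, cell, value):
        self.writes[cell] = value

_commit_lock = threading.Lock()

def atomically(body):
    """Run body(tx) until it commits without conflicting with other threads."""
    while True:
        tx = Transaction()
        result = body(tx)
        with _commit_lock:
            if all(cell.version == v for cell, v in tx.reads.items()):
                for cell, value in tx.writes.items():
                    cell.value = value
                    cell.version += 1
                return result
        # Conflict detected: abort and start the transaction anew.

if __name__ == "__main__":
    account = Cell(100)
    def deposit(tx):
        tx.write(account, tx.read(account) + 10)
    threads = [threading.Thread(target=atomically, args=(deposit,)) for _ in range(8)]
    for t in threads: t.start()
    for t in threads: t.join()
    print(account.value)  # 180 regardless of interleaving
```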

Saturday, April 7, 2012

Blog: Bits of Reality

Bits of Reality
Science News (04/07/12) Vol. 181, No. 7, P. 26 Tom Siegfried

Information derived from quantum computing systems could reveal subtle insights about the intersection between mathematics and the physical world. "We hope to be able to verify that these extraordinary computational resources in quantum systems really are part of the way nature behaves," says California Institute of Technology physicist John Preskill. "We could do so by solving a problem that we think is hard classically ... with a quantum computer, where we can easily verify with a classical computer that the quantum computer got the right answer." To solve certain hard problems that standard supercomputers cannot accommodate, such as finding the prime factors of very large numbers, quantum computers must process bits of quantum information. Quantum machines would only be workable for problems that could be posed as an algorithm amenable to the way quantum weirdness can eliminate wrong answers, allowing only the right answer to prevail. In 2011, the Perimeter Institute for Theoretical Physics' Giulio Chiribella and colleagues demonstrated how to derive quantum mechanics from a set of five axioms plus one postulate, all rooted in information theory terms. The foundation of their system is axioms such as causality, the notion that signals from the future cannot impact the present.
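
The verification asymmetry Preskill describes is easy to make concrete: finding the prime factors of a large number is believed to be classically hard, but checking a proposed factorization takes a single multiplication. A minimal sketch, with a deliberately tiny example:

```python
# Checking a factorization is classically easy even though finding one is
# believed to be hard; this kind of asymmetry is what would let a classical
# computer verify a quantum computer's answer.

def verify_factors(n: int, factors) -> bool:
    product = 1
    for f in factors:
        product *= f
    return product == n and all(f > 1 for f in factors)

if __name__ == "__main__":
    # Tiny example for illustration: 15 = 3 * 5.
    print(verify_factors(15, [3, 5]))   # True
    print(verify_factors(15, [3, 7]))   # False
```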

Blog: Berkeley Group Digs In to Challenge of Making Sense of All That Data

Berkeley Group Digs In to Challenge of Making Sense of All That Data
New York Times (04/07/12) Jeanne Carstensen

The U.S. National Science Foundation recently awarded $10 million to the University of California, Berkeley's Algorithms Machines People (AMP) Expedition, a research team that takes an interdisciplinary approach to advancing big data analysis. Researchers at the AMP Expedition, in collaboration with researchers at the University of California, San Francisco, are developing a set of open source tools for big data analysis. "We’ll judge our success by whether we build a new paradigm of data," says AMP Expedition director Michael Franklin. “It’s easier to collect data, and harder to make sense of it.” The grant is part of the Obama administration's "Big Data Research and Development Initiative," which will eventually distribute a total of $200 million. AMP Expedition faculty member Ken Goldberg has developed Opinion Space, a tool for online discussion and brainstorming that uses algorithms and data visualization tools to help gather meaningful ideas from a large number of participants. Goldberg notes that part of their research focus is analyzing how people interact with big data. “We recognize that humans do play an important part in the system,” he says.

Tuesday, April 3, 2012

Blog: Programming Computers to Help Computer Programmers

Programming Computers to Help Computer Programmers
Rice University (04/03/12) Jade Boyd

Computer scientists from Rice University will participate in a project to create intelligent software agents that help people write code faster and with fewer errors. The Rice team will focus on robotic applications and how to verify that synthetic, computer-generated code is safe and effective, as part of the effort to develop automated program-synthesis tools for a variety of uses. "Programming is now done by experts only, and this needs to change if we are to use robots as helpers for humans," says Rice professor Lydia Kavraki. She also stresses that safety is critical. "You can only have robots help humans in a task--any task, whether mundane, dangerous, precise, or expensive--if you can guarantee that the behavior of the robot is going to be the expected one." The U.S. National Science Foundation is providing a $10 million grant to fund the five-year initiative, which is based at the University of Pennsylvania. Computer scientists at Rice and Penn have proposed a grand challenge robotic scenario of providing hospital staff with an automated program-synthesis tool for programming mobile robots to go from room to room, turn off lights, distribute medications, and remove medical waste.
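
A toy illustration of the idea behind program synthesis (not the project's actual tools): enumerate candidate programs over a tiny, made-up instruction set until one agrees with a set of input/output examples supplied by the user.

```python
from itertools import product

# Toy enumerative program synthesis (illustrative only): search over short
# straight-line programs built from a tiny instruction set until one
# matches every input/output example.

OPS = {
    "inc":    lambda x: x + 1,
    "double": lambda x: x * 2,
    "square": lambda x: x * x,
}

def run(program, x):
    for op in program:
        x = OPS[op](x)
    return x

def synthesize(examples, max_len=3):
    for length in range(1, max_len + 1):
        for program in product(OPS, repeat=length):
            if all(run(program, x) == y for x, y in examples):
                return program
    return None

if __name__ == "__main__":
    # Examples consistent with "double, then increment".
    print(synthesize([(1, 3), (2, 5), (5, 11)]))  # ('double', 'inc')
```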

Monday, April 2, 2012

Blog: To Convince People, Come at Them From Different Angles

To Convince People, Come at Them From Different Angles
Cornell Chronicle (04/02/12) Bill Steele

Cornell research on Facebook users' behavior demonstrates that people base decisions on the variety of social contexts from which requests come rather than on the number of requests received. Social scientists previously envisioned the spread of ideas as similar to the spread of disease, but Cornell professor Jon Kleinberg says social contagion seems to be distinct from that model. "Each of us is sitting at a crossroads between the social circles we inhabit," he observes. "When a message comes at you from several directions, it may be more effective." The researchers worked with a database of 54 million email invitations from Facebook users inviting others to join the social network and analyzed the friendship links among inviters. The probability of a person joining increased with the number of different, unconnected social contexts represented. An analysis of the Facebook neighborhoods of 10 million new members seven days after joining identified clumps of friends linked to one another but not as much to people in other clumps. A follow-up check three months later found that people with more diverse clumps among their friends were more likely to be engaged. The researchers suggest that mathematical models of how ideas proliferate across networks may require tweaking to account for neighborhood diversity.
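
The "number of unconnected social contexts" can be made concrete with a small sketch: treat the people who sent invitations as nodes, connect those who are friends with each other, and count connected components. The data and names below are made up for illustration; this is not the Cornell pipeline.

```python
# Illustrative sketch: count the distinct "social contexts" behind a set of
# invitations as connected components of the friendship graph restricted to
# the inviters (union-find; data below is made up).

def count_contexts(inviters, friendships):
    """friendships: set of frozensets {a, b} of inviters who are friends."""
    parent = {p: p for p in inviters}

    def find(x):
        while parent[x] != x:
            parent[x] = parent[parent[x]]  # path halving
            x = parent[x]
        return x

    def union(a, b):
        parent[find(a)] = find(b)

    for pair in friendships:
        a, b = tuple(pair)
        if a in parent and b in parent:
            union(a, b)
    return len({find(p) for p in inviters})

if __name__ == "__main__":
    inviters = ["ana", "ben", "carla", "dev"]
    friendships = {frozenset(("ana", "ben"))}  # ana and ben know each other
    # Three unconnected contexts: a stronger signal than four invitations
    # from one tight-knit group.
    print(count_contexts(inviters, friendships))  # 3
```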

Blog: Self-Sculpting Sand

Self-Sculpting Sand
MIT News (04/02/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers are developing a type of reconfigurable robotic system called smart sand. The individual sand grains pass messages back and forth and selectively attach to each other to form a three-dimensional object. MIT professor Daniela Rus says the biggest challenge in developing the smart sand algorithm is that the individual grains have very few computational resources. The grains first pass messages to each other to determine which have missing neighbors. Those with missing neighbors are either on the perimeter of the pile or the perimeter of the embedded shape. Once the grains surrounding the embedded shape identify themselves, they pass messages to other grains a fixed distance away. When the perimeter of the duplicate is established, the grains outside it can disconnect from their neighbors. The researchers built cubes, or “smart pebbles,” to test their algorithm. The cubes have four faces studded with electropermanent magnets, materials that can be magnetized or demagnetized with a single magnetic pulse. The grains use the magnets to connect to each other, to communicate, and to share power. Each grain also is equipped with a microprocessor that can store 32 kilobytes of code and has two kilobytes of working memory.
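
The first step the grains perform, deciding who sits on a perimeter by checking for missing neighbors, can be sketched on a simple grid. The grid representation below is an assumption for illustration, not MIT's firmware.

```python
# Illustrative sketch of the grains' first step: a grain concludes it is on
# a perimeter if any of its four neighbor positions is empty.

def perimeter_grains(occupied):
    """occupied: set of (x, y) positions that contain a grain."""
    edges = set()
    for (x, y) in occupied:
        neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
        if any(n not in occupied for n in neighbors):
            edges.add((x, y))
    return edges

if __name__ == "__main__":
    # A 4x4 block of grains: the 12 outer grains report a missing neighbor.
    block = {(x, y) for x in range(4) for y in range(4)}
    print(len(perimeter_grains(block)))  # 12
```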

Blog: UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence
University of Massachusetts Amherst (04/02/12) Janet Lathrop

University of Massachusetts Amherst researchers are translating "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment the same way human brains do. The model "is a mathematical formulation of the brain's neural networks with their adaptive abilities," says Amherst computer scientist Hava Siegelmann. When installed in a new environment, the Super-Turing model produces an exponentially greater set of behaviors than a classical computer or the original Turing model. The researchers say the new Super-Turing machine will be flexible, adaptable, and economical. "The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain," Siegelmann says.

Friday, March 30, 2012

Blog: Honeycombs of Magnets Could Lead to New Type of Computer Processing

Honeycombs of Magnets Could Lead to New Type of Computer Processing
Imperial College London (03/30/12) Simon Levey

Imperial College London researchers say they have developed a new material using nano-sized magnets that could lead to unique types of electronic devices with much greater processing capacity than current technologies. The researchers have shown that a honeycomb pattern of nano-sized magnets introduces competition between neighboring magnets and reduces the problems caused by these interactions by 66 percent. The researchers also found that large arrays of these nano-magnets can be used to store computable information. The research suggests that a cluster of many magnetic domains could be able to solve a complex computational problem in a single calculation. "Our philosophy is to harness the magnetic interactions, making them work in our favor," says Imperial College London researcher Will Branford. Previous studies have shown that external magnetic fields can cause the magnetic domain of each bar to change state, which affects the interaction between that bar and its two neighboring bars in the honeycomb. It is this pattern of magnetic states that could represent computer data, according to Branford. "This is something we can take advantage of to compute complex problems because many different outcomes are possible, and we can differentiate between them electronically," he says.

Monday, March 26, 2012

Blog: Robots to Organise Themselves Like a Swarm of Insects

Robots to Organise Themselves Like a Swarm of Insects
The Engineer (United Kingdom) (03/26/12)

A swarm of insects is the inspiration for a warehouse transport system that makes use of autonomous robotic vehicles. Researchers at the Fraunhofer Institute for Material Flow and Logistics (IML) have developed autonomous Multishuttle Moves vehicles to organize themselves like insects. The team is testing 50 shuttles at a replica warehouse. When an order is received, the shuttles communicate with one another via a wireless Internet connection and the closest free vehicle takes over and completes the task. "We rely on agent-based software and use ant algorithms based on the work of [swarm robotics expert] Marco Dorigo," says IML's Thomas Albrecht. The vehicles move around using a hybrid sensor concept based on radio signals, distance and acceleration sensors, and laser sensors to calculate the shortest route to any destination and avoid collisions. Albrecht says the system is more flexible and scalable because it can be easily adapted for smaller or larger areas based on changes in demand. "In the future, transport systems should be able to perform all of these tasks autonomously, from removal from storage at the shelf to delivery to a picking station," says IML professor Michael ten Hompel.
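
The dispatch rule described, in which the closest free vehicle takes over a new order, can be sketched as follows. The shuttle and order representations are assumptions for illustration, not Fraunhofer IML's software.

```python
import math

# Illustrative sketch of the dispatch rule described: when an order
# arrives, the closest currently free shuttle claims it.

def assign_order(order_pos, shuttles):
    """shuttles: list of dicts with a 'pos' (x, y) tuple and a 'busy' flag."""
    free = [s for s in shuttles if not s["busy"]]
    if not free:
        return None
    closest = min(free, key=lambda s: math.dist(s["pos"], order_pos))
    closest["busy"] = True
    return closest

if __name__ == "__main__":
    fleet = [{"id": 1, "pos": (0, 0), "busy": False},
             {"id": 2, "pos": (5, 5), "busy": True},
             {"id": 3, "pos": (2, 1), "busy": False}]
    winner = assign_order((3, 1), fleet)
    print(winner["id"])  # 3, the nearest free shuttle
```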

Wednesday, March 21, 2012

Blog: Computer Model of Spread of Dementia Can Predict Future Disease Patterns Years Before They Occur in a Patient

Computer Model of Spread of Dementia Can Predict Future Disease Patterns Years Before They Occur in a Patient
Cornell News (03/21/12) Richard Pietzak

Weill Cornell Medical College researchers have developed software that tracks the manner in which different forms of dementia spread within a human brain. The model can be used to predict where and when a person's brain will suffer from the spread of toxic proteins, a process that underlies all forms of dementia. The findings could help patients and their families confirm a diagnosis of dementia and prepare in advance for future cognitive declines over time. "Our model, when applied to the baseline magnetic resonance imaging scan of an individual brain, can similarly produce a future map of degeneration in that person over the next few years or decades," says Cornell's Ashish Raj. The computational model validates the idea that dementia is caused by proteins that spread through the brain along networks of neurons. Raj says the program models the same process by which any gas diffuses in air, except that in the case of dementia, the diffusion process occurs along connected neural fiber tracts in the brain. "While the classic patterns of dementia are well known, this is the first model to relate brain network properties to the patterns and explain them in a deterministic and predictive manner," he says.
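
The diffusion-on-a-network idea can be sketched with a graph Laplacian: the protein load x at each brain region evolves as dx/dt = -beta * L x, so a future pattern is obtained by integrating forward from the baseline scan. The connectivity matrix below is a made-up toy, not the Weill Cornell model or patient data.

```python
import numpy as np

# Toy sketch of diffusion along a brain network (illustrative only):
# protein load x evolves as dx/dt = -beta * L @ x, where L is the graph
# Laplacian of a (made-up) fiber-connectivity network.

A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)   # 4-region connectivity
L = np.diag(A.sum(axis=1)) - A              # graph Laplacian

def predict(x0, beta=0.1, t=10.0, steps=1000):
    """Integrate dx/dt = -beta * L x forward from the baseline pattern x0."""
    x = x0.astype(float).copy()
    dt = t / steps
    for _ in range(steps):
        x = x - dt * beta * (L @ x)
    return x

if __name__ == "__main__":
    baseline = np.array([1.0, 0.0, 0.0, 0.0])  # disease seeded in region 0
    print(np.round(predict(baseline), 3))      # load spreads along connections
```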

Thursday, March 15, 2012

Blog: 'Big Data' Emerges as Key Theme at South by Southwest Interactive

'Big Data' Emerges as Key Theme at South by Southwest Interactive
Chronicle of Higher Education (03/15/12) Jeffrey R. Young

Several panels and speakers at this year's South By Southwest Interactive festival discussed the growing ability to use data-mining techniques to analyze big data to shape political campaigns, advertising, and education. For example, panelist and Microsoft researcher Jaron Lanier says companies that rely on selling information about their users' behavior to advertisers should find a way to compensate people for their posts. A panel on education discussed the potential ability of Twitter and Facebook to better connect with students and detect signs that students might be struggling with certain subjects. "We need to be looking at engagement in this new spectrum, and we haven't," says South Dakota State University social-media researcher Greg Heiberger. Some panels examined the role of big data in the latest presidential campaigns. Although recent presidential campaigns have focused on demographic subgroups, future campaigns may design their messages even more narrowly. "They're actually going to try targeting groups of individuals so that political campaigns become about data mining" rather than any kind of broad policy message, says University of Texas at Dallas professor David Parry.

Blog: ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence

ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence
PC Magazine (03/15/12) Michael J. Miller

ACM announced that University of California, Los Angeles professor Judea Pearl is this year's winner of the A.M. Turing Award for his work on artificial intelligence. The award, considered the highest honor in computer science, recognizes Pearl for devising a framework for reasoning with imperfect data that has changed the strategy for real-world problem solving. ACM executive director John White says Pearl was singled out for work that "was instrumental in moving machine-based reasoning from the rules-bound expert systems of the 1980s to a calculus that incorporates uncertainty and probabilistic models." Pearl worked out techniques for attempting to reach the best conclusion, even when there is a level of uncertainty. Internet pioneer Vinton Cerf says Pearl's research "is applicable to an extremely wide range of applications in which only partial information is available to draw upon to reach conclusions." He also says the successful business models of companies that search the Internet owe a debt to Pearl's work. Pearl generated the framework for Bayesian networks, which provides a compact method for representing probability distributions. This framework has played a substantial role in reshaping approaches to machine learning, which currently has a heavy reliance on probabilistic and statistical inference, and which underlies most recognition, fault diagnosis, and machine-translation systems.
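
The compactness Pearl's Bayesian-network framework provides can be shown with a minimal example: a joint distribution over Rain, Sprinkler, and WetGrass is stored as a few small conditional tables rather than a full joint table, and queries are answered by summing out the other variables. The probabilities below are made up for illustration.

```python
# Minimal Bayesian network sketch (made-up probabilities): the joint
# distribution over (Rain, Sprinkler, WetGrass) factors as
# P(R) * P(S | R) * P(W | R, S), so a full joint table is replaced by
# a handful of small conditional tables.

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: {True: 0.01, False: 0.99},    # P(S | R=True)
               False: {True: 0.40, False: 0.60}}   # P(S | R=False)
P_wet = {(True, True): 0.99, (True, False): 0.80,
         (False, True): 0.90, (False, False): 0.01}  # P(W=True | R, S)

def joint(r, s, w):
    pw = P_wet[(r, s)]
    return P_rain[r] * P_sprinkler[r][s] * (pw if w else 1 - pw)

if __name__ == "__main__":
    # P(Rain=True | WetGrass=True), reasoning under uncertainty by
    # summing out the sprinkler variable.
    num = sum(joint(True, s, True) for s in (True, False))
    den = sum(joint(r, s, True) for r in (True, False) for s in (True, False))
    print(round(num / den, 3))
```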

Wednesday, March 14, 2012

Blog: Mario Is Hard, and That's Mathematically Official

Mario Is Hard, and That's Mathematically Official
New Scientist (03/14/12) Jacob Aron

Massachusetts Institute of Technology (MIT) researchers recently analyzed the computational complexity of video games and found that many of them belong to a class of mathematical problems called NP-hard. The implication is that for a given game level, it can be very tough to determine whether it is possible for a player to reach the end. The results suggest that some hard problems could be solved by playing a game. The researchers, led by MIT's Erik Demaine, converted each game into a Boolean satisfiability problem, which asks whether the variables in a collection of logical statements can be chosen to make all the statements true, or whether the statements inevitably contradict each other. For each game, the team built sections of a level that force players to choose one of two paths, which is equivalent to assigning variables in the Boolean satisfiability problem. If those choices permit the completion of a level, that is equivalent to making all of the statements in the Boolean problem true; if they make completion impossible, it is equivalent to a contradiction. Many of the games proved to be NP-hard, which means that deciding whether a player can complete them is at least as difficult as the hardest problems in NP.
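
The Boolean satisfiability problem at the heart of the reduction asks whether some assignment of the variables makes every clause true; a brute-force checker is only a few lines. The clause encoding below is a standard illustration, not the MIT gadget construction.

```python
from itertools import product

# Brute-force Boolean satisfiability check (illustrative): a formula in
# conjunctive normal form is satisfiable if some assignment makes every
# clause true. Positive integers are variables; negative ones are negations.

def satisfiable(clauses, num_vars):
    for bits in product([False, True], repeat=num_vars):
        assign = {i + 1: bits[i] for i in range(num_vars)}
        if all(any(assign[abs(l)] == (l > 0) for l in clause) for clause in clauses):
            return True
    return False

if __name__ == "__main__":
    # (x1 OR x2) AND (NOT x1 OR x2) AND (x1 OR NOT x2): satisfiable (x1 = x2 = True).
    print(satisfiable([[1, 2], [-1, 2], [1, -2]], 2))            # True
    # Adding (NOT x1 OR NOT x2) makes the formula a contradiction.
    print(satisfiable([[1, 2], [-1, 2], [1, -2], [-1, -2]], 2))  # False
```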

Blog: Researchers Send 'Wireless' Message Using Elusive Particles

Researchers Send 'Wireless' Message Using Elusive Particles
University of Rochester News (03/14/12) Peter Iglinski

Researchers at the University of Rochester and North Carolina State University (NCSU) say they have sent a message using a beam of neutrinos. "Using neutrinos, it would be possible to communicate between any two points on Earth without using satellites or cables," says NCSU professor Dan Stancil. Because neutrinos can penetrate almost any material they encounter, a neutrino-based system could carry messages through obstacles that block conventional signals. The researchers used one of the world's most powerful particle accelerators and MINERvA, a multi-ton detector located about 100 meters underground. The researchers note that significant work still needs to be done before the technology can be incorporated into a readily usable form. The message was translated into binary code, with the 1's corresponding to a group of neutrinos being fired and the 0's corresponding to no neutrinos being fired. The neutrinos were fired in large groups because they are so evasive that only about one in 10 billion is detected. "Neutrinos have been an amazing tool to help us learn about the workings of both the nucleus and the universe, but neutrino communication has a long way to go before it will be as effective," says MINERvA's Deborah Harris.
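
The encoding described, with a 1 sent as a group of neutrino pulses, a 0 as silence, and heavy repetition because so few neutrinos are detected, can be sketched as follows. The group size, framing, and majority-vote decoding here are illustrative assumptions, not the experiment's actual scheme.

```python
# Illustrative sketch of the binary encoding described: each character
# becomes bits, and every bit is repeated many times so a majority vote
# can recover it despite losses. (Repetition count and framing are
# assumptions for illustration.)

REPEATS = 5  # stand-in for the large redundancy the experiment needed

def encode(message: str):
    bits = []
    for ch in message:
        for i in range(7, -1, -1):
            bits.extend([(ord(ch) >> i) & 1] * REPEATS)
    return bits

def decode(bits):
    chars = []
    for i in range(0, len(bits), 8 * REPEATS):
        byte = 0
        for j in range(8):
            group = bits[i + j * REPEATS:i + (j + 1) * REPEATS]
            bit = 1 if sum(group) > REPEATS // 2 else 0  # majority vote
            byte = (byte << 1) | bit
        chars.append(chr(byte))
    return "".join(chars)

if __name__ == "__main__":
    sent = encode("neutrino")  # the word transmitted in the experiment
    print(decode(sent))        # "neutrino"
```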

Blog: Hopkins Researchers Aim to Uncover Which Mobile Health Applications Work

Hopkins Researchers Aim to Uncover Which Mobile Health Applications Work
Baltimore Sun (03/14/12) Meredith Cohn

Johns Hopkins University has 49 mobile health studies underway around the world as part of its Global mHealth Initiative. The initiative aims to evaluate which mobile strategies can aid doctors, community health workers, and consumers in ways equal to traditional methods. Pew Internet & American Life Project's Susannah Fox notes that more than 80 percent of Internet users have looked online for health information. Many of the 40,000 applications already available have practical purposes, such as helping patients adhere to drug regimens, helping people change harmful behaviors, and aiding in weight loss through texts about specific goals and behaviors. There also are pill bottles that send text messages when a person forgets to take their medicine. Meanwhile, mHealth researchers have developed software to help educate medical students, doctors, and other workers about how to care for burn victims. The researchers also have developed apps to train health workers caring for those with HIV and AIDS and to screen and support victims of domestic abuse. "What they all have in common is they increase how often individuals think about their health," says mHealth director Alain B. Labrique. "There is evidence that suggests some apps can have an impact."
