Wednesday, April 25, 2012

Blog: Algorithmic Incentives

Algorithmic Incentives
MIT News (04/25/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) professor Silvio Micali and graduate student Pablo Azar have developed a type of mathematical game called a rational proof, a variation on interactive proofs that adds an economic component. Rational proofs could have implications for cryptography, but they also could suggest new ways to structure incentives in contracts. Research on both interactive proofs and rational proofs falls under the designation of computational-complexity theory, which classifies computational problems according to how hard they are to solve. Whereas interactive proofs can require millions of rounds of questioning, rational proofs let the verifier settle matters in a single round. With rational proofs, "we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn't think of in the past," says Weizmann Institute of Science professor Moni Naor. Rational-proof systems that describe simple interactions also could have applications in crowdsourcing, Micali says. He notes that research on rational proofs is just getting started. “Right now, we’ve developed it for problems that are very, very hard," Micali says. "But how about problems that are very, very simple?”
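
The summary doesn't spell out the mechanism, but the economic component of a rational proof is typically a proper scoring rule: the verifier pays the prover in a way that makes truthful reporting the reward-maximizing strategy. Below is a minimal Python sketch of that incentive using the Brier score; the payoff function and probabilities are illustrative, not the paper's actual protocol.

```python
# Minimal sketch of the incentive behind rational proofs: pay the prover
# with a proper scoring rule (here, the Brier score), so that reporting
# its true belief maximizes its expected reward. Illustrative only; the
# actual Azar-Micali protocols are more involved.

def brier_payment(reported_p, outcome):
    """Reward for reporting probability `reported_p` that outcome == 1."""
    return 1.0 - (outcome - reported_p) ** 2

def expected_payment(reported_p, true_p):
    """Prover's expected reward if the true probability is `true_p`."""
    return (true_p * brier_payment(reported_p, 1)
            + (1 - true_p) * brier_payment(reported_p, 0))

true_p = 0.7
best = max((expected_payment(r / 100, true_p), r / 100) for r in range(101))
print(best)  # (0.79, 0.7): the reward is maximized by reporting the truth
```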

Wednesday, April 18, 2012

Blog: Finding ET May Require Giant Robotic Leap

Finding ET May Require Giant Robotic Leap
Penn State Live (04/18/12) Andrea Elyse Messer

Autonomous, self-replicating robots, known as exobots, are the best way to explore the universe, find and identify extraterrestrial life, and clean up space debris, says Pennsylvania State University professor John D. Mathews. "The basic premise is that human space exploration must be highly efficient, cost effective, and autonomous, as placing humans beyond low Earth orbit is fraught with political, economic, and technical difficulties," Mathews says. Developing and deploying self-replicating robots and advanced communications systems is the only way humans can effectively explore the asteroid belt and beyond, he maintains. The initial robots could be manufactured on the moon, taking advantage of the moon's resources and low gravity, both of which would reduce costs. The robots must be able to identify their exact location and the location of other exobots, which would enable them to communicate using an infrared laser beam carrying data. Initially, the exobots would clear existing debris and monitor the more than 1,200 near-Earth asteroids that could be dangerous. In the future, Mathews says, a network of exobots could spread throughout the solar system and into the galaxy, using the resources they find there to continue their mission.

Blog: New Julia Language Seeks to Be the C for Scientists

New Julia Language Seeks to Be the C for Scientists
InfoWorld (04/18/12) Paul Krill

Massachusetts Institute of Technology (MIT) researchers have developed Julia, a programming language designed for building technical applications. Julia already has been used for image analysis and linear algebra research. MIT developer Stefan Karpinski notes that Julia is a dynamic language, which makes it easier to program because it has a very simple programming model. "One of our goals explicitly is to have sufficiently good performance in Julia that you'd never have to drop down into C," Karpinski adds. Julia also is designed for cloud computing and parallelism, according to the Julia Web page. The programming language provides a simpler model for building large parallel applications via a global distributed address space, Karpinski says. Julia also could be good at handling predictive analysis, modeling problems, and graph analysis problems. "Julia's LLVM-based just-in-time compiler, combined with the language's design, allows it to approach and often match the performance of C and C++," according to the Julia Web page.

Monday, April 16, 2012

Blog: Fast Data hits the Big Data fast lane

Fast Data hits the Big Data fast lane

By Andrew Brust | April 16, 2012, 6:00am PDT
Summary: Fast Data, used in large enterprises for highly specialized needs, has become more affordable and available to the mainstream. Just when corporations absolutely need it.
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a principal analyst at Ovum.

By Tony Baer

Of the three “V’s” of Big Data – volume, variety, velocity (we’d add “Value” as the fourth) – velocity has been the unsung ‘V.’ With the spotlight on Hadoop, the popular image of Big Data is large petabyte data stores of unstructured data (which are the first two V’s). While Big Data has been thought of as large stores of data at rest, it can also be about data in motion.
“Fast Data” refers to processes that require lower latencies than would otherwise be possible with optimized disk-based storage. Fast Data is not a single technology, but a spectrum of approaches that process data that might or might not be stored. It could encompass event processing, in-memory databases, or hybrid data stores that optimize cache with disk.
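
As a concrete (if toy) example of the low-latency end of that spectrum, here is a Python sketch of windowed event processing done entirely in memory; the class name and the 60-second window are illustrative, not any particular product's API.

```python
# Toy sketch of one point on the Fast Data spectrum: windowed event
# processing done entirely in memory, with nothing written to disk.
from collections import deque
import time

class SlidingWindowAverage:
    """Running average of event values from the last `window_secs` seconds."""
    def __init__(self, window_secs):
        self.window_secs = window_secs
        self.events = deque()   # (timestamp, value) pairs, oldest first
        self.total = 0.0

    def add(self, value, now=None):
        now = time.time() if now is None else now
        self.events.append((now, value))
        self.total += value
        # Evict events that have fallen out of the window; all state
        # lives in memory, which is what keeps latency low.
        while self.events and self.events[0][0] < now - self.window_secs:
            _, old = self.events.popleft()
            self.total -= old
        return self.total / len(self.events)

w = SlidingWindowAverage(window_secs=60.0)
for t, price in [(0.0, 101.2), (30.0, 101.4), (90.0, 100.9)]:
    print(w.add(price, now=t))   # the third call evicts the reading from t=0
```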

Friday, April 13, 2012

Blog: Beyond Turing's Machines

Beyond Turing's Machines
Science (04/13/12) Vol. 336, No. 6078, P. 163 Andrew Hodges

Alan Turing's most profound achievement is arguably the principle of a universal machine that makes logic rather than arithmetic the computer's driving force, writes the University of Oxford's Andrew Hodges. Turing also defined the concept of computability, and suggested that mathematical steps that do not follow rules, and are thus not computable, could be identified with mental intuition. His 1950 treatise presented a basic argument that if the brain's action is computable, then it can be deployed on a computer or universal machine. Turing later suggested that modeling of the human brain might be impossible because of the nature of quantum mechanics, and the advent of quantum computing has not changed the scope of what he defined as computable. Many thought-experiment models investigate the implications of going beyond the constraints of the computable, and some require that machine elements operate with unlimited speed or permit unrestricted accuracy of measurement. Others more deeply explore the physical world's nature, with a focus on how mental operations relate to the physical brain and on the need to rethink quantum mechanics if uncomputable physics is basic to physical law. Hodges says this way of thinking is part of Turing's legacy even though it superficially runs counter to his vision.

Tuesday, April 10, 2012

Blog: Cooperating Mini-Brains Show How Intelligence Evolved

Cooperating Mini-Brains Show How Intelligence Evolved
Live Science (04/10/12) Stephanie Pappas

Trinity College Dublin researchers recently developed computer simulation experiments to determine how human brains evolved intelligence. The researchers created artificial neural networks to serve as mini-brains. The networks were given challenging cooperative tasks and the brains were forced to work together, evolving the virtual equivalent of increased brainpower over generations. "It is the transition to a cooperative group that can lead to maximum selection for intelligence," says Trinity's Luke McNally. The neural networks were programmed to evolve, producing random mutations that can introduce extra nodes into the network. The researchers assigned two games for the networks to play, one that tests how temptation can affect group goals, and one that tests how teamwork can benefit the group. The researchers then created 10 experiments in which 50,000 generations of neural networks played the games. Intelligence was measured by the number of nodes added in each network as the players evolved over time. The researchers found that the networks evolved strategies similar to those seen when humans play the games with other humans. "What this indicates is that in species ancestral to humans, it could have been the transition to more cooperative societies that drove the evolution of our brains," McNally says.
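
The article doesn't include the researchers' code, but the shape of the experiment can be sketched. In the toy Python version below, agents carry a tiny neural controller, a rare mutation adds a node, and selection is driven by payoffs from a repeated cooperation game; the Prisoner's-Dilemma payoffs, mutation rates, and one-input "network" are all simplifying assumptions.

```python
# Highly simplified sketch of the experiment's shape (not the study's code):
# agents carry a tiny neural net, mutation can add hidden nodes, and
# selection is driven by payoffs from a repeated cooperation game.
import random

PAYOFF = {('C','C'): 3, ('C','D'): 0, ('D','C'): 5, ('D','D'): 1}  # assumed values

class Agent:
    def __init__(self, weights=None):
        self.weights = weights or [random.uniform(-1, 1)]  # one weight per node

    def act(self, opponent_cooperated):
        drive = sum(w * opponent_cooperated for w in self.weights)
        return 'C' if drive > 0 else 'D'

    def mutate(self):
        child = Agent(list(self.weights))
        if random.random() < 0.1:                 # rare structural mutation: add a node
            child.weights.append(random.uniform(-1, 1))
        i = random.randrange(len(child.weights))  # common weight mutation
        child.weights[i] += random.gauss(0, 0.3)
        return child

def play(a, b, rounds=10):
    sa = sb = 0
    la, lb = 1, 1                                 # both start assuming cooperation
    for _ in range(rounds):
        ma, mb = a.act(lb), b.act(la)
        sa += PAYOFF[(ma, mb)]
        sb += PAYOFF[(mb, ma)]
        la, lb = (ma == 'C'), (mb == 'C')
    return sa, sb

pop = [Agent() for _ in range(50)]
for gen in range(200):
    random.shuffle(pop)
    scores = []
    for a, b in zip(pop[::2], pop[1::2]):         # pair agents off to play
        sa, sb = play(a, b)
        scores += [(sa, a), (sb, b)]
    scores.sort(key=lambda s: s[0], reverse=True)
    survivors = [ag for _, ag in scores[:25]]     # selection on game payoff
    pop = survivors + [random.choice(survivors).mutate() for _ in range(25)]

print("mean nodes:", sum(len(a.weights) for a in pop) / len(pop))
```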

Monday, April 9, 2012

Blog: Transactional Memory: An Idea Ahead of Its Time

Transactional Memory: An Idea Ahead of Its Time
Brown University (04/09/12) Richard Lewis

About 20 years ago, Brown University researchers were studying theoretical transactional memory, a technology that attempts to seamlessly and concurrently handle shared revisions to information. Now those theories have become a reality. Intel recently announced that transactional memory will be included in its mainstream Haswell hardware architecture by next year, and IBM has adopted transactional memory in the Blue Gene/Q supercomputer. The problem that transactional memory aimed to solve is that core processors were changing in fundamental ways, says Brown professor Maurice Herlihy. Herlihy developed a system of requests and permissions in which operations are begun and logged, but wholesale changes, or transactions, are not made before the system checks to be sure no other thread has proposed changes to the pending transaction as well. If no other changes have been requested, the transaction is consummated, but if there is another change request, the transaction is aborted and the threads start anew. Intel says its transactional memory is "hardware [that] can determine dynamically whether threads need to serialize through lock-protected critical sections, and perform serialization only when required."
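
A minimal sketch of that optimistic log-validate-commit cycle, written in Python for clarity; real transactional memory does this in hardware or in the language runtime across genuinely concurrent threads, and the class names here are invented.

```python
# Minimal sketch of the optimistic read-log/validate/commit idea behind
# transactional memory (illustrative; real systems do this in hardware
# or in the runtime, across threads).

class Conflict(Exception):
    pass

class TVar:
    """A transactional cell: a value plus a version counter."""
    def __init__(self, value):
        self.value, self.version = value, 0

class Transaction:
    def __init__(self):
        self.read_log = {}    # TVar -> version seen when first read
        self.write_log = {}   # TVar -> pending new value

    def read(self, tv):
        if tv in self.write_log:
            return self.write_log[tv]
        self.read_log.setdefault(tv, tv.version)
        return tv.value

    def write(self, tv, value):
        self.write_log[tv] = value

    def commit(self):
        # Validate: abort if anything we read changed under us.
        for tv, seen in self.read_log.items():
            if tv.version != seen:
                raise Conflict("read set changed; transaction aborted")
        for tv, value in self.write_log.items():
            tv.value, tv.version = value, tv.version + 1

balance = TVar(100)
t = Transaction()
t.write(balance, t.read(balance) + 50)
balance.version += 1          # simulate a concurrent update by another thread
try:
    t.commit()
except Conflict as e:
    print(e)                  # the transaction aborts and would be retried
```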

Saturday, April 7, 2012

Blog: Bits of Reality

Bits of Reality
Science News (04/07/12) Vol. 181, No. 7, P. 26 Tom Siegfried

Information derived from quantum computing systems could reveal subtle insights about the intersection between mathematics and the physical world. "We hope to be able to verify that these extraordinary computational resources in quantum systems really are part of the way nature behaves," says California Institute of Technology physicist John Preskill. "We could do so by solving a problem that we think is hard classically ... with a quantum computer, where we can easily verify with a classical computer that the quantum computer got the right answer." To solve certain hard problems that standard supercomputers cannot accommodate, such as finding the prime factors of very large numbers, quantum computers must process quantum bits of information. Quantum machines would only be workable for problems that could be posed as an algorithm amenable to the way quantum weirdness can eliminate wrong answers, allowing only the right answer to prevail. In 2011, the Perimeter Institute for Theoretical Physics' Giulio Chiribella and colleagues demonstrated how to derive quantum mechanics from a set of five axioms plus one postulate, all rooted in information theory terms. The foundation of their system is axioms such as causality, the notion that signals from the future cannot impact the present.
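
Preskill's verification idea rests on a classic asymmetry, easy to show in a few lines of Python: checking a proposed factorization takes one multiplication, while finding the factors classically (here by trial division) scales badly as the numbers grow.

```python
# The asymmetry Preskill describes: checking a proposed factorization is
# classically easy, even when finding it is hard.
n = 2021
p, q = 43, 47                 # answer a quantum (or lucky) machine might return
print(p * q == n)             # verification: one multiplication -> True

# Finding the factors classically: trial division, which becomes hopeless
# as the numbers grow to hundreds of digits.
def factor(n):
    d = 2
    while d * d <= n:
        if n % d == 0:
            return d, n // d
        d += 1
    return None

print(factor(2021))           # (43, 47)
```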

Blog: Berkeley Group Digs In to Challenge of Making Sense of All That Data

Berkeley Group Digs In to Challenge of Making Sense of All That Data
New York Times (04/07/12) Jeanne Carstensen

The U.S. National Science Foundation recently awarded $10 million to the University of California, Berkeley's Algorithms Machines People (AMP) Expedition, a research team that takes an interdisciplinary approach to advancing big data analysis. Researchers at the AMP Expedition, in collaboration with researchers at the University of California, San Francisco, are developing a set of open source tools for big data analysis. "We’ll judge our success by whether we build a new paradigm of data," says AMP Expedition director Michael Franklin. “It’s easier to collect data, and harder to make sense of it.” The grant is part of the Obama administration's "Big Data Research and Development Initiative," which will eventually distribute a total of $200 million. AMP Expedition faculty member Ken Goldberg has developed Opinion Space, a tool for online discussion and brainstorming that uses algorithms and data visualization tools to help gather meaningful ideas from a large number of participants. Goldberg notes that part of their research focus is analyzing how people interact with big data. “We recognize that humans do play an important part in the system,” he says.

Tuesday, April 3, 2012

Blog: Programming Computers to Help Computer Programmers

Programming Computers to Help Computer Programmers
Rice University (04/03/12) Jade Boyd

Computer scientists from Rice University will participate in a project to create intelligent software agents that help people write code faster and with fewer errors. The Rice team will focus on robotic applications and how to verify that synthetic, computer-generated code is safe and effective, as part of the effort to develop automated program-synthesis tools for a variety of uses. "Programming is now done by experts only, and this needs to change if we are to use robots as helpers for humans," says Rice professor Lydia Kavraki. She also stresses that safety is critical. "You can only have robots help humans in a task--any task, whether mundane, dangerous, precise, or expensive--if you can guarantee that the behavior of the robot is going to be the expected one." The U.S. National Science Foundation is providing a $10 million grant to fund the five-year initiative, which is based at the University of Pennsylvania. Computer scientists at Rice and Penn have proposed a grand challenge robotic scenario of providing hospital staff with an automated program-synthesis tool for programming mobile robots to go from room to room, turn off lights, distribute medications, and remove medical waste.

Monday, April 2, 2012

Blog: To Convince People, Come at Them From Different Angles

To Convince People, Come at Them From Different Angles
Cornell Chronicle (04/02/12) Bill Steele

Cornell research on Facebook users' behavior demonstrates that people base decisions on the variety of social contexts rather than on the number of requests received. Social scientists previously envisioned the spread of ideas as similar to the spread of disease, but Cornell professor Jon Kleinberg says social contagion seems to be distinct from that model. "Each of us is sitting at a crossroads between the social circles we inhabit," he observes. "When a message comes at you from several directions, it may be more effective." The researchers worked with a database of 54 million email invitations from Facebook users inviting others to join the social network and analyzed the friendship links among inviters. The probability of a person joining increased with the number of different, unconnected social contexts represented. An analysis of the Facebook neighborhoods of 10 million new members seven days after joining identified clumps of friends linked to one another but not as much to people in other clumps. A follow-up check three months later found that people with more diverse clumps among their friends were more likely to be engaged. The researchers suggest that mathematical models of how ideas spread across networks may need tweaking to account for neighborhood diversity.
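
The key quantity in the study is not how many friends invited you but how many distinct social contexts they represent, which can be counted as connected components of the friendship graph restricted to the inviters. A small Python sketch, with made-up names and edges:

```python
# Sketch of the study's key measurement: the number of distinct social
# contexts among the people who contacted you, counted as connected
# components of the friendship graph restricted to those inviters.
from collections import defaultdict

def social_contexts(inviters, friendships):
    graph = defaultdict(set)
    for a, b in friendships:
        if a in inviters and b in inviters:
            graph[a].add(b)
            graph[b].add(a)
    seen, components = set(), 0
    for person in inviters:
        if person in seen:
            continue
        components += 1                      # a new, unconnected context
        stack = [person]
        while stack:                         # flood-fill one component
            cur = stack.pop()
            if cur in seen:
                continue
            seen.add(cur)
            stack.extend(graph[cur] - seen)
    return components

inviters = {"ann", "bo", "cat", "dev"}
friendships = [("ann", "bo"), ("cat", "dev")]   # two clumps of friends
print(social_contexts(inviters, friendships))   # 2 distinct contexts
```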

Blog: Self-Sculpting Sand

Self-Sculpting Sand
MIT News (04/02/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers are developing a type of reconfigurable robotic system called smart sand. The individual sand grains pass messages back and forth and selectively attach to each other to form a three-dimensional object. MIT professor Daniela Rus says the biggest challenge in developing the smart sand algorithm is that the individual grains have very few computational resources. The grains first pass messages to each other to determine which have missing neighbors. Those with missing neighbors are either on the perimeter of the pile or the perimeter of the embedded shape. Once the grains surrounding the embedded shape identify themselves, they pass messages to other grains a fixed distance away. When the perimeter of the duplicate is established, the grains outside it can disconnect from their neighbors. The researchers built cubes, or “smart pebbles,” to test their algorithm. The cubes have four faces studded with electropermanent magnets, materials that can be magnetized or demagnetized with a single magnetic pulse. The grains use the magnets to connect to each other, to communicate, and to share power. Each grain also is equipped with a microprocessor that can store 32 kilobytes of code and has two kilobytes of working memory.
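
The first step the article describes, detecting missing neighbors, is a purely local test, which is why it suits grains with almost no computational resources. A Python sketch on a 2-D grid standing in for the pile (the grid and the void are invented for illustration):

```python
# Sketch of the first step the grains perform: deciding locally whether a
# neighbor is missing, which marks a grain as being on a perimeter.
# Each grain inspects only its four sides.
occupied = {(x, y) for x in range(4) for y in range(4)} - {(1, 1), (2, 1)}
# (1,1) and (2,1) form the void left by the embedded shape

def on_perimeter(cell, occupied):
    x, y = cell
    neighbors = [(x + 1, y), (x - 1, y), (x, y + 1), (x, y - 1)]
    return any(n not in occupied for n in neighbors)

perimeter = sorted(c for c in occupied if on_perimeter(c, occupied))
print(perimeter)   # grains bordering the pile's edge or the embedded shape
```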

Blog: UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence

UMass Amherst Computer Scientist Leads the Way to the Next Revolution in Artificial Intelligence
University of Massachusetts Amherst (04/02/12) Janet Lathrop

University of Massachusetts Amherst researchers are translating "Super-Turing" computation into an adaptable computational system that learns and evolves, using input from the environment the same way human brains do. The model "is a mathematical formulation of the brain’s neural networks with their adaptive abilities," says Amherst computer scientist Hava Siegelmann. When installed in a new environment, the Super-Turing model produces an exponentially greater set of behaviors than a classical computer or the original Turing model. The researchers say the new Super-Turing machine will be flexible, adaptable, and economical. "The Super-Turing framework allows a stimulus to actually change the computer at each computational step, behaving in a way much closer to that of the constantly adapting and evolving brain," Siegelmann says.

Friday, March 30, 2012

Blog: Honeycombs of Magnets Could Lead to New Type of Computer Processing

Honeycombs of Magnets Could Lead to New Type of Computer Processing
Imperial College London (03/30/12) Simon Levey

Imperial College London researchers say they have developed a new material using nano-sized magnets that could lead to unique types of electronic devices with much greater processing capacity than current technologies. The researchers have shown that a honeycomb pattern of nano-sized magnets introduces competition between neighboring magnets and reduces the problems caused by these interactions by 66 percent. The researchers also found that large arrays of these nano-magnets can be used to store computable information. The research suggests that a cluster of many magnetic domains could be able to solve a complex computational problem in a single calculation. "Our philosophy is to harness the magnetic interactions, making them work in our favor," says Imperial College London researcher Will Branford. Previous studies have shown that external magnetic fields can cause the magnetic domain of each bar to change state, which affects the interaction between that bar and its two neighboring bars in the honeycomb. It is this pattern of magnetic states that could represent computer data, according to Branford. "This is something we can take advantage of to compute complex problems because many different outcomes are possible, and we can differentiate between them electronically," he says.

Blog: Engineers Rebuild HTTP as a Faster Web Foundation

Engineers Rebuild HTTP as a Faster Web Foundation
CNet (03/30/12) Stephen Shankland

At the recent meeting of the Internet Engineering Task Force, the working group overseeing the Hypertext Transfer Protocol (HTTP) formally opened a discussion about how to make the technology faster. The discussion included Google's SPDY technology and Microsoft's HTTP Speed+Mobility technology. Google's proposal makes encryption mandatory, while Microsoft's makes it optional. Despite this and other subtle differences, there are many similarities between the two systems. "There's a lot of overlap [because] there's a lot of agreement about what needs to be fixed," says Greenbytes' Julian Reschke. SPDY already is built into Google Chrome and Amazon Silk, and Firefox is planning on adopting it soon. In addition, Google, Amazon, and Twitter are using SPDY on their servers. "If we do choose SPDY as a starting point, that doesn't mean it won't change," says HTTP Working Group chairman Mark Nottingham. SPDY's technology is based on sending multiple streams of data over a single network connection. SPDY also can assign high or low priorities to Web page resources being requested from a server. One difference between the Google and Microsoft proposals is in syntax, but SPDY developers are flexible on the choice of compression technology, says SPDY co-creator Mike Belshe.
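
The core idea in SPDY is multiplexing: tag each chunk of data with a stream ID and a priority so that many requests can share one connection. A Python sketch of that framing follows; the 7-byte header layout here is invented for illustration and is not the real SPDY wire format.

```python
# Sketch of multiplexing many logical streams over one connection by
# tagging each chunk with a stream ID and a priority. The frame layout
# is invented for illustration; it is not the actual SPDY format.
import struct

def make_frame(stream_id, priority, payload):
    header = struct.pack("!IBH", stream_id, priority, len(payload))
    return header + payload

def parse_frames(data):
    offset = 0
    while offset < len(data):
        stream_id, priority, length = struct.unpack_from("!IBH", data, offset)
        offset += 7                       # 4 + 1 + 2 header bytes
        yield stream_id, priority, data[offset:offset + length]
        offset += length

wire = make_frame(1, 0, b"GET /style.css") + make_frame(3, 7, b"GET /banner.png")
for sid, prio, payload in parse_frames(wire):
    print(sid, prio, payload)             # both requests share one connection
```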

Blog: Sanjeev Arora Named Winner of 2011 ACM-Infosys Award

Sanjeev Arora Named Winner of 2011 ACM-Infosys Award
CCC Blog (03/30/12) Erwin Gianchandani

Princeton University professor Sanjeev Arora has received the 2011 ACM-Infosys Foundation Award in Computing Sciences for his contributions to computational complexity, algorithms, and optimization. "Arora’s research revolutionized the approach to essentially unsolvable problems that have long bedeviled the computing field, the so-called NP-complete problems," according to an ACM-Infosys press release. Arora is an ACM Fellow and won the Gödel Prize in 2001 and 2010, as well as the ACM Doctoral Dissertation Award in 1995. Arora also is the founding director of Princeton's Center for Computational Intractability, which addresses the phenomenon that many problems seem inherently impossible to solve on current computational models. "With his new tools and techniques, Arora has developed a fundamentally new way of thinking about how to solve problems,” says ACM President Alain Chesnais. “In particular, his work on the PCP theorem is considered the most important development in computational complexity theory in the last 30 years. He also perceived the practical applications of his work, which has moved computational theory into the realm of real world uses.” The ACM-Infosys Foundation Award recognizes personal contributions by young scientists and system developers to a contemporary innovation and includes a $175,000 prize.

Wednesday, March 28, 2012

Blog: Google Launches Go Programming Language 1.0

Google Launches Go Programming Language 1.0
eWeek (03/28/12) Darryl K. Taft

Google has released version 1.0 of its Go programming language, which was initially introduced as an experimental language in 2009. Google has described Go as an attempt to combine the development speed of working in a dynamic language such as Python with the performance and safety of a compiled language such as C or C++. "We're announcing Go version 1, or Go 1 for short, which defines a language and a set of core libraries to provide a stable foundation for creating reliable products, projects, and publications," says Google's Andrew Gerrand. He notes that Go 1 is the first release of Go to be available in supported binary distributions, for Linux, FreeBSD, Mac OS X, and Windows. Stability for users was the driving motivation for Go 1, and much of the work needed to bring programs up to the Go 1 standard can be automated with the go fix tool. A complete list of changes to the language and the standard library, documented in the Go 1 release notes, will be an essential reference for programmers who are migrating code from earlier versions of Go. There also is a new release of the Google App Engine SDK.

Tuesday, March 27, 2012

Blog: Algorithm Spells the End for Professional Musical Instrument Tuners

Algorithm Spells the End for Professional Musical Instrument Tuners
Technology Review (03/27/12)

University of Würzburg researcher Haye Hinrichsen says he has developed an algorithm that makes it possible for electronic tuners to match the performance of the best human tuners. Hinrichsen's algorithm involves a process known as entropy minimization. First, Hinrichsen tunes to equal temperament and then divides the audio spectrum with a resolution that matches the human ear. The method then measures the entropy in the system, applies a small random change to the frequency of a note, and measures the entropy again, keeping the change only if the entropy has decreased. Hinrichsen says the algorithm's results are comparable to the work of a professional tuner. He notes that the software can be added to the features of relatively inexpensive electronic tuners. "The implementation of the method is very easy," Hinrichsen says.
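
A toy Python version of that loop, assuming a simple harmonic model of each string's spectrum; the bin count, harmonic weights, and inharmonicity constant are stand-ins, not Hinrichsen's values.

```python
# Toy entropy-minimization tuner: bin the combined spectrum, then accept
# random tweaks to a note's frequency only when they lower the entropy.
import math, random

def entropy(power):
    total = sum(power)
    probs = [p / total for p in power if p > 0]
    return -sum(p * math.log(p) for p in probs)

def spectrum_entropy(freqs, bins=1000, fmax=4000.0, harmonics=5):
    """Entropy of the combined power spectrum of all strings."""
    power = [0.0] * bins
    for f in freqs:
        for h in range(1, harmonics + 1):
            fh = f * h * (1 + 0.0002 * h * h)   # toy inharmonicity
            if fh < fmax:
                power[int(fh / fmax * bins)] += 1.0 / h
    return entropy(power)

freqs = [220.0 * 2 ** (i / 12) for i in range(13)]  # start from equal temperament
best = spectrum_entropy(freqs)
for step in range(3000):
    i = random.randrange(len(freqs))
    saved = freqs[i]
    freqs[i] *= 1 + random.gauss(0, 0.0005)   # small random detune of one note
    e = spectrum_entropy(freqs)
    if e < best:
        best = e                              # keep changes that lower entropy
    else:
        freqs[i] = saved                      # otherwise revert
print([round(f, 2) for f in freqs])           # a slightly stretched, entropy-tuned scale
```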

Blog: Google Working on Advanced Web Engineering

Google Working on Advanced Web Engineering
InfoWorld (03/27/12) Joab Jackson

Google is developing several advanced programming technologies to ease complex Web application development. "We're getting to the place where the Web is turning into a runtime integration platform for real components," says Google researcher Alex Russell. He says one major shortcoming of the Web is that technologies do not have a common component model, which slows code testing and reuse. Google wants to introduce low-level control elements without making the Web stack more confusing for novices. Google's efforts include creating a unified component model, adding classes to JavaScript, and creating a new language for Web applications. By developing a unified component model for Web technologies, Google is setting the stage for developers to "create new instances of an element and do things with it," Russell says. Google engineers also are developing a proposal to add classes to the next version of JavaScript. "We're getting to the place where we're adding shared language for things we're already doing in the platform itself," Russell says. Google also is developing a new language called Dart, which aims to provide an easy way to create small Web applications while providing the support for large, complex applications as well, says Google's Dan Rubel.

Monday, March 26, 2012

Blog: Robots to Organise Themselves Like a Swarm of Insects

Robots to Organise Themselves Like a Swarm of Insects
The Engineer (United Kingdom) (03/26/12)

A swarm of insects is the inspiration for a warehouse transport system that makes use of autonomous robotic vehicles. Researchers at the Fraunhofer Institute for Material Flow and Logistics (IML) have developed autonomous Multishuttle Moves vehicles to organize themselves like insects. The team is testing 50 shuttles at a replica warehouse. When an order is received, the shuttles communicate with one another via a wireless Internet connection and the closest free vehicle takes over and completes the task. "We rely on agent-based software and use ant algorithms based on the work of [swarm robotics expert] Marco Dorigo," says IML's Thomas Albrecht. The vehicles move around using a hybrid sensor concept based on radio signals, distance and acceleration sensors, and laser sensors to calculate the shortest route to any destination and avoid collisions. Albrecht says the system is more flexible and scalable because it can be easily adapted for smaller or larger areas based on changes in demand. "In the future, transport systems should be able to perform all of these tasks autonomously, from removal from storage at the shelf to delivery to a picking station," says IML professor Michael ten Hompel.
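
The dispatch rule described, the closest free vehicle claims the task, is simple to sketch in Python; straight-line distance stands in for the routed distances the real system would compute, and the shuttle IDs are invented.

```python
# Sketch of the dispatch rule: when an order arrives, the closest
# currently free shuttle claims it.

def claim_task(task_pos, shuttles):
    """shuttles: dict id -> {'pos': (x, y), 'busy': bool}."""
    free = {sid: s for sid, s in shuttles.items() if not s['busy']}
    if not free:
        return None                              # the task waits in a queue
    def dist(s):
        (x1, y1), (x2, y2) = s['pos'], task_pos
        return ((x1 - x2) ** 2 + (y1 - y2) ** 2) ** 0.5
    winner = min(free, key=lambda sid: dist(free[sid]))
    shuttles[winner]['busy'] = True
    return winner

shuttles = {
    'S1': {'pos': (0, 0), 'busy': False},
    'S2': {'pos': (5, 1), 'busy': True},
    'S3': {'pos': (6, 2), 'busy': False},
}
print(claim_task((5, 3), shuttles))   # 'S3' is the nearest free vehicle
```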

Wednesday, March 21, 2012

Blog: Computer Model of Spread of Dementia Can Predict Future Disease Patterns Years Before They Occur in a Patient

Computer Model of Spread of Dementia Can Predict Future Disease Patterns Years Before They Occur in a Patient
Cornell News (03/21/12) Richard Pietzak

Weill Cornell Medical College researchers have developed software that tracks the manner in which different forms of dementia spread within a human brain. The model can be used to predict where and when a person's brain will suffer from the spread of toxic proteins, a process that underlies all forms of dementia. The findings could help patients and their families confirm a diagnosis of dementia and prepare in advance for future cognitive declines over time. "Our model, when applied to the baseline magnetic resonance imaging scan of an individual brain, can similarly produce a future map of degeneration in that person over the next few years or decades," says Cornell's Ashish Raj. The computational model validates the idea that dementia is caused by proteins that spread through the brain along networks of neurons. Raj says the program models the same process by which any gas diffuses in air, except that in the case of dementia, the diffusion process occurs along connected neural fiber tracts in the brain. "While the classic patterns of dementia are well known, this is the first model to relate brain network properties to the patterns and explain them in a deterministic and predictive manner," he says.
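
The gas-diffusion analogy corresponds to diffusion on a graph: burden flows along fiber tracts at a rate governed by the network's Laplacian. A Python sketch on an invented four-region network (the connectivity, seeding, and rate constant are illustrative, not the paper's data):

```python
# Sketch of diffusion on a brain network: concentration flows along
# connections, dx/dt = -beta * L @ x, where L is the graph Laplacian
# of the fiber-tract network. The tiny network is invented.
import numpy as np

A = np.array([[0, 1, 1, 0],       # adjacency of a toy 4-region network
              [1, 0, 1, 0],
              [1, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
L = np.diag(A.sum(axis=1)) - A    # graph Laplacian

x = np.array([1.0, 0.0, 0.0, 0.0])  # toxic protein seeded in region 0
beta, dt = 0.5, 0.01
for _ in range(500):                 # integrate forward in time (Euler steps)
    x = x - dt * beta * (L @ x)

print(np.round(x, 3))  # predicted future pattern of burden across regions
```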

Thursday, March 15, 2012

Blog: 'Big Data' Emerges as Key Theme at South by Southwest Interactive

'Big Data' Emerges as Key Theme at South by Southwest Interactive
Chronicle of Higher Education (03/15/12) Jeffrey R. Young

Several panels and speakers at this year's South by Southwest Interactive festival discussed the growing ability to use data-mining techniques to analyze big data to shape political campaigns, advertising, and education. For example, panelist and Microsoft researcher Jaron Lanier says companies that rely on selling information about their users' behavior to advertisers should find a way to compensate people for their posts. A panel on education discussed the potential ability of Twitter and Facebook to better connect with students and detect signs that students might be struggling with certain subjects. "We need to be looking at engagement in this new spectrum, and we haven't," says South Dakota State University social-media researcher Greg Heiberger. Some panels examined the role of big data in the latest presidential campaigns. Although recent presidential campaigns have focused on demographic subgroups, future campaigns may design their messages even more narrowly. "They’re actually going to try targeting groups of individuals so that political campaigns become about data mining" rather than any kind of broad policy message, says University of Texas at Dallas professor David Parry.

Blog: ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence

ACM Awards Judea Pearl the Turing Award for Work on Artificial Intelligence
PC Magazine (03/15/12) Michael J. Miller

ACM announced that University of California, Los Angeles professor Judea Pearl is this year's winner of the A.M. Turing Award for his work on artificial intelligence. The award, considered the highest honor in computer science, recognizes Pearl for devising a framework for reasoning with imperfect data that has changed the strategy for real-world problem solving. ACM executive director John White says Pearl was singled out for work that "was instrumental in moving machine-based reasoning from the rules-bound expert systems of the 1980s to a calculus that incorporates uncertainty and probabilistic models." Pearl worked out techniques for attempting to reach the best conclusion, even when there is a level of uncertainty. Internet pioneer Vinton Cerf says Pearl's research "is applicable to an extremely wide range of applications in which only partial information is available to draw upon to reach conclusions." He also says the successful business models of companies that search the Internet owe a debt to Pearl's work. Pearl generated the framework for Bayesian networks, which provides a compact method for representing probability distributions. This framework has played a substantial role in reshaping approaches to machine learning, which currently has a heavy reliance on probabilistic and statistical inference, and which underlies most recognition, fault diagnosis, and machine-translation systems.
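
A two-node example of the Bayesian-network reasoning Pearl made tractable, with invented probabilities: a rare disease, an imperfect test, and a belief updated by Bayes' rule.

```python
# Tiny example of the Bayesian-network machinery Pearl pioneered:
# a two-node network (Disease -> Test) and inference by Bayes' rule.
# The probabilities are invented for illustration.
p_disease = 0.01
p_pos_given_disease = 0.95
p_pos_given_healthy = 0.05

# P(positive), summing over the parent node's states
p_pos = (p_disease * p_pos_given_disease
         + (1 - p_disease) * p_pos_given_healthy)

# P(disease | positive): belief rises from 1% to about 16%
p_disease_given_pos = p_disease * p_pos_given_disease / p_pos
print(round(p_disease_given_pos, 3))   # 0.161
```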

Wednesday, March 14, 2012

Blog: Mario Is Hard, and That's Mathematically Official

Mario Is Hard, and That's Mathematically Official
New Scientist (03/14/12) Jacob Aron

Massachusetts Institute of Technology (MIT) researchers recently analyzed the computational complexity of video games and found that many of them belong to a class of mathematical problems called NP-hard. The implication is that for a given game level, it can be very tough to determine whether it is possible for a player to reach the end. The results suggest that some hard problems could be solved by playing a game. The researchers, led by MIT's Erik Demaine, converted each game into a Boolean satisfiability problem, which asks whether the variables in a collection of logical statements can be chosen to make all the statements true, or whether the statements inevitably contradict each other. For each game, the team built sections of a level that force players to choose one of two paths, which corresponds to assigning variables in the Boolean satisfiability problem. If the choices permit the completion of the level, that is equivalent to all of the statements in the Boolean problem being true; if they make completion impossible, that corresponds to a contradiction. Many of the games proved to be NP-hard, which means that deciding whether a player can complete them is at least as difficult as the hardest problems in NP.
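
The target of the reduction, Boolean satisfiability, is easy to state in code. The Python sketch below shows the hardness in miniature: verifying an assignment is fast, but the obvious way to find one tries all 2^n possibilities, the same explosion a player faces when each gadget forces a two-way choice. The clauses are invented.

```python
# Boolean satisfiability in miniature: checking a proposed assignment is
# fast, but finding one may require trying all 2^n assignments -- the
# hardness the game levels inherit through the reduction.
from itertools import product

# (x1 or not x2) and (x2 or x3) and (not x1 or not x3), clauses written
# as lists of (variable index, is_positive) literals
clauses = [[(0, True), (1, False)],
           [(1, True), (2, True)],
           [(0, False), (2, False)]]

def satisfies(assignment, clauses):
    return all(any(assignment[v] == pos for v, pos in clause)
               for clause in clauses)

for assignment in product([False, True], repeat=3):   # brute force: 2^n paths
    if satisfies(assignment, clauses):
        print(assignment)                              # (False, False, True)
        break
```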

Blog: Researchers Send 'Wireless' Message Using Elusive Particles

Researchers Send 'Wireless' Message Using Elusive Particles
University of Rochester News (03/14/12) Peter Iglinski

Researchers at the University of Rochester and North Carolina State University (NCSU) say they have sent a message using a beam of neutrinos. "Using neutrinos, it would be possible to communicate between any two points on Earth without using satellites or cables," says NCSU professor Dan Stancil. Neutrinos are appealing as message carriers because they penetrate almost any material they encounter. The researchers used one of the world's most powerful particle accelerators and MINERvA, a multi-ton detector located about 100 meters underground. The researchers note that significant work still needs to be done before the technology can be incorporated into a readily usable form. The message was translated into binary code, with the 1's corresponding to a group of neutrinos being fired and the 0's corresponding to no neutrinos being fired. The neutrinos were fired in large groups because they are so evasive that only about one in 10 billion is detected. "Neutrinos have been an amazing tool to help us learn about the workings of both the nucleus and the universe, but neutrino communication has a long way to go before it will be as effective," says MINERvA's Deborah Harris.
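
The encoding itself is plain binary, sketched below in Python: a 1 means a group of neutrinos is fired, a 0 means a deliberate pause. The researchers' message was reportedly the single word "neutrino"; framing details beyond the 0/1 scheme are omitted here.

```python
# Sketch of the encoding described: each character becomes bits; a 1 is
# a burst of neutrinos fired at the detector, a 0 is a deliberate gap.
def to_bits(message):
    return [int(b) for ch in message for b in format(ord(ch), '08b')]

def from_bits(bits):
    chars = [bits[i:i + 8] for i in range(0, len(bits), 8)]
    return ''.join(chr(int(''.join(map(str, c)), 2)) for c in chars)

bits = to_bits("neutrino")     # 1 -> fire a group of neutrinos, 0 -> pause
print(bits[:16])
print(from_bits(bits))         # the receiver decodes detections back to text
```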

Blog: Hopkins Researchers Aim to Uncover Which Mobile Health Applications Work

Hopkins Researchers Aim to Uncover Which Mobile Health Applications Work
Baltimore Sun (03/14/12) Meredith Cohn

Johns Hopkins University has 49 mobile health studies underway around the world as part of its Global mHealth Initiative. The initiative aims to evaluate which mobile strategies can aid doctors, community health workers, and consumers in ways equal to traditional methods. Pew Internet & American Life Project's Susannah Fox notes that more than 80 percent of Internet users have looked online for health information. Many of the 40,000 applications already available have practical purposes, such as helping patients adhere to drug regimens, helping people change harmful behaviors, and aiding in weight loss through texts about specific goals and behaviors. There also are pill bottles that send text messages when a person forgets to take their medicine. Meanwhile, mHealth researchers have developed software to help educate medical students, doctors, and other workers about how to care for burn victims. The researchers also have developed apps to train health workers caring for those with HIV and AIDS and to screen and support victims of domestic abuse. "What they all have in common is they increase how often individuals think about their health," says mHealth director Alain B. Labrique. "There is evidence that suggests some apps can have an impact."

Monday, March 12, 2012

Blog: Scientists Tap the Genius of Babies and Youngsters to Make Computers Smarter

Scientists Tap the Genius of Babies and Youngsters to Make Computers Smarter
UC Berkeley News Center (03/12/12) Yasmin Anwar

University of California, Berkeley researchers are studying how babies, toddlers, and preschoolers learn in order to program computers to think more like humans. The researchers say computational models based on the brainpower of young children could give a major boost to artificial intelligence research. "Children are the greatest learning machines in the universe," says Berkeley's Alison Gopnik. "Imagine if computers could learn as much and as quickly as they do." The researchers have found that children test hypotheses, detect statistical patterns, and form conclusions while constantly adapting to changes. “Young children are capable of solving problems that still pose a challenge for computers, such as learning languages and figuring out causal relationships,” says Berkeley's Tom Griffiths. The researchers say computers programmed with children's cognitive abilities could interact more intelligently and responsively with humans in applications such as computer tutoring programs and phone-answering robots. They are planning to launch a multidisciplinary center at the campus' Institute of Human Development to pursue their research. The researchers note that the exploratory and probabilistic reasoning demonstrated by young children could make computers smarter and more adaptable.

Tuesday, March 6, 2012

Blog: W3C CEO Calls HTML5 as Transformative as Early Web

W3C CEO Calls HTML5 as Transformative as Early Web
Computerworld Canada (03/06/12) Shane Schick

World Wide Web Consortium CEO Jeff Jaffe says HTML5 will be among the most disruptive elements to hit organizations since the early days of the Internet. "We’re about to experience a generational change in Web technology, and just as the Web transformed every business, [HTML5] will lead to another transformation," Jaffe says. HTML5 features cross-browser capability, improved data integration, and a better way of handling video. Jaffe says HTML5 makes Web pages "more beautiful [and] intelligent," and also provides for improved accessibility for disabled users. “It won’t really be a standard until 2014, but in the Web ecosystem, nobody waits,” he says. “They’ll make minor adjustments once the standard is done.” For example, TeamLab recently launched the TeamLab Document Editor, an online word processing program. Document Editor uses Canvas, a part of HTML5 that allows for dynamic, scriptable rendering of two-dimensional shapes and bitmap images. Jaffe says HTML5 could benefit a range of industries, including retail, air travel, and the automotive industry.

Friday, March 2, 2012

Blog: Simulations and Mathematics Suggest That There Will Always Be a Facebook

Simulations and Mathematics Suggest That There Will Always Be a Facebook
National Center for Nuclear Research (03/02/12)

National Center for Nuclear Research (NCBJ) scientists are conducting research that could lead to the development of a field of mathematics focused on the theory of minority games. Minority games can be used to model social behavior patterns and reactions to financial markets, to optimize utilization of power distribution networks, and to analyze and manage road traffic. "Results obtained in many computer simulations done by us are not just interesting; we have also found some analytical expression to describe them," says NCBJ professor Wojciech Wislicki. Unlike in classical games, players in minority games do not know everything about the game and must reason inductively from their experience, a situation that more closely resembles reality. "The rules seem simple, but behavior of many agents governed by the rules exhibits very complex dynamics," notes NCBJ researcher Karol Wawrzyniak. The researchers also demonstrated how to use minority game theory to forecast winning moves by investigating the dependency of forecast accuracy on the number of participating players. They say that groups in which players delegate their individual strategies to a single leader achieve the greatest success.
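
A minimal Python version of the standard minority game: at each step every agent picks side 0 or 1, and whoever lands in the minority wins. The memory length, agent count, and two-strategies-per-agent setup follow the textbook formulation rather than NCBJ's specific models.

```python
# Minimal minority game: agents see the last M outcomes, consult small
# private strategy tables, and win when they end up in the minority.
import random

M = 3                      # memory: how many past outcomes agents see
N_AGENTS = 101             # odd, so a strict minority always exists

def random_strategy():
    return {h: random.randint(0, 1) for h in range(2 ** M)}

agents = [{'strats': [random_strategy(), random_strategy()], 'scores': [0, 0]}
          for _ in range(N_AGENTS)]
history = 0                # last M outcomes packed into an integer

for step in range(2000):
    choices = []
    for a in agents:
        best = a['scores'].index(max(a['scores']))  # play the best strategy so far
        choices.append(a['strats'][best][history])
    minority = 0 if sum(choices) > N_AGENTS / 2 else 1
    for a in agents:
        for i, s in enumerate(a['strats']):         # virtual scoring of both strategies
            if s[history] == minority:
                a['scores'][i] += 1
    history = ((history << 1) | minority) & (2 ** M - 1)

wins = sum(c == minority for c in choices)
print(f"last round: {wins} of {N_AGENTS} agents were in the winning minority")
```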

Thursday, February 23, 2012

Blog: Microsoft to developers - Here's how touch is supposed to work

By Adrian Kingsley-Hughes | February 23, 2012, 12:34am PST
Summary: Good news for users … unless they’re a southpaw.
Microsoft has released a document that tells developers how touch should work for Metro UI apps on Windows 8 systems.
While a lot of what’s in the four-page PDF document is common sense, it also contains some interesting research. For example, it highlights the best areas on a tablet screen for interaction and reading.

Friday, February 17, 2012

Blog: IBM Says Future Computers Will Be Constant Learners

IBM Says Future Computers Will Be Constant Learners
IDG News Service (02/17/12) Joab Jackson

Tomorrow's computers will constantly improve their understanding of the data they work with, which will help them provide users with more appropriate information, predicts IBM fellow David Ferrucci, who led the development of IBM's Watson artificial intelligence technology. Computers in the future "will not necessarily require us to sit down and explicitly program them, but through continuous interaction with humans they will start to understand the kind of data and the kind of computation we need," according to Ferrucci. He says the key to the Watson technology is that it queries both itself and its users for feedback on its answers. "As you use the system, it will follow up with you and ask you questions that will help improve its confidence of its answer," Ferrucci notes. IBM is now working with Columbia University researchers to adapt Watson so it can offer medical diagnosis and treatment. Watson could serve as a diagnostic assistant and offer treatment plans, says Columbia professor Herbert Chase. Watson also could find clinical trials for the patient to participate in. "Watson has bridged the information gap, and its potential for improving health care and reducing costs is immense," Chase says.

Wednesday, February 8, 2012

Blog: Weave Open Source Data Visualization Offers Power, Flexibility

Weave Open Source Data Visualization Offers Power, Flexibility
Computerworld (02/08/12) Sharon Machlis

The open source Weave project is a platform designed to make it easier for government agencies, nonprofits, and corporate users to offer the public a way to analyze data. The platform enables users to simultaneously highlight items on multiple visualizations, including map, map legend, bar chart, and scatter plot. The benefits of Weave's interactivity go beyond the visual appeal of selecting an area on a chart and seeing matches highlighted on a map, notes Connecticut Data Collaborative project coordinator James Farnam. Weave aims to help organizations democratize data visualization tools, creating a way for anyone interested in a topic to explore and analyze information about it, instead of leaving the task solely to computer and data specialists, says Georges G. Grinstein, director of the University of Massachusetts at Lowell's Institute for Visualization and Perception Research, which created Weave. "Now [you're] engaging the public in a dialog with the data," Grinstein says. "That's why Weave is open source and free." Weave is so powerful that one of the challenges of implementing it is how to narrow down its offerings so that end users would not be overwhelmed with too many options, says the Metropolitan Area Planning Council's Holly St. Clair.

Friday, January 27, 2012

Blog: New Center Developing Computational Bioresearch Tool

New Center Developing Computational Bioresearch Tool
University of Chicago (01/27/12) Steve Koppes

University of Chicago researchers led by professor Gregory Voth are developing a technique that might lead to a new and simpler way to predict molecular motion inside a cell. The research is backed by a $1.5 million grant from the U.S. National Science Foundation (NSF), which is being used to launch the Center for Multiscale Theory and Simulation. "What's impressive about Greg's team is the variety of theoretical and computational tools that it brings to bear," says NSF's Katharine Covert. The tools include a theoretical and computer simulation capability for describing biological systems at interconnected multiple scales. "This is what we call the multi-scale problem, and probably nowhere in the natural world does the multi-scale problem manifest as dramatically as in the biology regime," Voth says. The center will use an extensive new cyberinfrastructure network, which will provide a wide range of computational equipment, software, and techniques to support its work. One of the center’s most important computational tools is a technique called coarse-graining, which is a way of simplifying a complex problem in a mathematically precise way, with real-world physics built in.
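
The essence of coarse-graining can be shown in a few lines: groups of atoms are replaced by single "beads" at their centers of mass, shrinking the system while preserving large-scale structure. A Python sketch with invented coordinates and groupings:

```python
# Sketch of the idea behind coarse-graining: replace groups of atoms
# with single beads at their centers of mass, so simulations evolve far
# fewer degrees of freedom. Grouping and coordinates are invented.
import numpy as np

atom_positions = np.array([[0.0, 0.0], [0.1, 0.0], [0.0, 0.1],   # group A
                           [1.0, 1.0], [1.1, 1.0], [1.0, 1.1]])  # group B
atom_masses = np.array([12.0, 1.0, 1.0, 12.0, 1.0, 1.0])
groups = [[0, 1, 2], [3, 4, 5]]   # mapping from atoms to coarse beads

def coarse_grain(positions, masses, groups):
    beads = []
    for g in groups:
        m = masses[g]
        beads.append((m[:, None] * positions[g]).sum(axis=0) / m.sum())
    return np.array(beads)

print(coarse_grain(atom_positions, atom_masses, groups))
# 6 atoms -> 2 beads; the simulation then evolves the beads instead
```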

Tuesday, January 24, 2012

Blog: The Mathematics of Taste

The Mathematics of Taste
MIT News (01/24/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers used genetic programming, in which mathematical models compete with each other to fit the available data and then cross-pollinate to produce more accurate models, to analyze taste-test data. Swiss flavor company Givaudan asked researchers at MIT's Computer Science and Artificial Intelligence Laboratory (CSAIL) to help interpret the results of taste tests in which 69 subjects assessed 36 different combinations of seven basic flavors. For each subject, the researchers randomly generated a mathematical function that predicted scores according to the concentrations of different flavors. After all of the functions were assessed, the best ones were recombined to produce a new generation of functions, and the whole process was repeated about 30 times. To establish the model's accuracy, the CSAIL researchers developed another model to validate their approach. Taste preference "is a pretty brilliant area in which to apply the evolutionary methods--and it looks as though they're working, also, so that's exciting," says Hampshire College professor Lee Spector.
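
A much-reduced Python sketch of that evolutionary loop, with simple linear models standing in for the evolved function trees and invented taste-test data: candidate models compete on fitting error, the best recombine, and the cycle repeats for about 30 generations, as in the article.

```python
# Evolutionary model-fitting in miniature: models compete to fit the
# data, the best cross-pollinate, and the cycle repeats. Linear models
# stand in for evolved function trees; the data are invented.
import random

data = [((0.2, 0.8), 3.1), ((0.5, 0.5), 4.0), ((0.9, 0.1), 2.7)]  # (concentrations, score)

def random_model():
    return [random.uniform(-5, 5) for _ in range(3)]   # score = w0 + w1*c1 + w2*c2

def error(m):
    return sum((m[0] + m[1] * c1 + m[2] * c2 - y) ** 2 for (c1, c2), y in data)

def recombine(a, b):
    child = [random.choice(pair) for pair in zip(a, b)]   # cross-pollinate parents
    child[random.randrange(3)] += random.gauss(0, 0.1)    # small mutation
    return child

pop = [random_model() for _ in range(100)]
for generation in range(30):                              # ~30 rounds, as in the article
    pop.sort(key=error)                                   # models compete on fit
    parents = pop[:20]
    pop = parents + [recombine(random.choice(parents), random.choice(parents))
                     for _ in range(80)]

print(round(min(error(m) for m in pop), 4))               # best evolved model's error
```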

Monday, January 23, 2012

Blog: Twitter Bots Create Surprising New Social Connections

Twitter Bots Create Surprising New Social Connections
Technology Review (01/23/12) Mike Orcutt

A group of freelance Web researchers has created a Twitter bot, called a socialbot, that can fool users into thinking it is a real person and that serves as a virtual social connector, accelerating the natural rate of human-to-human communication. The system grew out of the Web Ecology Project, an independent research group focused on studying the structure of social media phenomena. Some of the Web Ecology Project researchers, led by Tim Hwang, created their own organization, called the Pacific Social Architecting Corp., to continue the development of socialbots. In further experiments, the group tracked 2,700 Twitter users, divided into randomly assigned target groups of 300, over 54 days. The first 33 days served as a control period, during which no socialbots were deployed. Then, during the 21-day experimental period, nine bots were activated, one for each target group. On average, each bot gained 62 new followers and received 33 incoming tweets. The researchers also found a 43 percent increase in human-to-human follows after the socialbots were introduced, compared with the control period.
