Wednesday, December 28, 2011

Blog: Five Open Source Technologies for 2012

Five Open Source Technologies for 2012
IDG News Service (12/28/11) Joab Jackson

Five open source projects could become the basis for new businesses and industries in 2012. Nginx, a Web server program, could become popular due to its ability to easily handle high-volume traffic. Nginx already is used on highly trafficked Web sites, and the next release, due in 2012, will be better suited to shared hosting environments. The OpenStack cloud computing platform has gained support from several technology firms due to its scalability. "We're not talking about [using OpenStack to run a] cloud of 100 servers or even 1,000 servers, but tens of thousands of servers," says OpenStack Project Policy Board's Jonathan Bryce. Stig was designed for the unique workloads of social networking sites, according to its developers. The data store's architecture allows for inferential searching, enabling users and applications to look for connections between disparate pieces of information. Linux Mint was designed specifically for users who want a desktop operating system and do not want to learn more about how Linux works. Linux Mint is now the fourth most popular desktop operating system in the world. GlusterFS is one of the fastest growing storage software systems on the market, as downloads have increased by 300 percent in the last year.

Wednesday, December 21, 2011

Blog: Computer Scientists Create Algorithm That Measures Human Pecking Order

Computer Scientists Create Algorithm That Measures Human Pecking Order
Technology Review (12/21/11)

Cornell University's Jon Kleinberg, who developed the Hyperlink-Induced Topic Search (HITS) algorithm, a precursor of Google's PageRank search algorithm, has developed a method for measuring power differences between individuals using the patterns of words they speak or write. "We show that in group discussions, power differentials between participants are subtly revealed by how much one individual immediately echoes the linguistic style of the person they are responding to," Kleinberg says. The key to the technique is linguistic coordination, in which speakers naturally copy the style of their interlocutors. The Cornell researchers focused on function words that provide a grammatical framework for sentences but carry little meaning themselves, such as articles, auxiliary verbs, conjunctions, and high-frequency adverbs. The researchers studied editorial discussions between Wikipedia editors and transcripts of oral arguments in the U.S. Supreme Court. By looking at the changes in linguistic style that occur when people make the transition from non-admin to admin roles on Wikipedia, the researchers show that the pattern of linguistic coordination changes too. A similar effect occurs in the Supreme Court. "Our work is the first to identify connections between language coordination and social power relations at large scales, and across a diverse set of individuals and domains," Kleinberg says.
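
As a rough illustration of the measure described above, the Python sketch below scores how much more likely a replier is to use a given function-word category when the person being answered has just used it, compared with the replier's overall rate. The word lists, data format, and scoring are simplified assumptions for illustration; the study's actual coordination measure is more elaborate.

# Sketch of a coordination measure on function words (assumed data format;
# the actual study's measure and word lists differ).
FUNCTION_WORDS = {
    "articles": {"a", "an", "the"},
    "conjunctions": {"and", "but", "or"},
    "auxiliary_verbs": {"is", "are", "was", "were", "be", "have", "has"},
}

def uses(utterance, words):
    """True if the utterance contains any word from the category."""
    tokens = utterance.lower().split()
    return any(t in words for t in tokens)

def coordination(replies, category):
    """How much more likely the replier is to use a category when the
    person being answered just used it, versus the replier's baseline rate.
    `replies` is a list of (original_utterance, reply_utterance) pairs."""
    words = FUNCTION_WORDS[category]
    baseline = sum(uses(r, words) for _, r in replies) / len(replies)
    triggered = [(o, r) for o, r in replies if uses(o, words)]
    if not triggered:
        return 0.0
    conditional = sum(uses(r, words) for _, r in triggered) / len(triggered)
    return conditional - baseline

# Toy usage: a replier who echoes articles more when the other speaker used them.
pairs = [("the ruling is clear", "the precedent supports it"),
         ("we disagree strongly", "noted, counsel"),
         ("an objection was raised", "the objection is overruled")]
print(coordination(pairs, "articles"))   # positive value: style is being echoed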

Sunday, December 11, 2011

Blog: Multi-Purpose Photonic Chip Paves the Way to Programmable Quantum Processors

Multi-Purpose Photonic Chip Paves the Way to Programmable Quantum Processors
University of Bristol News (12/11/11)

University of Bristol researchers have developed an optical chip that generates, manipulates, and measures two quantum phenomena, entanglement and mixture, which are essential for building quantum computers. The researchers showed that entanglement can be generated, manipulated, and measured on a silicon chip. The chip also has been able to measure mixture, which can be used to characterize quantum circuits. "To build a quantum computer, we not only need to be able to control complex phenomena, such as entanglement and mixture, but we need to be able to do this on a chip, so that we can scalably and practically duplicate many such miniature circuits--in much the same way as the modern computers we have today," says Bristol professor Jeremy O'Brien. "Our device enables this and we believe it is a major step forward towards optical quantum computing." The chip consists of a network of tiny channels that guide, manipulate, and interact with single photons. "It’s exciting because we can perform many different experiments in a very straightforward way, using a single reconfigurable chip," says Bristol's Peter Shadbolt. The researchers are now scaling up the complexity of the device for use as a building block for quantum computers.

Thursday, December 8, 2011

Blog: Streamlining Chip Design

Streamlining Chip Design
MIT News (12/08/11) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers have developed a system that enables hardware designers to specify, in a single programming language, all of the functions they want a device to perform. The system allows chip designers to designate which functions should run in hardware and which in software, and the system will automatically produce the corresponding circuit descriptions and computer code. The system is based on BlueSpec, a chip-design language that enables designers to specify a set of rules that the chip must follow and convert those specifications into Verilog code. The MIT researchers expanded the BlueSpec instruction set so that it can describe more elaborate operations that are possible only in software. "What we're trying to give people is a language where they can describe the algorithm once and then play around with how the algorithm is partitioned," says MIT student Myron King.

Wednesday, December 7, 2011

Blog: White House Sets Cybersecurity R&D Priorities

White House Sets Cybersecurity R&D Priorities
InformationWeek (12/07/11) Elizabeth Montalbano

The White House has published a cybersecurity research and development (R&D) roadmap developed by the U.S. Office of Science and Technology Policy. The roadmap, the product of a seven-year effort by both public- and private-sector experts, lists four areas of R&D concentration. The first priority is inducing change by applying game-changing themes to understanding the underlying causes of current cybersecurity vulnerabilities and devising ways to address them by disrupting the status quo. The next research priority focuses on the development of scientific foundations for cybersecurity, including laws, hypothesis testing, repeatable experimental designs, standardized data collection techniques, metrics, and common terminology. The third area of concentration entails maximizing research impact by ensuring interagency collaboration, coordination, and integration of cybersecurity improvement operations. The final priority is shortening the time it takes to put cybersecurity research into practice. "Given the magnitude and pervasiveness of cyberspace threats to our economy and national security, it is imperative that we fundamentally alter the dynamics in cybersecurity through the development of novel solutions and technologies," say U.S. chief technology officer Aneesh Chopra and White House cybersecurity coordinator Howard Schmidt.

Tuesday, December 6, 2011

Blog: System Would Monitor Feds for Signs They're 'Breaking Bad'

System Would Monitor Feds for Signs They're 'Breaking Bad'
Government Computer News (12/06/11) Kevin McCaney

Georgia Tech researchers, in collaboration with researchers at Oregon State University, the University of Massachusetts, and Carnegie Mellon University, are developing the Proactive Discovery of Insider Threats Using Graph Analysis and Learning (PRODIGAL) system. PRODIGAL is designed to scan up to 250 million text messages, emails, and file transfers to identify insider threats, or employees who are about to turn against their organization. The system will integrate graph processing, anomaly detection, and relational machine learning to create a prototype Anomaly Detection at Multiple Scales system. PRODIGAL, which initially would be used to monitor the communications in civilian, government, and military organizations in which employees have agreed to be monitored, is intended to identify rogue individuals, according to the researchers. "Our goal is to develop a system that will provide analysts for the first time a very short, ranked list of unexplained events that should be further investigated," says Georgia Tech professor David Bader.
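
PRODIGAL's graph and relational models are not described in detail in the article, so the Python sketch below shows only the general flavor of anomaly detection it builds on: flagging days when a user's outbound message volume deviates sharply from that user's own history. The data, threshold, and scoring rule are invented for illustration.

import statistics

# Toy anomaly score: flag days when a user's outbound message volume deviates
# sharply from that user's own baseline. A generic illustration of anomaly
# detection, not PRODIGAL's actual graph/relational models.
def anomaly_scores(daily_counts):
    """daily_counts: messages sent per day by one user."""
    mean = statistics.mean(daily_counts)
    stdev = statistics.stdev(daily_counts) or 1.0
    return [(day, (count - mean) / stdev)
            for day, count in enumerate(daily_counts)]

history = [40, 38, 45, 41, 39, 37, 210]            # sudden burst on the last day
flagged = [(d, round(s, 2)) for d, s in anomaly_scores(history) if s > 2]
print(flagged)                                      # -> [(6, ...)] : the burst day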

Monday, December 5, 2011

Blog: Creating Artificial Intelligence Based on the Real Thing

Creating Artificial Intelligence Based on the Real Thing
New York Times (12/05/11) Steve Lohr

Researchers from Cornell University, Columbia University, the University of Wisconsin, the University of California, Merced, and IBM are developing technology based on biological systems. The project recently received $21 million in funding from the U.S. Defense Advanced Research Projects Agency (DARPA), which helped lead to the development of prototype neurosynaptic microprocessors that function more like neurons and synapses than conventional semiconductors. The prototype chip has 256 neuron-like nodes, surrounded by more than 262,000 synaptic memory modules. A computer running the prototype chip has learned how to play the video game Pong and to identify the numbers one through 10 written by a human on a digital pad. The project aims to find designs, concepts, and techniques that might be borrowed from biology to push the limits of computing. The research is "the quest to engineer the mind by reverse-engineering the brain," says IBM's Dharmendra S. Modha. DARPA wants the project to produce technology that is self-organizing, able to learn instead of just responding to programming commands, and able to run on very little power. "It seems that we can build a computing architecture that is quite general-purpose and could be used for a large class of applications," says Cornell professor Rajit Manohar.

Friday, December 2, 2011

Blog: First Demonstration of Opto-Electronic Reservoir Computing

First Demonstration of Opto-Electronic Reservoir Computing
Technology Review (12/02/11)

Universite Libre de Bruxelles researchers have developed a form of computing that exploits feedback loops to perform extremely fast analog calculations. The researchers found that a nonlinear feedback mechanism is basically an information processor, because it takes a certain input and processes it to generate an output. The feedback loop is a kind of memory that stores information about the system's recent history, making this form of processing an analysis of a small segment of the recent past. The researchers, led by Yvan Paquot, are working on reservoir computing, in which a large number of nodes are randomly connected. Each node is a kind of nonlinear feedback loop; the inputs are fed into randomly chosen nodes in the reservoir, while the outputs are taken from other randomly chosen nodes. The researchers say the reservoir network is similar to a neural network, except that only the weights on the output signals are adjusted during training, which makes the process much simpler than training a conventional neural network. "Our experiment is the first implementation of reservoir computing fast enough for real time information processing," Paquot says.
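
The Python sketch below is a minimal software analogue of the reservoir idea (an echo-state-style network): a large random nonlinear feedback network is driven by the input, and only the linear readout weights are trained. The Brussels experiment realizes the reservoir opto-electronically with a delay loop, so the sizes, task, and training method here are illustrative assumptions.

import numpy as np

# Minimal echo-state-style reservoir: random, fixed internal weights; only the
# linear readout is trained, which is the defining trait of reservoir computing.
rng = np.random.default_rng(0)
n_nodes = 100
W_in = rng.uniform(-0.5, 0.5, (n_nodes, 1))          # random input weights (fixed)
W = rng.uniform(-0.5, 0.5, (n_nodes, n_nodes))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))            # scale for a stable "echo"

def run_reservoir(inputs):
    """Drive the random nonlinear network and record every node's state."""
    x = np.zeros(n_nodes)
    states = []
    for u in inputs:
        x = np.tanh(W @ x + W_in[:, 0] * u)           # nonlinear feedback update
        states.append(x.copy())
    return np.array(states)

# Train ONLY the readout weights (ridge regression) on a toy prediction task.
t = np.arange(300)
u = np.sin(0.2 * t)                                   # input signal
target = np.sin(0.2 * (t + 5))                        # task: input 5 steps ahead
S = run_reservoir(u)
W_out = np.linalg.solve(S.T @ S + 1e-6 * np.eye(n_nodes), S.T @ target)
print("training error:", np.mean((S @ W_out - target) ** 2))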

Blog: U.S. Intelligence Group Seeks Machine Learning Breakthroughs

U.S. Intelligence Group Seeks Machine Learning Breakthroughs
Network World (12/02/11) Michael Cooney

The U.S. Intelligence Advanced Research Projects Activity (IARPA) announced that it is looking for new ideas that may become the basis of cutting-edge machine-learning projects. "In many application areas, the amount of data to be analyzed has been increasing exponentially [sensors, audio and video, social network data, Web information], stressing even the most efficient procedures and most powerful processors," according to IARPA. "Most of these data are unorganized and unlabeled and human effort is needed for annotation and to focus attention on those data that are significant." IARPA's request for information asks for proposed methods to automate architecture and algorithm selection and combination, feature engineering, and training data scheduling. It also asks for compelling reasons to use such approaches in a scalable multi-modal analytic system, and whether the supporting technologies are readily available. IARPA says that innovations in hierarchical architectures such as Deep Belief Nets and hierarchical clustering will be needed for useful automatic machine-learning systems. It wants to identify promising areas for investment and plans to hold a machine learning workshop in March 2012.

Saturday, November 19, 2011

Blog: Google's Search Algorithm Challenged

Google's Search Algorithm Challenged
IDG News Service (11/19/11) Philip Willan

Padua University professor Massimo Marchiori is leading the development of Volunia, a new search engine that could challenge Google's search algorithm and lead to radically different search engines in the future. "It's not just Google plus 10 percent. It's a different perspective," says Marchiori, who contributed to the development of Google's search algorithm. "It's a new radical view of what a search engine of the future could be." Volunia's Web site allows visitors to sign up for a chance to test the beta version of the search engine, which will be launched in 12 languages by the end of the year. "If I didn't think it was something big, capable of competing with the giants of online search, I would never have got involved," Marchiori says. The project is headquartered in Padua, with funding being supplied by Sardinian entrepreneur Mariano Pireddu. "The difference of our search engine is what will enable us to emerge," Marchiori says. Pireddu says the Volunia researchers are not attempting to build a better search engine than Google's, but rather they are trying to create a different kind of search engine that can work alongside Google's.

Thursday, November 17, 2011

Blog: Smart Swarms of Bacteria Inspire Robotics Researchers

Smart Swarms of Bacteria Inspire Robotics Researchers
American Friends of Tel Aviv University (11/17/11)

Tel Aviv University (TAU) researchers have developed a computational model that describes how bacteria move in a swarm, a discovery they say could be applied to computers, artificial intelligence, and robotics. The model shows how bacteria collectively gather information about their environment and find an optimal plan for growth. The research could enable scientists to design smart robots that can form intelligent swarms, help in the development of medical micro-robots, or decode social network systems to find information on consumer preferences. "When an individual bacterium finds a more beneficial path, it pays less attention to the signals from the other cells, [and] since each of the cells adopts the same strategy, the group as a whole is able to find an optimal trajectory in an extremely complex terrain," says TAU Ph.D. student Adi Shklarsh. The model shows how a swarm can perform optimally with only simple computational abilities and short-term memory, Shklarsh says. He notes that understanding the secrets of bacteria swarms can provide crucial hints toward the design of robots that are programmed to perform adjustable interactions without needing as much data or memory.
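
A toy version of the adaptive-attention idea, assuming a made-up food field and update rule (not the TAU model), might look like the Python sketch below: each agent blends its own sensing with the group's average heading, and pays less attention to the group while its own reading is improving.

import numpy as np

# Toy swarm with adaptive attention: weight on the group's average heading drops
# when an agent's own food reading is improving. All values here are invented.
rng = np.random.default_rng(1)
food = lambda p: -np.linalg.norm(p - np.array([5.0, 5.0]))   # best at (5, 5)

pos = rng.uniform(0, 1, (20, 2))
vel = rng.normal(0, 0.1, (20, 2))
prev = np.array([food(p) for p in pos])

for _ in range(200):
    mean_heading = vel.mean(axis=0)
    for i in range(len(pos)):
        own_direction = np.array([5.0, 5.0]) - pos[i]        # toy "own sensing"
        improving = food(pos[i]) > prev[i]
        w_group = 0.2 if improving else 0.8                   # adaptive attention
        prev[i] = food(pos[i])
        vel[i] = 0.1 * ((1 - w_group) * own_direction + w_group * mean_heading)
        pos[i] += vel[i]

print("mean distance to food:", np.mean([np.linalg.norm(p - [5, 5]) for p in pos]))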

Wednesday, November 16, 2011

Blog: Squishybots: Soft, Bendy and Smarter Than Ever

Squishybots: Soft, Bendy and Smarter Than Ever
New Scientist (11/16/11) Justin Mullins

A squishy, tentacled configuration may be an accurate design model for future robots, as a rigid humanoid shape is proving impractical for many of the tasks people want robots to perform. A key element of such designs is morphological computing, a discipline that holds that a robot's intelligence can be enhanced through the optimization of its body's interaction with its environment. This represents an inversion of conventional thinking, which dictates that an organism has a central processing capability where intelligence is housed, and its body's interaction with its surroundings demonstrates that intelligence. Using the embodied intelligence approach, researchers in Pisa, Italy, are building a soft, rubbery robot octopus equipped with appendages whose grasping ability exceeds that of the most advanced robots. Another speculative application of morphological computing principles is a soft robot surgeon concept that King's College London researchers are studying. The robot would enter the body through a natural orifice or incision, pass soft tissues and organs without impediment, and harden once in place. The advantage of the embodied intelligence strategy is that the robots will be ideally suited for the job at hand.

Tuesday, November 15, 2011

Blog: Mimicking the Brain, in Silicon

Mimicking the Brain, in Silicon
MIT News (11/15/11) Anne Trafton

Massachusetts Institute of Technology (MIT) researchers have designed a computer chip that mimics how the brain's neurons adapt in response to new information. The chip uses about 400 transistors to simulate the activity of a single brain synapse, helping neuroscientists learn more about how the brain works, according to MIT researcher Chi-Sang Poon. The researchers designed the chip so that the transistors could emulate the activity of different ion channels. Although most chips operate in a binary system, the new chip functions in an analog fashion. "We now have a way to capture each and every ionic process that's going on in a neuron," Poon says. The new chip represents a "significant advance in the efforts to incorporate what we know about the biology of neurons and synaptic plasticity onto [complementary metal-oxide-semiconductor] chips," says University of California, Los Angeles professor Dean Buonomano. The researchers plan to use the chip to develop systems that model specific neural functions, such as the visual processing system. The chips also could be used to interface with biological systems.

Monday, November 14, 2011

Blog: Walls Have Eyes: How Researchers Are Studying You on Facebook

Walls Have Eyes: How Researchers Are Studying You on Facebook
Time (11/14/11) Sonia van Gilder Cooke

Facebook's trove of personal information is so encyclopedic that researchers are using the site's advertising tool to pinpoint their desired demographic with scientific accuracy, according to a recent Demos report. The report focused on European right-wing extremist groups, and used Facebook's data to find 500,000 fans of right-wing groups across Europe. The researchers linked these Facebook users to a survey that asked questions about their education level, attitudes toward violence, and optimism about their own future. Demos' research is just one example of how Facebook is becoming a popular tool among scientists. There are currently more than 800 million active users adding an average of three pieces of content daily, driving up the number of academic papers with Facebook in the title by almost 800 percent over the past five years. Researchers say Facebook's data also could be used to address social health problems. For example, a University of Wisconsin-Madison study found that undergraduates who discussed their drunken exploits on Facebook were significantly more likely to have a drinking problem than those students who did not discuss the topic online.

Friday, November 11, 2011

Blog: HTML5: A Look Behind the Technology Changing the Web

HTML5: A Look Behind the Technology Changing the Web
Wall Street Journal (11/11/11) Don Clark

HTML5 is catching on as the online community embraces it. The Web standard allows data to be stored on a user's computer or mobile device so that Web apps can function without an Internet link. HTML5 also enables Web pages to boast jazzier images and effects, while objects can move on Web pages and respond to cursor movements. Under HTML5, audio plays without a plug-in, and interactive three-dimensional effects can be created using a computer's graphics processor via WebGL technology. In addition, video can be embedded in a Web page without a plug-in, and interactive games can run in a Web browser without installing other software or plug-ins. Silicon Valley investor Roger McNamee projects that HTML5 will enable artists, media firms, and advertisers to differentiate their Web offerings in ways that were previously impractical. Binvisions.com reports that about one-third of the 100 most popular Web sites used HTML5 in the quarter that ended in September. Google, Microsoft, the Mozilla Foundation, and Opera Software are adding momentum to HTML5 by building support for the standard into their latest Web browsers.

Blog: Stanford Joins BrainGate Team Developing Brain-Computer Interface to Aid People With Paralysis

Stanford Joins BrainGate Team Developing Brain-Computer Interface to Aid People With Paralysis
Stanford University (11/11/11) Tanya Lewis

Stanford University researchers have joined the BrainGate research project, which is investigating the feasibility of people with paralysis using a technology that interfaces directly with the brain to control computer cursors, robotic arms, and other assistive devices. The project is based on technology developed by researchers at Brown and Harvard universities, Massachusetts General Hospital, and the Providence Veterans Affairs Medical Center. BrainGate is a hardware/software-based system that senses electrical signals in the brain that control movement. Computer algorithms translate the signals into instructions that enable users with paralysis to control external devices. "One of the biggest contributions that Stanford can offer is our expertise in algorithms to decode what the brain is doing and turn it into action," says Stanford's Jaimie Henderson. He is working with Stanford professor Krishna Shenoy, who is focusing on understanding how the brain controls movement and translating that knowledge into neural prosthetic systems controlled by software. "The BrainGate program has been a model of innovation and teamwork as it has taken the first giant steps toward turning potentially life-changing technology into a reality," Shenoy says. The researchers recently showed that the system allowed a patient to control a computer cursor more than 1,000 days after implantation.

Wednesday, November 9, 2011

Blog: Technology-induced medical errors: the wave of the future?

Technology-induced medical errors: the wave of the future?
By Denise Amrich, RN | November 9, 2011, 4:45am PST

Summary: Tuesday’s federal report addresses the strong need for safety in health IT without irresponsibly discouraging progress.

Electronic healthcare management is a really fascinating, promising topic, and most of the time, you hear people focusing on the improvements in patient care, as well as cost and time savings, partly because it helps make a case to get healthcare organizations on board with change.

The dark side of the topic is, of course, the less-often discussed and more threatening aspect of safety and security. Sometimes these fears are inflated for shock and horror or PR value. Sometimes they are glossed over. Rarely are they given credence or discussed in a detailed, productive manner. Scant attention has been paid to what harm may come from the widespread IT-ing of healthcare.

Wednesday, November 2, 2011

Blog: Major Breakthrough Improves Software Reliability and Security

Major Breakthrough Improves Software Reliability and Security
Columbia University (11/02/11)

Columbia University researchers have developed Peregrine, software designed to improve the reliability and security of multithreaded computer programs. "Our main finding in developing Peregrine is that we can make threads deterministic in an efficient and stable way: Peregrine can compute a plan for allowing when and where a thread can 'change lanes' and can then place barriers between the lanes, allowing threads to change lanes only at fixed locations, following a fixed order," says Columbia professor Junfeng Yang. "Once Peregrine computes a good plan without collisions for one group of threads, it can reuse the plan on subsequent groups to avoid the cost of computing a new plan for each new group." The researchers say the program gets at the root cause of these software problems--nondeterminism--enabling Peregrine to address all of the issues that nondeterminism causes. They note that Peregrine can handle data races and other concurrency bugs, is very fast, and works with current hardware and programming languages.
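
Peregrine's schedule computation and reuse are not reproduced here, but the general idea of letting threads "change lanes" only at fixed points, in a fixed order, can be illustrated with the Python sketch below, in which a shared counter is only ever touched in a predetermined round-robin schedule. The scenario and mechanism are illustrative assumptions, not Peregrine's implementation.

import threading

# Illustration of a deterministic interleaving: each worker may touch the shared
# counter only in its fixed round-robin slot, so every run produces the same
# schedule. Peregrine computes and reuses such schedules automatically.
turn = 0
cond = threading.Condition()
counter = 0

def worker(my_id, n_threads, iterations):
    global turn, counter
    for _ in range(iterations):
        with cond:
            while turn != my_id:             # wait for this thread's fixed slot
                cond.wait()
            counter += 1                      # the only shared access, always ordered
            turn = (turn + 1) % n_threads
            cond.notify_all()

threads = [threading.Thread(target=worker, args=(i, 3, 5)) for i in range(3)]
for t in threads: t.start()
for t in threads: t.join()
print(counter)    # always 15, and the increments always happen in the same order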

Blog: Socialbots Used by Researchers to 'Steal' Facebook Data

Socialbots Used by Researchers to 'Steal' Facebook Data
BBC News (11/02/11)

University of British Columbia (UBC) researchers were able to collect 46,500 email addresses and 14,500 home addresses from Facebook by using socialbots. Socialbots are a social networking adaptation of botnets that criminals can use to send out spam. The malware takes control of a social networking profile and performs basic activities such as posting messages and sending requests. Over the course of eight weeks, the UBC team used 102 socialbots and one botmaster to attempt to make friends with 8,570 Facebook users, and 3,055 accepted the friendships. Facebook users with the most friends were more likely to accept a socialbot as a friend. The team had the socialbots send only 25 requests a day to keep from triggering Facebook's fraud detection software. "As socialbots infiltrate a targeted online social network, they can further harvest private users' data, such as email addresses, phone numbers, and other personal data that have monetary value," the researchers say.

Tuesday, November 1, 2011

Blog: Researchers Defeat CAPTCHA on Popular Websites

Researchers Defeat CAPTCHA on Popular Websites
IDG News Service (11/01/11) Lucian Constantin

Stanford University researchers have developed an automated tool that can decipher Completely Automated Public Turing tests to tell Computers and Humans Apart (CAPTCHAs), which are used by many Web sites as an anti-spam test. The Stanford team, led by researchers Elie Bursztein, Matthieu Martin, and John C. Mitchell, developed various methods of cleaning up purposefully introduced background noise and breaking text strings into individual characters for easier recognition. Some of the CAPTCHA-breaking algorithms are based on tools used by robots to orient themselves in new environments. The researchers created Decaptcha, which was run against CAPTCHAs used by 15 high-profile Web sites. The only tested site that could not be broken was Google. The researchers also developed several recommendations to improve CAPTCHA security, including randomizing the length of the text string, randomizing the character size, applying a wave-like effect to the output, and collapsing characters together or adding lines in the background.
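
One of the simpler steps in such a pipeline, splitting a cleaned-up image into individual characters, can be sketched in Python as below: binarize the image to drop light background noise, then cut it at empty columns. The image format and thresholds are assumptions for illustration; Decaptcha's actual de-noising and recognition stages are far more sophisticated.

import numpy as np

# Toy segmentation step: binarize to remove light background noise, then split
# into characters at empty columns. Not Decaptcha's actual pipeline.
def segment(gray, threshold=128):
    """gray: 2-D uint8 array, dark ink on a light, noisy background."""
    ink = gray < threshold                        # True where a character pixel is
    col_has_ink = ink.any(axis=0)
    chars, start = [], None
    for x, filled in enumerate(col_has_ink):
        if filled and start is None:
            start = x                             # a character begins
        elif not filled and start is not None:
            chars.append(ink[:, start:x])         # a character ends
            start = None
    if start is not None:
        chars.append(ink[:, start:])
    return chars

# Synthetic 10x30 "image" with two dark blobs separated by blank columns.
img = np.full((10, 30), 255, dtype=np.uint8)
img[2:8, 3:9] = 0
img[2:8, 15:22] = 0
print(len(segment(img)))   # -> 2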

Tuesday, October 25, 2011

Blog: How Revolutionary Tools Cracked a 1700s Code

How Revolutionary Tools Cracked a 1700s Code
New York Times (10/25/11) John Markoff

A cipher dating back to the 18th century that was considered uncrackable was finally decrypted by a team of Swedish and U.S. linguists by using statistics-based translation methods. After a false start, the team determined that the Copiale Cipher was a homophonic cipher and attempted to decode all the symbols in German, as the manuscript was originally discovered in Germany. Their first step was finding regularly occurring symbols that might stand for the common German pair "ch." Once a potential "c" and "h" were found, the researchers used patterns in German to decode the cipher one step at a time. Language translation techniques such as expected word frequency were used to guess a symbol's equivalent in German. However, there are other, more impenetrable ciphers that have thwarted even the translators of the Copiale Cipher. The Voynich manuscript has been categorized as the most frustrating of such ciphers, but one member of the team that cracked the Copiale manuscript, the University of Southern California's Kevin Knight, co-published an analysis of the Voynich document pointing to evidence that it contains patterns that match the structure of natural language.
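
A toy first pass at such a cipher, shown below in Python, simply ranks cipher symbols by frequency and pairs them tentatively with the most frequent German letters; real decipherment, as with the Copiale, also exploits digraphs such as "ch" and iterates on partial guesses. The letter ranking and symbols here are illustrative assumptions.

from collections import Counter

# Toy first guess for a substitution-style cipher: map the most frequent cipher
# symbols to the most frequent German letters. Real homophonic decipherment
# needs much more (several symbols per letter, digraphs, iterative refinement).
GERMAN_LETTER_RANK = list("enisratdhulcgmobwfkzvpjyxq")   # rough frequency order

def first_guess(ciphertext_symbols):
    """ciphertext_symbols: list of symbol tokens transcribed from the manuscript."""
    ranked = [sym for sym, _ in Counter(ciphertext_symbols).most_common()]
    return {sym: GERMAN_LETTER_RANK[i] if i < len(GERMAN_LETTER_RANK) else "?"
            for i, sym in enumerate(ranked)}

toy_cipher = ["@", "#", "@", "%", "@", "#", "&", "@", "%", "#"]
print(first_guess(toy_cipher))   # most common symbol "@" mapped to "e", and so on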

Monday, October 24, 2011

Blog: XML Encryption Cracked, Exposing Real Threat to Online Transactions

XML Encryption Cracked, Exposing Real Threat to Online Transactions
Government Computer News (10/24/11) William Jackson

Ruhr-University Bochum researchers have demonstrated a technique for breaking the encryption used to secure data in online transactions, posing a serious threat to all currently used implementations of XML encryption. The attack can recover 160 bytes of a plain-text message in 10 seconds and decrypt larger amounts of data at the same pace, according to the researchers. The attack exploits weaknesses in the cipher-block chaining (CBC) mode of operation that is commonly used with many cryptographic algorithms, making it possible to also use the attack against non-XML implementations. "I would not be surprised to see variants of this attack applied to other protocols, when CBC mode is used in similar context," says the World Wide Web Consortium's (W3C's) Thomas Roessler. The researchers recommend fixing existing CBC implementations or developing secure new implementations without changing the XML Encryption standard. Roessler says such a change should be simple because the XML Encryption standard is not specific to any algorithm or mode of operation. He notes that W3C's XML Security Working Group is revising the set of mandatory algorithms used in XML Encryption to include only non-CBC modes of operation.
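
The Python sketch below (using the widely available cryptography package) illustrates the underlying malleability of CBC mode that chosen-ciphertext attacks build on: flipping a bit in one ciphertext block flips the same bit in the next decrypted block while garbling the tampered block. It demonstrates only the mode's weakness, not the Bochum researchers' actual attack on XML Encryption; the messages and key handling are invented.

import os
from cryptography.hazmat.backends import default_backend
from cryptography.hazmat.primitives.ciphers import Cipher, algorithms, modes

# CBC malleability demo: a bit flipped in ciphertext block i-1 flips the same
# bit in decrypted block i (and garbles block i-1). Without an integrity check,
# a tamperer can make targeted edits to the plaintext without knowing the key.
key, iv = os.urandom(16), os.urandom(16)
plaintext = b"TRANSFER ORDER: " + b"PAY 0000100 EUR "   # two 16-byte blocks

def cbc(data, op):
    c = Cipher(algorithms.AES(key), modes.CBC(iv), backend=default_backend())
    ctx = c.encryptor() if op == "enc" else c.decryptor()
    return ctx.update(data) + ctx.finalize()

ct = bytearray(cbc(plaintext, "enc"))
ct[4] ^= ord("0") ^ ord("9")          # flip bits at position 4 of block 0
tampered = cbc(bytes(ct), "dec")
print(tampered)                       # block 0 is garbage, block 1 now reads "PAY 9000100 EUR "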

Thursday, October 20, 2011

Blog: RAMCloud: When Disks and Flash Memory Are Just Too Slow

RAMCloud: When Disks and Flash Memory Are Just Too Slow
HPC Wire (10/20/11) Michael Feldman

Stanford University researchers have developed RAMCloud, a scalable, high-performance storage approach that can store data in dynamic random access memory and aggregate the memory resources of an entire data center. The researchers say the scalability and performance components make RAMCloud a candidate for high performance computing, especially with those applications that are data-intensive. "If RAMCloud succeeds, it will probably displace magnetic disk as the primary storage technology in data centers," according to the researchers, who are led by professor John Ousterhout. RAMCloud's two most important features are its ability to scale across thousands of servers and its extremely low latency. RAMCloud's latency is about 1,000 times lower than that of disk and about five times lower than that of flash. In addition, the researchers predict that RAMClouds as big as 500 terabytes can be built. Although there is no set timeline to turn RAMCloud into a commercial offering, the researchers do not foresee any technological hurdles.

Wednesday, October 12, 2011

Blog: Cops on the Trail of Crimes That Haven't Happened

Cops on the Trail of Crimes That Haven't Happened
New Scientist (10/12/11) Melissae Fellet

The Santa Cruz, Calif., police department recently started field-testing Santa Clara University-developed software that analyzes where crime is likely to be committed. The software uses the locations of past incidents to highlight likely future crime scenes, enabling police to target and patrol those areas with the hope that their presence might stop the crimes from happening in the first place. The program, developed by Santa Clara researcher George Mohler, predicted the location and time of 25 percent of burglaries that occurred on any particular day in an area of Los Angeles in 2004 and 2005, using just the data on burglaries that had occurred before that day. The Santa Cruz police department is using the software to monitor 10 areas for residential burglaries, auto burglaries, and auto theft. If the program proves to be effective in thwarting crime in areas that are known for their high crime rates, it can be applied to other cities, says University of California, Los Angeles researcher Jeffrey Brantingham, who collaborated on the algorithm's development.
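
As a loose illustration of place-based prediction (not Mohler's actual model), the Python sketch below scores grid cells by summing time-decayed contributions from past incidents and ranks the cells for patrol; the decay constant and data are invented.

from collections import defaultdict
from math import exp

# Toy hotspot scoring: each past incident raises the predicted risk of its grid
# cell, with influence decaying over time (loosely inspired by near-repeat
# patterns; the model used in the field trial is more sophisticated).
def risk_map(incidents, today, decay_days=14.0):
    """incidents: list of (day, cell) pairs; returns cell -> risk score."""
    risk = defaultdict(float)
    for day, cell in incidents:
        risk[cell] += exp(-(today - day) / decay_days)
    return risk

past = [(1, "A3"), (3, "A3"), (9, "A3"), (2, "B7"), (10, "C1")]
scores = risk_map(past, today=11)
patrol = sorted(scores, key=scores.get, reverse=True)[:2]
print(patrol)   # the two cells with the highest predicted risk, e.g. ['A3', 'C1']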

Tuesday, October 11, 2011

Blog: Father of SSL Says Despite Attacks, the Security Linchpin Has Lots of Life Left

Father of SSL Says Despite Attacks, the Security Linchpin Has Lots of Life Left
Network World (10/11/11) Tim Greene

Despite high-profile exploits, secure sockets layer/transport layer security (SSL/TLS), the protocol that safeguards e-commerce security, can remain viable through proper upgrades as they become necessary, says SSL co-creator Taher Elgamal in an interview. He says the problem is not rooted in SSL/TLS itself, but rather in the surrounding trust framework and the problems it causes when it comes time to patch the protocol to correct vulnerabilities. "If there is a way that we can separate who we trust from the vendor of the browsers, then that would be the best thing to do," Elgamal notes. "And the root of the trust should be the Internet with its built-in reputation ecosystem." Elgamal says that in such a scenario, if people were to notice that a specific certificate authority is issuing bad certificates, then the reputation system would jettison it immediately with no need to issue patches. What is needed is the construction of an automatic update mechanism, and Elgamal believes the technology to facilitate self-updating exists. "I hope people look for these things because honestly, every protocol will have roles for self-updating things," he notes. "Nothing will remain secure forever."

Blog: "Ghostwriting" the Torah?

"Ghostwriting" the Torah?
American Friends of Tel Aviv University (10/11/11)

Tel Aviv University (TAU) researchers have developed a computer algorithm that could help identify the different sources that contributed to the individual books of the Bible. The algorithm, developed by TAU professor Nachum Dershowitz, recognizes linguistic cues, such as word preference, to divide texts into probable author groupings. The researchers focused on writing style instead of subject or genre to avoid some of the problems that have vexed Bible scholars in the past, such as a lack of objectivity and complications caused by the multiple genres and literary forms found in the Bible. The software searches for and compares details that human scholars might have difficulty detecting, such as the frequency of the use of function words and synonyms, according to Dershowitz. The researchers tested the software by randomly mixing passages from the Hebrew books of Jeremiah and Ezekiel, and instructing the computer to separate them. The program was able to separate the passages with 99 percent accuracy, in addition to separating "priestly" materials from "non-priestly" materials. "If the computer can find features that Bible scholars haven't noticed before, it adds new dimensions to their scholarship," Dershowitz says.
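
The Python sketch below illustrates the general stylometric idea on a toy scale, assuming a small function-word list and a simple 2-means clustering: each passage is represented by its function-word frequencies, and passages with similar profiles are grouped together. The TAU algorithm's cues and method are richer than this.

import numpy as np

# Toy stylometric separation: function-word frequency vectors plus 2-means
# clustering. Word list, passages, and method are illustrative only.
FUNCTION_WORDS = ["the", "and", "of", "in", "to", "that", "which", "for"]

def style_vector(passage):
    tokens = passage.lower().split()
    return np.array([tokens.count(w) / len(tokens) for w in FUNCTION_WORDS])

def two_means(vectors, iters=20):
    centers = vectors[:2].copy()
    for _ in range(iters):
        labels = np.array([np.argmin([np.linalg.norm(v - c) for c in centers])
                           for v in vectors])
        for k in range(2):
            if (labels == k).any():
                centers[k] = vectors[labels == k].mean(axis=0)
    return labels

passages = [
    "and the word of the lord came to me in that day",
    "the hand of the lord was upon me and the spirit",
    "for thus says the lord to the house which stands",
    "in that day the word came and the vision was of the night",
]
vectors = np.array([style_vector(p) for p in passages])
print(two_means(vectors))   # passages grouped into two candidate style clusters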

Monday, October 10, 2011

Blog: Google Launches Dart as a JavaScript Killer

Google Launches Dart as a JavaScript Killer
IDG News Service (10/10/11) Joab Jackson

Google announced the launch of a preview version of Dart, an object-oriented Web programming language that has capabilities that resemble those of JavaScript but also addresses some of its scalability and organizational shortcomings. Google software engineer Lars Bak describes Dart as "a structured yet flexible language for Web programming." Dart is designed to be used both for quickly cobbling together small projects and for developing larger-scale Web applications. Programmers will be able to declare variables without specifying a data type, or add type annotations when they want them. A compiler and a virtual machine, along with a set of basic libraries, are part of the preview version. Initially, programmers will have to compile their Dart creations to JavaScript, using a tool included in the Dart package, to get them to run on browsers. However, Google would like future Web browsers to include a native Dart virtual machine for running Dart programs.

Sunday, September 25, 2011

Blog: Proton-Based Transistor Could Let Machines Communicate With Living Things

Proton-Based Transistor Could Let Machines Communicate With Living Things
UW News (09/20/11) Hannah Hickey

Researchers at the University of Washington have developed a transistor that uses protons, instead of electrons, to send information, which could enable electronic devices to communicate directly with living things. "We found a biomaterial that is very good at conducting protons, and allows the potential to interface with living systems," says Washington professor Marco Rolandi. A machine that was compatible with a living system could monitor body processes such as flexing muscles and transmitting brain signals. The prototype device is a field-effect transistor, with drain and source terminals for the current. "In our device, large bio-inspired molecules can move protons, and a proton current can be switched on and off, in a way that's completely analogous to an electronic current in any other field-effect transistor," Rolandi says. The device uses a modified form of the compound chitosan, originally extracted from squid pen, because it is very good at moving protons: it absorbs water and forms many hydrogen bonds between which the protons can easily move. "So we now have a protonic parallel to electronic circuitry that we actually start to understand rather well," Rolandi says.

Friday, September 23, 2011

Blog: New Mathematical Model to Enable Web Searches for Meaning

New Mathematical Model to Enable Web Searches for Meaning
University of Hertfordshire (09/23/11) Paige Upchurch

University of Hertfordshire computer scientist Daoud Clarke has developed a mathematical model based on a theory of meaning that could revolutionize artificial intelligence technologies and enable Web searches to interpret the meaning of queries. The model is based on the idea that the meaning of words and phrases is determined by the context in which they occur. "This is an old idea, with its origin in the philosophy of Wittgenstein, and was later taken up by linguists, but this is the first time that someone has used it to construct a comprehensive theory of meaning," Clarke says. The model provides a way to represent words and phrases as sequences of numbers, known as vectors. "Our theory tells you what the vector for a phrase should look like in terms of the vectors for the individual words that make up the phrase," Clarke says. "Representing meanings of words using vectors allows fuzzy relationships between words to be expressed as the distance or angle between the vectors." He says the model could be applied to new types of artificial intelligence, such as determining the exact nature of a particular Web query.
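
A minimal distributional sketch of the vector idea, with a toy corpus and context window assumed for illustration, is shown below in Python: a word's vector counts how often other words appear near it, and similarity in meaning shows up as a small angle (high cosine) between vectors. Clarke's theory additionally covers how the vectors of individual words compose into a vector for a whole phrase.

import numpy as np

# Minimal distributional semantics: context-count vectors and cosine similarity.
# The corpus and window size are toy choices for illustration.
corpus = ("the doctor treated the patient . the nurse treated the patient . "
          "the pilot flew the plane . the pilot landed the plane").split()
vocab = sorted(set(corpus))
index = {w: i for i, w in enumerate(vocab)}

def vector(word, window=2):
    v = np.zeros(len(vocab))
    for i, w in enumerate(corpus):
        if w == word:
            for j in range(max(0, i - window), min(len(corpus), i + window + 1)):
                if j != i:
                    v[index[corpus[j]]] += 1
    return v

def cosine(a, b):
    return a @ b / (np.linalg.norm(a) * np.linalg.norm(b))

print(cosine(vector("doctor"), vector("nurse")))   # higher: similar contexts
print(cosine(vector("doctor"), vector("plane")))   # lower: different contexts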

Wednesday, September 21, 2011

Blog: Robotics Team Finds Artificial Fingerprints Improve Tactile Abilities

Robotics Team Finds Artificial Fingerprints Improve Tactile Abilities
PhysOrg.com (09/21/11) Bob Yirka

National University of Singapore researchers have demonstrated how adding artificial fingerprints to robot fingers can increase tactile sensation, enabling the robot to discern the differences in the curvature of objects. The researchers, led by Saba Salehi, John-John Cabibihan, and Shuzhi Sam Ge, built a touch sensor consisting of a base plate, embedded sensors, and a raised ridged surface. The researchers tested the sensor in a variety of ways to determine whether it could sense things in different ways, specifically as it was applied to flat, edged, and curved objects. The researchers found that the ridged sensor provided more feedback information than one with a flat surface, so much so that they were able to distinguish the three types of objects with 95.7 percent accuracy.

Blog: Novel High-Performance Hybrid System for Semantic Factoring of Graph Databases

Novel High-Performance Hybrid System for Semantic Factoring of Graph Databases
Pacific Northwest National Laboratory (09/21/11) Kathryn Lang; Christine Novak

Researchers at the Pacific Northwest National Laboratory, Sandia National Laboratories, and Cray have developed an application that can analyze gigabyte-sized data sets. The application uses semantic factoring to organize data, revealing hidden connections and threads. The researchers used the application to analyze the massive datasets for the Billion Triple Challenge, an international competition aimed at demonstrating capability and innovation for dealing with very large semantic graph databases. The researchers utilized the Cray XMT architecture, which allowed all 624 gigabytes of input data to be held in RAM. The researchers are now developing a prototype that can be adapted to a variety of application domains and datasets, including working with the bio2rdf.org and future billion-triple-challenge datasets in prototype testing and evaluation.

Monday, September 19, 2011

Blog: Mining Data for Better Medicine

Mining Data for Better Medicine
Technology Review (09/19/11) Neil Savage

Researchers are utilizing digital medical records to conduct wide-ranging studies on the effects of certain drugs and how they relate to different populations. Data-mining studies also are being used to uncover evidence of economic problems, such as overbilling and unnecessary procedures. In addition, some large hospital systems are employing full-time database research teams to study electronic records. Stanford University researcher Russ Altman is developing tools to analyze the U.S. Food and Drug Administration's Adverse Event Reporting System, a database containing several million reports of drugs that have harmed patients. The Stanford researchers have developed an algorithm that searched for patients taking widely prescribed drugs who subsequently suffered side effects similar to those seen in diabetics. "There's just an incredibly wide range of possibilities for research from using all this aggregated data," says Margaret Anderson, executive director of FasterCures, a think tank in Washington, D.C. "We're asking, 'Why aren't we paying a little bit more attention to that?'"

Friday, September 16, 2011

Blog: Data May Not Compute

Data May Not Compute
Harvard Gazette (09/16/11) Alvin Powell

The fast pace of technology's advance has left some data behind: data stored on tapes, floppy disks, and other media that modern computers can no longer read is essentially lost. In addition, file formats change as new programs are developed, making older programs obsolete. To help save this lost data, Harvard University's Institute for Quantitative Social Science (IQSS) is leading the Dataverse Network Project, which provides archival storage for scientific research projects. IQSS provides professional archiving standards designed to ensure future access to data. Once a researcher's data is entered into the system, it is converted from its original file format into a basic one that ensures the information will remain readable for decades. When that format becomes obsolete, the system will automatically convert the data to a new format that also is designed to last for decades, says IQSS director Gary King. The institute currently hosts more than 350 individual researchers' Dataverses, which include about 40,000 studies and 665,000 files, according to IQSS' Merce Crosas. The software's open source design allows other researchers to add features that can be shared with the community of users.

Thursday, September 15, 2011

Blog: Intel Code Lights Road to Many-Core Future

Intel Code Lights Road to Many-Core Future
EE Times (09/15/11) Rick Merritt

With its release of open source code for a data-parallel version of JavaScript, Intel seeks to help mainstream programmers who use scripting languages tap the power of multicore processors. Intel's Justin Rattner says in an interview that there will be multiple programming models, and Parallel JS encompasses one such model. The language enhances performance for data-intensive, browser-based apps such as photo and video editing and three-dimensional gaming running on Intel chips. Rattner describes Parallel JS as "a pretty important step that gets us beyond the prevailing view that once you are beyond a few cores, multicore chips are only for technical apps." A later iteration of Parallel JS also will exploit the graphics cores currently incorporated into Intel's latest processors. In addition, Intel is working on ways to enhance modern data-parallel tools that run general-purpose programs on graphics processors, and those tools could be issued next year, Rattner says. He notes that beyond that, data-parallel methods require a more basic change to boost power efficiency by becoming more asynchronous.

Blog: Social Media for Dementia Patients

Social Media for Dementia Patients
SINTEF (09/15/11)

SINTEF researchers are developing a version of the popular Facebook social media site that offers a simpler user interface designed for elderly people and those with dementia. "Why should elderly people be excluded from the social media, which are the communication platform of the future?" says SINTEF researcher Tone Oderud. The researchers want to develop a Web-based communications application that is simple and secure for elderly users and people with dementia, as well as their relatives and caregivers. They say that social media can become an important tool for improving the quality of life of elderly people, while easing the burden on therapists and caregivers. In testing, the application has shown that simple contact between relatives and the support services improved all users' sense of security. "The tests have shown us that there is great potential for all in the fields of caregiving and digital communication," Oderud says.

Monday, September 12, 2011

Blog: In Plane View; using cluster analysis to discover what's normal

In Plane View
MIT News (09/12/11) Jennifer Chu

Massachusetts Institute of Technology professor John Hansman and colleagues have developed an airline health detection tool that identifies flight glitches without knowing ahead of time what to look for. The method uses cluster analysis, a type of data mining that filters data into subsets to find common patterns. Flight data outside the clusters is labeled as abnormal, enabling analysts to further inspect those reports to determine the nature of the anomaly. The researchers developed a data set from 365 flights that took place over one month. "The beauty of this is, you don't have to know ahead of time what 'normal' is, because the method finds what's normal by looking at the cluster," Hansman says. The researchers mapped each flight at takeoff and landing and found several flights that fell outside the normal range, mostly due to crew mistakes rather than mechanical flaws, according to Hansman. "To make sure that systems are safe in the future, and the airspace is safe, we have to uncover precursors of aviation safety accidents [and] these [cluster-based] analyses allow us to do that," says the U.S. National Aeronautics and Space Administration's Ashok Srivastava.
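
The Python sketch below shows the flavor of a cluster-based anomaly screen, using scikit-learn's DBSCAN on two made-up per-flight features; points that fall outside any dense cluster are labeled -1 and set aside for analyst review. The features, units, and parameters are assumptions, and the MIT tool works on full flight-data-recorder parameters rather than this toy summary.

import numpy as np
from sklearn.cluster import DBSCAN

# Toy cluster-based anomaly screen: each row summarizes one approach (airspeed
# in knots, altitude at a fixed gate in feet). Flights outside any dense cluster
# are labeled -1 by DBSCAN and flagged for review. Data and thresholds invented.
rng = np.random.default_rng(7)
normal = np.column_stack([rng.normal(135, 3, 60), rng.normal(1000, 40, 60)])
odd = np.array([[170.0, 1500.0], [120.0, 400.0]])          # two unusual approaches
flights = np.vstack([normal, odd])

scaled = (flights - flights.mean(axis=0)) / flights.std(axis=0)
labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(scaled)
print("abnormal flight indices:", np.where(labels == -1)[0])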

Friday, September 9, 2011

Blog: Google to Unveil 'Dart' Programming Language

Google to Unveil 'Dart' Programming Language
eWeek (09/09/11) Darryl K. Taft

Google plans to introduce a new programming language called Dart at the upcoming Goto conference. Dart is described as a structured Web programming language, and Google engineers Lars Bak and Gilad Bracha are scheduled to present it at Goto, which takes place Oct. 10-12 in Aarhus, Denmark. Bracha is the creator of the Newspeak programming language, co-author of the Java Language Specification, and a researcher in the area of object-oriented programming languages. Bak has designed and implemented object-oriented virtual machines, and has worked on Beta, Self, Strongtalk, Sun's HotSpot, OOVM Smalltalk, and Google's V8 engine for the Chrome browser. In 2009, Google introduced the experimental language Go in an attempt to combine the development speed of working in a dynamic language, such as Python, with the performance and safety of a compiled language such as C or C++.

Wednesday, September 7, 2011

Blog: New Forensics Tool Can Expose All Your Online Activity

New Forensics Tool Can Expose All Your Online Activity
New Scientist (09/07/11) Jamie Condliffe

Software developed by researchers from Stanford University can be used to bypass the encryption on a personal computer's hard drive to find what Web sites a user has visited and whether any data has been stored in the cloud. The team launched the Windows-based open source software, Offline Windows Analysis and Data Extraction (OWADE), at the Black Hat 2011 security conference. Windows protects most of the sensitive data on a hard drive, including browsing history, site logins, and passwords, using an encryption key generated by an algorithm from the standard Windows login. Elie Bursztein and colleagues discovered how to decrypt the files a year ago. OWADE combines their knowledge of how this system works with existing data-extraction techniques into a single forensics package. "We've built a tool that can reconstruct where the user has been online, and what identity they used," Bursztein says. Law enforcement would be able to use the tool to track sex offenders, but people who want to remain anonymous could potentially exploit the software and develop new ways of avoiding being caught.

Tuesday, September 6, 2011

Blog: NSA Extends Label-Based Security to Big Data Stores (key/value data stores - NoSQL)

NSA Extends Label-Based Security to Big Data Stores
IDG News Service (09/06/11) Joab Jackson

The U.S. National Security Agency (NSA) recently submitted Accumulo, new label-based data store software, to the Apache Software Foundation, hoping that more parties will continue to develop the technology for use in future secure systems. "We have made much progress in developing this project over the past [three] years and believe both the project and the interested communities would benefit from this work being openly available and having open development," say the NSA developers. Accumulo, which is based on Google's BigTable design, is a key/value data store, in which providing the system with the key will return the data associated with that key. Accumulo also can be run on multiple servers, making it a good candidate for big data systems. The system's defining feature is the ability to tag each data cell with a label, and a section called column visibility that can store the labels. "The access labels in Accumulo do not in themselves provide a complete security solution, but are a mechanism for labeling each piece of data with the authorizations that are necessary to see it," the NSA says. The new label-based storage system could be the basis of other secure data store-based systems, which could be used by healthcare, government agencies, and other parties with strict security and privacy requirements.
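
The Python sketch below illustrates the cell-level visibility idea in a simplified form: each cell carries a label, and a scan returns only the cells whose label is satisfied by the reader's authorizations. Real Accumulo visibility expressions and its API are richer than this toy evaluator, which handles only '&'-joined terms and parenthesized '|' groups.

# Simplified illustration of Accumulo-style cell visibility labels.
def visible(label, auths):
    """True if the reader's authorizations satisfy the label.
    Supports '&'-joined terms, where a term is an auth or '(a|b|c)'."""
    for term in label.split("&"):
        term = term.strip()
        if term.startswith("(") and term.endswith(")"):
            alternatives = {t.strip() for t in term[1:-1].split("|")}
            if not alternatives & auths:
                return False
        elif term not in auths:
            return False
    return True

# A toy scan: each cell carries (row, column, visibility label, value).
table = [
    ("patient42", "diagnosis", "phi&(doctor|nurse)", "hypertension"),
    ("patient42", "billing",   "finance",            "invoice #881"),
]

def scan(table, auths):
    return [(r, c, v) for r, c, vis, v in table if visible(vis, set(auths))]

print(scan(table, ["nurse", "phi"]))   # sees the diagnosis, not the billing cell
print(scan(table, ["finance"]))        # sees only the billing cell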

Saturday, September 3, 2011

Blog: Computers Can See You--If You Have a Mug Shot; quality of facial recognition systems

Computers Can See You--If You Have a Mug Shot
Wall Street Journal (09/03/11) Carl Bialik

Carnegie Mellon University (CMU) researchers recently presented data suggesting that facial recognition tools could identify individuals based on posed mug shots. The researchers demonstrated that, in principle, 33 percent of people photographed could be matched with a database of photos taken from Facebook. As part of the study, the researchers used images of 93 volunteers and compared them to Facebook photos of people on the CMU network. The results mean no one using facial-recognition software can claim "I can recognize any person in the U.S. at this very moment," says CMU's Ralph Gross. The problem is taking one image and comparing it to a wide set of images to find a single correct match. Comparing photos of just one person is easier and has achieved much more success. In a recent U.S. National Institute of Standards and Technology test, facial recognition software misidentified individuals in photographs just one percent of the time. Compared to Facebook images, closed-circuit (CC) TV images will probably be even more difficult to use with facial recognition systems, according to computer-vision experts. "Identifying faces in CCTV-quality images requires human experts," says University of Cambridge professor John Daugman.

Friday, August 26, 2011

Blog: Hanging Can Be Life Threatening

Hanging Can Be Life Threatening
AlphaGalileo (08/26/11)

Although testing and static code analysis are used to detect and remove bugs in a system during development, problems can still occur once a software system is in place and is being used in a real-world application. Such problems can cause one critical component of the system to hang without crashing the whole system and without being immediately obvious to operators and users until it is too late. Researchers at the Universita degli Studi di Napoli Federico II and at Naples company SESM SCARL have developed a software tool that offers non-obtrusive monitoring of systems, based on multiple sources of data collected at the operating system level. "Our experimental results show that this framework increases the overall capacity of detecting hang failures, it exhibits a 100 percent coverage of observed failures, while keeping low the number of false positives, less than 6 percent in the worst case," according to the researchers. They also say the response time, or latency, between a hang occurring and detection is about 0.1 seconds on average, while the impact on computer performance of running the hang-detection software is negligible.

Friday, August 12, 2011

Blog: Robot 'Mission Impossible' Wins Video Prize

Robot 'Mission Impossible' Wins Video Prize
New Scientist (08/12/11) Melissae Fellet

Free University of Brussels researchers have developed Swarmanoid, a team of flying, rolling, and climbing robots that can work together to find and grab a book from a high shelf. The robot team includes flying eye-bots, rolling foot-bots, and hand-bots that can fire a grappling hook-like device up to the ceiling and climb the bookshelf. Footage of the team in action recently won the video competition at the Conference on Artificial Intelligence. The robotic team currently consists of 30 foot-bots, 10 eye-bots, and eight hand-bots. The eye-robots explore the rooms, searching for the target. After an eye-bot sees the target, it signals the foot-bots, which roll to the site, carrying the hand-bots. The hand-bots then launch the grappling hooks to the ceiling and climb the bookshelves. All of the bots have light-emitting diodes that flash different colors, enabling them to communicate with each other. Constant communication enables Swarmanoid to adjust its actions on the fly, compensating for broken bots by reassigning tasks throughout the team.

Wednesday, August 10, 2011

Blog: Researcher Teaches Computers to Detect Spam More Accurately

Researcher Teaches Computers to Detect Spam More Accurately
IDG News Service (08/10/11) Nicolas Zeitler

Georgia Tech researcher Nina Balcan recently received a Microsoft Research Faculty Fellowship for her work in developing machine learning methods that can be used to create personalized automatic programs for deciding whether an email is spam or not. Balcan's research also can be used to solve other data-mining problems. Using supervised learning, the user teaches the computer by submitting information on which emails are spam and which are not, which is very inefficient, according to Balcan. Active learning enables the computer to analyze huge collections of unlabeled emails to generate only a few questions for the user. Active learning could potentially deliver better results than supervised learning, Balcan says. However, active learning methods are highly sensitive to noise, making this potentially difficult to achieve. Balcan plans to develop an understanding of when, why, and how different kinds of learning protocols help. "My research connects machine learning, game theory, economics, and optimization," she says.
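
The Python sketch below shows pool-based active learning by uncertainty sampling on synthetic data: rather than labeling every email, the learner queries the user only about the messages the current model is least sure of. The data, model, and query rule are illustrative assumptions; Balcan's work concerns the theory of when such protocols provably need fewer labels, including their sensitivity to noise.

import numpy as np
from sklearn.linear_model import LogisticRegression

# Toy pool-based active learning: query only the most uncertain "emails".
rng = np.random.default_rng(0)
X = rng.normal(0, 1, (500, 5))
true_w = np.array([2.0, -1.0, 0.5, 0.0, 0.0])
y = (X @ true_w + rng.normal(0, 0.3, 500) > 0).astype(int)   # hidden "spam" labels

labeled = list(np.where(y == 0)[0][:5]) + list(np.where(y == 1)[0][:5])
pool = [i for i in range(500) if i not in labeled]

model = LogisticRegression()
for _ in range(20):                            # 20 questions to the "user"
    model.fit(X[labeled], y[labeled])
    probs = model.predict_proba(X[pool])[:, 1]
    query = pool[int(np.argmin(np.abs(probs - 0.5)))]   # most uncertain email
    labeled.append(query)                      # the user supplies its label
    pool.remove(query)

model.fit(X[labeled], y[labeled])
print("accuracy after 30 labels:", model.score(X, y))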

Blog: How Computational Complexity Will Revolutionize Philosophy

How Computational Complexity Will Revolutionize Philosophy
Technology Review (08/10/11)

Massachusetts Institute of Technology computer scientist Scott Aaronson argues that computational complexity theory will have a transformative effect on philosophical thinking about a broad spectrum of topics such as the challenge of artificial intelligence (AI). The theory focuses on how the resources required to solve a problem scale with some measure of the problem size, and how problems typically scale either reasonably slowly or unreasonably rapidly. Aaronson raises the issue of AI and whether computers can ever become capable of human-like thinking. He contends that computability theory cannot provide a fundamental impediment to computers passing the Turing test. A more productive strategy is to consider the problem's computational complexity, Aaronson says. He cites the possibility of a computer that records all the human-to-human conversations it hears, accruing a database over time with which it can make conversation by looking up human answers to questions it is presented with. Aaronson says that although this strategy works, it demands computational resources that expand exponentially with the length of the conversation. This, in turn, leads to a new way of thinking about the AI problem, and by this reasoning, the difference between humans and machines is basically one of computational complexity.
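
The scaling Aaronson points to can be made concrete with a back-of-the-envelope count (the vocabulary size and conversation length below are illustrative assumptions):

\[
N(n) = V^{n}, \qquad V = 10^{4},\; n = 20 \;\Longrightarrow\; N = \left(10^{4}\right)^{20} = 10^{80},
\]

on the order of the number of atoms in the observable universe, which is why the lookup-table strategy "works" only if the resources it consumes are ignored.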

Tuesday, August 9, 2011

Blog: Phone Losing Charge? Technology Created by UCLA Engineers Allows LCDs to Recycle Energy

Phone Losing Charge? Technology Created by UCLA Engineers Allows LCDs to Recycle Energy
University of California, Los Angeles (08/09/2011) Matthew Chin; Wileen Wong Kromhout

University of California, Los Angeles (UCLA) researchers have created an energy-harvesting polarizer for liquid crystal displays (LCDs) that enables them to collect and recycle energy to power electronic devices. The photovoltaic polarizers can convert ambient light, such as sunlight, as well as the device's own backlight, into electricity. They boost the function of an LCD by simultaneously working as a polarizer, a photovoltaic device, and an ambient light photovoltaic panel. "In addition, these polarizers can also be used as regular solar cells to harvest indoor or outdoor light," says UCLA professor Yang Yang. "So next time you are on the beach, you could charge your iPhone via sunlight." Up to 75 percent of a typical device's backlight energy is lost through polarizers, but the UCLA polarizing organic photovoltaic LCD can recover much of that unused energy. "I believe this is a game-changer invention to improve the efficiency of LCD displays," Yang says. "In the near future, we would like to increase the efficiency of the polarizing organic photovoltaics, and eventually we hope to work with electronic manufacturers to integrate our technology into real products."
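
A back-of-the-envelope calculation shows what recovering polarizer losses could mean for battery life. All of the numbers below are assumptions for illustration, not UCLA's measured results:

    # Illustrative energy budget for an LCD backlight; every figure is an assumption.
    backlight_power_w = 1.0    # assumed backlight draw of a handheld device
    fraction_lost = 0.75       # article: up to 75% of backlight energy is lost in polarizers
    conversion_eff = 0.10      # assumed photovoltaic conversion efficiency of the polarizer
    recovered_w = backlight_power_w * fraction_lost * conversion_eff
    print(f"Recovered roughly {recovered_w * 1000:.0f} mW of a {backlight_power_w * 1000:.0f} mW backlight")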

Thursday, August 4, 2011

Blog: Wireless Network in Hospital Monitors Vital Signs

Wireless Network in Hospital Monitors Vital Signs
Washington University in St. Louis (08/04/11) Diana Lutz

Washington University in St. Louis researchers launched a prototype sensor network in Barnes-Jewish Hospital, with the goal of creating a wireless virtual intensive-care unit in which patients are free to move around. When the system is fully operational, sensors will monitor the blood oxygenation level and heart rate of at-risk patients once or twice a minute. The data is transmitted to a base station and combined with other data in the patient's electronic medical record. The data is analyzed by a machine-learning algorithm that looks for signs of clinical deterioration, alerting nurses to check on patients when those signs are found. The clinical warning system is part of a new wireless health field that could change the future of medicine, says Washington University in St. Louis computer scientist Chenyang Lu. In developing the system, the researchers focused on ensuring that the network would always function and never fail. The relay nodes are programmed as a self-organizing mesh network, so that if one node fails the data will be rerouted along another path. At the end of the trial, the researchers found that data were reliably received more than 99 percent of the time and the sensing reliability was 81 percent.
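
The rerouting behavior of a self-organizing mesh can be sketched with a breadth-first search that simply avoids failed relays. The topology below is invented, and this is a toy model rather than the firmware running in the hospital network:

    # Toy mesh rerouting: find a path from a sensor to the base station while
    # skipping failed relays. Invented topology; not the actual relay firmware.
    from collections import deque

    links = {
        "sensor": ["relay-A", "relay-B"],
        "relay-A": ["sensor", "base"],
        "relay-B": ["sensor", "relay-C"],
        "relay-C": ["relay-B", "base"],
        "base": ["relay-A", "relay-C"],
    }

    def route(src, dst, failed=frozenset()):
        queue, seen = deque([[src]]), {src}
        while queue:
            path = queue.popleft()
            if path[-1] == dst:
                return path
            for nxt in links[path[-1]]:
                if nxt not in seen and nxt not in failed:
                    seen.add(nxt)
                    queue.append(path + [nxt])
        return None

    print(route("sensor", "base"))                      # ['sensor', 'relay-A', 'base']
    print(route("sensor", "base", failed={"relay-A"}))  # detours via relay-B and relay-C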

Wednesday, August 3, 2011

Blog: Web Search Is Ready for a Shakeup, Says UW Computer Scientist

Web Search Is Ready for a Shakeup, Says UW Computer Scientist
UW News (08/03/11) Hannah Hickey

University of Washington (UW) professor Oren Etzioni recently called on the international academic community and engineers to be more ambitious in designing how users find information online. The main obstacle to progress "seems to be a curious lack of ambition and imagination," Etzioni writes. "Despite all the talent and the data they have, I don't think they've been ambitious enough." When IBM's Watson supercomputer recently beat the best human players at Jeopardy, it showed how far the technology has come in being able to answer complex questions. However, as the ability to perform intelligent searches increases, so does the demand. "People are going to be clamoring for more intelligent search and a more streamlined process of asking questions and getting answers," Etzioni says. Instead of looking for strings of text, a Web search engine should identify basic entities, such as people, places and things, and discover the relationships between them, which is the goal of UW's Turing Center. The center has developed ReVerb, an open source tool that uses Web information to determine the relationship between two entities.
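
The entity-and-relationship view of search can be illustrated with a naive triple extractor that pulls (argument, relation, argument) tuples out of simple sentences. This is a toy pattern matcher sketching the idea, not how ReVerb itself works, and the sentences and relation phrases are made up:

    # Naive (arg1, relation, arg2) extraction from simple declarative sentences.
    # A toy illustration of relation extraction; not ReVerb's actual algorithm.
    import re

    PATTERN = re.compile(
        r"^(.+?)\s+(was born in|works at|invented|is located in)\s+(.+?)\.?$"
    )

    sentences = [
        "Marie Curie was born in Warsaw.",
        "Oren Etzioni works at the University of Washington.",
    ]

    for sentence in sentences:
        match = PATTERN.match(sentence)
        if match:
            print(match.groups())    # e.g. ('Marie Curie', 'was born in', 'Warsaw')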

Wednesday, July 27, 2011

Blog: Protecting Networks Is Just a Game

Protecting Networks Is Just a Game
EurekAlert (07/27/11)

A defensive strategy for computer networks based on game theory is more effective than previous methods, says Iona College information technologist Heechang Shin, who developed an anti-hacking tool that plays a game of reality versus forecast. Called defensive forecasting, the tool wins when reality matches its forecast, and it then sends out an alert to block an attempted attack on the computer network. The tool works on real-time data flowing in and out of the network, rather than analyzing logs, and detects intrusions as they are happening. Shin's game theory model continuously trains the tool so that it can recognize the patterns of typical network attacks. To measure the tool's effectiveness, Shin compared it against a network intrusion detection system based on a support vector machine (SVM), one of the best classification methods for intrusion detection, using a semi-synthetic dataset generated from raw TCP/IP dump data that simulates a typical U.S. Air Force local-area network. During testing, the tool was as good as or better than the SVM-based system at detecting network intrusions, while adding the benefit of real-time detection.
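
The SVM baseline that the tool was measured against can be sketched in a few lines: train a classifier on labeled connection records, then flag new traffic the model labels as an attack. The features and data below are synthetic stand-ins, and this shows only the generic baseline approach, not Shin's defensive-forecasting tool:

    # Generic SVM intrusion-detection baseline (the comparison system, not Shin's tool).
    # Feature vectors and labels are synthetic stand-ins for real connection records.
    import numpy as np
    from sklearn.svm import SVC

    rng = np.random.default_rng(1)
    normal = rng.normal(loc=0.0, scale=1.0, size=(200, 4))  # e.g. bytes, duration, port stats
    attack = rng.normal(loc=3.0, scale=1.0, size=(40, 4))   # attacks look statistically different
    X = np.vstack([normal, attack])
    y = np.array([0] * 200 + [1] * 40)                      # 0 = normal, 1 = intrusion

    clf = SVC(kernel="rbf").fit(X, y)

    new_connection = rng.normal(loc=3.0, scale=1.0, size=(1, 4))
    if clf.predict(new_connection)[0] == 1:
        print("alert: possible intrusion")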

Tuesday, July 26, 2011

Blog: Crowdsourced Online Learning Gives Robots Human Skills

Crowdsourced Online Learning Gives Robots Human Skills
New Scientist (07/26/11) Jim Giles

Roboticists are experimenting with crowdsourcing to teach robots more general skills. By allowing users to pilot real or simulated robots over the Internet in trial experiments, the researchers hope to create machines that can simulate a human's flexibility and dexterity. "Crowdsourcing is a really viable path toward getting robots to do things that are useful for people," says Brown University's Chad Jenkins. Crowdsourcing also can be used to develop better human-robot interactions, says Worcester Polytechnic Institute's Sonia Chernova. She has led a team that developed Mars Escape, an online game in which two users each control an avatar, one human and one robot, to collect information on teamwork, social interaction, and communication. After more than 550 game sessions, the researchers looked for patterns in the data, such as methods that players frequently used to retrieve objects, and phrases they exchanged when doing so. The researchers then set up a mock real-life version of the game in which visitors were paired with a robot powered by software based on the Mars Escape data. During testing, most of the visitors said the robot behaved rationally and contributed to the team's success.

Monday, July 25, 2011

Blog: Sandia's CANARY Software Protects Water Utilities From Terrorist Attacks and Contaminants, Boosts Quality

Sandia's CANARY Software Protects Water Utilities From Terrorist Attacks and Contaminants, Boosts Quality
Sandia National Laboratories (07/25/11) Heather Clark

Researchers at Sandia National Laboratories and the U.S. Environmental Protection Agency have developed the CANARY Event Detection Software, an open source program that monitors public water systems to protect them from terrorist attacks or natural contaminants. The CANARY software tells utility operators whether something is wrong with their water system within minutes. CANARY can be customized for individual utility systems with their own sensors and software, according to Sandia's Sean McKenna. The researchers used algorithms to analyze data coming from multiple sensors and differentiate between natural variability and unusual patterns that indicate a problem. When new data is received, CANARY determines whether it is close enough to a known cluster to be considered normal or far enough away to be deemed anomalous. An unintended benefit of the software is that once utility operators better understand the data being sent by their sensors, they can make changes to the management of their water systems to improve overall water quality.
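
The cluster-distance test described above can be sketched directly: a new reading is anomalous if it falls too far from every cluster of historical, normal readings. The cluster centers, units, and threshold below are invented for illustration; this is not CANARY's actual event-detection code:

    # Toy cluster-distance anomaly check in the spirit of the description above.
    # Not CANARY's algorithms; cluster centers and the threshold are invented.
    import numpy as np

    # Hypothetical cluster centers learned from historical sensor data
    # (columns: chlorine level, pH, turbidity).
    centers = np.array([
        [1.0, 7.2, 0.3],
        [0.8, 7.0, 0.5],
    ])
    THRESHOLD = 1.0    # maximum allowed distance to the nearest cluster

    def is_anomalous(reading):
        distances = np.linalg.norm(centers - reading, axis=1)
        return distances.min() > THRESHOLD

    print(is_anomalous(np.array([0.9, 7.1, 0.4])))  # False: close to a known cluster
    print(is_anomalous(np.array([3.0, 5.0, 2.0])))  # True: far from every cluster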

Blog: Minority Rules: Scientists Discover Tipping Point for the Spread of Ideas

Minority Rules: Scientists Discover Tipping Point for the Spread of Ideas
RPI News (07/25/11) Gabrielle DeMarco

Rensselaer Polytechnic Institute researchers have found that when just 10 percent of a population holds an unshakable belief, that belief will eventually be adopted by the majority of the society. The researchers used computational and analytical methods to discover the tipping point at which a minority belief becomes the majority opinion. The research also found that the percentage of committed opinion holders required to shift majority opinion does not change significantly with the type of network in which the opinion holders are working. The researchers developed computer models of three types of social networks. The first network had each person connect to every other person in the network, the second model had a few individuals serve as hubs, and the third model gave every person in the network about the same number of connections. After the networks were constructed, the researchers planted a few "true" believers into each of the networks. As the true believers began to interact with the others in the network, the opinion of a majority of the individuals gradually, and then very rapidly, began to shift.
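
The committed-minority effect can be reproduced with a toy simulation in which 10 percent of agents never change their opinion and everyone else copies a randomly chosen peer. This is a simplified stand-in with illustrative parameters, not the researchers' exact binary-agreement model:

    # Toy committed-minority simulation: 10% of agents hold opinion 'A' and never
    # switch; the rest copy a random peer. Simplified illustration only.
    import random

    random.seed(0)
    N = 1000
    committed = set(range(int(0.10 * N)))          # unshakable believers in 'A'
    opinion = ["A" if i in committed else "B" for i in range(N)]

    for _ in range(200_000):
        listener = random.randrange(N)
        if listener in committed:
            continue                               # committed agents never change
        speaker = random.randrange(N)
        opinion[listener] = opinion[speaker]       # others copy a randomly chosen peer

    print("share holding opinion A:", opinion.count("A") / N)

Run long enough, the uncommitted population drifts toward the committed minority's opinion, which is the kind of tipping-point behavior the study quantifies.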

Blog: Cornell Computers Spot 'Opinion Spam'

Cornell Computers Spot 'Opinion Spam'
Cornell Chronicle (07/25/11) Bill Steele

Cornell University researchers have developed software that can identify opinion spam: phony positive reviews created by sellers to promote their own products, or negative reviews meant to undermine competitors. In a test of 800 reviews of Chicago-area hotels, the program was able to identify deceptive reviews with almost 90 percent accuracy. The researchers, led by professors Claire Cardie and Jeff Hancock, found that truthful hotel reviews were more likely to contain concrete words that had to do with the hotel, such as "bathroom," "check-in," or "price," while deceptive reviews contained scene-setting words, such as "vacation," "business trip," and "my husband." In general, deceivers use more verbs and honest reviewers use more nouns. The researchers found that the best results came from combining keyword analysis with an analysis of how certain words are combined in pairs. The next step will be to see if the system can be extended to other categories, such as restaurants and consumer products, says Cornell graduate student Myle Ott.
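
Combining keywords with word pairs corresponds to using unigram and bigram features in a standard text classifier. The sketch below shows that generic setup with invented training examples; it is not the Cornell system or its dataset:

    # Unigram + bigram text classification, the generic setup behind the approach
    # described above. Training reviews are invented; not the Cornell system.
    from sklearn.feature_extraction.text import CountVectorizer
    from sklearn.naive_bayes import MultinomialNB
    from sklearn.pipeline import make_pipeline

    reviews = [
        "the bathroom was clean and check-in was quick",      # truthful style: concrete details
        "great price and the room was quiet",                 # truthful style
        "my husband and I loved our vacation at this hotel",  # deceptive style: scene-setting
        "perfect for a business trip, we will be back",       # deceptive style
    ]
    labels = ["truthful", "truthful", "deceptive", "deceptive"]

    model = make_pipeline(
        CountVectorizer(ngram_range=(1, 2)),   # single words plus word pairs (bigrams)
        MultinomialNB(),
    )
    model.fit(reviews, labels)
    print(model.predict(["my husband enjoyed the vacation"]))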
