Friday, July 31, 2009

Blog: Behaviour of Building Block of Nature Could Lead to Computer Revolution

Behaviour of Building Block of Nature Could Lead to Computer Revolution
University of Cambridge (07/31/09)

Physicists from the Universities of Cambridge and Birmingham have demonstrated that electrons in narrow wires can split into two new particles called spinons and holons. Like-charged electrons repel each other and must adjust their movements to avoid getting too close to one another, an effect that is exacerbated in extremely narrow wires. It was theorized in 1981 that under these conditions, and at the lowest temperatures, the electrons would be permanently divided into spinons and holons. Observing this splitting required confining electrons in a quantum wire brought close enough to an ordinary metal that electrons in the metal could "jump" into the wire by quantum tunneling. The Cambridge and Birmingham physicists observed how an electron, on penetrating the quantum wire, split into spinons and holons by watching how the rate of jumping varied with an applied magnetic field. "Quantum wires are widely used to connect up quantum 'dots,' which may in the future form the basis of ... a quantum computer," notes Chris Ford with the University of Cambridge's Cavendish Laboratory. "Thus understanding their properties may be important for such quantum technologies, as well as helping to develop more complete theories of superconductivity and conduction in solids in general. This could lead to a new computer revolution."

View Full Article

Thursday, July 30, 2009

Blog: How Wolfram Alpha Could Change Software

How Wolfram Alpha Could Change Software
InfoWorld (07/30/09) McAllister, Neil

Wolfram Research's Wolfram Alpha software is described as a computational knowledge engine that employs mathematical methods to cross-reference various specialized databases and generate unique results for each query. Furthermore, Wolfram alleges that each page of results returned by the Wolfram Alpha engine is a unique, copyrightable work because its terms of use state that "in many cases the data you are shown never existed before in exactly that way until you asked for it." Works produced by machines are copyrightable, at least in theory. But for Wolfram Alpha to claim copyright protection for its query results, its pages must be such original presentations of information that they are eligible as novel works of authorship. Although Wolfram says its knowledge engine is driven by exclusive, proprietary sources of curated data, many of the data points it works with are actually commonplace facts. If copyright applies to Wolfram Alpha's output in certain instances, then by extension the same rules are relevant to every other information service in similar cases. Assuming that unique presentations based on software-based manipulation of mundane data can be copyrighted, the question remains as to who retains what rights to the resulting works.

View Full Article

Wednesday, July 29, 2009

Blog: A Better Way to Shoot Down Spam

A Better Way to Shoot Down Spam
Technology Review (07/29/09) Kremen, Rachel

The Spatio-temporal Network-level Automatic Reputation Engine (SNARE) is an automated system developed at the Georgia Institute of Technology that can spot spam before it hits the mail server. SNARE scores each incoming email according to features that can be gathered from a single data packet. The researchers say the system puts less pressure on the network and keeps the need for human intervention to a minimum while maintaining the same accuracy as conventional spam filters. Analysis of 25 million emails enabled the Georgia Tech team to compile characteristics that could be culled from a single packet of data and used to efficiently identify spam. They also learned that they could identify junk email by mapping the geodesic distance between the geographic locations of the sender's and receiver's Internet Protocol (IP) addresses, as spam tends to travel farther than legitimate email. The researchers also studied the autonomous system number (ASN) affiliated with an email. SNARE can spot spam in seven out of 10 instances, with a false positive rate of 0.3 percent. If SNARE is deployed in a corporate environment, the network administrator could establish rules about the disposition of email according to its SNARE score. Northwestern University Ph.D. candidate Dean Malmgren questions the effectiveness of SNARE once its methodology is widely known, as spammers could use a bogus IP address close to the recipient's to fool the system. Likewise, John Levine of the Coalition Against Unsolicited Commercial Email warns that "spammers are not dumb; any time you have a popular scheme [for identifying spam], they'll circumvent it."
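
The article does not include code, but the geodesic-distance idea is easy to illustrate. The sketch below computes the great-circle distance between the sender's and receiver's IP locations as one input to a spam score; the IP-to-coordinates table and the threshold are hypothetical placeholders, not SNARE's actual feature set or weights.

# Illustrative sketch of one SNARE-style single-packet feature: the geodesic
# (great-circle) distance between the sender's and receiver's IP locations.
# The coordinate table and the threshold below are hypothetical placeholders.
from math import radians, sin, cos, asin, sqrt

def haversine_km(lat1, lon1, lat2, lon2):
    """Great-circle distance between two points, in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

# Hypothetical IP-to-location table standing in for a real geolocation database.
GEO = {
    "203.0.113.7":  (37.77, -122.42),   # sender (documentation-range address)
    "198.51.100.9": (40.71,  -74.01),   # receiver (documentation-range address)
}

def distance_feature(sender_ip, receiver_ip):
    """Return the sender-receiver geodesic distance, one input to a spam score."""
    (lat1, lon1), (lat2, lon2) = GEO[sender_ip], GEO[receiver_ip]
    return haversine_km(lat1, lon1, lat2, lon2)

if __name__ == "__main__":
    km = distance_feature("203.0.113.7", "198.51.100.9")
    # SNARE combines several such features; a lone threshold like this is only illustrative.
    print("distance = %.0f km, suspicious = %s" % (km, km > 4000))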

View Full Article

Friday, July 24, 2009

Blog: Scale-Free Networks: A Decade and Beyond

Scale-Free Networks: A Decade and Beyond
Science (07/24/09) Vol. 325, No. 5939, P. 412; Barabasi, Albert-Laszlo

Early network models were predicated on the notion that complex systems were randomly interconnected, but research has shown that real networks exhibit perceptibly nonrandom features through what is termed the scale-free property. Moreover, this network architecture has demonstrated universality through its manifestation in all kinds of real networks, regardless of their age, scope, and function. All systems seen as complex are composed of an extremely large number of elements that interact through intricate networks. The scale-free nature of networks has been established thanks to improved maps and data sets as well as agreement between empirical data and analytical models that predict network structure. The existence of the scale-free property underscores that the structure and evolution of networks are inseparable, and has forced researchers to accept the concept that networks are in a constant state of flux due to the arrival of nodes and links. Universal topological traits, such as motifs, degree distributions, degree correlations, and communities, function as a platform for analyzing diverse phenomena and making predictions. By understanding the behavior of systems perceived as complex, researchers can anticipate such things as the Internet's response to attacks and cells' reactions to environmental changes. This requires an understanding of the dynamical phenomena taking place on networks.
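
As a concrete illustration of where the scale-free property comes from, the sketch below grows a network by the growth-plus-preferential-attachment mechanism associated with Barabasi's work: each new node links preferentially to already well-connected nodes, producing a heavy-tailed degree distribution. The network size and attachment parameter are arbitrary illustration values.

# Minimal sketch of growth with preferential attachment, the mechanism behind
# scale-free (power-law) degree distributions. Sizes here are arbitrary.
import random
from collections import Counter

def preferential_attachment(n_nodes=1000, m=2, seed=42):
    """Grow a network one node at a time; each new node links to m existing
    nodes chosen with probability proportional to their current degree."""
    random.seed(seed)
    # Start from a small fully connected core of m + 1 nodes.
    edges = [(i, j) for i in range(m + 1) for j in range(i)]
    # 'targets' lists each node id once per incident edge, so uniform sampling
    # from it is equivalent to sampling proportional to degree.
    targets = [node for edge in edges for node in edge]
    for new in range(m + 1, n_nodes):
        chosen = set()
        while len(chosen) < m:
            chosen.add(random.choice(targets))
        for old in chosen:
            edges.append((new, old))
            targets.extend((new, old))
    return edges

if __name__ == "__main__":
    edges = preferential_attachment()
    degree = Counter(node for edge in edges for node in edge)
    hist = Counter(degree.values())
    # A heavy tail (a few very high-degree hubs) is the scale-free signature.
    for k in sorted(hist)[:10]:
        print("degree %2d: %d nodes" % (k, hist[k]))
    print("max degree:", max(degree.values()))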

View Full Article - May Require Free Registration

Tuesday, July 21, 2009

Blog: Moore's Law Hits Economic Limits

Moore's Law Hits Economic Limits
Financial Times (07/21/09) P. 15; Nuttall, Chris

Much attention has been given to the approaching scientific limit to chip miniaturization and the continuation of Moore's Law, but an economic limit is nearing faster than the predicted scientific limit, according to some experts. "The high cost of semiconductor manufacturing equipment is making continued chipmaking advancements too expensive to use for volume production, relegating Moore's Law to the laboratory and altering the fundamental economics of the industry," says iSuppli's Len Jelinek. He predicts that Moore's Law will no longer drive volume chip production after 2014 because circuitry widths will dip to 20 nanometers or below by that date, and the tools to make those circuits will be too expensive for companies to recover their costs over the lifetime of production. The costs and risks associated with building new fabrication systems have already forced many producers of logic chips toward a fabless chip model, in which they outsource much of their production to chip foundries in Asia. At the 90nm level there were 14 chipmakers involved in fabrication, but at the 45nm level that number has been reduced to nine, and only two of them, Intel and Samsung, have plans to create 22nm factories. However, Intel's Andy Bryant says that as long as demand is maintained by consumers and businesses looking for the most advanced technology, Moore's Law, and the major investments it requires, will continue to make economic sense.

View Full Article

Blog: Yale Researchers Create Database-Hadoop Hybrid

Yale Researchers Create Database-Hadoop Hybrid
Computerworld (07/21/09) Lai, Eric

Yale University professor Daniel J. Abadi has led the development of HadoopDB, an open source parallel database management system (DBMS) that combines the data-processing capabilities of a relational database with the scalability of newer technologies such as Hadoop and MapReduce. HadoopDB was built from components of PostgreSQL, the Apache Hadoop data-processing framework, and Hive, the Hadoop-based warehouse project launched internally at Facebook. HadoopDB queries can be submitted either as MapReduce jobs or in SQL. Abadi says data processing is partially done in Hadoop and partially in "different PostgreSQL instances" spread out over several nodes in a shared-nothing cluster of machines. He says that unlike previously developed DBMS projects, HadoopDB is a hybrid not only at the language/interface level but also at the systems implementation level. Abadi says HadoopDB combines the best of both approaches, achieving the fault tolerance of massively parallel data infrastructures such as MapReduce, in which a single server failure has little effect on the overall grid, while performing complex analyses almost as quickly as existing commercial parallel databases. He says that as databases continue to grow, systems such as HadoopDB will "scale much better than parallel databases."
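
To make the two interfaces concrete, here is the same toy aggregation written both as SQL of the kind a Hive-style front end accepts and as map/reduce functions. The table name, columns, and the tiny in-memory runner are hypothetical stand-ins, not HadoopDB's actual API.

# Toy illustration of one aggregation expressed two ways: as SQL of the kind a
# Hive-style front end accepts, and as map/reduce functions. The table, the
# columns, and the in-memory runner are hypothetical stand-ins.
from collections import defaultdict

SQL_VERSION = """
SELECT visit_date, COUNT(*) AS page_views
FROM visits
GROUP BY visit_date
"""

def map_fn(record):
    """Emit (date, 1) for each visit record."""
    yield record["visit_date"], 1

def reduce_fn(key, values):
    """Sum the counts for one date."""
    return key, sum(values)

def run_mapreduce(records):
    grouped = defaultdict(list)
    for record in records:
        for key, value in map_fn(record):
            grouped[key].append(value)
    return [reduce_fn(key, values) for key, values in sorted(grouped.items())]

if __name__ == "__main__":
    visits = [{"visit_date": "2009-07-21", "url": "/a"},
              {"visit_date": "2009-07-21", "url": "/b"},
              {"visit_date": "2009-07-22", "url": "/a"}]
    print(run_mapreduce(visits))   # [('2009-07-21', 2), ('2009-07-22', 1)]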

View Full Article

Monday, July 20, 2009

Blog: How Networking Is Transforming Healthcare

How Networking Is Transforming Healthcare
Network World (07/20/09) Marsan, Carolyn Duffy

High-speed computer networks have the potential to transform the healthcare industry, according to Mike McGill, program director for Internet2's Health Sciences Initiative. Internet2's Health Network Initiative is a project to help medical researchers, educators, and clinicians see the possibilities of network applications in a medical setting. McGill notes, for instance, that Internet2 has demonstrated the ability to connect a patient with a remote psychiatrist over a telepresence link, which would go a long way toward fulfilling the U.S. Department of Veterans Affairs' mandate to provide care for wounded soldiers, who often reside in rural areas but are assigned psychiatrists based in urban areas. McGill points out that 120 of the 125 medical schools in the United States are Internet2 members, and he says that Internet2 is the designated national backbone for the Federal Communications Commission's Rural Health Care Pilot Program. Groups that the Health Network Initiative has spawned include those focusing on security, technical aspects, network resources, and education. McGill says the Obama administration is currently pushing "for electronic health records with very limited capability," and notes that the Health Sciences Initiative is "working on electronic health records that are backed up by lab tests and images, and that's a whole lot richer of an environment than just the textual record." Another project the initiative is focused on is the creation of a cancer biomedical informatics grid linking all U.S. cancer centers so that the research environment can be unified to exchange data. McGill describes the last mile and cultural resistance as the key challenges to electronic health information sharing.

View Full Article

Blog: Can Pen and Paper Help Make Electronic Medical Records Better?

Can Pen and Paper Help Make Electronic Medical Records Better?
IUPUI News Center (07/20/09) Aisen, Cindy Fox

Using pen and paper occasionally can make electronic medical records even more useful to healthcare providers and patients, concludes a new study published in the International Journal of Medical Informatics. The study, "Exploring the Persistence of Paper with the Electronic Health Record," was led by Jason Saleem, a professor in the Purdue School of Engineering and Technology at Indiana University-Purdue University Indianapolis. "Not all uses of paper are bad and some may give us ideas on how to improve the interface between the healthcare provider and the electronic record," Saleem says. In the study of 20 healthcare workers, the researchers found 125 instances of paper use, which were divided into 11 categories. The most common reasons for using paper workarounds were efficiency and ease of use, followed by paper's capabilities as a memory aid and its ability to alert others to new or important information. For example, a good use of paper was the issuing of pink index cards to newly arrived patients at a clinic who had high blood pressure. The information was entered into patients' electronic medical records, but the pink cards allowed physicians to quickly identify a patient's blood pressure status. Noting that electronic systems can alert clinicians reliably and consistently, the study recommends that designers of these systems consider reducing the overall number of alerts so healthcare workers do not ignore them due to information overload.

View Full Article

Blog: Can Computers Decipher a 5,000-Year-Old Language?

Can Computers Decipher a 5,000-Year-Old Language?
Smithsonian.com (07/20/09) Zax, David

One of the greatest mysteries of the ancient world is the meaning of the Indus civilization's language, and University of Washington, Seattle professor Rajesh Rao is attempting to crack the 5,000-year-old script using computational techniques. He and his colleagues postulated that such methods could reveal whether or not the Indus script encodes language by measuring how random the choice of the next symbol is given the symbols that precede it, a quantity known as conditional entropy. Rao's team used a computer program to measure the script's conditional entropy, and then measured the conditional entropy of several natural languages, the artificial Fortran computer programming language, and non-linguistic systems such as DNA sequences. Comparing these measurements showed that the Indus script's conditional entropy most closely resembled that of the natural languages. Following the publication of the team's findings in the May edition of Science, Rao and colleagues are studying longer strings of characters than they previously examined. "If there are patterns, we could come up with grammatical rules," Rao says. "That would in turn give constraints to what kinds of language families" the Indus script might belong to.
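
Conditional entropy is straightforward to compute from bigram counts, as the minimal sketch below shows. The toy sequence is an arbitrary stand-in, not actual Indus data, and real analyses would smooth the counts and use far larger corpora.

# Minimal sketch of bigram conditional entropy H(next symbol | current symbol),
# the randomness measure described in the article. The toy sequence is an
# arbitrary stand-in, not actual Indus script data.
import random
from collections import Counter
from math import log

def conditional_entropy(sequence):
    """Average uncertainty (in bits) of the next symbol given the current one."""
    pairs = list(zip(sequence, sequence[1:]))
    pair_counts = Counter(pairs)
    first_counts = Counter(first for first, _ in pairs)
    total = len(pairs)
    h = 0.0
    for (first, second), n in pair_counts.items():
        p_pair = n / total                      # P(first, second)
        p_cond = n / first_counts[first]        # P(second | first)
        h -= p_pair * log(p_cond, 2)
    return h

if __name__ == "__main__":
    random.seed(0)
    ordered = list("abababababababab")                 # highly predictable: low entropy
    shuffled = random.sample(ordered, len(ordered))    # scrambled: higher entropy
    print("ordered  :", round(conditional_entropy(ordered), 3))
    print("shuffled :", round(conditional_entropy(shuffled), 3))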

View Full Article

Blog: New Technology to Make Digital Data Self-Destruct

New Technology to Make Digital Data Self-Destruct
New York Times (07/20/09) Markoff, John

University of Washington computer scientists have developed software that enables electronic documents to automatically destroy themselves after a certain period of time. The researchers say the software, dubbed Vanish, will be needed as an increasing number of personal and business documents move from personal computers to centralized servers as the cloud computing trend grows. The concept of having digital information disappear after a period of time is not new, but the researchers say they have developed a unique approach that relies on "shattering" an encryption key and distributing the pieces widely across a peer-to-peer file-sharing system. Vanish uses key-based encryption in a new way, allowing a message that can be read today to become permanently unreadable at a specific point in the future, without fear that a third party will be able to recover the key needed to read it. The researchers say that pieces of the key "erode" over time as they are gradually used less and less. To make the keys erode, Vanish exploits the structure of peer-to-peer file systems, which are built on millions of personal computers that join and leave the network at will and whose Internet addresses change frequently. Because the key is never held in a single location, it is extremely difficult for an eavesdropper to reassemble it. A major advantage of Vanish is that it does not rely on the integrity of third parties, as other systems do.
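
The general idea of "shattering" a key can be sketched in a few lines: encrypt locally, split the key into shares that would be scattered to peer-to-peer nodes, and keep none of it. The real Vanish uses threshold (Shamir) secret sharing over a distributed hash table and proper ciphers; the XOR-based splitting and one-time-pad "encryption" below are only a toy illustration of the concept.

# Minimal sketch of the key-shattering idea behind Vanish: encrypt locally,
# split the key into shares meant to be scattered across a peer-to-peer
# network, and discard the key. The real system uses threshold secret sharing
# and real ciphers; the XOR scheme below is only a toy illustration.
import os

def xor_bytes(a, b):
    return bytes(x ^ y for x, y in zip(a, b))

def split_key(key, n_shares):
    """n-of-n XOR sharing: every share is needed to rebuild the key."""
    shares = [os.urandom(len(key)) for _ in range(n_shares - 1)]
    last = key
    for share in shares:
        last = xor_bytes(last, share)
    return shares + [last]

def combine_shares(shares):
    key = shares[0]
    for share in shares[1:]:
        key = xor_bytes(key, share)
    return key

if __name__ == "__main__":
    message = b"meet me at noon"
    key = os.urandom(len(message))
    ciphertext = xor_bytes(message, key)          # toy one-time-pad encryption
    shares = split_key(key, 8)                    # shares would go to P2P nodes
    del key                                       # the sender keeps only the ciphertext
    # While enough shares survive in the network, the message can be recovered...
    print(xor_bytes(ciphertext, combine_shares(shares)))
    # ...but once churn erases shares, reconstruction (and the message) is lost.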

View Full Article

Wednesday, July 15, 2009

Blog: What Is Google App Engine?

What Is Google App Engine?

Google App Engine lets you run your web applications on Google's infrastructure. App Engine applications are easy to build, easy to maintain, and easy to scale as your traffic and data storage needs grow. With App Engine, there are no servers to maintain: You just upload your application, and it's ready to serve your users.

You can serve your app from your own domain name (such as http://www.example.com/) using Google Apps. Or, you can serve your app using a free name on the appspot.com domain. You can share your application with the world, or limit access to members of your organization.

Google App Engine supports apps written in several programming languages. With App Engine's Java runtime environment, you can build your app using standard Java technologies, including the JVM, Java servlets, and the Java programming language—or any other language using a JVM-based interpreter or compiler, such as JavaScript or Ruby. App Engine also features a dedicated Python runtime environment, which includes a fast Python interpreter and the Python standard library. The Java and Python runtime environments are built to ensure that your application runs quickly, securely, and without interference from other apps on the system.
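
For concreteness, here is a minimal "hello world" request handler for the Python runtime of that era, using the webapp framework bundled with the SDK; the handler name and URL mapping are arbitrary examples.

# Minimal App Engine Python handler using the webapp framework that shipped
# with the 2009-era SDK. Handler name and URL mapping are arbitrary examples.
from google.appengine.ext import webapp
from google.appengine.ext.webapp.util import run_wsgi_app

class MainPage(webapp.RequestHandler):
    def get(self):
        self.response.headers['Content-Type'] = 'text/plain'
        self.response.out.write('Hello from App Engine!')

application = webapp.WSGIApplication([('/', MainPage)], debug=True)

def main():
    run_wsgi_app(application)

if __name__ == '__main__':
    main()

An app.yaml file maps request paths to this script, and the SDK's dev_appserver.py and appcfg.py tools run the app locally and upload it to Google's servers.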

With App Engine, you only pay for what you use. There are no set-up costs and no recurring fees. The resources your application uses, such as storage and bandwidth, are measured by the gigabyte, and billed at competitive rates. You control the maximum amounts of resources your app can consume, so it always stays within your budget.

App Engine costs nothing to get started. All applications can use up to 500 MB of storage and enough CPU and bandwidth to support an efficient app serving around 5 million page views a month, absolutely free. When you enable billing for your application, your free limits are raised, and you only pay for resources you use above the free levels.

_______________________________

Blog: Researchers to Spotlight Darknets at Black Hat

Researchers to Spotlight Darknets at Black Hat
PC World (07/15/09) Vamosi, Robert

Hewlett-Packard security researchers Billy Hoffman and Matt Wood are planning to demonstrate a darknet that runs entirely within a Web browser at the upcoming Black Hat USA conference. Their proof of concept, called Veiled, allows decentralized, private peer-to-peer communications like other darknets, but requires nothing more than a browser to participate. Hoffman and Wood say Veiled offers a number of advantages over other darknets. For instance, it is easier to use than darknets such as WASTE, which is used in the academic world to share data among researchers. In addition, Veiled has a feature called Web-in-Web that enables darknet users to create private Web pages with links to content that is only available inside the darknet. Like other darknets, Veiled cannot be viewed by average users on the Internet, which makes it well suited to users who want to publish pages hidden from a government, for example. Hoffman and Wood say their Black Hat USA talk will not provide full implementation details of Veiled, but they note that it should allow anyone with a passing knowledge of Web technology to create their own darknet.

View Full Article

Wednesday, July 8, 2009

Blog: Memristor Minds: The Future of Artificial Intelligence

Memristor Minds: The Future of Artificial Intelligence
New Scientist (07/08/09) Mullins, Justin

The lack of a rigorous mathematical foundation for electronics impelled engineer Leon Chua to develop one, which led to the formulation of the memristor--a theoretical fourth basic circuit element, alongside the resistor, capacitor, and inductor, in which electric charge and magnetic flux come together. Memristors have since been built in the laboratory, and their novel abilities might unlock key insights about the human brain that would be a tremendous step forward for the field of artificial intelligence. Advantages of memristors include rapid, nanosecond writing of data using a very small amount of energy, and retention of memristive memory even when the power is turned off. The most immediate potential application for memristors is as a flash memory replacement, while durability improvements should make memristors ideal for a superfast random access memory, says Hewlett-Packard (HP) Laboratories Fellow Stan Williams. The discovery that a slime mold behaves like a memristive circuit, memorizing a pattern of events without the aid of any neurons, inspired a physicist at the University of California, San Diego to construct a circuit capable of learning and predicting future signals. Much earlier, Chua had noticed a sharp similarity between synapse behavior and memristor response, leading to speculation that memristors might help engineer an electronic intelligence that can mimic the power of a brain. The U.S. Defense Advanced Research Projects Agency has embarked on a project to create "electronic neuromorphic machine technology that is scalable to biological levels." HP's Greg Snider has envisioned a field of cortical computing that focuses on the potential of memristors to imitate the interaction of the brain's neurons. He and Williams are working with Boston University scientists to devise hybrid transistor-memristor chips that aim to replicate some of the brain's thought processes.
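
In Chua's formulation (standard textbook form, not taken from the article), the memristor is defined by a relation between charge q and flux linkage phi, written here in LaTeX:

% Chua's definition: memristance M(q) links flux-linkage and charge.
\mathrm{d}\varphi = M(q)\,\mathrm{d}q
\quad\Longrightarrow\quad
v(t) = M\big(q(t)\big)\,i(t),
\qquad q(t) = \int_{-\infty}^{t} i(\tau)\,\mathrm{d}\tau .

In other words, the device acts as a resistor whose resistance depends on the history of the charge that has flowed through it, which is why its state survives a power cycle.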

View Full Article

Wednesday, July 1, 2009

Blog: Twente Researcher Develops Self-Learning Security System for Computer Networks

Twente Researcher Develops Self-Learning Security System for Computer Networks
University of Twente (07/01/09) Bruysters, Joost

University of Twente researcher Damiano Bolzoni has developed SilentDefense, an anomaly-based network intrusion detection system that could lead to a new generation of network security systems. There are two types of network intrusion detection systems. The first uses a database of known attacks to identify the signatures of commonly used methods, but such systems have difficulty stopping new attack methods. The second uses anomaly detection, essentially learning how the network is normally used and searching for any deviation from that standard pattern. Bolzoni says anomaly detection is not widely used because truly effective systems are not commercially available, a shortcoming he says SilentDefense will rectify. SilentDefense is based on self-learning algorithms, which significantly improve the accuracy of the system and reduce the odds of false positives. Bolzoni says the ideal network intrusion detection system is not one type or the other but a combination of the two; however, before such a system can be created, a better anomaly detection system needs to be developed.
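
To illustrate the anomaly-detection idea (learn what "normal" traffic looks like, then flag deviations), the sketch below builds a baseline of request payload lengths during training and flags statistical outliers. This is a generic illustration of the approach, not SilentDefense's actual algorithms, and the training data and threshold are invented for the example.

# Generic sketch of anomaly-based detection: learn a baseline of "normal"
# traffic during training, then flag requests that deviate strongly from it.
# This illustrates the idea only; it is not SilentDefense's actual algorithm.
from math import sqrt

class LengthAnomalyDetector(object):
    """Flags payloads whose length is far from the mean seen in training."""

    def __init__(self, threshold_sigmas=3.0):
        self.threshold = threshold_sigmas
        self.lengths = []

    def train(self, payload):
        self.lengths.append(len(payload))

    def is_anomalous(self, payload):
        n = len(self.lengths)
        mean = sum(self.lengths) / float(n)
        var = sum((x - mean) ** 2 for x in self.lengths) / float(n)
        std = sqrt(var) or 1.0
        return abs(len(payload) - mean) / std > self.threshold

if __name__ == "__main__":
    detector = LengthAnomalyDetector()
    for normal in ["GET /index.html", "GET /about.html", "GET /news.html"]:
        detector.train(normal)
    probe = "GET /search?q=" + "A" * 500     # an oversized, injection-style request
    print(detector.is_anomalous("GET /home.html"))  # False: looks like training data
    print(detector.is_anomalous(probe))              # True: far outside the baseline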
