Tuesday, December 30, 2008

Blog: Experts Uncover Weakness in Internet Security

Experts Uncover Weakness in Internet Security
Ecole Polytechnique Federale de Lausanne (12/30/08) Luy, Florence

Security researchers in Europe and California have discovered a vulnerability in the Internet digital certificate infrastructure that could allow attackers to forge certificates that are trusted by all common Web browsers. The weakness makes it possible to impersonate secure Web sites and email servers to perform undetectable phishing attacks. Whenever a small padlock appears in a browser window, the Web site being visited is secured using a digital certificate issued by a Certification Authority (CA). To ensure the certificate is authentic, the browser verifies its signature using cryptographic hash algorithms. The researchers discovered that one of these algorithms, known as MD5, can be misused. The first known flaw in MD5 was presented at the 2004 Crypto conference by Chinese researchers, who performed a collision attack and created two different messages with the same MD5 hash value. That initial attack was severely limited, but the European and California researchers have now found a much stronger collision attack. The new method proves it is possible to create a rogue CA that is trusted by all major Web browsers. A rogue CA, combined with a known vulnerability in the Domain Name System protocol, could allow attackers to launch virtually undetectable phishing attacks. The researchers say MD5 can no longer be trusted as a secure cryptographic algorithm for use in digital signatures and certificates. Arjen Lenstra, head of EPFL's Laboratory for Cryptologic Algorithms, says the developers of the major Internet browsers have been informed of the vulnerability.
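
As a practical illustration of the fallout (not the researchers' attack itself), a site operator can check whether a certificate is still signed with MD5. The sketch below assumes the third-party Python "cryptography" package, and "server.pem" is a hypothetical certificate file:

```python
# Minimal sketch: flag certificates whose signatures rely on MD5.
# Assumes the third-party "cryptography" package; "server.pem" is a
# hypothetical certificate file.
from cryptography import x509
from cryptography.hazmat.primitives import hashes

with open("server.pem", "rb") as f:
    cert = x509.load_pem_x509_certificate(f.read())

if isinstance(cert.signature_hash_algorithm, hashes.MD5):
    print("WARNING: certificate signed with MD5 -- forgeable via collisions")
else:
    print("Signature hash:", cert.signature_hash_algorithm.name)
```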

View Full Article

Friday, December 19, 2008

Blog: Computing in a Molecule; designing a logic gate with 30 atoms

Computing in a Molecule
ICT Results (12/19/08)

Scientists from 15 European academic and industrial research institutions are working on the European Union-funded Pico-Inside project, which was established to develop a molecular replacement for transistors. The researchers, led by Christian Joachim of the Centre for Material Elaboration & Structural Studies at the French National Scientific Research Centre, say the use of molecular-sized computer components could lead to atomic-scale computing. Joachim notes that nanotechnology focuses on shrinking parts down to the smallest size possible, while the Pico-Inside team is working from the opposite end by starting with the atom and determining if such a small bit of matter can be used to create a logic gate, memory source, or other component. "The question we have asked ourselves is how many atoms does it take to build a computer?" he says. "That is something we cannot answer at present, but we are getting a better idea about it." So far, the researchers have designed a logic gate with 30 atoms that performs the same task as 14 transistors. The researchers also have explored the architecture, technology, and chemistry needed to achieve computing at the molecular scale and to interconnect molecules. Project researchers have focused on two architectures, one that mimics the classical design of a logic gate, and another, more complex process that relies on changes to the molecule's conformation to execute logic-gate inputs and quantum mechanics to perform the computation.

View Full Article

Blog: Plugging a Password Leak [by web browsers]

Plugging a Password Leak
Technology Review (12/19/08) Kremen, Rachel

Researchers from Harvard University's Center for Research on Computation and Society, the University of California, Berkeley, and Stanford University have improved the security of browser-based automatic log-in procedures. The researchers focused on password managers built as browser bookmarklets that use JavaScript to automatically log a user in to a Web site. They identified a major flaw in this design: an attacker could trick a bookmarklet into revealing all of a user's passwords. Bookmarklet-based password managers generally store passwords on a central server, and when a user visits one of the registered sites the user is automatically logged in. However, the researchers found that a bookmarklet cannot be trusted to know which Web site the user is actually visiting, so a few lines of code are enough to trick the system into logging in to a malicious site. The researchers' solution checks the request's referrer header instead of the browser window's location, and the improved bookmarklet uses the Secure Sockets Layer (SSL) protocol to prevent the header from being easily forged. The researchers say that in the future a new browser feature called postMessage will enable browser windows to securely pass information back and forth, providing even better security than the SSL solution.
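
A rough server-side sketch of the referrer-check idea, assuming a Flask-style endpoint (the route, domains, and password store here are all hypothetical): instead of trusting the page location reported by the bookmarklet's JavaScript, the password server inspects the Referer header of the HTTPS request.

```python
# Minimal sketch of the referrer-check idea, assuming a Flask app.
# The route and password store are hypothetical.
from urllib.parse import urlparse

from flask import Flask, abort, jsonify, request

app = Flask(__name__)

# Hypothetical per-site password store; a real manager would keep this
# encrypted on the central server.
PASSWORDS = {"example-bank.com": "s3cret", "webmail.example.org": "hunter2"}

@app.route("/password")
def get_password():
    # Trust the Referer header of the HTTPS request rather than any
    # location string reported by client-side JavaScript.
    host = urlparse(request.headers.get("Referer", "")).hostname or ""
    if host not in PASSWORDS:
        abort(403)  # unknown or spoofed origin: reveal nothing
    return jsonify(password=PASSWORDS[host])
```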

View Full Article

Tuesday, December 16, 2008

Blog: Dartmouth Researchers Develop Computational Tool to Untangle Complex Data

Dartmouth Researchers Develop Computational Tool to Untangle Complex Data
Dartmouth News (12/16/08) Knapp, Susan

Dartmouth College researchers have developed the Partition Decoupling Method (PDM), a mathematical tool that can be used to untangle the underlying structure of time-dependent, interrelated, complex data. "With respect to the equities market, we created a map that illustrated a generalized notion of sector and industry, as well as the interactions between them, reflecting the different levels of capital flow, among and between companies, industries, sectors, and so forth," says Dartmouth professor Daniel Rockmore, who led the development effort. "In fact, it is this idea of flow, be it capital, oxygenated blood, or political orientation, that we are capturing." Capturing flow patterns is critical to understanding the subtle interdependencies among the components of complex systems. PDM analyzes the network of correlations using spectral analysis, combined with statistical learning tools, to uncover regions where the flow circulates more than would be expected at random. The result is a detailed analysis of the interrelations together with a wide view of the coarse-scale flow as a whole. Rockmore says PDM takes a different approach than similar programs designed to uncover how complex systems behave, and because it is not strictly hierarchical, PDM does not constrain interconnectivity.
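
The article gives no formulas, but its first step, spectral analysis of a correlation network, can be sketched in a few lines of NumPy; the synthetic data below is purely illustrative, and this is not the PDM algorithm itself:

```python
# Illustrative sketch of spectral analysis on a correlation network,
# not the Partition Decoupling Method itself. Data is synthetic.
import numpy as np

rng = np.random.default_rng(0)
returns = rng.standard_normal((250, 30))   # 250 days x 30 stocks (synthetic)

corr = np.corrcoef(returns, rowvar=False)  # 30x30 correlation "network"
eigvals, eigvecs = np.linalg.eigh(corr)    # spectral decomposition

# Large eigenvalues correspond to broad co-movements ("flows");
# comparing them against a random baseline hints at real structure.
print("top eigenvalues:", np.round(eigvals[-3:], 2))
```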

View Full Article

Blog: Dynamic Language Use Pops in Asia

Dynamic Language Use Pops in Asia
eWeek (12/16/08) Taft, Darryl K.

Dynamic programming languages such as PHP, Perl, JavaScript, Ruby, and Python have caught on in a big way in Asia, according to an Evans Data survey of more than 400 software developers in the Asia-Pacific region. Evans Data found that 88 percent of developers use dynamic languages at least some of the time, and more than 40 percent use one more than half of the time. JavaScript is the most widely used among Asian developers, while 45 percent use PHP in at least some of their projects. Overall, the use of dynamic languages is likely to remain steady in 2009, though the use of Perl could decline while the use of ActionScript rises. "Software developers are always looking for ways to shed unneeded complexity and outdated methodologies and move to approaches that make programming simpler and faster, especially as more and more development is Web-centric," says Evans Data CEO John Andrews. "The high use of dynamic languages in Asia Pacific is consistent with the high concentration of Web application development being conducted in that region." The study also found that more than 20 percent of developers plan to launch cloud projects within the next six months, and 60 percent expect their development for devices to increase.

View Full Article

Friday, December 12, 2008

Blog: The many-core performance wall; the "memory wall"

The many-core performance wall

Posted by Robin Harris; December 11th, 2008 @ 7:01 am

Many-core chips are the great hope for more performance, but Sandia National Laboratories simulations show they are about to hit a memory wall. How bad is it?

Thursday, December 11, 2008

Blog: W3C Upgrades Web Accessibility Standards; WCAG 2.0

W3C Upgrades Web Accessibility Standards
InternetNews.com (12/11/08) Adhikari, Richard

The World Wide Web Consortium (W3C) has released version 2.0 of its Web Content Accessibility Guidelines (WCAG), which are designed to help developers make Web sites and online content easier to access for users with disabilities. W3C plans to make WCAG 2.0 an international standard. "There were a lot of different guidelines for accessibility developed by different countries and organizations, so in WCAG 2.0 we had this huge effort to develop something that would work well across all these different needs and become like a unified international standard," says W3C Web Accessibility Initiative director Judy Brewer. "There's been considerable international interest in accepting this as a standard." So far, WCAG 2.0 has received support from Adobe, IBM, Boeing, Microsoft, the Chinese and Japanese governments, and the European Commission for Information Society and Media. WCAG 2.0 offers several improvements over the previous version, including support for new Web technologies. "WCAG 1.0 was specifically for HTML, but the Web uses other technologies now, and we wanted an updated standard that would cover any technologies and also give developers more flexibility," Brewer says.
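
As a toy illustration of what automated checking against one well-known WCAG requirement (text alternatives for images) might look like, the sketch below assumes the third-party beautifulsoup4 package; real WCAG 2.0 conformance covers far more than this single test:

```python
# Toy check for one well-known WCAG requirement: images need text
# alternatives. Assumes the third-party "beautifulsoup4" package;
# the HTML snippet is purely illustrative.
from bs4 import BeautifulSoup

html = '<img src="logo.png"><img src="chart.png" alt="Sales by quarter">'
soup = BeautifulSoup(html, "html.parser")

for img in soup.find_all("img"):
    if not img.get("alt"):
        print("missing alt text:", img.get("src"))
```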

View Full Article

Blog: Firefox Tops List of Most Known Vulnerabilities in Applications

SANS NewsBites Vol. 10 Num. 97

Fri, 12 Dec 2008

--Firefox Tops List of Most Known Vulnerabilities in Applications (December 11, 2008)

Whitelisting company Bit9 has compiled statistics on the applications with the most security vulnerabilities reported over the last year.

Mozilla's Firefox web browser versions 2 and 3 top the list with 40 reported flaws. Adobe Acrobat versions 8.1.1 and 8.1.2 follow with 31 reported flaws. Windows Live (MSN) Messenger versions 4.7 and 5.1 came in third with 19 flaws. Fourth and fifth place were taken by Apple iTunes versions 3.2 and 3.1.2 and Skype version 3.5.0.248, respectively.

http://www.vnunet.com/vnunet/news/2232492/firefox-tops-app-vulnerability

http://www.bit9.com/news-events/press-release-details.php?id=102

Wednesday, December 10, 2008

Blog: Mind-Controlled Robotic Limbs Become the Ants-Pants

Mind-Controlled Robotic Limbs Become the Ants-Pants
Computerworld Australia (12/10/08) Edwards, Katheryn

University of Technology Sydney (UTS) researchers have developed prosthetic limbs that respond to brain signals by mimicking the bioelectric signals the central nervous system uses to control muscle activity. Artificial intelligence researchers used the complex interactions between ants to develop a pattern-recognition formula for identifying bioelectric signals that can be used in human trials. Studying the behavior of social insects such as ants helps scientists understand the body's electrical signals, enabling them to create a robotic prosthesis that can be operated by human thought, says UTS Ph.D. student Rami Khushaba. Khushaba is developing a mathematical basis for identifying which biosignals relate to particular arm movements and where electrodes should be placed to capture those signals. Swarm intelligence algorithms were chosen for the prosthesis both because nature offers them in abundance and because they use multi-agent techniques to solve specific problems. "We can use the behavior of the ants to enhance the quality of the control systems that we employ with the robotic limbs," Khushaba says. The researchers create a map of the voluntary intent of the central nervous system by attaching sensors to the limb following an amputation and recording an electromyogram. Only a few seconds of data are needed to train the system to identify patterns in the raw data during the online testing phase. Khushaba says the biggest challenge to the system's success will be maintaining speed and accuracy.
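
The article does not detail Khushaba's formula, but the pipeline it describes, windowed biosignal features feeding a movement classifier, can be sketched as follows; the RMS feature and k-nearest-neighbors classifier are common defaults assumed here, not the researchers' actual swarm-tuned method:

```python
# Illustrative EMG-to-movement classification sketch, not the
# researchers' actual swarm-tuned method. Signals are synthetic.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier

rng = np.random.default_rng(1)

def rms_features(window):
    """Root-mean-square per electrode channel, a common EMG feature."""
    return np.sqrt(np.mean(window**2, axis=0))

# 200 labelled windows: 64 samples x 4 electrode channels each (synthetic).
windows = rng.standard_normal((200, 64, 4))
movements = rng.integers(0, 3, size=200)   # e.g., open / close / rotate

X = np.array([rms_features(w) for w in windows])
clf = KNeighborsClassifier(n_neighbors=5).fit(X[:150], movements[:150])
print("held-out accuracy:", clf.score(X[150:], movements[150:]))
```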

View Full Article

Monday, December 8, 2008

Blog: Microsoft develops open-source content-management system

Microsoft develops open-source content-management system

Mary Jo Foley, December 8th, 2008 @ 12:33 pm

Microsoft has developed and released via its CodePlex site an alpha version of a new open-source content-management system, codenamed "Oxite." Microsoft is positioning Oxite as more than just a blogging engine, claiming it can support even large Web sites and is customizable.

READ FULL STORY

Saturday, December 6, 2008

Blog: Thieves Winning Online War, Maybe Even in Your Computer

Thieves Winning Online War, Maybe Even in Your Computer
New York Times (12/06/08) P. A1; Markoff, John

Malware continues to overcome security professionals' efforts to defend against it. "Right now the bad guys are improving more quickly than the good guys," says SRI International's Patrick Lincoln. As businesses and individuals become increasingly involved in online communities, cybercriminals are given more opportunities to infect machines and commit crimes. The Organization for Security and Cooperation in Europe estimates that credit card thefts, bank fraud, and other online scams rob computer users of $100 billion annually. In late October, the RSA FraudAction Research Lab discovered a cache of 500,000 credit-card numbers and bank account log-ins that were stolen by a network of zombie computers run by an online gang. "Modern worms are stealthier and they are professionally written," says British Telecom chief security technology officer Bruce Schneier. "The criminals have gone upmarket, and they're organized and international because there is real money to be made." Meanwhile, malicious programs are becoming increasingly sophisticated, with some programs searching for the most recent documents on the assumption that they are the most valuable and others stealing log-in and password information for consumer finances. Microsoft researchers recently discovered malware that runs Windows Update after it infects a machine to ensure the machine is protected from other pieces of malware. Purdue University computer scientist Eugene Spafford is concerned that companies will cut back on computer security to save money. "In many respects, we are probably worse off than we were 20 years ago," he says, "because all of the money has been devoted to patching the current problem rather than investing in the redesign of our infrastructure."

View Full Article

Wednesday, December 3, 2008

Blog: How to Run a Million Jobs; megajobs, processes that involve thousands to millions of similar or identical, though still independent, jobs using different processors

How to Run a Million Jobs
International Science Grid This Week (12/03/08) Heavey, Anne; Williamson, Amelia; Abramson, David

Experts at the recent SC08 conference held a session to discuss emerging solutions for the challenges of running megajobs, workloads that involve thousands to millions of similar or identical, though still independent, jobs on different processors. Researchers want to be able to easily specify and manage such tasks, and to readily identify successful and failed jobs. The University of Chicago's Ben Clifford says that as tools and resources change, people describe their computing jobs differently. Some established job-management solutions contain a variety of features, but they tend to have high scheduling overhead and are inefficient at executing many short jobs on numerous processors. Other systems are designed specifically for the data-intensive, loosely coupled, high-throughput grid computing model, which works well for many thousands of jobs, both short and long. Ioan Raicu and Ian Foster, both of the University of Chicago and Argonne National Laboratory, have defined a class of applications called Many-Task Computing (MTC): applications composed of many tasks, both independent and dependent, that are "communication-intensive but not naturally expressed in Message Passing Interface," Foster says. Unlike high-throughput computing, MTC uses numerous computing resources over short periods of time to process tasks. Some computer systems are being adapted to run megajobs, including a new high-throughput, grid-style mode on IBM's Blue Gene/P supercomputer. Clifford adds that if users can break an application into separately schedulable, restartable, relocatable "application procedures," they only need a tool that describes how the pieces connect, making the jobs easier to run.
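
As a toy illustration of the scheduling-overhead point (not any of the grid systems discussed), Python's multiprocessing pool shows the standard remedy: hand each worker large batches of short jobs so per-job dispatch cost is amortized:

```python
# Toy illustration of amortizing per-job overhead when running many
# short, independent jobs; not any of the grid systems discussed.
from multiprocessing import Pool

def short_job(n):
    return n * n  # stands in for a tiny independent computation

if __name__ == "__main__":
    jobs = range(1_000_000)
    with Pool() as pool:
        # A large chunksize hands each worker thousands of jobs at a
        # time, so scheduling cost is paid once per batch, not per job.
        results = pool.map(short_job, jobs, chunksize=10_000)
    print(sum(results))
```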

View Full Article

Tuesday, December 2, 2008

Blog: A Human Approach to Computer Processing; "granular computing"

A Human Approach to Computer Processing
The University of Nottingham (12/02/08) De Cozar, Tara

University of Nottingham scientists are researching "granular computing," a computing paradigm that examines groups or sets of information, called information granules, rather than looking at each piece of information individually. Examining data in granules exposes new patterns and relationships, potentially leading to new types of computer modeling in a variety of fields. Nottingham professor Andrzej Bargiela says the granular approach to computing is inspired by the human thought process. "Creating abstractions from detailed information is essential to human knowledge, interaction, and reasoning," Bargiela says. "The human brain filters the flood of information and distils knowledge subconsciously." He says humans remember the message or purpose of information, not the specific details. For example, people remember conversations, but not every specific word, which would be the raw data in a computer system. Bargiela says a granular computing approach to information processing may lead to human-like information processing technology, and could provide a breakthrough in dealing with information overload in a wide variety of applications. Several Nottingham Ph.D. projects explore applications of granular computing, including urban traffic monitoring and control, job scheduling, timetabling, and protein classification.
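
Granular computing is a broader paradigm than any single algorithm, but a minimal flavor of "granulation" can be shown with clustering, where thousands of raw points are reduced to a handful of granules that downstream reasoning works with; the data and granule count below are purely illustrative:

```python
# Minimal flavor of granulation via clustering; granular computing is a
# broader paradigm, and this data and granule count are illustrative.
import numpy as np
from sklearn.cluster import KMeans

rng = np.random.default_rng(2)
points = rng.standard_normal((10_000, 2))   # raw, detailed data

granules = KMeans(n_clusters=5, n_init=10).fit(points)
# Downstream reasoning works with 5 granule centers, not 10,000 points.
print(np.round(granules.cluster_centers_, 2))
```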

View Full Article

Blog: New Approach Eliminates Software Deadlocks Using Discrete Control Theory

New Approach Eliminates Software Deadlocks Using Discrete Control Theory
University of Michigan News Service (12/02/08) Moore, Nicole Casal

University of Michigan (UM) researchers have developed Gadara, a software controller that can anticipate and prevent program deadlocks. "Previously, engineers would try to identify potential deadlocks through testing or program analysis and then go back and rewrite the program," says UM professor Stephane Lafortune. "The bug fixes were manual, and not automatic. Gadara automates the process." Yin Wang, a doctoral student working with Lafortune, says that problems such as deadlocks usually need to be solved by the original programmer, but Gadara's goal is to allow anyone to solve the problem. Deadlocks are becoming an increasingly pressing problem as multicore chips grow more common and complex, and as software programs perform more and more tasks simultaneously. Gadara works by analyzing a program to find potential deadlocks and inserting control logic into the program to ensure it does not deadlock. Lafortune says Gadara uses a unique combination of discrete control theory and compiler technology; the control theory provides the logic and allows Gadara to use feedback to prevent deadlocks, while the compiler technology, developed by UM professor Scott Mahlke, enables Gadara to operate on real-world applications.
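
The article describes Gadara's control-logic insertion only at a high level; as a simpler illustration of what preventing a deadlock means, the sketch below shows the classic two-lock deadlock and the lock-ordering discipline that avoids it (this is plain lock ordering, not Gadara's discrete-control method):

```python
# Classic two-lock deadlock and a simple fix via global lock ordering.
# Illustrates the problem Gadara targets, not Gadara's own technique.
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()

def transfer(first, second):
    # Deadlock-prone: two threads acquiring (a, b) and (b, a) can each
    # hold one lock and wait forever on the other.
    with first:
        with second:
            pass  # critical section

def transfer_ordered(first, second):
    # Fix: always acquire locks in one global order, so the circular
    # wait that deadlock requires can never form.
    lo, hi = sorted((first, second), key=id)
    with lo:
        with hi:
            pass  # critical section

t1 = threading.Thread(target=transfer_ordered, args=(lock_a, lock_b))
t2 = threading.Thread(target=transfer_ordered, args=(lock_b, lock_a))
t1.start(); t2.start(); t1.join(); t2.join()
print("no deadlock")
```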

View Full Article

Blog: They're Robots, but Not as We Know Them; the co-evolution of the brain and body in robots

They're Robots, but Not as We Know Them
Computerworld New Zealand (12/02/08) Hedquist, Ulrika

Neural networking was the focus of the 15th International Conference on Neuro-Information Processing in Auckland, where researchers discussed how a better understanding of the brain could lead to more intelligent computer systems. According to Nik Kasabov, director of the Knowledge Engineering and Discovery Research Institute at AUT University, neuro-information processing has real-life applications in medicine, cybersecurity, and intelligent robots. Researchers from the Okinawa Institute of Science and Technology in Japan showed off neuro-genetic robots. "Robots are now not only based on fixed rules about how to behave; they now have genes, similar to human genes, which affect their behavior, development and learning," said Kasabov. Researchers from the Honda Research Institute in Germany discussed the co-evolution of brain and body in robots, and robots that can change their shape were also on display. "They can evolve, in a similar way as [humans] evolve," said Kasabov.

View Full Article
