Wednesday, January 28, 2009

Blog: Microsoft Releases 'Web Sandbox' as Open Source

Microsoft Releases 'Web Sandbox' as Open Source
InternetNews.com (01/28/09) Johnston, Stuart J.

Microsoft's Live Labs has released the source code for Web Sandbox, a technology that it hopes will make Web sites safer from attack. Web Sandbox walls off the various parts of a Web page--such as maps, visit counters, and affiliate programs that run scripts--from each other. This isolation is accomplished by virtualizing each of these components, which in turn places tighter controls on what the components can do to one another. Web Sandbox does not require browser add-ons or changes, and will work on most Web browsers that support JavaScript. Despite the release of the source code, Microsoft is advising developers not to build production Web sites with Web Sandbox since the technology is still under development. However, Microsoft is urging developers to test Web Sandbox by trying to break through its security so that its protection can be strengthened. Analysts say that vulnerabilities created by Web 2.0 mashups make technology such as Web Sandbox important. "There's a need for more Web standards and interoperability [driven by] the fact that things like cross-site scripting attacks are becoming more common," says Gartner's Ray Valdes.
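
The isolation idea can be pictured with a short sketch. The Python fragment below is purely illustrative (Web Sandbox itself is a JavaScript technology that rewrites script code); it only shows the general pattern of virtualization: the untrusted widget receives a mediating proxy that exposes a small whitelisted API and is confined to one page element, and the real page object (a hypothetical "real_page" here) is never handed to widget code.

# Illustrative sketch only, not Microsoft's Web Sandbox code.
class PageProxy:
    def __init__(self, real_page, element_id):
        self._page = real_page            # real_page is a hypothetical page model
        self._element_id = element_id     # the widget is confined to this one element

    def get_text(self):
        return self._page.read(self._element_id)

    def set_text(self, value):
        self._page.write(self._element_id, str(value))

    def __getattr__(self, name):
        # any operation outside the whitelisted API is refused
        raise PermissionError("operation '%s' is not exposed to sandboxed widgets" % name)

def run_widget(widget_fn, real_page, element_id):
    widget_fn(PageProxy(real_page, element_id))   # widget code never touches real_page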

View Full Article

Blog: Weizmann Institute Scientists Create Working Artificial Nerve Networks

Weizmann Institute Scientists Create Working Artificial Nerve Networks
Weizmann Institute of Science (01/28/09)

At the Weizmann Institute of Science, Physics of Complex Systems Department professor Elisha Moses and former research students Ofer Feinerman and Assaf Rotem have created logic gate circuits made from living nerve cells grown in a lab. The researchers say their work could lead to an interface that links the brain and artificial systems using nerve cells created for that purpose. The cells used in the circuits are brain nerve cells grown in culture. The researchers grew a model nerve network in a single direction by getting the neurons to grow along a groove etched in a glass plate. Nerve cells in the brain are connected to a vast number of other cells through axons, and must receive a minimum number of incoming signals before they relay the signal. The researchers found a threshold, about 100 axons, below which the chance of a response was questionable. The scientists then used two thin stripes of about 100 axons each to create an AND logic gate. "We have been able to enforce simplicity on an inherently complicated system. Now we can ask, 'What do nerve cells grown in culture require in order to be able to carry out complex calculations?' " Moses says. "As we find answers, we get closer to understanding the conditions needed for creating a synthetic, many-neuron 'thinking' apparatus."
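
A toy threshold model makes the logic concrete. The numbers below are my own illustrative assumptions, not the Weizmann group's measured parameters: each input stripe contains about 100 axons that fire unreliably, and the output cell responds only when the combined input clears a threshold that a single stripe can never reach on its own.

# Toy threshold model of the AND gate described above (illustrative numbers only).
import random

AXONS_PER_STRIPE = 100

def stripe_signal(active):
    # number of axons in a stripe that actually deliver a spike
    if not active:
        return 0
    return sum(random.random() < 0.7 for _ in range(AXONS_PER_STRIPE))

def and_gate(input_a, input_b, threshold=110):
    # a single stripe (at most 100 axons) can never reach the 110-axon threshold,
    # so the output cell fires only when both stripes are active
    return stripe_signal(input_a) + stripe_signal(input_b) >= threshold

print(and_gate(True, True))    # True with very high probability
print(and_gate(True, False))   # always False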

View Full Article

Blog: Many Task Computing [MTC]: Bridging the Performance-Throughput Gap

Many Task Computing: Bridging the Performance-Throughput Gap
International Science Grid This Week (01/28/09) Raicu, Ioan; Foster, Ian; Zhao, Yong

Researchers from the University of Chicago, Argonne National Laboratory, and Microsoft have conceived of Many Task Computing (MTC), a methodology designed to tackle the kinds of applications not easily supported by clustered high-performance computing or high-throughput computing (HTC). MTC "involves applications with tasks that may be small or large, single or multiprocessor, compute-intensive or data-intensive," the researchers write. "The set of tasks may be static or dynamic, homogeneous or heterogeneous, and loosely- or tightly-coupled." MTC's distinction from HTC lies in the timescale of task completion and the fact that the nature of the applications is frequently data-intensive. Many resources are utilized over short intervals to perform many computational jobs, both dependent and independent. Loosely-coupled applications involved in MTC are communication-intensive but are not naturally represented through the use of the standard message-passing interface. Applications that run on or generate large data volumes cannot scale without sophisticated data management, which makes them organically complementary to MTC, the researchers say. They conclude that MTC's impact on science will be profound, noting that "we have demonstrated good support for MTC on a variety of resources from clusters, grids, and supercomputers through our work on Swift, a highly scalable scripting language/engine to manage procedures composed of many loosely-coupled components, and Falkon, a novel job management system designed to handle data-intensive applications with up to billions of jobs."
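
The shape of the workload is easy to picture with a small sketch. The Python below is not Swift or Falkon, just a minimal illustration of the many-task pattern: a large set of short, loosely coupled tasks submitted to a pool and collected as they complete, in whatever order they finish.

# Minimal many-task sketch in plain Python (not Swift/Falkon themselves).
from concurrent.futures import ProcessPoolExecutor, as_completed

def analyze(chunk):
    # stand-in for one small compute- or data-intensive task
    return sum(x * x for x in chunk)

def run_many_tasks(dataset, chunk_size=1000, workers=8):
    chunks = [dataset[i:i + chunk_size] for i in range(0, len(dataset), chunk_size)]
    results = []
    with ProcessPoolExecutor(max_workers=workers) as pool:
        futures = [pool.submit(analyze, c) for c in chunks]
        for f in as_completed(futures):    # tasks complete in any order
            results.append(f.result())
    return sum(results)

if __name__ == "__main__":
    print(run_many_tasks(list(range(100_000))))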

View Full Article

Tuesday, January 27, 2009

Blog: Game Provides Clue to Improving Remote Sensing

Game Provides Clue to Improving Remote Sensing
Duke University News & Communications (01/27/09) Merritt, Richard

Duke University researchers have developed an algorithm capable of determining the best strategy for winning a game of CLUE, a mathematical model that also could be used to help robotic mine sweepers find hidden explosives. Duke post-doctoral fellow Chenghui Cai says robotic sensors, like players in CLUE, take information from their surroundings to help the robot maneuver around obstacles and find its target. "The key to success, both for the CLUE player and the robots, is to not only take in the new information it discovers, but to use this new information to help guide its next move," Cai says. "This learning-adapting process continues until either the player has won the game, or the robot has found the mines." Artificial intelligence researchers call these situations "treasure hunt" problems, and have developed mathematical approaches to improving the chances of discovering the hidden treasure. Cai says the researchers found that players who implement the strategies based on the algorithm consistently outperform human players and other computer programs. Duke professor Silvia Ferrari, director of Duke's Laboratory for Intelligent Systems and Controls, says the algorithm is designed to maximize the ability to reach targets while minimizing the amount of movement.
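
The bookkeeping behind such a strategy is a Bayesian belief update. The sketch below is a generic illustration, not the Duke algorithm, and it assumes a perfectly reliable sensor: each negative observation rules out one room and renormalizes the probabilities over the rest, and the player or robot then heads for the most promising remaining cell.

# Generic "treasure hunt" belief update (illustrative; assumes perfect sensing).
def update_belief(belief, cell, found):
    # belief: dict mapping cell -> probability that the target is hidden there
    new = {}
    for c, p in belief.items():
        if found:
            likelihood = 1.0 if c == cell else 0.0
        else:
            likelihood = 0.0 if c == cell else 1.0   # the searched cell is ruled out
        new[c] = p * likelihood
    total = sum(new.values()) or 1.0
    return {c: p / total for c, p in new.items()}

belief = {room: 1 / 9 for room in range(9)}          # nine rooms, uniform prior
belief = update_belief(belief, cell=4, found=False)
print(belief[4], belief[0])                          # 0.0 and 0.125: room 4 ruled out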

Friday, January 23, 2009

Blog: New Insight Into How Bees See

New Insight Into How Bees See
Monash University (01/23/09) Blair, Samantha

Monash University bee researcher Adrian Dyer has made a discovery that could lead to improved facial-recognition systems: honeybees can learn to recognize human faces even when seen from different angles. "What we have shown is that the bee brain, which contains less than 1 million neurons, is actually very good at learning to master complex tasks," he says. "Computer and imaging technology programmers who are working on solving complex visual recognition tasks using minimal hardware resources will find this research useful." Dyer says bees use a mechanism of interpolating or image averaging previously seen views to recognize faces from new angles. His study found that the highly constrained neural resources of bees, which have brains only 0.01 percent the size of a human brain, have evolved so that they can process complex visual recognition tasks. "The relationships between different components of the object often dramatically change when viewed from different angles, but it is amazing to find the bees' brains have evolved clever mechanisms for problem solving which may help develop improved models for [artificial intelligence] face-recognition systems," Dyer says.
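
The interpolation-by-averaging mechanism Dyer describes can be caricatured in a few lines of NumPy. This is a hypothetical illustration, not the model used in the study: stored views of a face are averaged into a prototype, and a novel view is assigned to whichever prototype it is closest to.

# Illustrative view-averaging sketch (not the study's model); requires NumPy.
import numpy as np

def prototype(views):
    # average several same-sized grayscale views into one template
    return np.mean(np.stack(views), axis=0)

def recognize(new_view, prototypes):
    # label of the averaged template closest to the new, previously unseen view
    return min(prototypes, key=lambda label: np.linalg.norm(new_view - prototypes[label]))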

View Full Article

Blog: Fighting Malware: An Interview With Paul Ferguson

Fighting Malware: An Interview With Paul Ferguson
InfoWorld (01/23/09) Grimes, Roger A.

Trend Micro senior researcher Paul Ferguson says the sheer volume of malware today is incredible, and the real challenge is collecting data from as many points as possible and arranging the facts so that law enforcement can use that information as evidence. "The better job we can do collecting and normalizing the data up front, the easier it is to help law enforcement to get subpoenas and arrest warrants," Ferguson says. In Russia, Ukraine, and Eastern Europe, a few large organizations make the majority of the malware, though they pretend to be many small groups. Part of Ferguson's job involves correlating data to identify members of these groups through digital fingerprints. These groups generally use tried-and-true techniques. Their bots and worms are very similar and attacks often come from the same IP addresses, hosts, and DNS services. However, even these large groups use numerous freelance, low-level operators that provide specific skills. A major problem is that many of the larger players use policy holes to operate out in the open in countries like Russia where people such as Ferguson are powerless to stop them. Ferguson says much of the malware coming from China is actually from Russian groups that use the millions of unpatched PCs in China to launch attacks. He says most of the hacking in China, aside from the few professional criminal groups focusing on corporate espionage and the state-sponsored attacks on other governments, is actually social.

View Full Article

Tuesday, January 20, 2009

Blog: SANS Real-time Adaptive Security White Paper

SANS Real-time Adaptive Security White Paper

SANS NewsBites Vol. 11 Num. 5; 1/20/2009

Real-time Adaptive Security is the next step beyond an IPS implementation. It gives you full network visibility, provides context around events so you know which ones to investigate first, reduces your false positives dramatically, offers automated impact assessment, introduces automated IPS tuning, and more. Let SANS tell you how.

http://www.sans.org/info/37419

http://www.sourcefire.com/resources/downloads/secured/Sourcefire_RTA_wp.pdf

Saturday, January 17, 2009

Blog: Hot New Memory; computer circuits based on quantum packets of heat instead of electricity

Hot New Memory
Science News (01/17/09) Vol. 175, No. 2, P. 10; Barry, Patrick

Researchers say that computer circuits based on quantum packets of heat instead of electricity could use the heat generated by processors to perform computations and store information. Recent research into the physics of controlling the flow of heat packets has led to designs for heat-based diodes, transistors, and logic gates capable of performing "and," "or," and "not" operations. Baowen Li, a physicist at the National University of Singapore who designed the thermal memory with his colleague Lei Wang of the Renmin University of China in Beijing, says heat-based circuits could lead to a new science and technology in controlling heat flow. "This, we believe, will revolutionize our daily use of heat and can help human beings save energy and live in a more environmental world," Li says. The phonons in thermal circuits are discrete units of vibration in the atoms of a solid. The stronger the vibrations, the hotter the solid. In materials that conduct heat, phonons travel through the substance like electrons travel through electrical conductors. Li and Wang did not build an actual heat-based memory device. Instead, the researchers used computer simulations and theoretical calculations to prove that such a device is physically possible. Concentrated heat tends to dissipate over time, indicating that heat-based memory would be impossible, but Li and Wang showed that, under certain conditions, information stored as phonons can be preserved.

View Full Article

Thursday, January 15, 2009

Blog: How One Company Cleaned Up The Thumb Drive Attacks - And Learned A Lot In The Process.

How One Company Cleaned Up The Thumb Drive Attacks - And Learned A Lot In The Process.

SANS NewsBites Vol. 11 Num. 3; 1/15/2009

Here's the email I got in answer to "Why Did You Send People To SANS This Year When You Have A Ban On Training and Travel?"

Alan,

Take a closer look; you'll find that 12 or 13 people are coming from (company) to SANS in Orlando, not just my three. The others are coming from other divisions. Here's why. You remember the big wave of attacks last November where infections were spread by thumb drives. We got hit by that. It is amazing how often people use those things. It spread to dozens of Windows file servers, and from there jumped to thousands of workstation systems. Clogged our networks. It was so bad a lot of machines, including the ones on the top floor of this building, had to be taken offline - and that got some unwanted visibility from the CEO.

We called both our AV vendors but neither had a signature for this virus yet. It took a long time and a lot of pain before we found all the machines that were hit, stopped the spread to new machines, and got rid of the (expletive deleted) thing. The whole company - every US division and international.

So what does that have to do with my guys going to SANS? It turns out our CEO was in the UK visiting our facility there and somehow the topic of the virus came up, and our UK manager told him it had hardly been a problem at all in the UK. He said his security guys found it within a few minutes and cleaned it out. As you might imagine, the CEO's follow-up email to me was unpleasant. So I called my counterpart in the UK and asked him how he had dealt with the attack so easily. He told me one of his guys knew what to do immediately. He said he used the built-in Windows WMIC command to find systems with the malware processes running, and that also told him about the changes made by the malware. Then he used the reg command to remove an entry from the auto-start capabilities of infected machines to stop the malware from running on startup. He also said the reg command let him change the USB and CD/DVD autorun function to stop similar infections. After shutting down the malware and stopping it from spreading, he said he used a couple more techniques to clean up the infected machines quickly. I asked where his guy learned all that. He said at SANS, in a course called 504, which I later learned was your Hacker Exploits and Incident Handling class. I reported that back to our CEO. He told me to make sure every division had at least two people who knew those techniques. So, our travel ban was lifted for SANS.

==end==
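
For readers curious what that looks like in practice, here is a hedged reconstruction of the kind of commands the email describes, wrapped in Python so they can be scripted across machines. The host names, process name, and registry value name below are placeholders, not details from the actual incident; the WMIC process query, reg delete, and the NoDriveTypeAutoRun policy value are standard Windows mechanisms.

# Hedged reconstruction only: HOSTS, MALWARE, and "BadEntry" are placeholders.
import subprocess

HOSTS = ["PC-0451", "PC-0452"]      # hypothetical machine names
MALWARE = "malware.exe"             # placeholder process name

for host in HOSTS:
    # terminate the malicious process remotely via WMIC
    subprocess.run(["wmic", f"/node:{host}", "process",
                    "where", f"name='{MALWARE}'", "call", "terminate"])
    # remove its auto-start entry from the remote Run key
    subprocess.run(["reg", "delete",
                    rf"\\{host}\HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Run",
                    "/v", "BadEntry", "/f"])
    # disable autorun for all drive types (0xFF) to block reinfection from USB/CD
    subprocess.run(["reg", "add",
                    rf"\\{host}\HKLM\SOFTWARE\Microsoft\Windows\CurrentVersion\Policies\Explorer",
                    "/v", "NoDriveTypeAutoRun", "/t", "REG_DWORD", "/d", "255", "/f"])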

Blog: How We Are Tricked Into Giving Away Our Personal Information

How We Are Tricked Into Giving Away Our Personal Information
Swedish Research Council (01/15/09)

Organizations are poorly equipped to prevent attacks that target human error and weaknesses, says Stockholm University's Marcus Nohlberg, who says social engineering attacks have received little attention from researchers. Nohlberg's research has led to a more thorough understanding of the methods attackers use and what makes people and organizations vulnerable. He says the biggest problem is that information and proper training alone are not an effective deterrent. "There will always be a small group of people who do not do as they were taught," Nohlberg says. "The best thing is practical training, and it's probable that organizations will need to start running internal checks where they in fact create fictitious attacks in order to identify weaknesses." Social engineering is more expensive to the attacker, as it requires commitment and time, but software and technologies already exist that can interact with people automatically. Nohlberg warns of a time when programs target victims through digital forums such as Facebook, making social engineering attacks as easy and inexpensive as sending spam.

View Full Article

Wednesday, January 14, 2009

Blog: NIST Draft Publication Offers Guidelines for Safeguarding Personal Data

NIST Draft Publication Offers Guidelines for Safeguarding Personal Data

SANS NewsBites Vol. 11 Num. 4; 1/16/2009 (January 14, 2009)

The National Institute of Standards and Technology (NIST) has released a draft of Special Publication 800-122, "Guide to Protecting the Confidentiality of Personally Identifiable Information," to help government agencies decide how to best protect the information they retain. NIST makes several recommendations, including identifying and categorizing all personally identifiable information (PII) that the organization retains; limiting data retention to only what is necessary; applying a risk-based approach to data protection; and creating and implementing an incident response plan for breaches of PII. NIST is accepting public comment on the draft document through March 13, 2009.
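
As a toy illustration of the first recommendation (identifying and categorizing the PII an organization holds), a script along the following lines could scan a text export for common identifier patterns. This is my own minimal sketch, not an example from SP 800-122, and real PII discovery requires much more than regular expressions.

# Minimal, illustrative PII scan (not from NIST SP 800-122).
import re

PII_PATTERNS = {
    "ssn":   re.compile(r"\b\d{3}-\d{2}-\d{4}\b"),
    "email": re.compile(r"\b[\w.+-]+@[\w-]+\.[\w.]+\b"),
    "phone": re.compile(r"\b\d{3}[-.]\d{3}[-.]\d{4}\b"),
}

def categorize_pii(text):
    # return a count of matches per PII category found in the document
    return {name: len(pat.findall(text)) for name, pat in PII_PATTERNS.items()}

print(categorize_pii("Contact jane@example.com or 555-867-5309, SSN 123-45-6789"))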

http://gcn.com/Articles/2009/01/14/NIST-on-securing-personal-data.aspx?Page=2

http://csrc.nist.gov/publications/drafts/800-122/Draft-SP800-122.pdf

[Editor's Note (Northcutt): I am a big fan of NIST, and if you can take a few minutes to read the draft and comment, please do; broad input helps make the final work better. I think the title is wrong, however: there is less "protection" explained than "identification." They have a nice section on incident response for privacy incidents (section 5). There is a line in that section that government folks need to be aware of: PII incidents should be reported to US-CERT within one hour. They also mention the OECD guidelines in Appendix D. To this day, the OECD guidelines seem to be the clearest, most well-thought-out guidance on privacy I have seen.]

Tuesday, January 13, 2009

Blog: More Chip Cores Can Mean Slower Supercomputing, Sandia Simulation Shows

More Chip Cores Can Mean Slower Supercomputing, Sandia Simulation Shows
Sandia National Laboratories (01/13/09) Singer, Neal

Simulations at Sandia National Laboratories have shown that increasing the number of processor cores on individual chips may actually worsen the performance of many complex applications. The Sandia researchers simulated key algorithms for deriving knowledge from large data sets, which revealed a significant increase in speed when going from two to four cores, an insignificant increase from four to eight cores, and a decrease in speed when using more than eight cores. The researchers found that 16 cores performed barely as well as two, and using more than 16 cores caused a sharp decline as additional cores were added. The drop in performance is caused by a lack of memory bandwidth and contention between processors over the memory bus available to each processor. The lack of immediate access to individualized memory caches slows the process down once the number of cores exceeds eight, according to the simulation of high-performance computing by Sandia researchers Richard Murphy, Arun Rodrigues, and Megan Vance. "The bottleneck now is getting the data off the chip to or from memory or the network," Rodrigues says. The challenge of boosting chip performance while limiting power consumption and excessive heat continues to vex researchers. Sandia and Oak Ridge National Laboratory researchers are attempting to solve the problem using message-passing programs. Their joint effort, the Institute for Advanced Architectures, is working toward exaflop computing and may help solve the multicore problem.
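
A back-of-the-envelope model shows why the curve bends. The figures below are invented for illustration (they are not Sandia's simulation parameters): useful work is capped by a shared memory bus, and once the bus saturates, each additional core adds contention overhead instead of throughput.

# Illustrative scaling model only; bus_capacity and the contention penalty are
# made-up numbers chosen to show the shape of the curve, not measured values.
def modeled_throughput(cores, bus_capacity=6.0, contention_penalty=0.05):
    useful = min(cores, bus_capacity)                 # work is memory-bound past saturation
    oversubscribed = max(0.0, cores - bus_capacity)   # cores fighting over the shared bus
    return useful - contention_penalty * oversubscribed

for n in (1, 2, 4, 8, 16, 32):
    print(n, "cores ->", round(modeled_throughput(n), 2))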

View Full Article

Monday, January 12, 2009

Blog: Ruby on Rails on Track for Major Upgrades

Ruby on Rails on Track for Major Upgrades
InfoWorld (01/12/09) Krill, Paul

Ruby on Rails is expected to undergo significant changes in 2009, including an upgrade in January that will feature several enhancements and a merger with the Merb Web framework later in the year. The 2.3 release of the open source Rails framework features performance optimizations, customizable templates, memory savings, and the ability to write the most performance-dependent parts in Ruby. The update also will feature HTTP Digest Authentication, an API for authentication. Version 3.0 of Rails, which is expected sometime around May, will merge Rails with Merb, and the 2.3 release will serve as a precursor to the new version. For example, the respond_to block capability in Rails, which allows an application to respond to a single request with HTML, XML, or JavaScript, is 8 percent faster in Rails 2.3, an improvement made possible by Yehuda Katz, a new Rails core team member who previously worked on Merb. Rails 2.3 also features a new templates capability that enables the creation of templates already fitted with specific capabilities such as plug-ins. Rails 3.0 will contain several ideas from Merb, such as framework agnosticism that will work with Rails' emphasis on strong defaults, and routing for mapping browser requests.

View Full Article

Blog: Group Details 25 Most Dangerous Coding Errors Hackers Exploit

Group Details 25 Most Dangerous Coding Errors Hackers Exploit
Computerworld (01/12/09) Vijayan, Jaikumar

A group of 35 high-profile organizations, including the U.S. Department of Homeland Security and the National Security Agency's Information Assurance Division, has released a list of the 25 most serious programming errors. The goal is to focus attention on dangerous software-development practices and ways to avoid those practices, according to officials at the SANS Institute, which coordinated the list's creation. Releasing the list is intended to give software buyers, developers, and training programs a tool to identify programming errors known to create serious security risks. The list will be adjusted as necessary to accommodate new or particularly dangerous programming errors that might arise. The list is divided into three classes. Nine errors on the list are categorized as insecure interactions between components, another nine are classified as risky resource management errors, and the rest are considered "porous defense" problems. The top two problems are improper input validation and improper output encoding errors, which are regularly made by numerous programmers and are believed to be responsible for the attacks that compromised hundreds of thousands of Web pages and databases in 2008. Other programming errors on the list include failure to preserve SQL query structure (leading to SQL injection attacks), failure to preserve Web page structure (leading to cross-site scripting vulnerabilities), buffer-overflow mistakes, and overly chatty error messages that leak information to attackers.
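
Two of the listed errors, improper input validation and failure to preserve SQL query structure, are easy to show side by side. The snippet below is a generic Python/sqlite3 illustration, not an example taken from the CWE/SANS document.

# Generic illustration of SQL injection and its standard fixes.
import sqlite3

def find_user_unsafe(conn, username):
    # vulnerable: attacker-controlled text is spliced into the SQL statement
    return conn.execute(f"SELECT * FROM users WHERE name = '{username}'").fetchall()

def find_user_safe(conn, username):
    # validate the input, then let the driver bind it as a parameter
    if not username.isalnum():
        raise ValueError("unexpected characters in username")
    return conn.execute("SELECT * FROM users WHERE name = ?", (username,)).fetchall()

conn = sqlite3.connect(":memory:")
conn.execute("CREATE TABLE users (name TEXT)")
conn.execute("INSERT INTO users VALUES ('alice')")
print(find_user_safe(conn, "alice"))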

View Full Article

Thursday, January 8, 2009

Blog: Billion-Point Computing for Computers

Billion-Point Computing for Computers
UC Davis News and Information (01/08/09) Greensfelder, Liese

Researchers at the University of California, Davis (UC Davis) and Lawrence Livermore National Laboratory have developed an algorithm that will enable scientists to extract features and patterns from extremely large data sets. The algorithm has already been used to analyze and create images of flame surfaces, search for clusters and voids in a virtual universe experiment, and identify and track pockets of fluid in a simulated mixing of two fluids, which generated more than a billion data points on a three-dimensional grid. "What we've developed is a workable system of handling any data in any dimension," says UC Davis computer scientist Attila Gyulassy, who led the five-year development effort. "We expect this algorithm will become an integral part of a scientist's toolbox to answer questions about data." As scientific simulations have become increasingly complex, the data generated by these experiments has grown exponentially, making analyzing the data more challenging. One mathematical tool to extract and visualize useful features in data sets, called the Morse-Smale complex, has existed for nearly 40 years. The Morse-Smale complex partitions sets by similarity of features and encodes them into mathematical terms, but using it for practical applications is extremely difficult, Gyulassy says. The new algorithm divides data sets into parcels of cells and analyzes each parcel separately using the Morse-Smale complex. The results are then merged together, and as new parcels are created from merged parcels, they are analyzed and merged again. With each step, data that does not need to be stored in memory can be discarded, significantly reducing the computational power needed to run the calculations.
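
The divide-analyze-merge pattern the article describes looks roughly like the sketch below. It is only a schematic stand-in (the per-parcel "analysis" here is a trivial min/max, not a Morse-Smale complex computation), but it shows how partial results can be merged level by level so the full data set never has to sit in memory at once.

# Schematic divide-and-merge sketch (not the UC Davis algorithm).
def analyze_parcel(values):
    # stand-in for per-parcel feature extraction
    return {"min": min(values), "max": max(values)}

def merge(a, b):
    return {"min": min(a["min"], b["min"]), "max": max(a["max"], b["max"])}

def hierarchical_analysis(data, parcel_size=1024):
    results = [analyze_parcel(data[i:i + parcel_size])
               for i in range(0, len(data), parcel_size)]
    while len(results) > 1:      # merge pairwise, level by level
        results = [merge(results[i], results[i + 1]) if i + 1 < len(results) else results[i]
                   for i in range(0, len(results), 2)]
    return results[0]

print(hierarchical_analysis(list(range(10_000))))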

View Full Article

Tuesday, January 6, 2009

Blog: What Will Change Everything? Ask a Computer Scientist

What Will Change Everything? Ask a Computer Scientist
ITworldcanada.com (01/06/09) Schick, Shane

John Brockman's Edge.org Web site recently posed the question "What will change everything?" to a group of academics. The answer for computer scientist Roger Schank is a machine that provides knowledge as needed. Schank says information in enterprise databases or on personal computers should find us, rather than having people constantly search for it. Schank views information as stories rather than content, and envisions a future of just-in-time storytelling. "To put this another way, an archive of key strategic ideas about how to achieve goals under certain conditions is just the right resource to be interacting with enabling a good story to pop up when you need it," Schank says. He says goal-directed indexing is about organizing information so that it can be cross-referenced the next time an example of what users need comes up, and in the context of a story that users will understand or remember. Schank says researchers should begin to focus on how to monitor user behavior so that machines can understand their goals and index information appropriately. "We will all become much more likely to profit from humanity's collective wisdom by having a computer at the ready to help us think," he says.

View Full Article

Monday, January 5, 2009

Blog: MD5 Hash Algorithm Flaw Allows Fraudulent Certificates

MD5 Hash Algorithm Flaw Allows Fraudulent Certificates (December 30 & 31, 2008 & January 5, 2009)

SANS NewsBites Vol. 11 Num. 1; 1/6/2009

A vulnerability in the MD5 hash algorithm used to generate digital certificates could allow cyber criminals to generate fraudulent certificates. The phony certificates could be used to create phishing sites that would appear to browsers to be legitimate. The problem was the subject of a presentation at the Chaos Communication Congress in Berlin last month. Certificate authorities that use MD5 hashes should change to SHA1 hashes to protect their certificates' integrity. A number of certificate authorities are still using MD5, and some estimates say that 14 percent of all websites are using certificates generated with MD5.
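
To see which hash algorithm signed a given site's certificate, something like the following works. It is a sketch that assumes a recent version of the third-party Python "cryptography" package is installed; it is not part of the advisories linked below.

# Sketch: requires the third-party "cryptography" package (pip install cryptography).
import ssl
from cryptography import x509

def signature_hash(host, port=443):
    pem = ssl.get_server_certificate((host, port))        # fetch the site's certificate
    cert = x509.load_pem_x509_certificate(pem.encode())
    return cert.signature_hash_algorithm.name             # e.g. 'md5', 'sha1', 'sha256'

print(signature_hash("www.example.com"))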

http://isc.sans.org/diary.html?storyid=5590&rss

http://gcn.com/Articles/2008/12/31/SSL-certs-busted.aspx?p=1

http://www.securityfocus.com/news/11541

http://www.heise-online.co.uk/security/25C3-MD5-collisions-crack-CA-certificate--/news/112327

http://www.securityfocus.com/brief/880

[Editor's Note (Honan): This attack should not come as a major surprise as weaknesses in the MD5 hash algorithm have been known since 2004. The SANS Internet Storm Center has a good write up of the issue with a list of vendor statements regarding the status of their certificates at

http://isc.sans.org/diary.html?storyid=5590.

You can also use this site http://www.networking4all.com/nl/helpdesk/tools/site+check/ to check what SSL certificates are being used by a site you are visiting.]

Blog: MIT Professor Creates Software to Organize the Details of Everyday Life; List.it

MIT Professor Creates Software to Organize the Details of Everyday Life
Campus Technology (01/05/09) Schaffhauser, Dian

The computer can be a better tool for creating to-do lists and jotting down other information, says Massachusetts Institute of Technology (MIT) professor David Karger. Karger, a member of the MIT Computer Science and Artificial Intelligence Lab, has created List.it, Web-based note-taking software that makes it easier for people to write down short notes and find them later. Karger says people ultimately will spend less time entering, storing, and retrieving information, whether email addresses, Web URLs, or shopping lists, using List.it. List.it runs in the Firefox browser sidebar, and users can enter information on the fly via the quick input box. A synching feature backs up notes, and installing List.it on multiple computers mirrors notes to all of the machines. "I would never make the claim that we're trying to replace Post-its," says Michael Bernstein, a graduate student in Karger's lab. "We want to understand the classes of things people do with Post-its and see if we can help users do more of what they wanted to do in the first place."

View Full Article

Thursday, January 1, 2009

Blog: Web 3.0 Emerging

Web 3.0 Emerging
Computer (01/09) Vol. 42, No. 1, P. 88; Hendler, Jim

Web 3.0 is generally defined as Semantic Web technologies that run or are embedded within large-scale Web applications, writes Jim Hendler, assistant dean for information technology at Rensselaer Polytechnic Institute. He points out that 2008 was a good year for Web 3.0, based on the healthy level of investment in Web 3.0 projects, the focus on Web 3.0 at various conferences and events, and the migration of new technologies from academia to startups. Hendler says the past year has seen a clarification of emerging Web 3.0 applications. "Key enablers are a maturing infrastructure for integrating Web data resources and the increased use of and support for the languages developed in the World Wide Web Consortium (W3C) Semantic Web Activity," he observes. The application of Web 3.0 technologies, in combination with the Web frameworks that run Web 2.0 applications, is becoming the benchmark of the Web 3.0 generation, Hendler says. The Resource Description Framework (RDF), which links data from multiple Web sites or databases, serves as the foundation of Web 3.0 applications. Once the data is rendered in RDF, the development of multisite mashups is enabled by the use of uniform resource identifiers (URIs) for blending and mapping data from different resources. Relationships between data in different applications or in different parts of the same application can be deduced through the RDF Schema and the Web Ontology Language, facilitating the linkage of different datasets via direct assertions. Hendler writes that a key dissimilarity between Web 3.0 technologies and artificial intelligence knowledge representation applications lies in the Web naming scheme supplied by URIs, combined with the inferencing in Web 3.0 applications, which supports the generation of large graphs that can underpin large-scale Web applications.
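
The linking mechanism Hendler describes is easy to demonstrate with rdflib, a third-party Python RDF library (my choice for illustration; it is not mentioned in the column). Because both statements below use the same URI for the same person, triples originating in different datasets merge into a single graph automatically.

# Illustrative rdflib sketch; the example.org URIs are placeholders.
from rdflib import Graph, Literal, Namespace, URIRef
from rdflib.namespace import FOAF

EX = Namespace("http://example.org/")
person = URIRef("http://example.org/people/alice")   # shared identifier across datasets

g = Graph()
g.add((person, FOAF.name, Literal("Alice")))                              # from dataset A
g.add((person, EX.worksOn, URIRef("http://example.org/projects/web3")))   # from dataset B

for _, predicate, obj in g.triples((person, None, None)):
    print(predicate, obj)
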
View Full Article - Link to Publication Homepage
