Friday, May 30, 2008

Blog: 'Net Engineer Argues Firewalls Are a Security Distraction

'Net Engineer Argues Firewalls Are a Security Distraction
Computerworld Australia (05/30/08) Bell, Stephen

The focus on firewalls has led corporate network experts to spend less time on security in the end system, says Brian Carpenter, the former head of the Internet Engineering Task Force. Carpenter, currently a lecturer at the University of Auckland, discussed the history of the Internet and its challenges while giving the Institution of Engineering and Technology's annual Prestige lecture. During his address, "The Internet, where did it come from and where is it going?", Carpenter suggested that firewalls have slowed the Internet's momentum toward end-to-end transparency. He said the extended addressing scheme of IPv6 will eliminate the need for address translation, but noted that Internet users have become so accustomed to conventional firewalls that the devices are unlikely to disappear. There are some similarities between his view of end-to-end transfer of data and David Isenberg's concept of a "stupid" network, but Carpenter added that the edge of today's complex networks can be difficult to define, a point also made by Victoria University's John Hine. "The basic principle is still valid," Carpenter said. "It's not obvious that you will make money out of putting very complex services very deep in the network."
Click Here to View Full Article

Thursday, May 29, 2008

Blog: A Low-Cost Multitouch Screen

A Low-Cost Multitouch Screen
Technology Review (05/29/08) Greene, Kate

Microsoft recently demonstrated LaserTouch, a new multitouch platform whose hardware is inexpensive enough to retrofit almost any display into a touch screen. Microsoft believes that providing inexpensive hardware will make researchers more inclined to experiment with different form factors and develop multitouch software. LaserTouch uses a camera mounted on top of a computer display and two infrared lasers, positioned at the corners of the display, whose widely spread beams essentially create sheets of invisible light just above the surface. When a finger touches the screen, it breaks the plane of light, and the camera detects the scattered light. LaserTouch can be used on high-resolution displays designed for graphics applications such as photo and video editing, and because it can be fitted to any type of display it could also be used for office applications such as presentations. Beyond Microsoft, Mitsubishi Electric Research Laboratories has developed a touch table for business collaborations, Perceptive Pixel has built a wall-sized touch screen that supports multiple inputs, and an open-source touch-screen table design is publicly available for individuals who want to assemble their own.
Click Here to View Full Article
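
As a rough reconstruction of the detection step (not Microsoft's code; the function, threshold, and frame size below are invented for illustration), the camera sees a mostly dark scene until a fingertip scatters the laser sheet into a bright blob whose centroid gives the touch position:

```python
# A minimal sketch of the detection idea (my reconstruction, not
# Microsoft's code): the camera sees a dark frame until a fingertip
# scatters the laser sheet, producing a bright blob whose centroid
# gives the touch position.
import numpy as np

def find_touch(frame: np.ndarray, threshold: int = 200):
    """Return (x, y) of the bright blob's centroid, or None."""
    mask = frame > threshold          # pixels lit by scattered IR light
    if not mask.any():
        return None                   # nothing is touching the screen
    ys, xs = np.nonzero(mask)
    return xs.mean(), ys.mean()       # centroid of the bright region

# Toy example: a synthetic 480x640 "camera frame" with one touch blob.
frame = np.zeros((480, 640), dtype=np.uint8)
frame[100:110, 300:310] = 255
print(find_touch(frame))              # (304.5, 104.5)
```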

Blog: Monkeys Think, Moving Artificial Arm as Own

Monkeys Think, Moving Artificial Arm as Own
New York Times (05/29/08) P. A1; Carey, Benedict

University of Pittsburgh and Carnegie Mellon University brain-machine researchers have successfully implanted tiny sensors in two monkeys that enable them to control a mechanical arm using only their thoughts. The monkeys have been able to reach for and grab food, adjusting for its size and stickiness when necessary. The research suggests that brain-controlled prosthetics, while still impractical, are within reach. In previous studies, researchers demonstrated that paralyzed humans could learn to control a cursor on a computer screen with their brain waves, and that nonhuman primates could use their thoughts to move a mechanical arm, a robot hand, or a robot on a treadmill. The new research takes the technology even further. The monkeys' brains seem to have adopted the mechanical appendage as part of the body, refining its movement as it interacted with objects in real time. Experts say the findings are likely to accelerate interest in human testing, particularly because of the need to treat head and spinal injuries in veterans. In the experiment, the monkeys first used a joystick to get a feel for the arm, which has a shoulder joint, an elbow, and a two-fingered grasping claw. Then a grid about the size of a freckle was implanted just beneath the monkeys' skulls on the motor cortex, over a patch of cells known to signal arm and hand movements. The grid contains 100 tiny electrodes, each one connecting to a single neuron. The grid was connected to a computer programmed to analyze the firing of the motor neurons and translate those signals into arm movements. The scientists helped the monkeys learn to use the arm using biofeedback, but after several days the monkeys needed no help.
Click Here to View Full Article
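
The article doesn't specify which decoding algorithm the Pittsburgh group used; one classic approach to reading out such signals is population-vector decoding, sketched here purely for illustration (the cosine-tuning model and all parameters are assumptions):

```python
# Sketch of population-vector decoding, one classic way motor-cortex
# signals are read out; illustrative only, not the lab's actual decoder.
import numpy as np

rng = np.random.default_rng(0)
n_neurons = 100                       # one electrode per neuron, per the grid

# Each neuron fires most for movement along its "preferred direction".
preferred = rng.normal(size=(n_neurons, 3))
preferred /= np.linalg.norm(preferred, axis=1, keepdims=True)

def decode(firing_rates: np.ndarray) -> np.ndarray:
    """Weight each preferred direction by its neuron's firing rate."""
    v = firing_rates @ preferred      # the "population vector"
    return v / np.linalg.norm(v)      # direction of intended movement

# Simulate rates for an intended reach along +x, with cosine tuning.
intended = np.array([1.0, 0.0, 0.0])
rates = np.clip(preferred @ intended, 0, None)  # rectified cosine tuning
print(decode(rates))                  # approximately [1, 0, 0]
```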

Wednesday, May 28, 2008

Blog: 2008 Gödel Prize - Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time

ACM Group Honors Research Team for Helping Computers Solve Practical Problems
AScribe Newswire (05/28/08)

ACM's Special Interest Group on Algorithms and Computation Theory (SIGACT) has named Yale University professor Daniel A. Spielman and Boston University professor Shang-Hua Teng the winners of the 2008 Gödel Prize. Their paper, "Smoothed Analysis of Algorithms: Why the Simplex Algorithm Usually Takes Polynomial Time," helped explain the effectiveness of algorithms on real data and real computers for solving business and other practical problems. Spielman and Teng introduced smoothed analysis in 2001, and the technique has played a key role in research efforts since then. Their findings were published in the Journal of the ACM in 2004. SIGACT and the European Association for Theoretical Computer Science (EATCS) will present Spielman and Teng with the award, which honors outstanding papers in theoretical computer science, at the International Colloquium on Automata, Languages, and Programming (ICALP), which takes place July 6-13 in Reykjavik, Iceland. The prize carries a $5,000 cash award.
Click Here to View Full Article
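
For context, the smoothed complexity of an algorithm is its expected running time on a worst-case input that has been slightly perturbed at random. A sketch of the definition, with notation simplified from the Spielman-Teng paper (T(x) is the running time on input x, and g is Gaussian noise with standard deviation sigma):

$$\mathrm{Smoothed}_T(n, \sigma) \;=\; \max_{x \in \mathbb{R}^n,\ \|x\| \le 1} \; \mathbb{E}_{g \sim \mathcal{N}(0,\, \sigma^2 I)} \big[\, T(x + g) \,\big]$$

Spielman and Teng showed that the simplex method (with the shadow-vertex pivot rule) has smoothed complexity polynomial in n and 1/sigma, reconciling its exponential worst case with its fast behavior on real data.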

Monday, May 26, 2008

Security: How to Sell Security

How to Sell Security

Schneier on Security, May 26, 2008 at 05:57 AM

It's a truism in sales that it's easier to sell someone something he wants than something he wants to avoid. People are reluctant to buy insurance, or home security devices, or computer security anything. It's not that they never buy these things, but it's an uphill struggle.

The reason is psychological. And it's the same dynamic whether it's a security vendor trying to sell its products or services, a CIO trying to convince senior management to invest in security, or a security officer trying to implement a security policy with her company's employees.

It's also true that the better you understand your buyer, the better you can sell.

First, a bit about Prospect Theory, the underlying theory behind the newly popular field of behavioral economics. Prospect Theory was developed by Daniel Kahneman and Amos Tversky in 1979 (Kahneman went on to win a Nobel Prize for this and other similar work) to explain how people make trade-offs that involve risk. Before this work, economists had a model of "economic man," a rational being who makes trade-offs based on some logical calculation. Kahneman and Tversky showed that real people are far more subtle and ornery.

Here's an experiment that illustrates Prospect Theory. Take a roomful of subjects and divide them into two groups. Ask one group to choose between these two alternatives: a sure gain of $500 and a 50 percent chance of gaining $1,000. Ask the other group to choose between these two alternatives: a sure loss of $500 and a 50 percent chance of losing $1,000.

These two trade-offs are very similar, and traditional economics predicts that whether you're contemplating a gain or a loss doesn't make a difference: people make trade-offs based on a straightforward calculation of the relative outcome. Some people prefer sure things and others prefer to take chances. Whether the outcome is a gain or a loss doesn't affect the mathematics and therefore shouldn't affect the results. This is traditional economics, and it's called Utility Theory.

But Kahneman's and Tversky's experiments contradicted Utility Theory. When faced with a gain, about 85 percent of people chose the sure smaller gain over the risky larger gain. But when faced with a loss, about 70 percent chose the risky larger loss over the sure smaller loss.

This experiment, repeated again and again by many researchers across ages, genders, cultures, and even species, has always yielded the same result, and it rocked economics. Directly contradicting the traditional idea of "economic man," Prospect Theory recognizes that people have subjective values for gains and losses. We have evolved a cognitive bias: a pair of heuristics. One, a sure gain is better than a chance at a greater gain, or "A bird in the hand is worth two in the bush." And two, a sure loss is worse than a chance at a greater loss, or "Run away and live to fight another day." Of course, these are not rigid rules. Only a fool would take a sure $100 over a 50 percent chance at $1,000,000. But all things being equal, we tend to be risk-averse when it comes to gains and risk-seeking when it comes to losses.
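
A minimal numeric sketch of this asymmetry, using the value function Kahneman and Tversky later fitted to experimental data (the exponent 0.88 and loss-aversion coefficient 2.25 are their published estimates; the code itself is illustrative and not part of the original essay):

```python
# Sketch of the Kahneman-Tversky value function (parameters from their
# 1992 cumulative prospect theory estimates; illustrative only).
ALPHA = 0.88   # diminishing sensitivity to both gains and losses
LAMBDA = 2.25  # loss aversion: losses loom ~2.25x larger than gains

def value(x):
    """Subjective value of a gain or loss of x dollars."""
    return x ** ALPHA if x >= 0 else -LAMBDA * (-x) ** ALPHA

# Gain frame: sure $500 vs. 50% chance of $1,000
print(value(500) > 0.5 * value(1000))    # True: the sure gain feels better

# Loss frame: sure -$500 vs. 50% chance of -$1,000
print(0.5 * value(-1000) > value(-500))  # True: the gamble feels less bad
```

Note that the gamble wins in the loss frame purely because of the curvature of the value function: a loss twice as large feels considerably less than twice as bad.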

This cognitive bias is so powerful that it can lead to logically inconsistent results. Google the "Asian Disease Experiment" for an almost surreal example. Describing the same policy choice in different ways -- either as "200 lives saved out of 600" or "400 lives lost out of 600" -- yields wildly different risk reactions.

Evolutionarily, the bias makes sense. It's a better survival strategy to accept small gains rather than risk them for larger ones, and to risk larger losses rather than accept smaller losses. Lions, for example, chase young or wounded wildebeests because the investment needed to kill them is lower. Mature and healthy prey would probably be more nutritious, but there's a risk of missing lunch entirely if it gets away, and a small meal will tide the lion over until another day. Getting through today is more important than the possibility of having food tomorrow. Similarly, it is better to risk a larger loss than to accept a smaller one: because animals tend to live on the razor's edge between starvation and reproduction, any loss of food, whether small or large, can result in death, so the best option is to risk everything for the chance at no loss at all.

How does Prospect Theory explain the difficulty of selling the prevention of a security breach? It's a choice between a small sure loss -- the cost of the security product -- and a large risky loss: for example, the results of an attack on one's network. Of course there's a lot more to the sale. The buyer has to be convinced that the product works, and he has to understand the threats against him and the risk that something bad will happen. But all things being equal, buyers would rather take the chance that the attack won't happen than suffer the sure loss that comes from purchasing the security product.

Security sellers know this, even if they don't understand why, and are continually trying to frame their products in terms of positive results. That's why you see slogans with the basic message, "We take care of security so you can focus on your business," or carefully crafted ROI models that demonstrate how profitable a security purchase can be. But these never seem to work. Security is fundamentally a negative sell.

One solution is to stoke fear. Fear is a primal emotion, far older than our ability to calculate trade-offs. And when people are truly scared, they're willing to do almost anything to make that feeling go away; lots of other psychological research supports that. Any burglar alarm salesman will tell you that people buy only after they've been robbed, or after one of their neighbors has been robbed. And the fears stoked by 9/11, and the politics surrounding 9/11, have fueled an entire industry devoted to counterterrorism. When emotion takes over like that, people are much less likely to think rationally.

Though effective, fear mongering is not very ethical. The better solution is not to sell security directly, but to include it as part of a more general product or service. Your car comes with safety and security features built in; they're not sold separately. Same with your house. And it should be the same with computers and networks. Vendors need to build security into the products and services that customers actually want. CIOs should include security as an integral part of everything they budget for. Security shouldn't be a separate policy for employees to follow but part of overall IT policy.

Security is inherently about avoiding a negative, so you can never ignore the cognitive bias embedded so deeply in the human brain. But if you understand it, you have a better chance of overcoming it.

This essay originally appeared in CIO.


Tuesday, May 20, 2008

Security: Alarming Open-Source Security Holes

Alarming Open-Source Security Holes
Technology Review (05/20/08) Garfinkel, Simson

An open-source programming error made in May 2006, which reduced the amount of randomness used to create cryptographic keys in the widely used OpenSSL library, has created serious security vulnerabilities in at least four open-source operating systems, 25 application programs, and millions of computer systems. Although the vulnerability was discovered on May 13 and a patch has been distributed, installing the patch does not repair damage to compromised systems, and some computers may be compromised even though they are not running the flawed code. Modern computer systems use large random numbers to generate keys that are used to encrypt and decrypt data sent over a network. The error reduces the number of different keys that the affected Linux computers can generate to 32,767, making it significantly easier for hackers to guess the key, and keys created by computers with the error are not fixed when the patch is installed. It is impossible to know how many computers are affected, because vulnerable keys could have been transferred to non-open-source systems, for example if a file encrypted by a flawed system was transferred to another system. The error was made when programmers incorrectly used a tool intended to catch the kinds of programming bugs that lead to security vulnerabilities. Programs that use OpenSSL include the Apache Web server, the SSH remote access program, IPsec virtual private network software, secure email programs, and many others.
Click Here to View Full Article
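
The 32,767 figure matches the range of Linux process IDs; after the errant change, the process ID was reportedly the only varying input left to the key generator. A hypothetical sketch of why that is fatal (weak_key is an invented stand-in, not the real OpenSSL code path):

```python
# Illustrative sketch (not the actual OpenSSL code): if the only
# entropy left in the generator is the process ID, the entire key
# space can be enumerated in advance.
import hashlib

def weak_key(pid: int) -> bytes:
    """Stand-in for a key derived from a PRNG seeded only by the PID."""
    return hashlib.sha1(pid.to_bytes(2, "big")).digest()

# An attacker can precompute every possible key in seconds:
candidate_keys = [weak_key(pid) for pid in range(1, 32768)]
print(len(candidate_keys))  # 32767 -- a trivially searchable space
```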

Thursday, May 15, 2008

Security: DIY phishing kits introducing new features

DIY phishing kits introducing new features

Posted by Dancho Danchev, May 15th, 2008 @ 8:02 am

What are the main factors behind the increase in phishing attacks, and behind their maturation from passive emails into blended threats that attempt not just to steal personal information but also to infect visitors with malware by embedding client-side exploits in the pages? It's all a matter of perspective, and the perspective of this post is the continuing effort by phishers to innovate and to introduce new features into the most recently obtained do-it-yourself phishing page generators.

Software: Parallel Processing Calls for a Fortress Mentality

Parallel Processing Calls for a Fortress Mentality
InfoWorld (05/15/08) McAllister, Neil

Sun Microsystems' new Fortress programming language is designed to tackle the problems of applications development for high-performance computing. The problem with most programming languages is that they were designed for an earlier generation of machines, when processing resources were limited and desktop computers generally had only a single CPU. The amount of available processing power continues to increase, but the popular programming languages used today were not designed for the parallel-processing model. Fortress makes language constructs such as for loops parallel by default. Fortress also supports the concept of parallel transactions within the language itself, meaning that complex calculations can be computed as atomic units, independent of any other program threads that might be running. Fortress' syntax is based on mathematical notation to assist developers in conceptualizing complex parallel-processing applications. So far, Fortress exists mostly on paper, though a reference interpreter that implements most of the core language features is available on the Fortress project community site.
Click Here to View Full Article
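
The article shows no Fortress code; as a rough analogue, here is what its two headline ideas -- loops that are parallel by default and atomic updates -- look like when spelled out explicitly in Python (in Fortress both would be built into the language; all names here are invented for illustration):

```python
# Rough Python analogue of two Fortress ideas: in Fortress a plain
# `for` loop is parallel by default and `atomic` blocks are built in.
from concurrent.futures import ThreadPoolExecutor
from threading import Lock

total = 0
total_lock = Lock()  # stands in for Fortress's `atomic` construct

def body(i: int) -> None:
    global total
    partial = i * i          # independent work, safe to parallelize
    with total_lock:         # the "atomic" update
        total += partial

with ThreadPoolExecutor() as pool:
    pool.map(body, range(1000))   # the "parallel for" loop

print(total)  # 332833500, same as the sequential sum of squares
```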

Tuesday, May 13, 2008

Research: Rensselaer Student Invents Alternative to Silicon Chip

Commencement 2008: Rensselaer Student Invents Alternative to Silicon Chip
Rensselaer Polytechnic Institute (05/13/08) DeMarco, Gabrielle

Recent Rensselaer Polytechnic Institute doctoral graduate Weixiao Huang has invented a new gallium nitride (GaN) transistor that could reduce power consumption and improve the efficiency of power electronics systems. "Silicon has been the workhorse in the semiconductor industry for the last two decades," Huang says. "But as power electronics get more sophisticated and require higher-performing transistors, engineers have been seeking an alternative like gallium nitride-based transistors that can perform better than silicon and in extreme conditions." Engineers have long known that GaN and other gallium-based materials have electrical properties superior to silicon's, but no useful GaN metal-oxide-semiconductor (MOS) transistor had been developed. Huang's transistor, the world's first GaN MOS field-effect transistor (MOSFET), has already demonstrated world-record performance, Huang says. "If these new GaN transistors replaced many existing silicon MOSFETs in power electronics systems, there would be a global reduction in fossil fuel consumption and pollution," Huang says.
Click Here to View Full Article

Friday, May 9, 2008

Software: Microsoft Grows DAISY for Blind Computer Users While Adobe Wilts

Microsoft Grows DAISY for Blind Computer Users While Adobe Wilts
Computerworld (05/09/08) Lai, Eric

Microsoft recently announced the availability of a plug-in that allows Word 2007, 2003, and XP users to easily save documents in the Digital Accessible Information SYstem (DAISY) XML format, the latest version of a standard developed by the nonprofit DAISY Consortium to be the most accessible format for visually impaired computer users. DAISY offers a considerably less frustrating experience than screen readers and text-to-speech tools, which miss the invisible structural metadata embedded in a document (paragraph marks, table structures, headings, etc.); that metadata represents the most important part of a page for accessibility because it is key to navigation, browsing, and searching. "From DAISY, you can easily move to other accessible formats, such as Braille or large print, in addition to audio, with little to no extra work," says Sam Ogami of the California State University system's chancellor's office. The DAISY Consortium also aims to help make documents and books accessible to people who are illiterate, dyslexic, or developmentally disabled, for whom the plug-in could also prove helpful. Curtis Chong, president of the National Federation of the Blind in Computer Science, has high praise for the plug-in but points to a gulf between its theoretical and practical applications. Meanwhile, Jutta Treviranus of the University of Toronto's Adaptive Technology Resource Center noted in a 2008 paper that she harbors "grave concerns" about the DAISY XML that will be generated from a Word 2007 document, because Word's native document format, Office Open XML (OOXML), breaks basic axioms such as not conflating stylistic metadata with structural metadata. Microsoft's Reed Shaffner says DAISY XML eventually may be ported to versions of OpenOffice.org that support OOXML. The DAISY plug-in is currently hosted on SourceForge as an open-source project.
Click Here to View Full Article

Thursday, May 8, 2008

Web Software: When It's OK to Use Frames

When It's OK to Use Frames

Jakob Nielsen's Alertbox for December 1996

The main issue in using frames is to ensure that URLs keep working. To do so, all hypertext links must have a TARGET="_top" attribute in their anchor tag (e.g., <A HREF="foo.html" TARGET="_top">). Adding the _top makes the browser clear out all the frames and replace the entire window with a new frameset. The destination frameset may well have many frames that are identical to ones in the departure frameset and that will be cached in the browser, but by forcing a complete reload in principle, the browser gets a new URL for the destination. This means that navigation actions (e.g., bookmarking) work again and that the URL is available for other people to link to.

The only exception to the need for a TARGET="_top" attribute is when frames are used as a shortcut for scrolling within a single page. For example, a very long directory or other alphabetical listing could have a frame on top listing the letters of the alphabet. Clicking one of these letters would cause the listing to scroll within another frame while keeping the user on the same page, thus not destroying navigation.

Frames are also useful for "meta-pages" that comment on other pages. For example, a Web design styleguide may need to mix discussions of design principles with live examples of entire pages that follow (or break) the rules. In these cases, the embedded page should be treated as an embedded image (even though it is implemented as an independent page) and the "main" information that users will want to bookmark should be the content of the commenting frame.

Finally, it seems that the inline frames introduced in HTML 4.0 will be mostly harmless. A frame that is inlined will be subordinate to the main page, and the user can still bookmark the main page and navigate as usual. Since mainstream browsers still do not implement HTML 4.0, we don't know whether inline frames will have their own implementation problems: in particular, it is doubtful whether good ways will be found to print pages that have scrolling inline frames (my current best guess is that it will be best to print the currently visible part of a scrolling inline frame in order to maintain the layout of the main page, but some users may want to have the entire contents printed, so messy option settings may be necessary).

Full Article

Web Software: Understanding and Applying Section 508 Standards --- Using Frames

Understanding and Applying Section 508 Standards
Using Frames:

(i) Frames shall be titled with text that facilitates frame identification and navigation.
Frames provide a means of visually dividing the computer screen into distinct areas that can be rewritten separately. Unfortunately, frames can also present difficulties for users with disabilities when those frames are not easily identifiable to assistive technology. For instance, a popular use of frames is to create "navigational bars" in a fixed position on the screen, with the content of the web site retrieved by activating one of the navigational buttons; the new content is displayed in another area of the screen. Because the navigational bar doesn't change, it provides a stable "frame of reference" for users and makes navigation much easier. However, users with disabilities may become lost if the differences between the two frames are not clearly established.

While Section 508 guidelines require that frames be titled and labeled to identify the changing areas of information, the WAC recommends that web developers avoid using frames in most cases. Instead, templates can be developed to manage repeating information (such as background color, logos, and navigation menus) across the site.
Full Article

Monday, May 5, 2008

Security: Botnet Beaten, But Now What?

Botnet Beaten, But Now What?
eWeek (05/05/08) Vol. 25, No. 14, P. 13; Naraine, Ryan

TippingPoint Digital Vaccine Laboratories software security researchers Cody Pierce and Pedram Amini have devised a way to crack into the Kraken botnet by reverse-engineering the encryption routines and working out the communication structure between the botnet owner and the commandeered computers. "We basically have the ability to create a fake Kraken server capable of overtaking a redirected zombie," Pierce says. However, this breakthrough places TippingPoint in the middle of an ethical dilemma concerning whether compromised computers employed in denial-of-service attacks and spam runs should be purged without the permission of the systems' owners. Amini advocates this practice as a tool for impeding the botnet epidemic, arguing that "we never hear from the infected system again and neither can the actual botnet owner's command-and-control servers." Pierce agrees with Amini's argument, and supports an industry-wide dialogue on more proactive, vigilante-style anti-botnet tactics. Opposed is TippingPoint director of security research David Endler, who entertains the possibility that system cleansing without consent could endanger the operations of end-user systems with critical functions, such as life support. He notes that the issue of liability is one reason why TippingPoint decided not to modify an infected computer within the botnet.
Click Here to View Full Article
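
As a purely illustrative sketch of what a "fake Kraken server" does conceptually -- the article discloses none of Kraken's actual protocol, encryption, or commands, so every detail below (port, handler, message handling) is invented:

```python
# Purely illustrative sinkhole sketch; the real Kraken protocol,
# encryption, and commands are not described in the article.
import socketserver

class FakeC2Handler(socketserver.BaseRequestHandler):
    """Accept a redirected bot's check-in and log it, issuing no commands."""
    def handle(self):
        checkin = self.request.recv(4096)          # bot's phone-home blob
        print(f"zombie check-in from {self.client_address[0]}, "
              f"{len(checkin)} bytes")
        # A real fake server would speak the reverse-engineered protocol
        # here; whether to send a cleanup command back is precisely the
        # ethical question the researchers debate.

if __name__ == "__main__":
    with socketserver.TCPServer(("0.0.0.0", 8080), FakeC2Handler) as srv:
        srv.serve_forever()
```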

Saturday, May 3, 2008

Research: Extracting the Structure of Networks

Extracting the Structure of Networks
ZDNet (05/03/08) Piquepaille, Roland

Santa Fe Institute researchers Aaron Clauset, Cris Moore, and Mark Newman have developed an algorithmic method that automatically extracts the hierarchical structure of networks, and they say the results "suggest that hierarchy is a central organizing principle of complex networks, capable of offering insight into many network phenomena." The researchers propose a direct yet flexible model of hierarchical structure, which they fit to networks using tools from machine learning and statistical physics. Analysis of networks from three distinct disciplines shows that hierarchical structure can predict missing network links with up to 80 percent precision, even in scenarios where only 50 percent of the connections are exposed to the algorithm. The May 1 issue of Nature details Clauset, Moore, and Newman's work, and notes in the editor's summary that the data describing complex networks is frequently biased or incomplete. An accompanying article by Boston University's Sid Redner says that "focusing on the hierarchical structure inherent in social and biological networks might provide a smart way to find missing connections that are not revealed in the raw data--which could be useful in a range of contexts." The SFI researchers think their algorithms are applicable to nearly all network categories, ranging from biochemical networks to social network communities.
Click Here to View Full Article
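
For readers curious about the machinery: the method fits a "hierarchical random graph," a binary tree (dendrogram) D whose leaves are the network's vertices, with a connection probability p_r attached to each internal node r. As given in the Nature paper, the likelihood of a dendrogram is

$$\mathcal{L}(D, \{p_r\}) \;=\; \prod_{r \in D} p_r^{E_r} \,(1 - p_r)^{L_r R_r - E_r}, \qquad \hat{p}_r = \frac{E_r}{L_r R_r},$$

where E_r counts observed edges whose endpoints have r as their lowest common ancestor, and L_r and R_r count the leaves in r's left and right subtrees. Missing links are then predicted by averaging connection probabilities over dendrograms sampled by Markov chain Monte Carlo in proportion to this likelihood.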

Thursday, May 1, 2008

Security: Digital Deception; computers solving CAPTCHAs

Digital Deception
Washington Post (05/01/08) P. D1; Whoriskey, Peter

Human-mimicking computers are becoming increasingly successful at solving CAPTCHA online tests intended to separate humans from computers. In April, Hotmail CAPTCHAs were broken by a computer, which then created numerous free Hotmail email accounts and sent out waves of spam, Websense says. Similar attacks occurred this year at Microsoft's Live Mail and Google's Gmail and Blogger. "What we're noticing over the last year is that these tests meant to tell the difference between a human and a computer are being targeted by more and more malicious groups," says Websense's Stephan Chenette. "And they are getting better at it." Solving CAPTCHAs with computers allows spammers to quickly create new email accounts to send spam, which Ferris Research estimates could cost the U.S. economy $42 billion annually. In addition to computers breaking CAPTCHAs, low-wage workers overseas are being paid to solve them; in fact, Google says it believes humans were involved in solving its CAPTCHAs. Microsoft and other Web companies say they are interested in developing human verification tests that are more difficult for computers to crack, but making the tests harder for computers could make them harder for humans as well.
Click Here to View Full Article

Research: H.P. Reports Big Advance in Memory Chip Design

H.P. Reports Big Advance in Memory Chip Design
New York Times (05/01/08) P. C4; Markoff, John

Hewlett-Packard scientists have developed a memristor, an electrical resistor with memory properties that could be used to build very dense computer memory chips that require far less power than DRAM memory chips. The memristor could also be used to create field-programmable gate arrays. Meanwhile, memristors' ability to store and retrieve a variety of intermediate values, not just the binary 1s and 0s used in conventional chips, could enable them to function like biological synapses, which would make them ideal for artificial intelligence applications such as machine vision and understanding speech. Independent researchers say the memristor could quickly be applied to computer memory, but other applications could be more challenging. Hewlett-Packard's quantum science research group director R. Stanley Williams says the technology should be commercialized fairly quickly. The memristor was first predicted in 1971 by University of California, Berkeley electrical engineer Leon Chua, who says he had not worked on the idea for several decades and was surprised when Hewlett-Packard contacted him a few months ago. The researchers have successfully created working circuits based on memristors as small as 15 nanometers, and Williams says it will eventually be possible to make memristors as small as about four nanometers. In comparison, the smallest components in today's semiconductors are 45 nanometers, and the industry does not see a way of shrinking silicon-based chips below about 20 nanometers.
Click Here to View Full Article
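
For the mathematically inclined: Chua's 1971 prediction was a symmetry argument. The three classical passive elements each relate two of the four basic circuit variables (voltage v, current i, charge q, and flux linkage φ); the memristor supplies the missing q-φ relationship. In sketch form:

$$R = \frac{dv}{di}, \qquad C = \frac{dq}{dv}, \qquad L = \frac{d\varphi}{di}, \qquad M(q) = \frac{d\varphi}{dq} \;\Longrightarrow\; v(t) = M\big(q(t)\big)\, i(t)$$

Because M depends on q, the total charge that has flowed through the device, its resistance depends on the history of the current -- which is precisely the memory effect, and why intermediate resistance values can stand in for the intermediate states the article mentions.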
