Tuesday, December 30, 2008

Blog: Experts Uncover Weakness in Internet Security

Experts Uncover Weakness in Internet Security
Ecole Polytechnique Federale de Lausanne (12/30/08) Luy, Florence

Security researchers in Europe and California have discovered a vulnerability in the Internet digital certificate infrastructure that could allow attackers to forge certificates that are trusted by all common Web browsers. The weakness makes it possible to impersonate secure Web sites and email servers to perform undetectable phishing attacks. Whenever a small padlock appears in a browser window, the Web site being visited is secured using a digital certificate from a Certification Authority (CA). To ensure the certificate is authentic, the browser verifies the signature using cryptographic algorithms. The researchers discovered that one of these algorithms, known as MD5, can be misused. The first known flaw in the MD5 algorithm was presented in 2004 at the annual Crypto cryptography conference by Chinese researchers, who performed a collision attack and created two different messages with the same digital signature. The initial attack was severely limited, but a much stronger collision attack has been found by the European and California researchers. The new method proves it is possible to create a rogue CA that is trusted by all major Web browsers. A rogue CA, combined with a known vulnerability in the Domain Name System protocol, could allow attackers to launch virtually undetectable phishing attacks. The researchers say MD5 can no longer be trusted as a secure cryptographic algorithm for use in digital signatures and certificates. Arjen Lenstra, head of EPFL's Laboratory for Cryptologic Algorithms, says the developers of the major Internet browsers have been informed of the vulnerability.
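The attack works because a CA signs only the hash of a certificate, never the certificate itself, so any two inputs with the same digest share one valid signature. A minimal Python sketch of that failure mode, using a deliberately weakened toy hash in place of MD5 (real MD5 collisions require carefully crafted blocks; all names and the toy "signature" here are illustrative):

```python
import hashlib

def weak_hash(data: bytes) -> bytes:
    # Deliberately broken toy hash: only the first two bytes of the input
    # matter, so collisions are trivial to construct. (Real MD5 collisions
    # require carefully crafted blocks, as in the 2004 and 2008 attacks.)
    return hashlib.md5(data[:2]).digest()

def sign(digest: bytes, key: int) -> int:
    # Toy "CA signature": like a real one, it depends only on the digest.
    return int.from_bytes(digest, "big") ^ key

def verify(data: bytes, signature: int, key: int) -> bool:
    return sign(weak_hash(data), key) == signature

KEY = 0x1234
honest = b"CN=legitimate-site.example"   # certificate the CA meant to sign
rogue = b"CN=rogue-ca.example"           # collides with it under the weak hash
signature = sign(weak_hash(honest), KEY)
# verify() now accepts the rogue certificate with the honest signature.
```

Because verification sees only the digest, the signature issued for the honest certificate transfers to the rogue one, which is exactly how the researchers obtained a CA signature on a rogue certificate.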

View Full Article

Friday, December 19, 2008

Blog: Computing in a Molecule; designing a logic gate with 30 atoms

Computing in a Molecule
ICT Results (12/19/08)

Scientists from 15 European academic and industrial research institutions are working on the European Union-funded Pico-Inside project, which was established to develop a molecular replacement for transistors. The researchers, led by Christian Joachim of the Centre for Material Elaboration & Structural Studies at the French National Scientific Research Centre, say the use of molecular-sized computer components could lead to atomic-scale computing. Joachim notes that nanotechnology focuses on shrinking parts down to the smallest size possible, while the Pico-Inside team is working from the opposite end by starting with the atom and determining if such a small bit of matter can be used to create a logic gate, memory source, or other component. "The question we have asked ourselves is how many atoms does it take to build a computer?" he says. "That is something we cannot answer at present, but we are getting a better idea about it." So far, the researchers have designed a logic gate with 30 atoms that performs the same task as 14 transistors. The researchers also have explored the architecture, technology, and chemistry needed to achieve computing at the molecular scale and to interconnect molecules. Project researchers have focused on two architectures, one that mimics the classical design of a logic gate, and another, more complex process that relies on changes to the molecule's conformation to execute logic-gate inputs and quantum mechanics to perform the computation.

View Full Article

Blog: Plugging a Password Leak [by web browsers]

Plugging a Password Leak
Technology Review (12/19/08) Kremen, Rachel

Researchers from Harvard University's Center for Research on Computation and Society, the University of California, Berkeley, and Stanford University have improved the security of browser-based automatic log-in procedures. The researchers focused on password managers built as browser bookmarklets that use JavaScript to automatically log a user in to a Web site. They identified a major flaw in which an attacker could trick a bookmarklet into revealing all of a user's passwords. Bookmarklet-based password managers generally store passwords on a central server, and when a user visits a site for which a password is stored, the user is automatically logged in. However, the researchers found that a bookmarklet could not be trusted to know which Web site the user was actually visiting, meaning a few lines of code would be enough to trick the system into logging in to a malicious site. Their solution checks the referrer header instead of a browser window's location. The improved bookmarklet uses the secure sockets layer (SSL) data transfer protocol to prevent the header from being easily forged. The researchers say that in the future a new browser feature called postMessage will enable browser windows to securely transfer information back and forth, providing even better security than the SSL solution.
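The fix described above can be sketched as a server-side check: release a stored password only when an SSL-protected referrer matches a stored origin exactly. A hedged Python sketch (the password store and function names are hypothetical, not the researchers' actual implementation):

```python
from typing import Optional
from urllib.parse import urlsplit

# Hypothetical central password store, keyed by exact origin.
PASSWORDS = {"https://example.com": "s3cret"}

def credentials_for(referrer: str) -> Optional[str]:
    """Release a password only if the referrer's scheme and host match a
    stored origin exactly. Over SSL the referrer header is hard to forge,
    which is the property the improved bookmarklet relies on."""
    parts = urlsplit(referrer)
    if parts.scheme != "https":
        return None  # insist on SSL so the header cannot be trivially spoofed
    origin = f"{parts.scheme}://{parts.netloc}"
    return PASSWORDS.get(origin)
```

A malicious page that lies about its location still cannot make its referrer read `https://example.com`, so the lookup fails and no password is released.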

View Full Article

Tuesday, December 16, 2008

Blog: Dartmouth Researchers Develop Computational Tool to Untangle Complex Data

Dartmouth Researchers Develop Computational Tool to Untangle Complex Data
Dartmouth News (12/16/08) Knapp, Susan

Dartmouth College researchers have developed the Partition Decoupling Method (PDM), a mathematical tool that can be used to untangle the underlying structure of time-dependent, interrelated, complex data. "With respect to the equities market, we created a map that illustrated a generalized notion of sector and industry, as well as the interactions between them, reflecting the different levels of capital flow, among and between companies, industries, sectors, and so forth," says Dartmouth professor Daniel Rockmore, who led the development effort. "In fact, it is this idea of flow, be it capital, oxygenated blood, or political orientation, that we are capturing." Capturing flow patterns is critical to understanding the subtle interdependencies found in the different components of complex systems. Analysis of the network of correlations is done using spectral analysis. The analysis is combined with statistical learning tools to uncover regions where the flow circulates more than would be expected at random. The result creates a detailed analysis of the interrelations and provides a wide view of the coarse-scale flow as a whole. Rockmore says PDM uses a different approach than similar programs designed to find how complex systems behave, and because it is not strictly hierarchical, PDM does not constrain interconnectivity.
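The article does not give PDM's internals, but the spectral-analysis step it describes can be illustrated generically: build the network of correlations among time series, then eigendecompose it. A NumPy sketch on synthetic two-sector data (a standard spectral-partitioning heuristic, not PDM itself):

```python
import numpy as np

rng = np.random.default_rng(0)

# Synthetic stand-in for sector data: all series share a weak common
# ("market") factor, and each belongs to one of two correlated groups.
t = 500
market = rng.standard_normal(t)
group_a, group_b = rng.standard_normal(t), rng.standard_normal(t)
series = np.array(
    [market + group_a + 0.3 * rng.standard_normal(t) for _ in range(4)]
    + [market + group_b + 0.3 * rng.standard_normal(t) for _ in range(4)]
)

corr = np.corrcoef(series)               # the network of correlations
eigvals, eigvecs = np.linalg.eigh(corr)  # spectral analysis (ascending order)

# The eigenvector of the second-largest eigenvalue separates the two
# groups by sign; the largest captures the shared "flow" all series ride on.
v = eigvecs[:, -2]
```

Statistical learning tools would then test whether such a split circulates more flow than expected at random, per the article's description.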

View Full Article

Blog: Dynamic Language Use Pops in Asia

Dynamic Language Use Pops in Asia
eWeek (12/16/08) Taft, Darryl K.

Dynamic programming languages such as PHP, Perl, JavaScript, Ruby, and Python have caught on in a big way in Asia, according to an Evans Data survey of more than 400 software developers in the Asia-Pacific region. Evans Data found that 88 percent of developers use dynamic languages some of the time, and more than 40 percent said they use one more than half of the time. Most Asian developers use JavaScript, but PHP also is used in some of the projects of 45 percent of developers. Overall, the use of dynamic languages is likely to remain the same in 2009, but the use of Perl could decline while the use of ActionScript rises. "Software developers are always looking for ways to shed unneeded complexity and outdated methodologies and move to approaches that make programming simpler and faster, especially as more and more development is Web-centric," says Evans Data CEO John Andrews. "The high use of dynamic languages in Asia Pacific is consistent with the high concentration of Web application development being conducted in that region." The study also found that more than 20 percent of developers plan to launch cloud projects within the next six months, and 60 percent expect their development for devices to increase.

View Full Article

Friday, December 12, 2008

Blog: The many-core performance wall; the "memory wall"

The many-core performance wall

Posted by Robin Harris; December 11th, 2008 @ 7:01 am

Many-core chips are the great hope for more performance but Sandia National Lab simulations show they are about to hit a memory wall. How bad is it?

Thursday, December 11, 2008

Blog: W3C Upgrades Web Accessibility Standards; WCAG 2.0

W3C Upgrades Web Accessibility Standards
InternetNews.com (12/11/08) Adhikari, Richard

The World Wide Web Consortium (W3C) has released version 2.0 of its Web Content Accessibility Guidelines (WCAG), which are designed to help developers make Web sites and online content easier to access for users with disabilities. W3C plans to make WCAG 2.0 an international standard. "There were a lot of different guidelines for accessibility developed by different countries and organizations, so in WCAG 2.0 we had this huge effort to develop something that would work well across all these different needs and become like a unified international standard," says W3C Web Accessibility Initiative director Judy Brewer. "There's been considerable international interest in accepting this as a standard." So far, WCAG 2.0 has received support from Adobe, IBM, Boeing, Microsoft, the Chinese and Japanese governments, and the European Commission for Information Society and Media. WCAG 2.0 offers several improvements over the previous version, including support for new Web technologies. "WCAG 1.0 was specifically for HTML, but the Web uses other technologies now, and we wanted an updated standard that would cover any technologies and also give developers more flexibility," Brewer says.

View Full Article

Blog: Firefox Tops List of Most Known Vulnerabilities in Applications

SANS NewsBites Vol. 10 Num. 97

Fri, 12 Dec 2008

--Firefox Tops List of Most Known Vulnerabilities in Applications (December 11, 2008)

Whitelisting company Bit9 has compiled statistics on the applications with the most security vulnerabilities reported over the last year.

Mozilla's Firefox web browser versions 2 and 3 top the list with 40 reported flaws. Adobe Acrobat versions 8.1.1 and 8.1.2 follow with 31 reported flaws. Windows Live (MSN) Messenger versions 4.7 and 5.1 came in third with 19 flaws. Fourth and fifth place were taken by Apple iTunes versions 3.2 and 3.1.2 and Skype version 3.5.0.248, respectively.

http://www.vnunet.com/vnunet/news/2232492/firefox-tops-app-vulnerability

http://www.bit9.com/news-events/press-release-details.php?id=102

Wednesday, December 10, 2008

Blog: Mind-Controlled Robotic Limbs Become the Ants-Pants

Mind-Controlled Robotic Limbs Become the Ants-Pants
Computerworld Australia (12/10/08) Edwards, Katheryn

University of Technology Sydney (UTS) researchers have developed prosthetic limbs that respond to brain signals by mimicking the nonelectric signals used by the central nervous system to control muscle activity. Artificial intelligence researchers used the complex interactions between ants to develop a pattern-recognition formula to identify bioelectric signals that can be used in human trials. The behavior of social insects such as ants helps scientists understand the body's electrical signals, enabling them to create a robotic prosthesis that can be operated by human thought, says UTS Ph.D. student Rami Khushaba. Khushaba is developing a mathematical basis for identifying which biosignals relate to particular arm movements and where electrodes should be placed to capture those signals. Nature's abundance of swarm-intelligence algorithms was a major reason for using them in the development of the prosthesis, along with their use of multi-agent techniques to solve specific problems. "We can use the behavior of the ants to enhance the quality of the control systems that we employ with the robotic limbs," Khushaba says. The researchers create a map of the voluntary intent of the central nervous system by attaching sensors to the limb following an amputation to record an electromyogram. Only a few seconds of data are needed to train the system to identify patterns in the raw data during the online testing phase. Khushaba says the biggest challenge to the system's success will be maintaining speed and accuracy.
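The article does not specify Khushaba's algorithm, but particle swarm optimization is the best-known member of the swarm-intelligence family it refers to. A minimal, self-contained sketch minimizing a toy objective (a stand-in for, say, tuning feature weights for biosignal classification; all parameters are illustrative):

```python
import random

def pso(f, dim, n_particles=20, iters=200, seed=1):
    """Minimal particle swarm optimizer: each particle is drawn toward
    its own best-known point and the swarm's best-known point."""
    rng = random.Random(seed)
    pos = [[rng.uniform(-5, 5) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = rng.random(), rng.random()
                vel[i][d] = (0.7 * vel[i][d]                      # inertia
                             + 1.5 * r1 * (pbest[i][d] - pos[i][d])  # cognitive pull
                             + 1.5 * r2 * (gbest[d] - pos[i][d]))    # social pull
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# Toy objective: the sphere function, whose minimum of 0 is at the origin.
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```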

View Full Article

Monday, December 8, 2008

Blog: Microsoft develops open-source content-management system

Microsoft develops open-source content-management system

Mary Jo Foley, December 8th, 2008 @ 12:33 pm

Microsoft has developed and released via its CodePlex site an alpha version of a new open-source content-management system, codenamed "Oxite." Microsoft is positioning Oxite as more than just a blogging engine, claiming it can support even large Web sites and is customizable.

READ FULL STORY

Saturday, December 6, 2008

Blog: Thieves Winning Online War, Maybe Even in Your Computer

Thieves Winning Online War, Maybe Even in Your Computer
New York Times (12/06/08) P. A1; Markoff, John

Malware continues to overcome security professionals' efforts to defend against it. "Right now the bad guys are improving more quickly than the good guys," says SRI International's Patrick Lincoln. As businesses and individuals become increasingly involved in online communities, cybercriminals are given more opportunities to infect machines and commit crimes. The Organization for Security and Cooperation in Europe estimates that credit card thefts, bank fraud, and other online scams rob computer users of $100 billion annually. In late October, the RSA FraudAction Research Lab discovered a cache of 500,000 credit-card numbers and bank account log-ins that were stolen by a network of zombie computers run by an online gang. "Modern worms are stealthier and they are professionally written," says British Telecom chief security technology officer Bruce Schneier. "The criminals have gone upmarket, and they're organized and international because there is real money to be made." Meanwhile, malicious programs are becoming increasingly sophisticated, with some programs searching for the most recent documents on the assumption that they are the most valuable and others stealing log-in and password information for consumer finances. Microsoft researchers recently discovered malware that runs Windows Update after it infects a machine to ensure the machine is protected from other pieces of malware. Purdue University computer scientist Eugene Spafford is concerned that companies will cut back on computer security to save money. "In many respects, we are probably worse off than we were 20 years ago," he says, "because all of the money has been devoted to patching the current problem rather than investing in the redesign of our infrastructure."

View Full Article

Wednesday, December 3, 2008

Blog: How to Run a Million Jobs; megajobs, processes that involve thousands to millions of similar or identical, though still independent, jobs using different processors

How to Run a Million Jobs
International Science Grid This Week (12/03/08) Heavey, Anne; Williamson, Amelia; Abramson, David

Experts at the recent SC08 conference held a session to discuss emerging solutions for dealing with the challenges of running megajobs, processes that involve thousands to millions of similar or identical, though still independent, jobs using different processors. Researchers want to be able to easily specify and manage such tasks, and to readily identify successful and failed jobs. The University of Chicago's Ben Clifford says that as tools and resources change, people describe their computing jobs differently. Some established job-management solutions contain a variety of features, but they tend to have a high scheduling overhead and are inefficient at executing many short jobs on numerous processors. Other systems are designed specifically for the data-intensive, loosely coupled, high-throughput computing grid model, which works well for many thousands of jobs, both short and long. Ioan Raicu and Ian Foster, both from the University of Chicago and Argonne National Laboratory, have defined a class of applications called Many-Task Computing (MTC): applications composed of many tasks, both independent and dependent, that are "communication-intensive but not naturally expressed in Message Passing Interface," Foster says. Unlike high-throughput computing, MTC uses numerous computing resources over short periods of time to process tasks. Some computer systems are being altered to run megajobs, including IBM's new throughput, grid-style mode on the Blue Gene/P supercomputer. Clifford says that if users can break an application into separately schedulable, restartable, relocatable "application procedures," then they only need a tool to describe how the pieces connect, making the jobs easier to run.
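The bookkeeping the session calls for, running many short, independent jobs and readily identifying which succeeded and which failed, can be sketched with a worker pool (Python's concurrent.futures here; the job function and failure pattern are invented for illustration):

```python
from concurrent.futures import ThreadPoolExecutor, as_completed

def job(n: int) -> float:
    # Stand-in for one short, independent task; a few inputs fail.
    if n % 50 == 0:
        raise ValueError(f"bad input {n}")
    return n ** 0.5

succeeded, failed = {}, {}
with ThreadPoolExecutor(max_workers=8) as pool:
    # Submit every job independently, then harvest results as they finish.
    futures = {pool.submit(job, n): n for n in range(1, 201)}
    for fut in as_completed(futures):
        n = futures[fut]
        try:
            succeeded[n] = fut.result()
        except Exception as exc:
            failed[n] = exc  # failed jobs are recorded and could be resubmitted
```

Real megajob tools face the same structure at vastly larger scale, which is where per-job scheduling overhead becomes the dominant cost.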

View Full Article

Tuesday, December 2, 2008

Blog: A Human Approach to Computer Processing; "granular computing"

A Human Approach to Computer Processing
The University of Nottingham (12/02/08) De Cozar, Tara

University of Nottingham scientists are researching "granular computing," a computer paradigm that examines groups or sets of information, called information granules, instead of looking at each piece of information individually. Examining data in granules exposes new patterns and relationships, potentially leading to new types of computer modeling in a variety of fields. Nottingham professor Andrzej Bargiela says the granular approach to computing is inspired by the human thought process. "Creating abstractions from detailed information is essential to human knowledge, interaction, and reasoning," Bargiela says. "The human brain filters the flood of information and distils knowledge subconsciously." He says humans remember the message or purpose of information, not the specific details. For example, people remember conversations, but not every specific word, which would be the raw data in a computer system. Bargiela says a granular computing approach to information processing may lead to human-information processing technology, and could provide a breakthrough in dealing with information overload in a wide variety of applications. Several Nottingham Ph.D. projects explore the application of granular computing, including projects on urban traffic monitoring and control, job scheduling, time-tabling, and protein classification.
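A toy illustration of forming information granules: raw readings are collapsed into interval granules, and later reasoning sees only each granule's summary rather than the raw data (the traffic-speed values and interval width are invented for illustration):

```python
def granulate(values, width):
    """Group raw readings into interval granules of the given width; each
    granule keeps only a summary (count and mean) -- the kind of
    abstraction granular computing reasons over instead of raw data."""
    granules = {}
    for v in values:
        key = int(v // width)  # which interval this reading falls in
        bucket = granules.setdefault(key, {"count": 0, "sum": 0.0})
        bucket["count"] += 1
        bucket["sum"] += v
    return {
        (k * width, (k + 1) * width): {"count": b["count"], "mean": b["sum"] / b["count"]}
        for k, b in granules.items()
    }

# Raw traffic speeds (km/h) collapse into a few coarse granules --
# roughly "congested," "flowing," and "free-moving."
speeds = [12.1, 13.4, 11.8, 47.9, 51.2, 49.5, 88.0, 90.3]
granules = granulate(speeds, width=20)
```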

View Full Article

Blog: New Approach Eliminates Software Deadlocks Using Discrete Control Theory

New Approach Eliminates Software Deadlocks Using Discrete Control Theory
University of Michigan News Service (12/02/08) Moore, Nicole Casal

University of Michigan (UM) researchers have developed Gadara, a software controller that can anticipate and prevent program deadlocks. "Previously, engineers would try to identify potential deadlocks through testing or program analysis and then go back and rewrite the program," says UM professor Stephane Lafortune. "The bug fixes were manual, and not automatic. Gadara automates the process." Yin Wang, a doctoral student working with Lafortune, says that problems such as deadlocks usually need to be solved by the original programmer, but the goal of Gadara is to allow anyone to solve the problem. Deadlocks are becoming an increasingly pressing problem as multicore chips become more common and more complex, and as software programs perform many more tasks simultaneously. Gadara works by analyzing a program to find potential deadlocks, and inserting control logic into the program to ensure the program does not deadlock. Lafortune says Gadara uses a unique combination of discrete control theory and compiler technology, which provides the logic and allows Gadara to use feedback to prevent deadlocks. The compiler technology, developed by UM professor Scott Mahlke, enables Gadara to operate on real-world applications.
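The article does not publish Gadara's control logic, but the flavor of deadlock prevention can be shown with a simpler, classic discipline: acquiring locks in a fixed global order, which rules out circular wait and hence deadlock. A Python sketch (not Gadara itself; the two threads below would deadlock if each grabbed its locks in its own order):

```python
import threading

lock_a, lock_b = threading.Lock(), threading.Lock()
ORDER = {id(lock_a): 0, id(lock_b): 1}  # fixed global acquisition order

def with_both(locks, work):
    # Acquiring in a fixed global order rules out circular wait, one of
    # the four conditions deadlock requires -- so deadlock cannot occur.
    ordered = sorted(locks, key=lambda lk: ORDER[id(lk)])
    for lk in ordered:
        lk.acquire()
    try:
        work()
    finally:
        for lk in reversed(ordered):
            lk.release()

done = []
# Each thread requests the locks in a different order, the classic
# deadlock recipe -- but with_both reorders them, so both always finish.
t1 = threading.Thread(target=with_both, args=([lock_a, lock_b], lambda: done.append(1)))
t2 = threading.Thread(target=with_both, args=([lock_b, lock_a], lambda: done.append(2)))
t1.start(); t2.start()
t1.join(); t2.join()
```

Gadara is more sophisticated: rather than imposing one global order, it synthesizes feedback control logic tailored to the program's own structure.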

View Full Article

Blog: They're Robots, but Not as We Know Them; the co-evolution of the brain and body in robots

They're Robots, but Not as We Know Them
Computerworld New Zealand (12/02/08) Hedquist, Ulrika

Neural networking was the focus of the 15th International Conference on Neuro-Information Processing in Auckland. Researchers discussed how a better understanding of the brain could lead to more intelligent computer systems. According to Nik Kasabov, director of the Knowledge Engineering and Discovery Research Institute at AUT University, neuro-information processing has real-life applications in medicine, cybersecurity, and intelligent robots. Researchers from the Okinawa Institute of Science and Technology in Japan showed off neuro-genetic robots. "Robots are now not only based on fixed rules about how to behave, they now have genes, similar to human genes, which affect their behavior, development and learning," said Kasabov. And researchers from the German Honda Research Institutes discussed the co-evolution of the brain and body in robots, and robots that can change their shape were also on display. "They can evolve, in a similar way as [humans] evolve," said Kasabov.

View Full Article

Thursday, November 27, 2008

Blog: Srizbi Bots Seek Alternate Command-and-Control Servers; believed to generate half of all worldwide spam

SANS NewsBites Vol. 10 Num. 94 (fwd)

Tue, 2 Dec 2008

--Srizbi Bots Seek Alternate Command-and-Control Servers (November 26 & 27, 2008)

The Srizbi botnet, which was disabled by the shutdown of web hosting company McColo several weeks ago, appeared to be back online early last week. Srizbi includes an algorithm that attempts to establish new domain names that the malware could contact for instructions should the initial connection be severed. The botnet suffered another setback when the Estonian Internet service provider (ISP) that had hosted its command and control servers for a short period of time also cut off service to those servers. Srizbi is estimated to comprise more than 450,000 PCs, and it is believed that half of all spam generated worldwide comes through the Srizbi botnet. The reason Srizbi was kept at bay for several weeks was that researchers reverse engineered the Srizbi code and figured out what domains the bots would be searching for, then created and seized them so the bot masters could not regain control of the army of infected machines.
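The mechanism described, deterministically deriving fallback domains so that bots and (once the code is reverse engineered) defenders compute the same list, can be sketched generically. This toy date-seeded generator is illustrative only, not Srizbi's actual algorithm:

```python
import hashlib
from datetime import date

def candidate_domains(day: date, count: int = 3):
    """Toy domain-generation algorithm (not Srizbi's): hash the date with
    a counter to derive that day's rendezvous domains. Anyone who knows
    the algorithm can compute the same list in advance -- which is how
    researchers pre-registered the domains Srizbi's bots would seek."""
    domains = []
    for i in range(count):
        digest = hashlib.md5(f"{day.isoformat()}-{i}".encode()).hexdigest()
        domains.append(digest[:10] + ".com")
    return domains

# Bot and defender independently derive identical candidate lists.
bot_view = candidate_domains(date(2008, 11, 26))
defender_view = candidate_domains(date(2008, 11, 26))
```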

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9121758&source=rss_topic17

http://www.theregister.co.uk/2008/11/26/srizbi_returns_from_dead/

[Editor's Note (Pescatore): The bot clients' strategies for finding command-and-control centers have gotten increasingly devious. New techniques use mechanisms that are very similar to old-style spycraft, the cyber equivalent of spy numbers stations and chalk Xs on mailboxes.

The needed security breakthrough here is being able to tell automated actions from user-driven actions from the network, rather than relying on blocking communications to command and control centers. ]

Tuesday, November 18, 2008

Blog: An Algorithm With No Secrets; need for a new security hash function

An Algorithm With No Secrets
Technology Review (11/18/08) Naone, Erica

The National Institute of Standards and Technology (NIST) is organizing a competition to find an algorithm to replace the Secure Hash Algorithm 2 (SHA-2), which is becoming outdated. NIST plans to release a short list of the best entries by the end of November, the beginning of a four-year process to find the overall winner. In 2005, Tsinghua University Center for Advanced Study professor Xiaoyun Wang found weaknesses in several related hashing algorithms, and since then Wang and others have found faults in several other hashing schemes, causing officials to worry that SHA-2 also may eventually be found to be vulnerable. A hash algorithm creates a digital fingerprint for a message that helps keep it secure in transit, but it is considered secure only if there is no practical way of running it backward and recovering the original message from the fingerprint. There also cannot be a practical way of producing two different messages with exactly the same fingerprint. The weaknesses discovered by Wang and others relate to this problem, which cryptographers call a collision. It is impossible to completely avoid collisions, but the best algorithms make collisions extremely hard to produce. "Hash functions are the most widely used and the most poorly understood cryptographic primitives," says BT Counterpane's Bruce Schneier. "It's possible that everything gets broken here, simply because we don't really understand how hash functions work." NIST already has received 64 entries and is counting on cryptographers to narrow the list.
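The fingerprint properties described, determinism and wildly different outputs for even similar inputs, are easy to observe with an unbroken hash such as SHA-256 (a member of the SHA-2 family the competition aims to eventually replace), shown here with Python's hashlib; the messages are invented for illustration:

```python
import hashlib

def fingerprint(message: bytes) -> str:
    # 256-bit digest rendered as 64 hex characters.
    return hashlib.sha256(message).hexdigest()

# Deterministic: the same message always yields the same fingerprint.
a = fingerprint(b"wire transfer $100 to Alice")

# A one-character change produces a thoroughly different fingerprint --
# the "avalanche" behavior good hash functions are designed for.
b = fingerprint(b"wire transfer $900 to Alice")

# Count how many of the 256 output bits differ (roughly half, for a
# well-behaved hash).
diff_bits = bin(int(a, 16) ^ int(b, 16)).count("1")
```

A collision attack would deliver two *different* messages whose fingerprints match exactly, defeating any signature computed over the digest.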

View Full Article

Monday, November 17, 2008

Blog: UT Trainees Tackle Health Information Technology Issues; health IT can cause a loss in efficiency and an increase in medical errors

UT Trainees Tackle Health Information Technology Issues
University of Texas Health Science Center at Houston (11/17/08) Cahill, Rob

University of Texas School of Health Information Sciences at Houston (SHIS) has received a $1.3 million grant from the Agency for Healthcare Research and Quality to support research by six trainees in health information technology. SHIS principal investigator Todd R. Johnson says a huge effort is underway to use health information technology to improve health care, but notes that current technology is not designed to efficiently support the needs of clinicians. As a result, Johnson says there have been numerous cases where the introduction of health IT causes a loss in efficiency and an increase in medical errors. A report by the Institute of Medicine estimates that medical errors cost the United States approximately $38 billion a year and that as many as 98,000 people die in hospitals each year as a result of a medical error. Training grant co-director Eric Thomas says that most medical errors are partially because of information overload. The trainees are working on projects that will increase patient safety. One trainee is focusing on how to design information communications systems so physicians do not miss abnormal test result notifications, while another trainee is looking to improve emergency room decision making. Another trainee is using radio identification technology to track the movements of both emergency room personnel and supplies, and another is using information from simulation studies to develop an IT solution to improve the effectiveness and timeliness of clinical decisions in emergency rooms.

View Full Article

Blog: 'Six Degrees of Kevin Bacon' Game Provides Clue to Efficiency of Complex Networks

'Six Degrees of Kevin Bacon' Game Provides Clue to Efficiency of Complex Networks
University of California, San Diego (11/17/08) Froelich, Warren; Zverina, Jan

The small-world paradigm, discovered by sociologist Stanley Milgram in the 1960s and popularized by the "Six Degrees of Kevin Bacon" game, has become a source of inspiration for researchers studying the Internet as a global complex network. A study published in Nature Physics reveals a previously unknown mathematical model called "hidden metric space," which could explain the small-world phenomenon and its relationship to both man-made and natural networks such as human language, as well as gene regulation or neural networks that connect neurons to organs and muscles within our bodies. The concept of an underlying hidden space also may be of interest to researchers working to remove mounting bottlenecks within the Internet that threaten the smooth passage of digital information. "Internet experts are worried that the existing Internet routing architecture may not sustain even another decade," says Dmitri Krioukov, a researcher at the Cooperative Association for Internet Data Analysis (CAIDA), based at the University of California, San Diego (UCSD). CAIDA director and UCSD professor Kimberly Claffy says the discovery of a metric space hidden beneath the Internet could point toward architectural innovations that would remove this bottleneck, and Krioukov says the reconstruction of hidden metric spaces underlying a variety of real complex networks could have other practical applications. For example, hidden spaces in social or communications networks could lead to new, efficient strategies for searching for specific content or individuals.
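One practical payoff of a hidden metric space is greedy routing: each node forwards a message to whichever neighbor is closest to the destination in the hidden space, with no global routing tables. A toy sketch on an invented five-node network with made-up hidden coordinates:

```python
import math

# Toy network: each node has hidden 2-D coordinates and a few neighbors.
coords = {
    "a": (0, 0), "b": (1, 0), "c": (2, 1),
    "d": (3, 1), "e": (4, 0),
}
neighbors = {
    "a": ["b"], "b": ["a", "c"], "c": ["b", "d"],
    "d": ["c", "e"], "e": ["d"],
}

def dist(u, v):
    (x1, y1), (x2, y2) = coords[u], coords[v]
    return math.hypot(x1 - x2, y1 - y2)

def greedy_route(src, dst):
    """Forward to whichever neighbor is closest to the destination in the
    hidden space; no routing tables needed. Returns None if no neighbor
    is closer than the current node (a local minimum)."""
    path, current = [src], src
    while current != dst:
        nxt = min(neighbors[current], key=lambda n: dist(n, dst))
        if dist(nxt, dst) >= dist(current, dst):
            return None
        path.append(nxt)
        current = nxt
    return path
```

Each hop uses only local information plus the hidden coordinates, which is why such spaces interest researchers worried about the scalability of Internet routing tables.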

View Full Article

Blog: Burned Once, Intel Prepares New Chip Fortified by Constant Tests (formal methods)

Burned Once, Intel Prepares New Chip Fortified by Constant Tests
The New York Times (11/17/08) P. B3; Markoff, John

Despite rigorous stress testing on dozens of computers, Intel's John Barton is still nervous about the upcoming release of Intel's new Core i7 microprocessor. Even after months of testing, Barton knows that it is impossible to predict exactly how the chip will function once it is installed in thousands of computers running tens of thousands of programs. The new chip, which has 731 million transistors, was designed for use in desktop computers, but the company hopes that it will eventually be used in everything from powerful servers to laptops. The design and testing of an advanced microprocessor is one of the most complex endeavors humans have undertaken. Intel now spends $500 million annually to test its chips before selling them. However, it still is impossible to test more than a fraction of the total number of "states" that the new Core i7 chip can be programmed in. "Now we are hitting systemic complexity," says Synopsys CEO Aart de Geus. "Things that came from different angles that used to be independent have become interdependent." In an effort to produce error-free chips, Intel in the 1990s turned to a group of mathematical theoreticians in the computer science field who had developed advanced techniques for evaluating hardware and software, known as formal methods. In another effort to minimize chip errors, the Core i7 also contains software that can be changed after the microprocessors are shipped, giving Intel the ability to correct flaws after the product's release.
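The contrast drawn above, testing a fraction of states versus reasoning about all of them, can be made concrete on a circuit small enough to enumerate. This sketch exhaustively checks a gate-level full adder against its arithmetic specification; real chips have far too many states for enumeration, which is why Intel turned to symbolic formal methods instead:

```python
from itertools import product

def full_adder(a: int, b: int, carry_in: int):
    """Gate-level one-bit full adder: two XORs, two ANDs, one OR."""
    s = a ^ b ^ carry_in
    carry_out = (a & b) | (carry_in & (a ^ b))
    return s, carry_out

# Exhaustive check over all 2**3 input states -- the brute-force spirit
# of verification, practical only for tiny circuits.
for a, b, c in product((0, 1), repeat=3):
    s, cout = full_adder(a, b, c)
    assert 2 * cout + s == a + b + c  # specification: it really adds
```

For the 731 million transistors of a Core i7, formal methods replace this enumeration with symbolic proofs that cover every state without visiting each one.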

View Full Article

Blog: 11 "Laws of IT Physics"

November 17th, 2008

11 “Laws of IT Physics”

Posted by Michael Krigsman @ 6:02 am

Given high rates of failed IT projects, it’s helpful to examine first principles that underlie successful technology deployments. Even though devils live in the details, understanding basic dynamics behind successful project setup, execution, and management is important.

During testimony before Congress, in a hearing titled “The Dismal State of Federal Information Technology Planning,” Norm V. Brown, Executive Director of the Center for Program Transformation, an IT failures think tank, presented what he calls “Laws of IT Physics™.”

These “Laws” highlight hidden pitfalls that hurt many projects, and help explain why some projects succeed while others fail. They recognize that successful IT project delivery is primarily about managing people, process, and deliverables. Yes, technology is important, but the people side comes first.

Thursday, November 13, 2008

Blog: Working at 99% CPU utilization

November 13th, 2008

Working at 99% CPU utilization

Posted by Paul Murphy @ 12:15 am

The key metric used in data processing management is system utilization - and that same metric has been adopted to push the idea that consolidation through virtualization makes sense.

The key factor driving the evolution of system utilization as a management metric was the high cost of equipment - first in the 1920s when data processing was getting started, and again in the 1960s when the transition from electro-mechanical to digital equipment got under way.

What made it possible, however, was something else: because batch processing of after-the-fact data was the natural norm for the gear, user management expected printed reports to arrive on their desks on a predictable schedule wholly divorced from the processes generating the data.

Thus what drove data processing management to use its gear 24 x 7 was money - but what made it possible to do this was user management’s acceptance of the time gaps implicit in an overall process consisting of distinct data collection, data processing, and information reporting phases.

Almost a hundred years later data processing still operates in much the same way - and the utilization metric is still its most fundamental internal performance measure. And, since its costs are still high too, the financial incentive for high utilization continues to provide justificatory power.

Wednesday, November 12, 2008

Blog: Computer Science Outside the Box; research project requirements

Computer Science Outside the Box
Computing Community Consortium (11/12/08) Lazowska, Ed

Computing Community Consortium (CCC) chairman Ed Lazowska attended the "Computer Science Outside the Box" workshop hosted by several organizations, including the National Science Foundation and the CCC. One of the insights he took away from the event was that smart decisions can be aided by a model of computer science's evolution depicted as an ever-expanding sphere. "Even when working inside the sphere, we've got to be looking outward," Lazowska writes. "And at the edges of the sphere, we've got to be embracing others, because that's how we reinvent ourselves." Lazowska observes that most computer science work is driven both by usage concerns and a longing to evolve precepts of lasting value, and he contends that researchers may ascribe too little worth to research without obvious utility, and may be too hesitant to reject application-focused work that is unlikely to advance their own field. This applies both to the interfaces between computer science and other disciplines and to the intersections between sub-disciplines of computer science. "We've got to produce students who are comfortable at these interfaces," Lazowska says. A research project must be challenging enough to sustain the participant's interest, yet easy enough to be achievable, he says, and the project must possess a long-term vision and be undertaken in increments.

View Full Article

Tuesday, November 11, 2008

Blog: Why Veins Could Replace Fingerprints and Retinas as Most Secure Form of ID

Why Veins Could Replace Fingerprints and Retinas as Most Secure Form of ID
Times Online (UK) (11/11/08) Harvey, Mike

Finger vein authentication is starting to gain traction in Europe. Easydentic Group in France says it will use finger vein security for door access systems in the United Kingdom and other European markets. The advanced biometric system, which verifies identities based on the unique patterns of veins inside the finger, has been widely introduced by Japanese banks in thousands of cash machines over the last two years. Hitachi developed the technology, which captures the pattern of blood vessels by transmitting near-infrared light at different angles through the finger, and then turns it into a digital code to match it against preregistered profiles. Veins are difficult to forge and impossible to manipulate because they are inside the body, according to Hitachi. The company also says finger vein technology is more affordable than iris scanning or face/voice recognition and has a lower false rejection rate than fingerprinting. Finger vein authentication is primarily used in Japan for ATMs, door access systems, and computer log-in systems.

View Full Article

Monday, November 10, 2008

Blog: Study Shows How Spammers Cash In

Study Shows How Spammers Cash In
BBC News (11/10/08)

Researchers at the University of California, Berkeley and the University of California, San Diego (UCSD) hijacked a working spam network to analyze its economic value. The analysis found that spammers can make money by getting just one response for every 12.5 million emails sent. However, the researchers say that spam networks may be susceptible to attacks that make it more costly to send junk email. The researchers, led by UCSD professor Stefan Savage, took over a piece of the Storm spam network and created several proxy bots that acted as conduits of information between the command and control system for Storm and the hijacked home PCs that send the junk mail. The researchers used the machines to conduct their own fake spam campaigns. Two types of fake spam were sent, one that mimicked the way Storm spreads using viruses and the other aimed at tempting people to visit a fake pharmacy site and buy an herbal remedy to boost their libido. The fake pharmacy site always returned an error message when potential buyers clicked a button to submit their credit card details. The researchers sent about 469 million spam messages, and after 26 days and almost 350 million email messages, only 28 "sales" were made. The response rate for the campaign was less than 0.00001 percent, and would have resulted in revenues of $2,731.88, just over $100 a day for the measurement period. The researchers say that spam's small profit margin indicates that spammers would be economically susceptible to any disruptions in their networks.
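The economics reported above are easy to reproduce. A back-of-the-envelope sketch in Python, using the figures from the study (the per-day revenue is derived from the reported totals, so treat it as an approximation):

```python
# Back-of-the-envelope economics of the hijacked Storm spam campaign,
# using the figures reported in the study.
emails_sent = 350_000_000   # pharmacy-campaign emails over 26 days
sales = 28                  # "sales" recorded at the fake pharmacy site
revenue = 2731.88           # projected revenue in USD
days = 26

response_rate = sales / emails_sent      # fraction of emails that convert
emails_per_sale = emails_sent / sales    # emails needed to get one response
revenue_per_day = revenue / days

print(f"response rate: {response_rate * 100:.7f}%")   # well under 0.00001%
print(f"emails per sale: {emails_per_sale:,.0f}")     # one per 12.5 million
print(f"revenue per day: ${revenue_per_day:.2f}")     # just over $100
```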

View Full Article

Saturday, November 8, 2008

Blog: Computers checking mathematical proofs?

November 8th, 2008

Computers checking mathematical proofs?

Posted by Roland Piquepaille @ 8:55 am

Computer-assisted mathematical proofs are not new. For example, computers were used to confirm the so-called ‘four color theorem.’ In a short release, ‘Proof by computer,’ the American Mathematical Society (AMS) reports it has published a special issue of one of its journals dedicated to computer-aided proofs. ‘The four Notices articles explore the current state of the art of formal proof and provide practical guidance for using computer proof assistants.’ And as one of the researchers involved said, ‘such a collection of proofs would be akin to the sequencing of the mathematical genome.’ But read more…

Let’s first look at how mathematical theorems were proven in the past. “When mathematicians prove theorems in the traditional way, they present the argument in narrative form. They assume previous results, they gloss over details they think other experts will understand, they take shortcuts to make the presentation less tedious, they appeal to intuition, etc. The correctness of the arguments is determined by the scrutiny of other mathematicians, in informal discussions, in lectures, or in journals. It is sobering to realize that the means by which mathematical results are verified is essentially a social process and is thus fallible.”

This is why the concept of ‘formal proof’ is now being used. “To get around these problems, computer scientists and mathematicians began to develop the field of formal proof. A formal proof is one in which every logical inference has been checked all the way back to the fundamental axioms of mathematics. Mathematicians do not usually write formal proofs because such proofs are so long and cumbersome that it would be impossible to have them checked by human mathematicians. But now one can get ‘computer proof assistants’ to do the checking. In recent years, computer proof assistants have become powerful enough to handle difficult proofs.”
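To give a flavor of what a proof assistant actually checks, here is a minimal formal proof in Lean 4 syntax (a toy illustration, not one of the proofs discussed in the Notices articles). Every inference is explicit enough for the machine to verify it back to the logical primitives:

```lean
-- A fully formal proof that conjunction commutes: from a proof h of p ∧ q
-- we construct a proof of q ∧ p, and the checker verifies every step.
theorem and_swap (p q : Prop) (h : p ∧ q) : q ∧ p :=
  ⟨h.right, h.left⟩
```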

Here is a link to this special issue of the Notices of the American Mathematical Society (December 2008, Volume 55, Issue 11).

Thursday, November 6, 2008

Blog: Proof by Computer

Proof by Computer
EurekAlert (11/06/08)

New developments in the use of formal proof in mathematics are investigated by a series of articles by leading experts, published in the December 2008 edition of the Notices of the American Mathematical Society. The traditional method of proving theorems involves mathematicians presenting the argument in narrative form, and the correctness of the arguments is decided by the analysis of other mathematicians, in informal discussions, in lectures, or in journals. Problems of reliability inevitably crop up because the process of verifying mathematical results is basically social and therefore prone to error, and so the field of formal proof was developed to circumvent these shortcomings. A formal proof is one in which every single logical presumption has been checked all the way back to the basic mathematical precepts, and technology has advanced in recent years so that computer proof assistants that perform such checking are sufficiently powerful to accommodate challenging proofs. If the use of these assistants proliferates, the practice of mathematics could be dramatically revised. One vision is to have formal proofs of all central mathematical proofs, which Thomas Hales of the University of Pittsburgh compares to "the sequencing of the mathematical genome." The quartet of articles analyzes the current state of the art of formal proof and offers practical guidance for employing computer proof assistants.

View Full Article

Wednesday, November 5, 2008

Blog: Yahoo's Hadoop Software Transforming the Way Data Is Analyzed

Yahoo's Hadoop Software Transforming the Way Data Is Analyzed
SiliconValley.com (11/05/08) Ackerman, Elise

Yahoo!'s Hadoop open source data-mining program is capable of searching through the entire Library of Congress in less than 30 seconds. Universities also are using Hadoop, which is part of Yahoo!'s huge computing grid. "It makes it possible to actually take advantage of all the computers we have hooked up together," says Yahoo!'s Larry Heck. Hadoop improves the relevance of ads Yahoo! displays on the Internet by analyzing the company's endless flow of data, which is now more than 10 terabytes a day, in real time. As users navigate through Yahoo!, Hadoop determines which ads are likely to catch their attention. Yahoo! also will be using Hadoop on the sites owned by the 796 members of a newspaper consortium that is working with Yahoo! to sell more advertising at better prices. Hadoop was first used to build Yahoo!'s Web index. Since then, the software has been adjusted by engineers and researchers both inside and outside of the company for use in experiments with giant data sets. Amazon, Facebook, and Intel developers are using Hadoop for tasks ranging from log analysis to earthquake modeling. "We are leveraging not only the contribution that we are giving to the software, but the contributions from the larger community as well, everybody wins from it," Heck says.
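Hadoop's programming model, MapReduce, is easy to sketch in miniature. The toy below is plain Python with no Hadoop involved; it shows the two phases (map, then shuffle-and-reduce) that Hadoop distributes across thousands of machines, here applied to a word count:

```python
from collections import defaultdict

def map_phase(documents):
    """Map: emit a (word, 1) pair for every word in every document."""
    for doc in documents:
        for word in doc.split():
            yield word.lower(), 1

def reduce_phase(pairs):
    """Shuffle + reduce: group pairs by key and sum the counts."""
    counts = defaultdict(int)
    for word, n in pairs:
        counts[word] += n
    return dict(counts)

docs = ["the quick brown fox", "the lazy dog", "The fox"]
counts = reduce_phase(map_phase(docs))
print(counts["the"])  # → 3
print(counts["fox"])  # → 2
```

Because each map task and each reduce key is independent, the same code scales out by running the phases on many machines in parallel, which is what the Hadoop grid does.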

View Full Article

Tuesday, November 4, 2008

Blog: Multicore: New Chips Mean New Challenges for Developers

Multicore: New Chips Mean New Challenges for Developers
IDG News Service (11/04/08) Krill, Paul

The development of multicore processors is forcing software developers to work on getting software to be processed across multiple cores to fully utilize the new performance capabilities available in the hardware. However, this task has proven to be quite challenging, with developers struggling with issues surrounding concurrency and potential performance bottlenecks. Already, 71 percent of organizations are developing multithreaded applications for multicore hardware, according to a recent IDC survey. IDC analyst Melinda Ballou says developers need to approach multicore with a level of commitment to better practices throughout an organization and from a project perspective. Multicore processors are becoming increasingly common as single-core chips reach their limits and as power-consumption issues become more important. As hardware continues to change, the pressure will be on software developers to adapt and capitalize on the new capabilities. Developers must learn new techniques and use new tools to maximize performance. Intel's James Reinders acknowledges that developing multicore applications requires a much more complicated thought process about software design than most developers understand. "By and large, the majority of programmers don't have experience with it and are in need of tools and training and so forth to help take advantage of it," Reinders says. Intel is offering its Threading Building Blocks template library to help C++ programmers with parallel programming. The Intel Thread Checker helps find nondeterministic programming errors, and the Intel Thread Profiler helps visualize a program to check what each core is doing.
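The basic shape of the multicore problem can be sketched in a few lines. This Python fragment is illustrative only (Intel's Threading Building Blocks is a C++ library): it splits independent work units across a pool of workers, the pattern that multicore hardware rewards:

```python
from concurrent.futures import ThreadPoolExecutor

# Splitting independent work units across workers is the core multicore
# pattern. (In CPython, threads overlap mainly for I/O-bound work because
# of the global interpreter lock; CPU-bound code would use processes, or a
# native library such as Intel's Threading Building Blocks in C++.)

def work(n: int) -> int:
    return n * n  # stand-in for an independent, thread-safe task

with ThreadPoolExecutor(max_workers=4) as pool:
    results = list(pool.map(work, range(8)))  # input order is preserved

print(results)  # → [0, 1, 4, 9, 16, 25, 36, 49]
```

The hard part Reinders alludes to is exactly what this toy avoids: real workloads share state, and coordinating that access without races or bottlenecks is where the new tools and training come in.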

View Full Article

Monday, November 3, 2008

Blog: Microsoft Security Intelligence Report for First Half of 2008 (November 3, 2008)

SANS NewsBites Vol. 10 Num. 87 (fwd)

Tue, 4 Nov 2008

--Microsoft Security Intelligence Report for First Half of 2008 (November 3, 2008)

According to Microsoft's most recent semi-annual Security Intelligence Report, while machines running Windows Vista are less likely to be infected with malware than their Windows XP counterparts, ActiveX browser plug-ins still pose a threat to the newer operating system.

During the first six months of 2008, for each thousand times Microsoft's Malicious Software Removal Tool (MSRT) was executed, it scrubbed malware from three Vista SP1 machines, 10 Windows XP SP2 machines and eight Windows XP SP3 machines. Of the top 10 browser-based attacks against Vista during that same period, eight were ActiveX vulnerabilities. The report also found that 90 percent of disclosed vulnerabilities were in applications, while just six percent were in operating systems.

http://www.computerworld.com/action/article.do?command=viewArticleBasic&articleId=9118879&source=rss_topic17

http://news.cnet.com/8301-1009_3-10080428-83.html?part=rss&subj=news&tag=2547-1009_3-0-20

[Editor's Note (Pescatore): There are far more applications than there are operating systems, so that last bit is not very surprising. The most meaningful data in this report is the chart that shows what types of installed malware the MSRT found and removed. It shows that Trojans and "potentially unwanted software" are getting through desktop defenses pretty easily - the signature and patch-centric approach to protecting desktops isn't dealing with the new, targeted threats that aim at the user, not unpatched PCs.]

Blog: Profile: Luis von Ahn; human-assisted computation

Profile: Luis von Ahn
BusinessWeek (11/03/08) Scanlon, Jessie

Carnegie Mellon University computer science professor Luis von Ahn has developed digitization software that could put the New York Times' entire archive, which dates back to 1851, online by late 2009. The newspaper has been using typists to digitize its archive, and in 10 years they have been able to digitize 27 years of articles. Von Ahn's software will process 129 years in less than 24 months. Von Ahn's research focuses on what he calls "human computation." He develops Web-based programs that take advantage of human abilities, such as reading or knowing common-sense facts, and then aggregates that knowledge to solve large-scale, ongoing problems in computer science. Von Ahn's first breakthrough technology, the Captcha, developed with his thesis advisor Manuel Blum in 2000, is used by 60,000 Web sites to verify that the entity filling out a Web registration form is in fact a human. Von Ahn later released ReCaptcha, an updated version of the technology that replaces Captcha's random letters with words that computers cannot read, drawn from library archives and now from the New York Times' archive, so that every solved Captcha helps complete a digitization project. Another example of human computation is von Ahn's ESP Game, in which two players are shown the same image and asked to type in descriptive labels. When the labels match, the players are awarded points and shown another image. The game helps generate tags for online images.
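The ESP Game's core mechanic is simple to sketch: agreement between two independent players is treated as evidence that a label is correct. A toy round in Python (the labels and the "taboo" list of already-confirmed words are hypothetical, not von Ahn's implementation):

```python
# Toy ESP Game round: a label becomes a tag only when both players,
# working independently, type it for the same image.
def esp_round(player_a_labels, player_b_labels, taboo=()):
    """Return newly agreed-upon tags, excluding taboo words from
    earlier rounds (which forces players toward fresh labels)."""
    agreed = set(player_a_labels) & set(player_b_labels)
    return sorted(agreed - set(taboo))

tags = esp_round(
    ["dog", "grass", "running", "park"],
    ["dog", "park", "sunny"],
    taboo=["dog"],  # already confirmed for this image in earlier rounds
)
print(tags)  # → ['park']
```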

View Full Article

Friday, October 31, 2008

Blog: HIPAA Security Rule; new implementation guide

SANS NewsBites Vol. 10 Num. 86 (fwd)

Fri, 31 Oct 2008

STANDARDS

--NIST Releases Documents on Key Management, Security in System Development Life Cycle and HIPAA Rule Implementation (October 27, 2008) The National Institute of Standards and Technology (NIST) has released three documents. Special Publication 800-57, "Recommendation for Key Management Part 3: Application Specific Key Management Guidance," is a draft document aimed at helping "system administrators and system installers adequately secure applications based on product availability and organizational needs and to support organizational decisions about future procurements." Comments on the draft document will be accepted through January 16, 2009. Special Publication 800-64, "Security Considerations in the System Development Life Cycle," is a document in its final form that "has been developed to assist federal government agencies in integrating essential IT security steps into their established IT system development life cycle." Special Publication 800-66, "An Introductory Resource Guide for Implementing the Health Insurance Portability and Accountability Act (HIPAA) Security Rule," is also in its final form.

http://www.gcn.com/online/vol1_no1/47450-1.html?topic=security

http://csrc.nist.gov/publications/drafts/800-57-part3/Draft_SP800-57-Part3_Recommendationforkeymanagement.pdf

http://csrc.nist.gov/publications/nistpubs/800-64-Rev2/SP800-64-Revision2.pdf

http://csrc.nist.gov/publications/nistpubs/800-66-Rev1/SP-800-66-Revision1.pdf

Wednesday, October 29, 2008

Blog: European Computer Scientists Seek New Framework for Computation

European Computer Scientists Seek New Framework for Computation
European Science Foundation (10/29/08)

One of the challenges still remaining for electronic computation is the ability to break down large complex processes into small, more manageable components that can be reused in different applications. This goal can be accomplished in a variety of ways, but none of them can manage all the processes very well. The major problem is that the dependent links, or correlations, that interconnect computer processes or programs cannot be broken down. These dependent links are common to all processes in which computation is involved, including biological systems, quantum computing, and conventional programming. European computer scientists believe that now is the time to create a coordinated effort to solve the correlation problem, and the European Science Foundation recently held a workshop to establish a framework for additional research. The workshop identified that correlations in computer science represent an important problem common to the entire field of programming and concluded that the evolution of general purpose computing has reached a point where the correlation problem will hinder additional progress. The workshop discussed progress in the relatively new field of aspect-oriented software development (AOSD), which is creating new techniques for isolating the correlations bridging software components. AOSD techniques make it possible to modularize those aspects of a system or process that cut across different components, enabling them to be broken down into reusable components or objects.
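Aspect-oriented techniques are easiest to see in code. In this Python sketch (a loose analogy; real AOSD tools such as AspectJ weave aspects into code at build time), a cross-cutting logging concern is factored out into one reusable module instead of being scattered through every component it touches:

```python
import functools

# A cross-cutting concern (call logging) captured as a single reusable
# "aspect" rather than duplicated inside every component it cuts across.
log = []

def logged(fn):
    @functools.wraps(fn)
    def wrapper(*args, **kwargs):
        log.append(f"call {fn.__name__}{args}")
        return fn(*args, **kwargs)
    return wrapper

@logged
def deposit(balance, amount):
    return balance + amount

@logged
def withdraw(balance, amount):
    return balance - amount

balance = withdraw(deposit(100, 50), 30)
print(balance)   # → 120
print(len(log))  # → 2
```

The business logic in `deposit` and `withdraw` stays untouched; the correlation between them (everything must be logged) lives in one place and can be changed or removed independently.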

View Full Article

Blog: Ozzie responds: Is Microsoft Azure just 'Hailstorm' revisited?

Ozzie responds: Is Microsoft Azure just ‘Hailstorm’ revisited?

Posted by Mary Jo Foley

At the Professional Developer Conference (PDC) in Los Angeles, I’ve heard a few long-time Microsoft watchers wondering aloud whether Microsoft’s newly unveiled “Azure” isn’t simply Microsoft taking another run at “Hailstorm.”

I had a chance to ask Ray Ozzie, Microsoft’s Chief Software Architect, that very question this week.

First, a quick refresher: For those who weren’t following the Microsoft juggernaut back in the late 1990s, Hailstorm was Microsoft’s first pass at a .Net services platform. “‘HailStorm’ technologies will enable a new world of computing where all the applications, devices and services in an individual’s life can work together, on their behalf and under their control,” explained Microsoft in a 2001 press release. (Sounds eerily like Live Mesh/Live Services, doesn’t it?)

Microsoft ended up killing off Hailstorm before it ever really launched. One of the main reasons was privacy: Microsoft customers were nervous about trusting Microsoft with hosting their data. And the idea of an on-premise, customer-managed Hailstorm cloud was not fleshed out.

Isn’t Azure — Microsoft’s new cloud platform, of which Live Services are one key component — just Hailstorm Take 2? And if it’s not, how is it really different, I asked Ozzie.

“It’s amazing that at this point in time, as compared to that long ago, (that) we still don’t have that one nailed from a privacy and ownership perspective. That was what so many people complained about. But right now you’ve got Open Social and Facebook Connect, and both of them want to still create walled gardens, open walled gardens, whatever that is, but that they are lending you your information back and withdrawing it within 24 hours or whatever.”

“I think we need to get past that, and what we’re trying to do with Mesh and the terms of use. We’re trying to get to a point where you literally do own your data, we bring the personal of the personal computer to the cloud where it’s your stuff, and if you do something with someone, it better be a symmetrical synch relationship where you’re giving them rights, they’re giving you rights, because I just don’t see how it works. We can’t create a walled garden; it’s just not going to work.”

Blog: In Chaotic Computing, Anarchy Rules OK

In Chaotic Computing, Anarchy Rules OK
New Scientist (10/29/08) No. 2860, P. 40; Graham-Rowe, Duncan

Building next-generation computer processors by tapping the electronic parallel of chaotic weather systems is the goal of a team of physicists in the United States and India led by William Ditto of the University of Florida in Gainesville. Such processors would be vastly more powerful than their conventional chip equivalents, as well as self-reparable, through their ability to channel all their computational muscle into the task at hand and then reassign it as soon as a different chore comes up. The unpredictability of chaotic systems is the result of their sensitivity to the most infinitesimal influences, which inspired Ditto and Sudeshna Sinha of the Institute of Mathematical Sciences in Chennai to consider the construction of a circuit that exhibited chaotic behavior that could be harnessed for practical applications. Ditto and Sinha conceived of a chaotic logic gate with two inputs and one output like a conventional gate, but composed of a chaotic element or chaogate. When the chaogate receives its input signals, the internal chaotic circuit starts oscillating and quickly stabilizes at a value that relies on the inputs and a control signal. The research team calculated that changing the control signal's setting would enable the chaogate to be transformed into any desirable logic gate, and a prototype chaogate proved the concept's feasibility. Ditto is currently engaged in the commercialization of the technology and the fabrication of prototype circuits, and one of the promised advantages of chaotic logic is the dramatically reduced cost of producing custom chips. If a chip containing chaotic logic gates suffers damage, performance need not be affected as the circuits can be reconfigured to bypass the damaged area. Ditto's team has developed a method to use "chameleon" logic circuits to store data, producing digital memory that offers greater compactness than conventional memory and that also can retrieve data faster.
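The chaogate idea can be caricatured with the logistic map, a standard chaotic system. In this toy (loosely inspired by Sinha and Ditto's threshold scheme; the specific numbers are illustrative, not taken from their circuit), the same nonlinear element becomes an AND, OR, or NAND gate purely by changing its control parameters:

```python
def logistic(x):
    """One iteration of the chaotic logistic map."""
    return 4.0 * x * (1.0 - x)

def chaogate(a, b, base, step, threshold):
    """Encode the two logic inputs in the map's initial state, iterate
    once, and threshold the result. The control parameters (base, step,
    threshold) select which logic gate the same element implements."""
    x0 = base + step * (a + b)
    return 1 if logistic(x0) > threshold else 0

# One chaotic element morphed into three gates by its control signal:
AND  = dict(base=0.10, step=0.1, threshold=0.7)
OR   = dict(base=0.10, step=0.1, threshold=0.5)
NAND = dict(base=0.55, step=0.1, threshold=0.8)

for name, cfg in [("AND", AND), ("OR", OR), ("NAND", NAND)]:
    table = [chaogate(a, b, **cfg) for a in (0, 1) for b in (0, 1)]
    print(name, table)
# → AND [0, 0, 0, 1]
#   OR [0, 1, 1, 1]
#   NAND [1, 1, 1, 0]
```

The reconfigurability described in the article falls out directly: because the gate's identity is set by a control signal rather than by wiring, the same hardware can be reassigned on the fly or routed around damage.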

View Full Article

Tuesday, October 28, 2008

Blog: Enterprise readiness of Cloud ratcheting up

Enterprise readiness of Cloud ratcheting up

Posted by James Staten

It may just be time for enterprise customers to take a serious look at cloud computing. Major announcements in the past few days from Microsoft and Amazon have certainly signaled that the on-demand Internet computing model has staying power. And with a long recession looming there may be no better time to start getting familiar with something that could dramatically lower infrastructure costs.

Amazon, which has been the dominant market leader and pioneer of cloud computing, finally lifted the “beta” tag from the Elastic Compute Cloud (EC2) and delivered an SLA for the service and support for Windows applications. It also announced plans to provide service monitoring, load balancing and automatic scaling services in the future. And Amazon’s even started taking phone calls and providing premium support for enterprise customers. Nearly all of these capabilities have been available for months from smaller cloud players (especially those coming from an ISP background where such capabilities are commonplace).

Microsoft countered by signalling that cloud computing has such significant staying power that it is willing to bet the “Windows” brand on it. Ray Ozzie’s Windows Azure goes beyond the basic infrastructure and services of EC2, promising Visual Studio .Net developers a complete platform for their work. This will put Microsoft in competition with EC2 as well as Salesforce.com’s Force.com platform. But Azure is just a technical preview today (aka “beta”).

Monday, October 27, 2008

Blog: Good Code, Bad Computations: A Computer Security Gray Area

Good Code, Bad Computations: A Computer Security Gray Area
UCSD News (10/27/08) Kane, Daniel

University of California, San Diego (UCSD) graduate students Erik Buchanan and Ryan Roemer, building on previous research by UCSD professor Hovav Shacham, have demonstrated that the technique of building malicious programs from good code using return-oriented programming can be automated. They also demonstrated that this vulnerability applies to RISC computer architectures as well as the x86 architecture. Shacham has already described how return-oriented programming could be used to force computers with the x86 architecture to act maliciously without infecting the machines with new code. However, the attack required extensive manual construction and appeared to rely on a unique quirk in the x86 design. Buchanan and Roemer will present their work at ACM's Conference on Computer and Communications Security (CCS), which takes place Oct. 27-31 in Alexandria, Virginia. "Most computer security defenses are based on the notion that preventing the introduction of malicious code is sufficient to protect a computer," says UCSD professor Stefan Savage. "There is a subtle fallacy in the logic, however: simply keeping out bad code is not sufficient to keep out bad computation." Return-oriented programming starts with the attacker taking advantage of a programming error in the target system to overwrite the runtime stack and divert program execution away from the path intended by the system's designers. However, instead of injecting malicious code, this technique enables attackers to create any kind of malicious computation or program using existing code.
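The core idea, computation stitched together entirely from code that is already present, can be caricatured in a few lines. This toy "machine" is purely illustrative (real attacks chain short x86 or RISC instruction sequences that each end in a return): a corrupted stack of return addresses drives pre-existing "gadgets" to perform a computation the legitimate program never intended:

```python
# Benign code fragments already present in the "program": each is a tiny
# gadget that transforms the machine state and then "returns".
gadgets = {
    0x01: lambda st: st + 5,   # gadget at "address" 0x01: add 5
    0x02: lambda st: st * 2,   # gadget at "address" 0x02: double
    0x03: lambda st: st - 1,   # gadget at "address" 0x03: subtract 1
}

def run(stack, state=0):
    """Pop 'return addresses' in order and execute the gadget at each,
    the way a corrupted runtime stack drives a return-oriented
    computation without introducing any new code."""
    for addr in stack:
        state = gadgets[addr](state)
    return state

# The attacker injects no code at all, only this sequence of addresses:
overwritten_stack = [0x01, 0x02, 0x03]   # computes (0 + 5) * 2 - 1
print(run(overwritten_stack))  # → 9
```

This is why Savage's point bites: a defense that only blocks new code never sees anything unusual here, because every instruction executed was already in the program.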

View Full Article

Blog: Microsoft's Azure cloud platform: A guide for the perplexed

Microsoft’s Azure cloud platform: A guide for the perplexed

Posted by Mary Jo Foley

Now that the initial Microsoft PDC pixie dust has settled, developers are trying to digest exactly what Microsoft’s cloud platform is. Here’s my attempt to explain it.

Microsoft laid out its “Azure” foundational infrastructure for the cloud during the keynote kick-off on day one of the Professional Developers Conference (PDC) here in Los Angeles. The goal of Azure is to provide developers who want to write applications that run partially and/or entirely in a remote datacenter with a platform and set of tools.

Thursday, October 23, 2008

Blog: Computer Circuit Built From Brain Cells

Computer Circuit Built From Brain Cells
New Scientist (10/23/08) Barras, Colin

Researchers at Israel's Weizmann Institute of Science have developed a way to control the growth pattern of human neurons to build reliable computer circuits that use neurons instead of wires. The researchers start with a glass plate coated with cell-repellent material. The desired circuit pattern is scratched into this coating and then coated with a cell-friendly adhesive. The cell repellent forces the cells to grow in the scratched areas, which are thin enough to force the neurons to grow in a single direction, forming straight, wire-like connections around the circuit. Using this method, the researchers built a device that acts like an AND logic gate, which produces an output only when it receives two inputs. Weizmann researcher Assaf Rotem believes that this research provides a useful model for real brain function, and says that brain-cell logic circuits could serve as intermediaries between computers and the nervous system. Brain implants already give paralyzed people the ability to control robotic arms or the ability to talk, but these implants suffer a drop-off in performance when scar tissue covers the electrodes. "An intermediate layer of in vitro neurons interfacing between man and machine could be advantageous," Rotem says.

View Full Article

Friday, October 17, 2008

Blog: Computing With RNA

Computing With RNA
Technology Review (10/17/08) Graham-Rowe

California Institute of Technology (Caltech) researchers Christina Smolke and Maung Nyan Win have created molecular computers that can self-assemble from strips of RNA within living cells. The Weizmann Institute of Science's Ehud Shapiro says the research creates the possibility of computing devices capable of responding to specific conditions within a cell, and could lead to drug delivery systems that target cancer cells by sensing genes used to regulate cell growth and death. Smolke and Win's biocomputers are built using three main components--sensors, actuators, and transmitters--all made from RNA. The input sensors are made from RNA molecules that act like antibodies, binding tightly to specific targets. The actuators are made of ribozymes, complex RNA molecules that have catalytic properties similar to enzymes. These two components are combined with another RNA molecule that serves as a transmitter. It is activated when a sensor molecule recognizes an input chemical and triggers an actuator molecule. By combining RNA molecules in certain ways, the researchers demonstrated that they can get them to behave like different types of logic gates. Smolke says the modular molecules have a plug-and-play like capability, which allows them to be combined in different ways and could potentially be used to detect thousands of different metabolic or protein inputs.

View Full Article

Wednesday, October 15, 2008

Blog: Probe Sees Unused Internet

Probe Sees Unused Internet
Technology Review (10/15/08) Lemos, Robert

Internet addresses may not be running out as quickly as many feared, concludes a new research study. The study found that millions of Internet addresses have been assigned but remain unused. In a paper to be presented at the ACM Internet Measurement Conference, which takes place October 20-22, in Vouliagmeni, Greece, six researchers have documented what they say is the first complete census of the Internet in more than two decades. The researchers discovered a surprising number of unused addresses and predict that plenty of addresses will still be unused when the last numbers are assigned in a few years. The researchers say the main problem is that some companies and institutions are using only a small portion of the millions of addresses they have been allocated. The paper's lead author, University of Southern California professor John Heidemann, says the study indicates that there might be better ways of managing the IPv4 address space. A new map of the Internet created by the study suggests that there is room for more hosts even if addresses are running out. The map found that roughly a quarter of all blocks of network addresses are still unused. IPv4 offers about 4.3 billion addresses, while IPv6, the next-generation Internet address scheme, will offer about 340 trillion trillion trillion (2^128) addresses.
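The scale gap between the two address schemes is worth making concrete. The address-space sizes below follow directly from the 32-bit and 128-bit formats; the unused-blocks figure restates the census result:

```python
# IPv4 uses 32-bit addresses, IPv6 uses 128-bit addresses.
ipv4_total = 2 ** 32
ipv6_total = 2 ** 128

print(f"IPv4: {ipv4_total:,} addresses")    # ~4.3 billion
print(f"IPv6: {ipv6_total:.3e} addresses")  # ~3.4e38

# The census found roughly a quarter of allocated IPv4 blocks unused:
unused_fraction = 0.25
print(f"unused IPv4 blocks: ~{unused_fraction:.0%}")
```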

View Full Article

Tuesday, October 14, 2008

Blog: Study: Use of Ruby Language on the Rise

Study: Use of Ruby Language on the Rise
eWeek (10/14/08) Taft, Darryl K.

The use of the Ruby programming language has grown significantly over the past four years, according to a study based on Black Duck Software's Koders.com search engine data. Ruby is now used more widely than PHP, Python, and Perl, and nearly as much as Visual Basic, C/C++, and C#. Black Duck says Ruby is now the fourth most requested language on Koders.com, behind Java, C/C++, and C#, and the number of Ruby searches has increased more than 20-fold since 2004. "Black Duck's search data confirms the tremendous growth that we are seeing within the community of Ruby developers," says RubyForge.org system administrator Tom Copeland. "It's great to see a leading code search site like Koders.com index RubyForge because it represents another way to make the projects in our community available to tens of thousands of developers worldwide." Ruby, used along with the Ruby on Rails framework, will reach 4 million developers worldwide by 2013, says Gartner's Mark Driver. "Ruby will enjoy a higher concentration among corporate IT developers than typical, dynamic 'scripting' languages, such as PHP," Driver says. Black Duck acquired Koders.com in April and has since enhanced the code search service by adding more than 200 million lines of code to the search engine's repository, increasing its size by 33 percent.

View Full Article

Monday, October 13, 2008

Blog: Dynamic Programming Futures

Dynamic Programming Futures
IDG News Service (10/13/08) Wayner, Peter

Dynamic programming languages such as Ruby, JavaScript, Perl, and Python have achieved sufficient critical mass to succeed and thrive, but experts say the nature of one's business and the structure of one's data are more important considerations than coolness when choosing a language platform. The future evolution of scripting languages will be guided by 10 principles, including the reduced importance of semantic barriers as languages borrow good concepts from one another, the growing dominance of frameworks, and the rising value of communities. Another factor shaping scripting languages is the evolution of applications into their own worlds, while the Web and the cloud emerge as the definitive platform. Improved language technology will lead to significant performance gains, and the life of dynamic code will be extended by emulation and cross-compilation. Another key principle is the penetration of programming into Web applications through embedding, while the relevance of dynamic programming could be greatly reduced by the advent of amateur programmers. Finally, a critical factor is adaptability to modern architectures. Any of the emerging scripting languages may be appropriate as long as it tracks and navigates these 10 principles.

View Full Article

Blog: UK University Holds Artificial Intelligence Test

UK University Holds Artificial Intelligence Test
Associated Press (10/13/08) Satter, Raphael G.

The University of Reading recently conducted its annual Turing Test of artificial intelligence. Dozens of volunteers at split-screen terminals carried out two conversations simultaneously, one with a chat program and one with a human. After five minutes, the volunteers were asked to identify the human and the machine. The chatbot Elbot was declared the winner for fooling three out of the 12 judges assigned to evaluate the program's conversational skills, earning the Loebner Artificial Intelligence Prize's bronze medal. The contest is based on the ideas of British mathematician Alan Turing, who in 1950 argued that conversation was proof of intelligence, and if a computer talked like a human, then for all practical purposes it thought like a human. Each of the programs approached the Turing Test in slightly different ways. One program often referenced its native Odessa and "Aunt Sonya in America." Another used humor to try to fool the judges. Elbot tried to throw the judges off by humorously admitting it was a machine, saying it accidentally poured milk on its cereal instead of oil, and by trying to dominate the conversation to keep it from wandering into areas it was not properly programmed to handle. Elbot's bronze medal is awarded to the software that best mimics human conversation in text form. So far, no silver or gold medals have been awarded. A silver medal would go to a machine that could pass a longer version of the Turing Test and fool at least half the judges, and a gold medal would be awarded to a machine that could process audio and visual information in addition to text.
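Chatbots of this era typically relied on pattern matching rather than understanding. A toy keyword-based responder (purely illustrative; this is not how Elbot or the other entrants actually work, and the rules and replies below are invented for the example) gives the flavor of the approach:

```python
import random

# Keyword -> canned replies. Real contest entrants used far larger rule
# sets plus tricks like humor and topic steering, as described above.
RULES = {
    "weather": ["I never go outside, so I wouldn't know."],
    "machine": ["Of course I'm a machine. Aren't you?"],
}
DEFAULT = ["Interesting. Tell me more.", "Why do you say that?"]

def reply(utterance: str) -> str:
    text = utterance.lower()
    for keyword, responses in RULES.items():
        if keyword in text:
            return random.choice(responses)
    # Fall back to a generic prompt that keeps the conversation moving
    # and steers away from topics the bot cannot handle.
    return random.choice(DEFAULT)

print(reply("Are you a machine?"))
```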

View Full Article

Friday, October 10, 2008

Blog: Academics Sink Teeth Into Yahoo Search Service

Academics Sink Teeth Into Yahoo Search Service
CNet (10/10/08) Shankland, Stephen

Academics and startups can construct their own search sites around Yahoo's search engine at no charge and manipulate results as they see fit through Yahoo's Build Your Own Search Service (BOSS), and the venture could give Yahoo higher standing in a market where Google reigns supreme. BOSS can be used to modify search results, as illustrated by an application built by Chengxiang Zhai and Bin Tan of the University of Illinois at Urbana-Champaign. Their application directed Yahoo's search engine along specific paths based on data stored on the user's own computer, in order to deduce which of several identically named items the user was most likely searching for. "We believe the client side of personalization ... can alleviate concern over privacy and it can provide more information about user activity," Zhai says. "And it can naturally distribute computation" so a search company's machines share work with the user's own system. Another service of potentially substantial value to academics is Yahoo's search assist feature, which suggests searches based on what people have started to type into the search box. For instance, it can display the variations of a search term, its membership in diverse categories, and the probability that people are searching for the term by itself or as part of a bigger query. "That's got a lot of potential," says Stanford University natural-language processing Ph.D. candidate Dan Ramage.
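The client-side personalization Zhai and Tan describe can be sketched as a re-ranking step: fetch generic results from any backend, then reorder them using terms mined from the user's local files, which never leave the machine. A minimal illustration (the function, data, and scoring here are invented for the sketch; the researchers' actual system is far more sophisticated):

```python
def rerank(results, local_terms):
    """Reorder search results by overlap with a local user profile.

    results: list of (title, snippet) tuples from any search backend.
    local_terms: words extracted from the user's own documents; they
    stay on the client, addressing the privacy concern Zhai raises.
    """
    def score(result):
        title, snippet = result
        words = set((title + " " + snippet).lower().split())
        return len(words & local_terms)
    return sorted(results, key=score, reverse=True)

# A user whose files are about programming sees that sense of the
# ambiguous query "python" promoted above the snake.
results = [
    ("Python (snake)", "a large constricting snake"),
    ("Python (language)", "a programming language with dynamic typing"),
]
profile = {"programming", "code", "software", "language"}
print(rerank(results, profile)[0][0])  # -> Python (language)
```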

View Full Article

Tuesday, October 7, 2008

Blog: Researchers Show How to Crack Popular Smart Cards

Researchers Show How to Crack Popular Smart Cards
InfoWorld (10/07/08) de Winter, Brenno

Researchers at the Dutch Radboud University Nijmegen have published a cryptographic algorithm and source code that could be used to duplicate smart cards used by several major transit systems. The scientists presented their findings at the Esorics security conference in Malaga, Spain, and also published an article with the cryptographic details. The research demonstrates how to circumvent the security mechanism of NXP Semiconductors' Mifare Classic RFID cards, which are widely used for building access control and public transportation. The researchers exposed the workings of the chip by analyzing the communication between the chip and the reader. An RFID-compatible device, dubbed the Ghost, was designed to work independently of a computer, which allowed the researchers to recover the cryptographic protocol. Part of the vulnerability stems from the fact that the RFID reader must communicate in a predictable way. Once the mechanism was exposed, the scientists could crack keys in less than a second on an ordinary computer using only 8 MB of memory. Freely available information on a hack of a related chip, the Hitag2, helped the researchers break Mifare. In a separate effort, German researcher Henryk Plotz cracked the Mifare Classic by removing the chip from a card, stripping away its layers, photographing each layer under a microscope, and analyzing the connections.
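The Mifare Classic's weakness stems from its proprietary stream cipher, built around a 48-bit linear feedback shift register (LFSR). A generic LFSR keystream generator (illustrative only; the tap positions below are arbitrary and this is not the actual Mifare feedback function) shows why such a small, linear state is dangerous:

```python
def lfsr_keystream(state, taps, nbits=48, length=16):
    """Generate keystream bits from an n-bit LFSR.

    state: initial register contents (the secret key material).
    taps: bit positions XORed together to form the feedback bit.
    An attacker who recovers any 48 consecutive state bits can run
    the register forward and backward, so the whole cipher falls.
    """
    mask = (1 << nbits) - 1
    out = []
    for _ in range(length):
        out.append(state & 1)             # emit the low bit
        fb = 0
        for t in taps:
            fb ^= (state >> t) & 1        # XOR the tap positions
        state = ((state >> 1) | (fb << (nbits - 1))) & mask
    return out

# With only 2^48 states, and linear structure to exploit, keys can be
# searched or solved for quickly, as the Radboud researchers showed.
print(lfsr_keystream(0x123456789ABC, taps=[0, 5, 9, 12]))
```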

View Full Article

Sunday, October 5, 2008

Blog: 'Intelligent' Computers Put to the Test

'Intelligent' Computers Put to the Test
Guardian Unlimited (UK) (10/05/08) Smith, David

Nearly 60 years after mathematician Alan Turing asked whether machines are capable of thinking, six programs will carry on conversations with human interrogators in an experiment that will attempt to show the answer is yes. To pass the Turing test, the software must trick the judges into believing they are talking to a human. So far no program has passed the test, but six programs will soon answer questions posed by human volunteers at the University of Reading in an effort to do so. If any of the programs succeed, it will likely be considered the most significant advance in artificial intelligence since the IBM supercomputer Deep Blue defeated world chess champion Garry Kasparov in 1997. The achievement could also raise profound questions about whether a computer can be conscious and whether humans would have the right to switch such a computer off. University of Reading cyberneticist Kevin Warwick believes machines can be conscious, but in their own way, much as a bat or a rat is conscious without being conscious in the way humans are. "I think the reason Alan Turing set this game up was that maybe to him consciousness was not that important; it's more the appearance of it, and this test is an important aspect of appearance," Warwick says.
View Full Article

Thursday, October 2, 2008

Blog: NIST Issues Three IT Security Documents; SP 800-115, Technical Guide to Information Security Testing and Assessment

SANS NewsBites: Vol. 10, No. 78; 10/03/2008

--NIST Issues Three IT Security Documents (October 2, 2008) The US National Institute of Standards and Technology (NIST) has released three documents that offer guidance on issues of information security. SP 800-121, Guide to Bluetooth Security, provides recommendations for securing implementations of Bluetooth technology. SP 800-115, Technical Guide to Information Security Testing and Assessment, offers guidance for designing and conducting security tests, analyzing the data generated by those tests, and implementing solutions to detected problems. Both documents are in final form.

SP 800-82, Guide to Industrial Control Systems (ICS) Security, is a draft document providing recommendations for securing Supervisory Control and Data Acquisition (SCADA) systems, Distributed Control Systems (DCS) and other system configurations. Public comment on this document will be accepted through November 30, 2008.

http://www.gcn.com/online/vol1_no1/47273-1.html?topic=security

http://csrc.nist.gov/publications/nistpubs/800-121/SP800-121.pdf

http://csrc.nist.gov/publications/nistpubs/800-115/SP800-115.pdf

http://csrc.nist.gov/publications/drafts/800-82/draft_sp800-82-fpd.pdf
