Wednesday, December 16, 2009

Blog: Less Clumsy Code for the Cloud

Less Clumsy Code for the Cloud
Technology Review (12/16/09) Naone, Erica

Researchers at the University of California, Berkeley are working on a project called BOOM, which is developing new programming techniques for cloud computing. BOOM researchers hope to make cloud computing more efficient by using database programming techniques originally developed in the 1980s, which are designed to collect large data sets and process them in various ways. "We can't keep programming computers the way we are," says Berkeley professor Joseph Hellerstein. "People don't have an easy way to write programs that take advantage of the fact that they could rent 100 machines at Amazon." The BOOM researchers are adapting an old language called Datalog to develop Bloom, a new language that would provide an easier way for programmers to work with cloud computing resources. The group also is creating a Bloom library that can be used with popular languages such as Java and Python. Oxford University professor Georg Gottlob, a Datalog expert, says the language may have been ahead of its time, but is gaining in popularity with the rise of distributed computing applications.
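
The declarative, Datalog-style evaluation that BOOM builds on can be illustrated with a toy sketch in Python. Everything below is invented for illustration (Bloom's actual syntax is different): it computes a reachability relation by iterating a recursive rule to a fixed point, the style of computation Datalog popularized.

    # Toy Datalog-style rules: reachable(X, Y) <- link(X, Y)
    #                          reachable(X, Z) <- link(X, Y), reachable(Y, Z)
    # Hypothetical link facts; a Datalog/Bloom program would state these declaratively.
    link = {("a", "b"), ("b", "c"), ("c", "d")}

    reachable = set(link)              # base rule
    changed = True
    while changed:                     # iterate the recursive rule to a fixed point
        changed = False
        for (x, y) in link:
            for (y2, z) in set(reachable):
                if y == y2 and (x, z) not in reachable:
                    reachable.add((x, z))
                    changed = True

    print(sorted(reachable))           # includes ("a", "d") once the fixed point is reached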

View Full Article

Thursday, December 10, 2009

Blog: Researchers Make Significant Advances in Molecular Computing

Researchers Make Significant Advances in Molecular Computing
University of Kent (12/10/09)

The fundamental limits of molecular computing have been defined by researchers at the University of Kent, who published their findings in the Journal of the Royal Society Interface. The research also discusses how fast molecular computers can perform a computation, which must be addressed in order to design machines that use components of organisms to run calculations inside living cells. The metabolic rate, or the ability to process energy, would determine the speed of bio-molecular computers, says Kent's Dominique Chu. "One of our main findings is that a molecular computer has to balance a trade-off between the speed with which a computation is performed and the accuracy of the result," Chu says. "However, a molecular computer can increase both the speed and reliability of a computation by increasing the energy it invests in the computation." He says this energy could be derived from food sources. Moreover, he believes the findings have the potential to be of practical importance for computing in general.

View Full Article

Tuesday, December 8, 2009

Blog: Optimism as Artificial Intelligence Pioneers Reunite

Optimism as Artificial Intelligence Pioneers Reunite
The New York Times (12/08/09) P. D4; Markoff, John

An optimistic outlook has returned to the field of artificial intelligence (AI) 45 years after the pronouncement by computer scientist John McCarthy that a thinking machine could be created within a decade. Fueling the renewed optimism is rapid progress in AI technologies. More than 200 of the Stanford Artificial Intelligence Laboratory's (SAIL's) original scientists recently convened for a reunion, where the optimism was palpable. On hand were such luminaries as Don Knuth, who wrote the definitive texts on computer programming, and spell-checker designer Les Earnest. Other SAIL alumni included Raj Reddy and Hans Moravec, who made important foundational contributions to speech recognition and robotics at Carnegie Mellon University. The development of the graphical user interface was based on the philosophy of simplicity defined by SAIL veteran Larry Tesler, while McCarthy, who was SAIL's director, developed the LISP programming language and the time-sharing approach to computers prior to joining the laboratory. The strides that AI has made in recent years are especially apparent at Stanford, where a team of researchers developed an autonomous vehicle that successfully traversed 131 miles of mountain roads to win the 2005 Grand Challenge held by the U.S. Defense Advanced Research Projects Agency. "We are a first-class citizen right now with some of the strongest recent advances in the field," says current SAIL director and Stanford roboticist Sebastian Thrun.

View Full Article

Monday, December 7, 2009

Blog: Rethinking Artificial Intelligence

Rethinking Artificial Intelligence
MIT News (12/07/09) Chandler, David L.

The Massachusetts Institute of Technology (MIT) is embarking on the Mind Machine Project (MMP), an initiative led by artificial intelligence (AI) pioneers to create new breakthroughs by rethinking fundamental AI assumptions. "Essentially, we want to rewind to 30 years ago and revisit some ideas that had gotten frozen" while fixing basic mistakes made over the years, says MIT professor Neil Gershenfeld. He says the MMP aims to specifically address the three biggest quagmires in AI research--the modeling of thought, the reliable simulation of memory, and bridging the gap between computer science and physical science. Tackling the first challenge entails establishing what Gershenfeld calls "an ecology of models" so that problem-solving can be facilitated in multiple ways. Addressing the memory issue involves teaching computers to learn to reason while incorporating rather than excluding inconsistency and ambiguity. The third AI research area requires a new programming approach called reconfigurable asynchronous logic automata, whose goal is to "re-implement all of computer science on a base that looks like physics," representing computations "in a way that has physical units of time and space, so the description of the system aligns with the system it represents," Gershenfeld says. One of the projects the MMP group is developing is a brain co-processor, an assistive system designed to help people with cognitive disorders by monitoring a person's activities and brain functions, determining when he or she requires help, and supplying precisely the right piece of information at the right time.

View Full Article

Thursday, December 3, 2009

Blog: Researchers Build Artificial Immune System to Solve Computational Problems

Researchers Build Artificial Immune System to Solve Computational Problems
PhysOrg.com (12/03/09) Zyga, Lisa

Oklahoma State University (OSU) researchers have published a study on the use of artificial immune systems (AIS) in evolutionary algorithms. By copying the way a living body acquires immunity to disease through vaccination, researchers have designed an AIS to more efficiently solve optimization problems. The results show that the biologically motivated approach is better at exploring a greater amount of the search space than previous methods. Unlike previous forms of AIS, the OSU system capitalizes on the way that vaccines can improve the performance of the immune system. Vaccines enable immune systems to detect new, weakened antigens and develop a biological memory so they can recognize the same antigen in the future. The researchers drew inspiration from how vaccines work in designing the new AIS. They inject the AIS with certain points in the decision space that act as a weak antigen, or vaccine. When comparing the new algorithm, called Vaccine-AIS, to other types of AIS, the researchers found that Vaccine-AIS outperformed the others by locating the optimum point in a plot in fewer evaluations. "AIS was originally designed for data mining, anomaly detection, and the like," says OSU's Gary Yen. "Its use as an optimization tool is a very young research area but its performance is drawing interest from researchers."
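
The vaccination idea can be sketched in a few lines of Python. Everything below is illustrative rather than the OSU algorithm: the objective function, the mutation scheme, and the "vaccine" points injected into the decision space are all invented for the example.

    import random

    def objective(x):                       # hypothetical function to minimize
        return (x - 3.0) ** 2

    vaccines = [0.0, 2.5, 5.0]              # weakened "antigens" injected into the decision space
    population = [random.uniform(-10, 10) for _ in range(20)] + vaccines

    for generation in range(100):
        population.sort(key=objective)      # affinity ranking: lower objective is better
        elites = population[:5]
        clones = [x + random.gauss(0, 0.5) for x in elites for _ in range(3)]   # clone and mutate
        fresh = [random.uniform(-10, 10) for _ in range(5)]                     # keep some diversity
        population = elites + clones + fresh

    best = min(population, key=objective)
    print(best, objective(best))            # converges near x = 3 for this toy objective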

View Full Article

Friday, November 27, 2009

Blog: Proper Use of English Could Get a Virus Past Security

Proper Use of English Could Get a Virus Past Security
New Scientist (11/27/09) Blincoe, Robert

Johns Hopkins University security researcher Josh Mason says hackers could potentially evade most existing antivirus programs by hiding malicious code within ordinary text. Mason and colleagues have discovered how to hide malware within English-language sentences. Mason developed a way to search a large set of English text for combinations of words that could be used in malicious code. This potential weakness has been recognized in the past, but many computer security experts believed that the rules of English word and sentence construction would make executing an attack through the English language impossible. Machine code requires the use of character combinations not usually seen in plain text, such as strings of mostly capital letters. University College London security researcher Nicolas Courtois says malicious code hidden in plain language would be "very hard if not impossible to detect reliably." Mason and colleagues presented their research at the recent ACM Conference on Computer and Communications Security, but were careful to omit some of their methodology to avoid helping potential hackers. "I'd be astounded if anyone is using this method maliciously in the real world, due to the amount of engineering it took to pull off," Mason says.

View Full Article

Tuesday, November 24, 2009

Blog: New Standard Lets Browsers Get a Grip on Files

New Standard Lets Browsers Get a Grip on Files
CNet (11/24/09) Shankland, Stephen

The World Wide Web Consortium has published File API, an interface draft that Web browsers can use to better manipulate files and is part of a larger effort to provide a better foundation for interactive applications. File API defines ways browsers and Web sites can improve how they handle files, including selecting multiple files for upload, such as on photo-sharing sites or Web-based email. Other improvements govern the use of "blobs," or packages of raw binary data such as video files. Google has supported blobs for its Gears browser plug-in as a way to separate large videos into smaller pieces so uploads can be more easily resumed if a network problem interrupts the process. A major benefit is that files are handled asynchronously, meaning the browser will not freeze while a file is being uploaded or managed, and the browser reports back on the progress of file transfers. The interface is compatible with several standards, including the drag-and-drop support in HTML5, currently in development, and the Web Workers technology that improves the way browsers perform numerous operations simultaneously. The interface also can help Web applications process and understand the contents of files. For example, the interface could allow for Web applications that automatically search through a music playlist and find the lyrics to the songs on that playlist.

View Full Article

Monday, November 16, 2009

Blog: How Secure Is Cloud Computing?

How Secure Is Cloud Computing?
Technology Review (11/16/09) Talbot, David

The recent ACM Cloud Computing Security Workshop, which took place Nov. 13 in Chicago, was the first event devoted specifically to the security of cloud computing systems. Speaker Whitfield Diffie, a visiting professor at Royal Holloway, University of London, says that although cryptography solutions for cloud computing are still far-off, much can be done in the short term to help make cloud computing more secure. "The effect of the growing dependence on cloud computing is similar to that of our dependence on public transportation, particularly air transportation, which forces us to trust organizations over which we have no control, limits what we can transport, and subjects us to rules and schedules that wouldn't apply if we were flying our own planes," Diffie says. "On the other hand, it is so much more economical that we don't realistically have any alternative." He says that cryptographic techniques capable of fully securing cloud computing are currently so inefficient that they would negate the economic benefit gained by outsourcing computing tasks. Diffie says a practical near-term solution will require an overall improvement in computer security, including cloud computing providers choosing more secure operating systems and maintaining a careful configuration on the systems. Security-conscious computing services providers would have to provision each user with their own processors, caches, and memory at any given moment, and would clean systems between users, including reloading the operating system and zeroing all memory.

View Full Article

Blog: Supercomputers With 100 Million Cores Coming By 2018

Supercomputers With 100 Million Cores Coming By 2018
Computerworld (11/16/09) Thibodeau, Patrick

A key topic at this week's SC09 supercomputing conference, which takes place Nov. 14-20 in Portland, Ore., is how to reach the exascale plateau in supercomputing performance. "There are serious exascale-class problems that just cannot be solved in any reasonable amount of time with the computers that we have today," says Oak Ridge Leadership Computing Facility project director Buddy Bland. Today's supercomputers are still well short of exascale performance. The world's fastest system, Oak Ridge National Laboratory's Jaguar, reaches a peak performance of 2.3 petaflops. Bland says the U.S. Department of Energy (DOE) is holding workshops on building a system 1,000 times more powerful. The DOE, which is responsible for funding many of the world's fastest systems, wants two machines to reach approximately 10 petaflops by 2011 to 2013, says Bland. However, the next major milestone currently receiving the most attention is the exaflop, or a million trillion calculations per second. Exaflop computing is expected to be achieved around 2018, according to predictions largely based on Moore's Law. However, problems involved in reaching exaflop computing are far more complicated than advancements in chips. For example, Jaguar uses 7 megawatts of power, but an exascale system that uses CPU processing cores alone could take 2 gigawatts, says IBM's Dave Turek. "That's roughly the size of a medium-sized nuclear power plant," he says. "That's an untenable proposition for the future." Finding a way to reduce power consumption is key to developing an exascale computer. Turek says future systems also will have to use less memory per core and will require greater memory bandwidth.
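
A back-of-envelope calculation (ours, not the article's) shows why naive scaling is untenable: scaling Jaguar's power draw linearly with performance lands in the same gigawatt range Turek cites for a CPU-only design.

    jaguar_flops = 2.3e15        # Jaguar peak: 2.3 petaflops
    jaguar_power_w = 7e6         # roughly 7 megawatts
    exaflop = 1e18

    naive_power_w = jaguar_power_w * (exaflop / jaguar_flops)
    print(f"{naive_power_w / 1e9:.1f} GW")   # ~3.0 GW by straight linear scaling, the same order as Turek's estimate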

View Full Article

Thursday, November 12, 2009

Blog: Intel Says Shape-Shifting Robots Closer to Reality

Intel Says Shape-Shifting Robots Closer to Reality
Computerworld (11/12/09) Gaudin, Sharon

Researchers at Intel and Carnegie Mellon University (CMU) say distributed computing and robotics could be used to make shape-shifting electronics a reality in the not-too-distant future. The researchers are working to take millions of millimeter-sized robots and enable them to use software and electromagnetic forces to change into a variety of shapes and sizes. CMU professor Seth Goldstein and Intel researcher Jason Campbell recently reported that they are able to demonstrate the physics needed to create programmable matter. "It's been pretty hard but we've made a lot of progress," Campbell says. "Optimistically, we could see this in three to five years." Programmable matter is called claytronics, and the millimeter-sized robots are called catoms. Each catom would contain its own processor, and would essentially be a tiny robot or computer with computational power, memory, and the ability to store and share power. The goal is to program millions of catoms to work together by developing software that focuses on a pattern or overall movement of the entire system of tiny robots. Each robot will be smart enough to detect its own place in the pattern and respond accordingly. Part of the research effort involves developing new programming languages, algorithms, and debugging tools to get these systems to work together.

View Full Article

Wednesday, November 11, 2009

Blog: CIO Blast From the Past: 40 Years of Multics, 1969-2009

CIO Blast From the Past: 40 Years of Multics, 1969-2009
CIO Australia (11/11/09) Gedda, Rodney

Four decades ago, Multiplexed Information and Computing Service (Multics), widely considered the basis of contemporary time-sharing systems, was first employed for information management at the Massachusetts Institute of Technology (MIT). MIT professor and ACM 1990 A.M. Turing Award winner Fernando J. Corbato led MIT's Multics project. He says the implementation of Multics was driven by the need for "a higher-level language to program the bulk of the system to amplify the effectiveness of each programmer." Corbato says that "Multics was designed to be a general-purpose, time-sharing system so the focus was less on the novelty of the applications and more on the ease of developing and building applications and systems." He considers the Unix operating system to be Multics' most significant legacy, noting that both Multics and Unix exploited their hardware effectively. Among the features used in modern computing that Corbato lists as being first developed or thought up with Multics are hierarchical file systems, file access controls, and dynamic linking on demand. "The real legacy of Multics was the education and inculcation of system engineering principles in over 1,400 people directly associated with operating, maintaining, extending, and managing the system during its lifetime," he says. "Because we made documentation and publications a mainstay of the project, countless others have also been influenced."

View Full Article

Tuesday, November 10, 2009

Blog: Google Launches New Programming Language: Go

Google Launches New Programming Language: Go
eWeek (11/10/09) Taft, Daryl K.

Google has unveiled Go, a new programming language the company says offers the speed of working in a dynamic language such as Python and the performance and safety of a compiled language such as C or C++. "Go is a great language for systems programming with support for multi-processing, a fresh and lightweight take on object-oriented design, plus some cool features like true closures and reflection," according to the Google Go team in a blog post. However, Google is not using the experimental language internally for production systems. Instead, Google is conducting experiments with Go as a candidate server environment. "The Go project was conceived to make it easier to write the kind of servers and other software Google uses internally, but the implementation isn't quite mature enough yet for large-scale production use," according to the FAQ on the Go language's Web site. With Go, developers should find builds to be nearly instantaneous. Large binaries will compile in just a few seconds, and the code will run close to the speed of C. Go is the second programming environment Google has released this fall. In September, Google released Noop, a Java-like programming language.

View Full Article

Blog: Inventing Language; comments by 2008 Turing Award winner Barbara Liskov

Inventing Language
MIT News (11/10/09) Hardesty, Larry

Massachusetts Institute of Technology (MIT) professor Barbara Liskov, winner of ACM's 2008 A.M. Turing Award, recently delivered the first lecture of MIT's 2009 Dertouzos Lecture Series. Liskov, who received the Turing Award in part for the work she did in the 1970s establishing the principles for the organization of programming languages, began her talk by describing the environment in which she performed her pioneering work. Liskov explained that in the fall of 1972, after reviewing the literature in the field, she developed the idea for what is known now as abstract data types. After developing that idea, Liskov says she and some collaborators created a programming language, CLU, which put most of her ideas into practice. The remainder of Liskov's lecture focused on a demonstration that CLU prefigured many of the ideas common in modern programming languages, such as polymorphism, type hierarchy, and exception handling. During a question and answer session, Liskov said the secret to her success was not working excessively long hours: she went home at night and did not work in the evenings. "I always found that downtime to be really useful," she said. Liskov also stressed the importance of working on interesting research, instead of research that is most likely to get published.
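
Data abstraction, the idea CLU made concrete, can be illustrated with a small Python sketch (CLU's actual syntax is quite different, and the IntSet type here is invented for the example): callers use only the type's operations and never touch its representation, so the representation can change without breaking clients.

    class IntSet:
        """An abstract data type: clients see insert/member/size, not the list inside."""
        def __init__(self):
            self._elements = []          # hidden representation

        def insert(self, x):
            if x not in self._elements:
                self._elements.append(x)

        def member(self, x):
            return x in self._elements

        def size(self):
            return len(self._elements)

    s = IntSet()
    s.insert(3); s.insert(3); s.insert(7)
    print(s.member(3), s.size())         # True 2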

View Full Article

Monday, November 9, 2009

Blog: Web Security Tool Copies Apps' Moves; "Ripley," developed by Microsoft Research

Web Security Tool Copies Apps' Moves
Technology Review (11/09/09) Mims, Christopher

Microsoft researchers have developed Ripley, a way to secure Web applications by cloning the user's browser and running the application remotely. Ripley, announced at ACM's Computer and Communications Security Conference, which takes place Nov. 9-13 in Chicago, prevents a remote hacker or malicious user from changing the behavior of code running inside a Web browser by creating an exact copy of the computational environment and running that copy on the server. Ripley also relays all of the user's actions, including mouse clicks, keystrokes, and other inputs, from the client to the server as a compressed event stream. The behavior of the clone code is compared to the behavior of the application running on the user's browser. If any discrepancies occur, Ripley disconnects the client. "You cannot trust anything that happens in the client," says Ripley lead developer Ben Livshits. "It's basically the devil in the browser from the developer's point of view." Livshits says Ripley is completely invisible to the end user and will not affect the normal function of a Web application. Ripley can even enhance the performance of Web applications, because the clone program is written in .Net, which is 10 to 100 times faster than the JavaScript used on the client side. University of California, Berkeley researcher Adam Barth says Ripley is part of a larger trend to protect the integrity of client-side programs. "The work suggests that security would benefit if we validated more than we're validating today," Barth says.
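
The basic check Ripley performs can be sketched in Python. This is a toy reconstruction based only on the description above, not Microsoft's implementation: the event format, the state digest, and the application logic are all hypothetical.

    import hashlib, json

    def apply_event(state, event):
        # Hypothetical application logic shared by the client and the server-side clone.
        if event["type"] == "click":
            state["count"] = state.get("count", 0) + 1
        elif event["type"] == "keystroke":
            state["text"] = state.get("text", "") + event["key"]
        return state

    def state_digest(state):
        return hashlib.sha256(json.dumps(state, sort_keys=True).encode()).hexdigest()

    def verify(event_stream, client_digest):
        clone_state = {}
        for event in event_stream:           # replay the relayed event stream on the clone
            clone_state = apply_event(clone_state, event)
        # Disconnect the client if its reported state diverges from the clone's state.
        return state_digest(clone_state) == client_digest

    events = [{"type": "click"}, {"type": "keystroke", "key": "a"}]
    print(verify(events, state_digest({"count": 1, "text": "a"})))   # True: behaviors match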

View Full Article

Blog: What Computer Science Can Teach Economics

What Computer Science Can Teach Economics
MIT News (11/09/09) Hardesty, Larry

Professor Constantinos Daskalakis in the Massachusetts Institute of Technology's Computer Science and Artificial Intelligence Laboratory is applying the theory of computational complexity to game theory. He argues that some common game-theoretical problems are so challenging that solving them would take the lifetime of the universe, and thus they fail to accurately represent what occurs in the real world. In game theory a "game" represents any mathematical model that associates different player strategies with different results. Daskalakis' doctoral thesis disputes the assumption that finding the Nash equilibrium for every game will allow the system's behavior to be accurately modeled. In the case of economics, the system being modeled is the market. Daskalakis' thesis illustrates that for some games, the Nash equilibrium is so difficult to calculate that all the world's computing resources could never find it in the universe's lifetime. In the real world, market rivals tend to calculate the strategies that will maximize their own outcomes given the current state of play, rather than work out the Nash equilibria for their specific games and then adopt the resulting tactics. However, if one player changes strategies, the other players will change strategies in response, driving the first player to shift strategies again, and so on until the feedback pathways eventually converge toward equilibrium. Daskalakis contends that feedback will not find the equilibrium faster than computers could calculate it.
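
The feedback process described above can be sketched as best-response dynamics in a small two-player game. The payoff matrices below are a made-up coordination game; the point is only to show players repeatedly switching to a best response until no one wants to move, which is a Nash equilibrium whenever the process does settle.

    # Coordination game (invented payoffs): each player earns more by matching the other.
    A = [[1, 0],   # row player's payoffs
         [0, 2]]
    B = [[1, 0],   # column player's payoffs
         [0, 2]]

    def best_response(my_payoffs, opponent_choice, i_am_row):
        if i_am_row:
            return max(range(2), key=lambda i: my_payoffs[i][opponent_choice])
        return max(range(2), key=lambda j: my_payoffs[opponent_choice][j])

    row, col = 0, 1                      # arbitrary starting strategies
    for _ in range(20):                  # iterate the feedback loop
        new_row = best_response(A, col, True)
        new_col = best_response(B, new_row, False)
        if (new_row, new_col) == (row, col):
            break                        # no one wants to deviate: a Nash equilibrium
        row, col = new_row, new_col

    print(row, col)                      # settles at (1, 1) for these payoffs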

View Full Article

Tuesday, November 3, 2009

Blog: Is AES Encryption Crackable?

Is AES Encryption Crackable?
TechNewsWorld (11/03/09) Germain, Jack M.

The Advanced Encryption Standard (AES) system was long believed to be invulnerable to attack, but a group of researchers recently demonstrated that there may be an inherent flaw in AES, at least theoretically. The study was conducted by the University of Luxembourg's Alex Biryukov and Dmitry Khovratovich, France's Orr Dunkelman, Hebrew University's Nathan Keller, and the Weizmann Institute's Adi Shamir. In their report, "Key Recovery Attacks of Practical Complexity on AES Variants With Up to 10 Rounds," the researchers challenged the structural integrity of the AES protocol. The researchers suggest that AES may not be invulnerable and raise the question of how far AES is from becoming insecure. "The findings discussed in [the report] are academic in nature and do not threaten the security of systems today," says AppRiver's Fred Touchette. "But because most people depend on the encryption standard to keep sensitive information secure, the findings are nonetheless significant." AirPatrol CEO Ozzie Diaz believes that wireless systems will be the most vulnerable because many investments in network media are wireless, and there is no physical barrier to entry. Diaz says that exposing the vulnerability of the AES system could lead to innovations for filling those gaps. Touchette says that AES cryptography is not broken, and notes that the latest attack techniques on AES-192 and AES-256 are impractical outside of a theoretical setting.

View Full Article

Monday, November 2, 2009

Blog: First Test for Election Cryptography

First Test for Election Cryptography
Technology Review (11/02/09) Naone, Erica

An election in Takoma Park, Md., held this November will be the first to use Scantegrity, a new vote-counting system that uses cryptography to ensure that votes are cast and recorded accurately. Scantegrity's inventors say the system could eliminate the need for recounts and provide better assurance that an election was conducted properly. Scantegrity allows voters to check online to ensure their votes were counted correctly, and officials and independent auditors can check to make sure ballots were tallied properly without seeing how an individual voted. Scantegrity developer David Chaum says the system uses a familiar paper ballot, which requires that voters fill in the bubble next to the name of their preferred candidate. The ballot is then fed into a machine that scans it and secretly records the result. The difference from other systems is that a special type of ink and pen are used, and when the voter fills in a bubble on the ballot a previously invisible secret code appears. The voter can record the code or codes and check them online later. If the code appears in the online database, the ballot was counted correctly. Every ballot has its own randomly assigned codes, which prevents the process from revealing which candidates a voter selected. Auditors can ensure all votes were counted correctly by comparing a list of codes corresponding to votes and a list of the results. University of Maryland, Baltimore County professor Alan Sherman says Scantegrity is fundamentally better than other systems with regard to integrity, and makes it possible to audit elections with much greater accuracy and certainty.
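
The confirmation-code idea can be illustrated with a short Python sketch. This omits all of Scantegrity's actual cryptography and auditing machinery; the code lengths, ballot serials, and lookup table are invented for the example.

    import secrets

    CANDIDATES = ["Alice", "Bob"]

    def make_ballot():
        # Each bubble on each ballot gets its own random confirmation code.
        return {c: secrets.token_hex(2).upper() for c in CANDIDATES}

    ballots = {serial: make_ballot() for serial in range(1000)}

    # Hypothetical marked ballots: which bubble each voter filled in.
    marked = {42: "Bob", 43: "Alice"}

    # Officials publish only (serial, revealed code) pairs, not candidate names,
    # so a voter can confirm the ballot was recorded without revealing the vote.
    published = {(s, ballots[s][choice]) for s, choice in marked.items()}

    voter_serial, voter_code = 42, ballots[42]["Bob"]     # what the voter wrote down
    print((voter_serial, voter_code) in published)        # True: the ballot is in the tally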

View Full Article

Tuesday, October 27, 2009

Blog: Testing the Accessibility of Web 2.0

Testing the Accessibility of Web 2.0
University of Southampton (ECS) (10/27/09) Lewis, Joyce

The University of Southampton's School of Electronics and Computer Science (ECS) is launching a study that will explore how well people with disabilities can access Web services such as blogs, wikis, and social networking sites. The study, led by Mike Wald and E.A. Draffan in ECS' Learning Societies Lab, is based on an accessibility toolkit that will enable users to test the accessibility of Web 2.0 services. The accessibility tools were developed as a result of the LexDis project, which identified strategies learners can use to enhance their e-learning experience. Part of the toolkit, Web2Access, provides an online checking system for any interactive Web-based service such as Facebook. Another key feature of the accessibility kit is Study Bar, which can work with all browsers and reads text out loud, spell checks, provides a dictionary, and can enlarge or change text fonts and colors to make text more readable. "We developed it because nowadays users contribute as well as read information and so you cannot just click on a button to see if Web sites are accessible and easy to use," Draffan says. Wald says it is the first time that there has been a systematic way to evaluate and provide the results of accessibility testing of Web services.

View Full Article

Thursday, October 8, 2009

Blog: Prizes Aside, the P-NP Puzzler Has Consequences

Prizes Aside, the P-NP Puzzler Has Consequences
New York Times (10/08/09) Markoff, John

The frenzy of downloading that accompanied the September cover article in the Communications of the ACM when it was issued online reflects the intense interest in the subject of the article--the progress, or lack thereof, on solving the P vs. NP challenge. P represents the class of problems that can be solved in polynomial time, while NP represents the class of problems whose solutions can be verified in polynomial time. It is theorized that if P equals NP, some of the most complex real-world computing problems, such as optimizing the layout of transistors on a computer chip, could be addressed, triggering an acceleration of technological and economic productivity. Cracking computer codes raises a similar challenge. Such tasks share the common characteristic that an increase in the size of a problem is accompanied by an exponential increase in the computer time needed to solve it. Progress on meeting the P vs. NP challenge has been slow, but Lance Fortnow of the McCormick School of Engineering at Northwestern University believes the theory might be proved or disproved through algebraic geometry.
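
The asymmetry between solving and verifying can be made concrete with subset sum, a classic NP problem. The sketch below only illustrates the definitions: checking a proposed certificate takes linear time, while the obvious search for one examines exponentially many subsets.

    from itertools import combinations

    def verify(numbers, target, certificate):
        # Polynomial-time check: does the proposed subset really sum to the target?
        return all(x in numbers for x in certificate) and sum(certificate) == target

    def solve(numbers, target):
        # Brute-force search: exponential in len(numbers).
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 34, 4, 12, 5, 2]
    print(verify(nums, 9, [4, 5]))   # True, checked in a single pass
    print(solve(nums, 9))            # [4, 5], found only by searching subsets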

View Full Article

Friday, September 25, 2009

Blog: Code Breakthrough Delivers Safer Computing

Code Breakthrough Delivers Safer Computing
University of New South Wales (09/25/09) Trute, Peter

Computer researchers at the University of New South Wales and NICTA say they have proven that an operating-system kernel was 100 percent free of bugs. The team verified the kernel known as the seL4 microkernel by mathematically proving the correctness of about 7,500 lines of computer code in a project taking an average of six people more than five years. "What we've shown is that it's possible to make the lowest level, the most critical, and in a way the most dangerous part of the system, provably fault free," says NICTA researcher Gernot Heiser. The research could potentially improve the security and reliability of critical systems used by the medical and airline industries as well as the military. "The verification provides conclusive evidence that bug-free software is possible, and in the future, nothing less should be considered acceptable where critical assets are at stake," Heiser says.

View Full Article

Monday, September 21, 2009

Blog: Ants Versus Worms: Computer Security Mimics Nature

Ants Versus Worms: Computer Security Mimics Nature
Wake Forest University (09/21/09) Frazier, Eric

Pacific Northwest National Laboratory (PNNL) researcher Glenn Fink is working with Wake Forest University professor Errin Fulp to develop a computer security program that models itself after the defensive techniques of ants. The new anti-malware system uses itinerant digital ants to find problems in a large network. When an ant comes across something suspicious, it leaves behind a "scent trail" that draws an army of ants to the problem. The large group attracts the attention of computer users to a possible invasion. "Our idea is to deploy 3,000 different types of digital ants, each looking for evidence of a threat," Fulp says. Rather than equipping all digital ants with the same memory-heavy defenses, the program apportions certain threats to specific digital ants. The digital ants report to a "sentinel" located at each computer, which in turn is supervised by a "sergeant" of the network. All sergeants are monitored and controlled by human users. To test the program, the researchers sent a computer worm into the system and the digital ants were able to corner the worm. PNNL has given the researchers more time to study the program. If successful, the researchers say the program would be ideal for universities, government agencies, and corporations that rely on large networks.
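
The scent-trail mechanism can be sketched in Python. The hosts, the evidence probabilities, and the pheromone update rule below are all invented for illustration; the sketch only shows how repeated small detections pile up into a signal that draws more ants to one machine.

    import random

    hosts = ["host-a", "host-b", "host-c"]
    pheromone = {h: 0.0 for h in hosts}
    evidence = {"host-a": 0.05, "host-b": 0.05, "host-c": 0.6}   # host-c is the "infected" machine

    def ant_visit(host):
        # Each ant checks for one narrow kind of evidence and reinforces the trail if it finds any.
        if random.random() < evidence[host]:
            pheromone[host] += 1.0

    for step in range(500):
        for h in hosts:
            pheromone[h] *= 0.99          # trails evaporate over time
        # Ants prefer hosts with stronger trails, so detections concentrate effort there.
        weights = [1.0 + pheromone[h] for h in hosts]
        target = random.choices(hosts, weights=weights)[0]
        ant_visit(target)

    sentinel_alert = max(pheromone, key=pheromone.get)
    print(sentinel_alert, pheromone)       # host-c typically accumulates the strongest trail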

View Full Article

Wednesday, September 16, 2009

Blog: Rethinking the Long Tail Theory: How to Define 'Hits' and 'Niches'

Rethinking the Long Tail Theory: How to Define 'Hits' and 'Niches'
Knowledge@Wharton (09/16/09)

The Long Tail theory suggests that the Internet drives demand away from popular products with mass appeal and directs it to more obscure niche offerings as it eases distribution and uses cutting-edge recommendation systems, but this theory is being challenged by new Wharton School research. A paper by Wharton professor Serguei Netessine and doctoral student Tom F. Tan details their use of data from the Netflix movie rental company to investigate consumer demand for blockbusters and lesser-known films. The researchers have determined that, contrary to the Long Tail effect, mass-appeal products retain their high rankings when expanding product variety and growing consumer demand are factored in. "There are entire companies based on the premise of the Long Tail effect that argue they will make money focusing on niche markets," Netessine says. "Our findings show it's very rare in business that everything is so black and white. In most situations, the answer is, 'It depends.' The presence of the Long Tail effect might be less universal than one may be led to believe." The number of rated film titles at Netflix climbed from 4,470 to 17,768 between 2000 and 2005, and if this diversity is factored in so that product popularity is estimated relative to the total product variety, the Wharton researchers do not uncover any proof of the Long Tail effect. Netessine says a relative analysis yields more meaning when applying the Long Tail theory to companies because it accounts for costs involved in maintaining a supply chain to meet demand for many niche products.
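
The relative-vs-absolute distinction can be shown with a small simulation (the demand distribution is made up, not Netflix data): as the catalog grows, the absolute number of titles in the tail grows, but the share of demand captured by the top slice of the catalog can stay steady or even rise.

    def top_share(num_titles, top_fraction=0.1):
        # Hypothetical Zipf-like demand: the title ranked r gets demand proportional to 1/r.
        demand = [1.0 / r for r in range(1, num_titles + 1)]
        cutoff = max(1, int(num_titles * top_fraction))
        return sum(demand[:cutoff]) / sum(demand)

    for catalog in (4470, 17768):           # Netflix's 2000 and 2005 catalog sizes
        print(catalog, round(top_share(catalog), 3))
    # Under this toy distribution, the top 10% of titles keeps capturing the large
    # majority of demand even as product variety quadruples.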

View Full Article

Monday, August 24, 2009

Blog: Bing, Wolfram Alpha agree on licensing deal

Bing, Wolfram Alpha agree on licensing deal

By Tom Krazit CNET News

Posted on ZDNet News: Aug 24, 2009 5:00:23 AM


Microsoft's Bing search engine and Wolfram Alpha have reached a licensing deal that allows Bing to present some of the specialized scientific and computational content that Wolfram Alpha generates, according to a source familiar with the deal.

Representatives from Microsoft and Wolfram Research declined to comment on the deal.

Wolfram Alpha's unique blend of computational input and curated output has not taken the world by storm, but it is considered an interesting enough take on the business of internet search to attract high-profile attention within the industry.

Wolfram Alpha does not return the usual list of links to pages with search keywords, instead providing answers to questions such as stock prices and complex mathematical formulas — with mixed results.

Bing, on the other hand, is enjoying a solid start in the three months since it made its debut as it gains users, and it will at some point be the default search experience on Yahoo's highly trafficked pages following a long-awaited deal.

It is not clear whether Bing results will carry Wolfram's branding (that is, results 'Powered By Wolfram Alpha'), but there will be some sort of presence.


Thursday, August 20, 2009

Blog: Millionths of a Second Can Cost Millions of Dollars: A New Way to Track Network Delays

Millionths of a Second Can Cost Millions of Dollars: A New Way to Track Network Delays
University of California, San Diego (08/20/09) Kane, Daniel

Researchers at the University of California, San Diego and Purdue University have developed the Lossy Difference Aggregator, a method for diagnosing delays in data center networks in as little as tens of millionths of seconds. Delays in data center networks can result in multimillion dollar losses for investment banks that run automatic stock-trading systems, and similar delays can hold up parallel processing in high performance cluster computing applications. The Lossy Difference Aggregator can diagnose fine-grained delays in as little as tens of microseconds, and packet loss as infrequent as one in a million at every router within a data center network. The researchers say their method could be used to modernize router designs with almost no cost in terms of router hardware and no performance penalty. The data centers that run automated stock-trading systems are large, and the performance of the routers within them is difficult to monitor. Delays in these routers, called latencies, can add microseconds to a network's time and potentially cost millions of dollars. The traditional way of measuring latency is to track when a packet arrives and leaves a router. However, instead of tracking every packet, the new system randomly splits incoming packets into groups and adds up arrival and departure times for each group. As long as the number of losses is smaller than the number of groups, at least one group will provide an accurate estimate. Subtracting the two sums, from groups that have no loss, and dividing by the number of messages, provides the estimated average delay. By implementing this system on every router, a data center manager could quickly identify slow routers. The research was presented at the recent ACM SIGCOMM 2009 conference. Purdue University professor Ramana Kompella says the next step will be to build the hardware implementation.
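
The grouping-and-subtracting idea can be sketched in Python. This is a simplified reconstruction of the scheme as described above, not the authors' data structure: a hash splits packets into groups, each group keeps timestamp sums and counts at both ends, and any group whose counts match (no loss) yields an estimate of the average delay.

    import random

    NUM_GROUPS = 8
    send_sum = [0.0] * NUM_GROUPS; send_cnt = [0] * NUM_GROUPS
    recv_sum = [0.0] * NUM_GROUPS; recv_cnt = [0] * NUM_GROUPS

    for pkt_id in range(100000):
        g = hash(pkt_id) % NUM_GROUPS                 # split packets into groups
        t_in = pkt_id * 1.0                           # synthetic arrival timestamp
        send_sum[g] += t_in; send_cnt[g] += 1
        if random.random() < 1e-5:                    # rare packet loss (about 1 in 100,000 here)
            continue
        t_out = t_in + random.uniform(20e-6, 40e-6)   # roughly 30 microseconds of delay
        recv_sum[g] += t_out; recv_cnt[g] += 1

    # Use only groups with no loss; subtract the two sums and divide by the packet count.
    estimates = [(recv_sum[g] - send_sum[g]) / send_cnt[g]
                 for g in range(NUM_GROUPS) if send_cnt[g] == recv_cnt[g]]
    print(sum(estimates) / len(estimates))            # close to the true 30-microsecond mean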

Tuesday, August 18, 2009

Blog: Desktop Multiprocessing: Not So Fast

Desktop Multiprocessing: Not So Fast
Computerworld (08/18/09) Wood, Lamont

The continuance of Moore's Law--the axiom that the number of devices that can be economically installed on a processor chip doubles every other year--will mainly result in a growing population of cores, but the exploitation of those cores by the software requires extensive rewriting. "We have to reinvent computing, and get away from the fundamental premises we inherited from [John] von Neumann," says Microsoft technical fellow Burton Smith. "He assumed one instruction would be executed at a time, and we are no longer even maintaining the appearance of one instruction at a time." Although vendors offer the possibility of higher performance by adding more cores to the central processing unit, the achievement of this operates on the assumption that the software is aware of those cores, and will use them to run code segments in parallel. However, Amdahl's Law dictates that the anticipated speedup from parallelization equals 1 divided by the sum of the fraction of the task that cannot be parallelized and the improved run time of the parallelized portion. "It says that the serial portion of a computation limits the total speedup you can get through parallelization," says Adobe Systems' Russell Williams. Consultant Jim Turley maintains that overall consumer operating systems "don't do anything very smart" with multiple cores, and he points out that the ideal tool--a compiler that takes older source code and distributes it across multiple cores--remains elusive. The public's adjustment to multicore exhibits faster progress than application vendors, with hardware vendors saying that today's buyers are counting cores rather than gigahertz.
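
Amdahl's Law is easy to evaluate directly. The sketch below, with a made-up task split, shows why adding cores yields diminishing returns when even a modest share of the work stays serial.

    def amdahl_speedup(parallel_fraction, cores):
        # Speedup = 1 / (serial part + parallel part divided among the cores)
        serial_fraction = 1.0 - parallel_fraction
        return 1.0 / (serial_fraction + parallel_fraction / cores)

    for cores in (2, 4, 8, 64):
        print(cores, round(amdahl_speedup(0.90, cores), 2))
    # 2 -> 1.82, 4 -> 3.08, 8 -> 4.71, 64 -> 8.77: code that is 90% parallel never beats 10x.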

View Full Article

Blog: A-Z of Programming Languages: Scala

A-Z of Programming Languages: Scala
Computerworld Australia (08/18/09) McConnachie, Dahna

The Scala programming language, which runs on the Java Virtual Machine, could become the preferred language of the modern Web 2.0 startup, according to a Twitter developer. Scala creator Martin Odersky says the name Scala "means scalable language in the sense that you can start very small but take it a long way." He says he developed the language out of a desire to integrate functional and object-oriented programming. This combination brings together functional programming's ability to build interesting things out of simple elements and object-oriented programming's ability to organize a system's components and to extend or adapt complex systems. "The challenge was to combine the two so that it would not feel like two languages working side by side but would be combined into one single language," Odersky says. The challenge lay in identifying constructs from the functional programming side with constructs from the object-oriented programming side, he says. Odersky lists the creation of the compiler technology as a particularly formidable challenge he faced in Scala's development. He notes that support of interoperability entailed mapping everything from Java to Scala, while another goal of the Scala developers was making the language fun to use. "This is a very powerful tool that we give to developers, but it has two sides," Odersky says. "It gives them a lot of freedom but with that comes the responsibility to avoid misuse."

View Full Article

Blog: FTC Rule Expands Health Data Breach Notification Responsibility to Web-Based Entities

FTC Rule Expands Health Data Breach Notification Responsibility to Web-Based Entities

SANS NewsBites Vol. 11 Num. 66 (August 18, 2009)

The US Federal Trade Commission has issued a final rule on health care breach notification. The rule will require web-based businesses that store or manage health care information to notify customers in the event of a data security breach. Such entities are often not bound by the requirements of the Health Insurance Portability and Accountability Act (HIPAA); this rule addresses that discrepancy.

http://www.darkreading.com/security/government/showArticle.jhtml?articleID=219400484

[Editor's Note (Pescatore): If my kids grow up to be government agencies, I hope they turn out to be the FTC. Any government agency is my kind of government agency when it issues press releases with headlines like "FTC Says Mortgage Broker Broke Data Security Laws: Dumpster Wrong Place for Consumers' Personal Information."]

Monday, August 17, 2009

Blog: International Win for Clever Dataminer; Weka data-mining software

International Win for Clever Dataminer; Weka data-mining software
University of Waikato (08/17/09)

The first place finisher in the 2009 Student Data Mining Contest, run by the University of California, San Diego, used the Weka data-mining software to predict anomalies in e-commerce transaction data. Quan Sun, a University of Waikato computer science student, says it took about a month to find the answer. The contest drew more than 300 entries from students in North America, Europe, Asia, and Australasia. "I couldn't have done it without Weka," Sun says of the open source software that was developed at Waikato. "Weka is like the Microsoft Word of data-mining, and at least half of the competitors used it in their entries." ACM's Special Interest Group on Knowledge Discovery and Data Mining gave the Weka software its Data Mining and Knowledge Discovery Service Award in 2005. Weka has more than 1.5 million users worldwide.

View Full Article

Wednesday, August 12, 2009

Blog: Safer Software

Safer Software
The Engineer Online (08/12/09)

Researchers at Australia's Information and Communications Technology Research Centre of Excellence (NICTA) have developed the Secure Embedded L4 (seL4) microkernel and produced what they say is the world's first formal machine-checked proof of a general-purpose operating-system kernel. The researchers say the seL4 microkernel provides the ability to mathematically prove that software governing critical safety and security systems in aircraft and motor vehicles is free of a large class of errors. The microkernel has potential applications in military, security, and other industries in which the flawless operation of complex embedded systems is critical. "Proving the correctness of 7,500 lines of C code in an operating system's kernel is a unique achievement, which should eventually lead to software that meets currently unimaginable standards of reliability," says Cambridge University Computer Laboratory professor Lawrence Paulson. NICTA principal researcher Gerwin Klein says the researchers have created a general, functional correctness proof, which he says is unprecedented for real-world, high-performance software of such a large size and complexity. The NICTA team invented new techniques in formal machine-checked proofs, made advancements in the mathematical understanding of real-world programming languages, and developed new methodologies for rapid prototyping of operating system kernels. "The project has yielded not only a verified microkernel but a body of techniques that can be used to develop other verified software," Paulson says. The research will be presented at the 22nd ACM Symposium on Operating Systems Principles, which takes place Oct. 11-14 in Big Sky, Montana.

View Full Article

Tuesday, August 11, 2009

Blog: Twenty Critical Controls ("the CAG") Update

Twenty Critical Controls ("the CAG") Update

SANS NewsBites Vol. 11 Num. 63 (August 11, 2009)

(1) V2.1 To Be Released This Week

On Friday of this week Version 2.1 of the 20 Critical Controls for Effective Cyber Defense will be published at the CSIS site. This update reflects input from more than 100 organizations that reviewed the initial draft and contains the mapping of the 20 Critical Controls to revised NIST 800-53 controls requested by NIST.

(2) Search for Effective Automation Tools Begins

This release also signals the launch of the search for tools that automate one or more of the controls. The authors have already received seven submissions from vendors that believe their tools provide effective automation for the implementation and continuous monitoring of several controls. The new search will last until August 31. Any users that have automated elements of the 20 Critical Controls and any vendors that have tools that automate those controls should send submissions to cag@sans.org before August 31. Tools that are demonstrated to actually work will be posted and may be included in the first National Summit on Planning and Implementing the 20 Critical Controls to be held at the Reagan Center in November. If you are wondering whether your tools meet the needs, you can find a draft at http://www.sans.org/cag/guidelines.php

(3) A 60-minute webcast on Thursday, August 13, 1PM - 2PM EDT:

"Three Keys To Understanding and Implementing the Twenty Critical Controls for Improved Security in Federal Agencies" with James Tarala and Eric Cole. For free registration, visit

https://www.sans.org/webcasts/show.php?webcastid=92748

Monday, August 10, 2009

Blog: The A-Z of Programming Languages: Clojure

The A-Z of Programming Languages: Clojure
Computerworld Australia (08/10/09) Edwards, Kathryn

Clojure programming language creator Rich Hickey says Clojure came out of his desire for "a dynamic, expressive, functional language, native on the [Java Virtual Machine/Common Language Runtime (JVM/CLR)]." He says Clojure is designed to support the writing of simple, fast, and robust programs. Hickey says he elected to develop another Lisp dialect rather than extend an existing one because he wanted the language's appeal to reach beyond existing Lisp users, and to support design decisions that would have broken backward compatibility with the existing Scheme and Common Lisp programs. "I originally targeted both the JVM and CLR, but eventually decided I wanted to do twice as much, rather than everything twice," Hickey says. "I chose the JVM because of the much larger open source ecosystem surrounding it, and it has proved to be a good choice." Hickey stresses that solid concurrency support is a key feature of Clojure. "All of the core data structures in Clojure are immutable, so right off the bat you are always working with data that can be freely shared between threads with no locking or other complexity whatsoever, and the core library functions are free of side effects," he says. "But Clojure also recognizes the need to manage values that differ over time."

View Full Article

Thursday, August 6, 2009

Blog: XML Library Flaws Affect Numerous Applications

XML Library Flaws Affect Numerous Applications

SANS NewsBites Vol. 11 Num. 62 (August 6, 2009)

Researchers have uncovered a significant number of flaws in Extensible Markup Language (XML) libraries that could be exploited to crash machines and execute malicious code. The flaws affect large numbers of applications that use the libraries in question. Sun Microsystems, Apache, and Python products are known to be vulnerable.

http://www.securecomputing.net.au/News/152193,researchers-find-largescale-xml-library-flaws.aspx

http://www.theregister.co.uk/2009/08/06/xml_flaws/

http://voices.washingtonpost.com/securityfix/2009/08/researchers_xml_security_flaw.html

[Editor's Note (Northcutt): Uh Oh. This is not good. XML is behind the scenes in almost everything. I wonder whether XML gateways could be used to mitigate the problem to some extent.]

Blog: 5 lessons from the dark side of cloud computing

5 lessons from the dark side of cloud computing

InfoWorld: Robert Lemos CIO.com; August 6, 2009

While many companies are considering moving applications to the cloud, the security of the third-party services still leaves much to be desired, security experts warned attendees at last week's Black Hat Security Conference.

The current economic downturn has made cloud computing a hot issue, with startups and smaller firms rushing to save money using virtual machines on the Internet and larger firms pushing applications such as customer relationship management to the likes of Salesforce.com. Yet companies need to be more wary of the security pitfalls in moving their infrastructure to the cloud, experts say.

Wednesday, August 5, 2009

Blog: Warning Issued on Web Programming Interfaces

Warning Issued on Web Programming Interfaces
Technology Review (08/05/09) Naone, Erica

Application programming interfaces (APIs), software specifications that allow Web sites and services to interact with each other, have been a major factor in the rapid growth of Web applications, but security experts at the DEFCON hacking conference revealed ways of exploiting APIs to attack different sites and services. APIs have been key to the success of many social sites. John Musser, founder of Programmable Web, a Web site for users of mashups and APIs, says that the traffic driven to Twitter through APIs, such as from desktop clients, is four to eight times greater than the traffic that comes through Twitter's Web site. However, Nathan Hamiel from Hexagon Security Group and Shawn Moyer from Agura Digital Security say that APIs could be exploited by hackers. The security researchers note that several APIs are often stacked on top of each other. Hamiel says this kind of stacking could lead to security problems on several layers, and that APIs can open sites to new kinds of threats. In the presentation, Hamiel demonstrated that an attacker might be able to use an API in unintended ways to gain access to parts of a Web site that should not be visible to the public. Hamiel says whenever a site adds functionality it increases its attack surface, and the same thing that makes APIs powerful often makes them vulnerable. Musser says any site that builds an API on top of another site's API is relying on someone else's security, and it is difficult to determine what has been built to see how well it is handled. WhiteHat Security founder and chief technology officer Jeremiah Grossman says sites that publish APIs can find it difficult to discover security flaws in their own APIs, and it is often hard to tell how a third-party site is using an API and if that site has been compromised by an attacker.

View Full Article

Tuesday, August 4, 2009

Blog: New Epidemic Fears: Hackers

New Epidemic Fears: Hackers
The Wall Street Journal (08/04/09) P. A6; Worthen, Ben

Under the economic stimulus bill and other U.S. federal government proposals, hospitals and doctors' offices that invest in electronic records systems may receive compensation from part of a $29 billion fund. However, such systems can be vulnerable to security breaches. Last year health organizations publicly disclosed 97 data breaches, up from 64 in 2007, including lost laptops with patient data on them, misconfigured Web sites that accidentally disclosed confidential information, insider theft, and outside hackers breaking into a network. Because most healthcare organizations keep patients' names, Social Security numbers, dates of birth, and payment information such as insurance and credit cards, criminals often target these places for identity theft. "Healthcare is a treasure trove of personally identifiable information," says Secure Works researcher Don Jackson. The U.S. Federal Trade Commission says medical fraud is involved in about 5 percent of all identity theft. Smaller practices can become easier targets, as they rarely have a technology professional or security specialists, and often lack a security plan or proper tools. The government plans to release guidelines over the next year, as part of the stimulus bill, to illustrate a secure information system, but critics warn that data encryption and other security functions are worthless if they are not correctly used. "If you take a digital system and implement it in a sloppy way, it doesn't matter how good the system is," says World Privacy Forum executive director Pam Dixon. "You're going to introduce risk."

View Full Article

Monday, August 3, 2009

Blog: NIST Issues Final Version of SP 800-53; Enables Rapid Adoption of the Twenty Critical Controls (Consensus Audit Guidelines)

NIST Issues Final Version of SP 800-53; Enables Rapid Adoption of the Twenty Critical Controls (Consensus Audit Guidelines)

SANS NewsBites Vol. 11 Num. 61 (August 3, 2009)

The National Institute of Standards and Technology (NIST) has published the final version of SP 800-53, Revision 3, "Recommended Security Controls for Federal Information Systems and Organizations." The document is the first major revision of guidelines for implementing the Federal Information Security Management Act (FISMA) since 2005. Among the changes in this updated version are "A simplified, six-step Risk Management Framework; Recommendations for prioritizing security controls during implementation or deployment; and Guidance on using the Risk Management Framework for legacy information systems and for external information system services providers." The new version of 800-53 solves three fatal problems in the old version - calling for common controls (rather than system by system controls), continuous monitoring (rather than periodic certifications), and prioritizing controls (rather than asking IGs to test everything). Those are the three drivers for the 20 Critical Controls (CAG). In at least five agencies, contractors that previously did 800-53 evaluations are being re-assessed on their ability to implement and measure the effectiveness of the 20 Critical Controls in those agencies. One Cabinet-level Department has proven that implementing the 20 Critical Controls with continuous monitoring reduced the overall risk by 84% across all departmental systems world-wide.

http://gcn.com/Articles/2009/08/03/NIST-release-of-800-53-rev-3-080309.aspx

http://csrc.nist.gov/publications/nistpubs/800-53-Rev3/sp800-53-rev3-final.pdf

[Editor's Note (Paller): This is very good news. John Gilligan reports that a new version of the 20 Critical Controls document will be released next week with a table, put in the document at NIST's request, showing how the 20 Critical Controls are a proper subset of the priority one controls in the revised 800-53. A course on implementing and testing the 20 Critical Controls will be run in San Diego next month and in Chicago in October https://rr.sans.org/ns2009/description.php?tid=3467.]

Blog: NCSA Researchers Receive Patent for System that Finds Holes in Knowledge Bases

NCSA Researchers Receive Patent for System that Finds Holes in Knowledge Bases
University of Illinois at Urbana-Champaign (08/03/09) Dixon, Vince

Researchers at the National Center for Supercomputing Applications (NCSA) at the University of Illinois, Urbana-Champaign, have received a patent for a method of determining the completeness of a knowledge base by mapping the corpus and locating weak links and gaps between important concepts. NCSA research programmer Alan Craig and former NCSA staffer Kalev Leetaru were building databases using automatic Web crawling and needed a way of knowing when to stop adding to the collection. "So this is a method to sort of help figure that out and also direct that system to go looking for more specific pieces of information," says Craig. Using any collection of information, the technique graphs the data, analyzes conceptual distances within the graph, and identifies parts of the corpus that are missing important documents. The system then suggests what concepts may best fill those gaps, creating a link between two related concepts that might otherwise not have been found. Leetaru says this system helps users complete knowledge bases with information they are initially unaware of. Leetaru says the applications for this method are limitless, as the corpus does not have to be computer-based and the method can be applied to any situation involving a collection of data that users are not sure is complete.
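
One way to picture the gap-finding idea is with a small concept graph in Python. This is purely illustrative, since the patented method is not spelled out in the article: the concepts and links are invented, and "conceptual distance" is taken to be shortest-path distance, with the most distant pairs flagged as possible gaps in the collection.

    from itertools import combinations
    from collections import deque

    # Hypothetical concept graph: an edge means some document links the two concepts.
    graph = {
        "supercomputing": {"parallelism", "climate models"},
        "parallelism":    {"supercomputing", "compilers"},
        "compilers":      {"parallelism"},
        "climate models": {"supercomputing", "statistics"},
        "statistics":     {"climate models"},
    }

    def distance(a, b):
        # Breadth-first search gives the conceptual distance between two concepts.
        seen, queue = {a}, deque([(a, 0)])
        while queue:
            node, d = queue.popleft()
            if node == b:
                return d
            for nxt in graph[node]:
                if nxt not in seen:
                    seen.add(nxt)
                    queue.append((nxt, d + 1))
        return float("inf")

    gaps = sorted(combinations(graph, 2), key=lambda pair: -distance(*pair))
    print(gaps[0])   # the most distant pair, e.g. compilers and statistics, suggests a gap to fill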

View Full Article

Blog: Computers Unlock More Secrets of the Mysterious Indus Valley Script

Computers Unlock More Secrets of the Mysterious Indus Valley Script
UW News (08/03/09) Hickey, Hannah

A team of Indian and U.S. researchers, led by University of Washington professor Rajesh Rao, is attempting to decipher the script of the ancient Indus Valley civilization. Some researchers have questioned whether the script's symbols are actually a language, or are instead pictograms of political or religious icons. The researchers are using computers to extract patterns from the ancient Indus symbols. The researchers have uncovered several distinct patterns in the symbols' placement in sequences, which has led to the development of a statistical model for the unknown language. "The statistical model provides insights into the underlying grammatical structure of the Indus script," Rao says. "Such a model can be valuable for decipherment, because any meaning ascribed to a symbol must make sense in the context of other symbols that precede or follow it." Calculations show that the order of the symbols is meaningful, as taking one symbol from a sequence and changing its position creates a new sequence that has a much lower probability of belonging to the language. The researchers say the presence of such distinct rules for sequencing provides support for the theory that the unknown script represents a language. The researchers used a Markov model, a statistical model that estimates the likelihood of a future event, such as inscribing a particular symbol, based on previously observed patterns. One application uses the statistical model to fill in missing symbols on damaged artifacts, which can increase the pool of data available for deciphering the writings.
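
For readers curious how such a model works, here is a minimal bigram Markov sketch in Python in the spirit of the description above; it is not the team's code, and the "inscriptions" are invented placeholder sequences rather than real Indus data.

# Toy bigram Markov model: estimate symbol transition probabilities, score
# sequences, and guess a missing symbol from its neighbors.
from collections import Counter, defaultdict

inscriptions = [
    ["fish", "jar", "arrow"],
    ["fish", "jar", "man"],
    ["jar", "arrow", "man"],
    ["fish", "arrow", "man"],
]

symbols = sorted({s for seq in inscriptions for s in seq})
counts = defaultdict(Counter)
for seq in inscriptions:
    for prev, cur in zip(seq, seq[1:]):
        counts[prev][cur] += 1

def p(cur, prev):
    """P(cur | prev) with add-one (Laplace) smoothing."""
    return (counts[prev][cur] + 1) / (sum(counts[prev].values()) + len(symbols))

def seq_prob(seq):
    prob = 1.0
    for prev, cur in zip(seq, seq[1:]):
        prob *= p(cur, prev)
    return prob

# Swapping two symbols lowers the sequence probability, as the article notes.
print(seq_prob(["fish", "jar", "arrow"]), seq_prob(["jar", "fish", "arrow"]))

# Fill a damaged position: pick the symbol most consistent with its neighbors.
prev_sym, next_sym = "fish", "arrow"
best = max(symbols, key=lambda s: p(s, prev_sym) * p(next_sym, s))
print("most likely missing symbol:", best)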

View Full Article

Friday, July 31, 2009

Blog: Behaviour of Building Block of Nature Could Lead to Computer Revolution

Behaviour of Building Block of Nature Could Lead to Computer Revolution
University of Cambridge (07/31/09)

Physicists from the Universities of Cambridge and Birmingham have demonstrated that electrons in narrow wires can split into two new particles called spinons and holons. Like-charged electrons repel each other and must adjust their movements to avoid getting too close to each other, and this effect is exacerbated in extremely narrow wires. It was theorized in 1981 that under these conditions and at the lowest temperatures the electrons would be permanently divided into spinons and holons. Accomplishing this required confining electrons in a quantum wire that is brought in close enough proximity to an ordinary metal so that the electrons in the metal could "jump" into the wire by quantum tunneling. The Cambridge and Birmingham physicists observed how the electron, on penetrating the quantum wire, split into spinons and holons by watching how the rate of jumping varied with an applied magnetic field. "Quantum wires are widely used to connect up quantum 'dots,' which may in the future form the basis of ... a quantum computer," notes Chris Ford with the University of Cambridge's Cavendish Laboratory. "Thus understanding their properties may be important for such quantum technologies, as well as helping to develop more complete theories of superconductivity and conduction in solids in general. This could lead to a new computer revolution."

View Full Article

Thursday, July 30, 2009

Blog: How Wolfram Alpha Could Change Software

How Wolfram Alpha Could Change Software
InfoWorld (07/30/09) McAllister, Neil

Wolfram Research's Wolfram Alpha software is described as a computational knowledge engine that employs mathematical methods to cross-reference various specialized databases and generate unique results for each query. Furthermore, Wolfram alleges that each page of results returned by the Wolfram Alpha engine is a unique, copyrightable work because its terms of use state that "in many cases the data you are shown never existed before in exactly that way until you asked for it." Works produced by machines are copyrightable, at least in theory. But for Wolfram Alpha to claim copyright protection for its query results, its pages must be such original presentations of information that they are eligible as novel works of authorship. Although Wolfram says its knowledge engine is driven by exclusive, proprietary sources of curated data, many of the data points it works with are actually commonplace facts. If copyright applies to Wolfram Alpha's output in certain instances, then by extension the same rules are relevant to every other information service in similar cases. Assuming that unique presentations based on software-based manipulation of mundane data can be copyrighted, the question remains as to who retains what rights to the resulting works.

View Full Article

Wednesday, July 29, 2009

Blog: A Better Way to Shoot Down Spam

A Better Way to Shoot Down Spam
Technology Review (07/29/09) Kremen, Rachel

The Spatio-temporal Network-level Automatic Reputation Engine (SNARE) is an automated system developed at the Georgia Institute of Technology that can spot spam before it hits the mail server. SNARE scores each incoming email according to new criteria that can be gathered from a single data packet. The researchers say the system puts less pressure on the network and keeps the need for human intervention to a minimum while maintaining the same accuracy as conventional spam filters. Analysis of 25 million emails enabled the Georgia Tech team to compile characteristics that could be culled from a single packet of data and used to efficiently identify spam. They also learned that they could identify junk email by mapping out the geodesic distance between the geographic locations associated with the sender's and receiver's Internet Protocol (IP) addresses, as spam tends to travel farther than legitimate email. The researchers also studied the autonomous system number (ASN) associated with an email's sender. SNARE can spot spam in seven out of 10 instances, with a false positive rate of 0.3 percent. If SNARE is deployed in a corporate environment, the network administrator could establish rules about the disposition of email according to its SNARE score. Northwestern University Ph.D. candidate Dean Malmgren questions the effectiveness of SNARE once its methodology is widely known, as spammers could use a bogus IP address close to the recipient's to fool the system. Likewise, John Levine of the Coalition Against Unsolicited Commercial Email warns that "spammers are not dumb; any time you have a popular scheme [for identifying spam], they'll circumvent it."
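
As a rough illustration of one such single-packet feature, the sender-to-receiver geodesic distance, here is a toy Python sketch. The coordinates, weighting, and threshold are invented for the example; this is not the Georgia Tech system.

# Toy illustration of a SNARE-style single-packet feature: the great-circle
# distance between the sender's and receiver's geolocated IP addresses,
# combined with a crude sender-network reputation check.
from math import radians, sin, cos, asin, sqrt

def geodesic_km(lat1, lon1, lat2, lon2):
    """Haversine great-circle distance in kilometers."""
    lat1, lon1, lat2, lon2 = map(radians, (lat1, lon1, lat2, lon2))
    a = sin((lat2 - lat1) / 2) ** 2 + cos(lat1) * cos(lat2) * sin((lon2 - lon1) / 2) ** 2
    return 2 * 6371 * asin(sqrt(a))

def spam_score(sender_loc, receiver_loc, sender_asn, spammy_asns):
    """Crude score: farther senders and 'bad neighborhood' ASNs look spammier."""
    dist = geodesic_km(*sender_loc, *receiver_loc)
    score = dist / 20000.0           # normalize by roughly half Earth's circumference
    if sender_asn in spammy_asns:
        score += 0.5                 # reputation of the sender's network
    return score

# Hypothetical lookup results for a single incoming SMTP packet.
score = spam_score(sender_loc=(55.75, 37.62),     # Moscow
                   receiver_loc=(33.75, -84.39),  # Atlanta
                   sender_asn=64500,              # example/private ASN
                   spammy_asns={64500})
print("suspicious" if score > 0.6 else "probably legitimate", round(score, 2))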

View Full Article

Friday, July 24, 2009

Blog: Scale-Free Networks: A Decade and Beyond

Scale-Free Networks: A Decade and Beyond
Science (07/24/09) Vol. 325, No. 5939, P. 412; Barabasi, Albert-Laszlo

Early network models were predicated on the notion that complex systems were randomly interconnected, but research has shown that networks exhibit perceptibly nonrandom features through what is termed the scale-free property. Moreover, this network architecture has demonstrated universality through its manifestation in all kinds of real networks, regardless of their age, scope, and function. All systems seen as complex are composed of an extremely large number of elements that interact through intricate networks. The scale-free nature of networks has been established by improved maps and data sets as well as by the correspondence between empirical data and analytical models that predict network structure. The existence of the scale-free property validates the indivisibility of the structure and evolution of networks, and has forced researchers to accept the concept that networks are in a constant state of flux due to the arrival of nodes and links. The universality of diverse topological traits, such as motifs, degree distributions, degree correlations, and communities, functions as a platform to analyze a wide range of phenomena and make predictions. By understanding the behavior of systems perceived as complex, researchers can anticipate such things as the Internet's response to attacks and cells' reactions to environmental changes. Doing so requires an understanding of the dynamical phenomena that play out on networks.
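
The heavy-tailed degree distribution at the heart of the scale-free property is easy to see in a toy preferential-attachment model. The short Python sketch below (using networkx, with arbitrary parameters) only illustrates the concept discussed above.

# Minimal sketch: grow a network by preferential attachment and inspect its
# heavy-tailed degree distribution. Parameters are arbitrary.
from collections import Counter
import networkx as nx

G = nx.barabasi_albert_graph(n=10000, m=2, seed=42)  # preferential attachment
degrees = [d for _, d in G.degree()]

# Hubs coexist with many low-degree nodes: the hallmark of a scale-free network.
print("max degree:", max(degrees), " median degree:", sorted(degrees)[len(degrees) // 2])

hist = Counter(degrees)
for k in sorted(hist)[:5]:
    print(f"P(degree = {k}) ~ {hist[k] / len(degrees):.3f}")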

View Full Article - May Require Free Registration

Tuesday, July 21, 2009

Blog: Moore's Law Hits Economic Limits

Moore's Law Hits Economic Limits
Financial Times (07/21/09) P. 15; Nuttall, Chris

Much attention has been given to the approaching scientific limit to chip miniaturization and the continuation of Moore's Law, but an economic limit is nearing faster than the predicted scientific limit, according to some experts. "The high cost of semiconductor manufacturing equipment is making continued chipmaking advancements too expensive to use for volume production, relegating Moore's Law to the laboratory and altering the fundamental economics of the industry," says iSuppli's Len Jelinek. He predicts that Moore's Law will no longer drive volume chip production after 2014 because circuitry widths will dip to 20 nanometers or below by that date, and the tools to make those circuits will be too expensive for companies to recover their costs over the lifetime of production. The costs and risks associated with building new fabrication systems have already forced many producers of logic chips toward a fabless chip model, in which they outsource much of their production to chip foundries in Asia. At the 90nm level there were 14 chipmakers involved in fabrication, but at the 45nm level that number has been reduced to nine, and only two of them, Intel and Samsung, have plans to create 22nm factories. However, Intel's Andy Bryant says that as long as demand is maintained by consumers and businesses looking for the most advanced technology, Moore's Law, and the major investments it requires, will continue to make economic sense.

View Full Article

Blog: Yale Researchers Create Database-Hadoop Hybrid

Yale Researchers Create Database-Hadoop Hybrid
Computerworld (07/21/09) Lai, Eric

Yale University professor Daniel J. Abadi has led the development of HadoopDB, an open source parallel database management system (DBMS) that combines the data-processing capabilities of a relational database with the scalability of new technologies such as Hadoop and MapReduce. HadoopDB was developed using components from PostgreSQL, the Apache Hadoop data-processing framework, and Hive, a Hadoop project launched internally at Facebook. HadoopDB queries can be submitted either as MapReduce jobs or in SQL. Abadi says data processing is partially done in Hadoop and partially in "different PostgreSQL instances" spread out over several nodes in a shared-nothing cluster of machines. He says that unlike previously developed DBMS projects, HadoopDB is not a hybrid only at the language/interface level, but also at the systems implementation level. Abadi says HadoopDB combines the best of both approaches to achieve the fault tolerance of massively parallel data infrastructures, such as MapReduce, in which a single server failure has little effect on the overall grid, and is capable of performing complex analyses almost as quickly as existing commercial parallel databases. He says that as databases continue to grow, systems such as HadoopDB will "scale much better than parallel databases."
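
A conceptual sketch of that split, with SQLite standing in for the per-node PostgreSQL instances and an invented table and query, looks roughly like the Python below. It illustrates the map/reduce division of an SQL aggregate across a shared-nothing cluster, not HadoopDB's actual code.

# Conceptual sketch: each node's local database answers a partial aggregate
# over its own partition (the "map"), and the partials are merged in a
# reduce step. SQLite stands in for per-node PostgreSQL instances.
import sqlite3

partitions = [
    [("widgets", 3), ("gadgets", 5)],      # rows living on node 1
    [("widgets", 7), ("sprockets", 2)],    # rows living on node 2
]

def map_on_node(rows):
    """Run the pushed-down SQL against one node's local data."""
    db = sqlite3.connect(":memory:")
    db.execute("CREATE TABLE sales (product TEXT, qty INTEGER)")
    db.executemany("INSERT INTO sales VALUES (?, ?)", rows)
    return db.execute("SELECT product, SUM(qty) FROM sales GROUP BY product").fetchall()

def reduce_partials(partials):
    """Combine per-node partial sums into the global aggregate."""
    totals = {}
    for partial in partials:
        for product, qty in partial:
            totals[product] = totals.get(product, 0) + qty
    return totals

print(reduce_partials([map_on_node(p) for p in partitions]))
# {'widgets': 10, 'gadgets': 5, 'sprockets': 2}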

View Full Article

Monday, July 20, 2009

Blog: How Networking Is Transforming Healthcare

How Networking Is Transforming Healthcare
Network World (07/20/09) Marsan, Carolyn Duffy

High-speed computer networks have the potential to transform the healthcare industry, according to Mike McGill, program director for Internet2's Health Sciences Initiative. Internet2's Health Network Initiative is a project to help medical researchers, educators, and clinicians see the possibilities of network applications in a medical setting. McGill notes, for instance, that Internet2 has demonstrated the ability to connect a patient with a remote psychiatrist through a telepresence environment, which would go a long way toward fulfilling the U.S. Department of Veterans Affairs' mandate to provide care for wounded soldiers, who often reside in rural areas but are assigned psychiatrists based in urban areas. McGill points out that 120 of the 125 medical schools in the United States are Internet2 members, and he says that Internet2 is the designated national backbone for the Federal Communications Commission's Rural Health Care Pilot Program. Groups that the Health Network Initiative has spawned include those focusing on security, the technical aspects, network resources, and education. McGill says the Obama administration is currently pushing "for electronic health records with very limited capability," and notes that the Health Sciences Initiative is "working on electronic health records that are backed up by lab tests and images, and that's a whole lot richer of an environment than just the textual record." Another project the initiative is focused on is the creation of a cancer biomedical informatics grid that is linking all U.S. cancer centers so that the research environment can be unified to exchange data. McGill describes the last-mile and cultural challenges as the key networking challenges in terms of electronic health information sharing.

View Full Article

Blog: Can Pen and Paper Help Make Electronic Medical Records Better?

Can Pen and Paper Help Make Electronic Medical Records Better?
IUPUI News Center (07/20/09) Aisen, Cindy Fox

Using pen and paper occasionally can make electronic medical records even more useful to healthcare providers and patients, concludes a new study published in the International Journal of Medical Informatics. The study, "Exploring the Persistence of Paper with the Electronic Health Record," was led by Jason Saleem, a professor in the Purdue School of Engineering and Technology at Indiana University-Purdue University Indianapolis. "Not all uses of paper are bad and some may give us ideas on how to improve the interface between the healthcare provider and the electronic record," Saleem says. In the study of 20 healthcare workers, the researchers found 125 instances of paper use, which were divided into 11 categories. The most common reasons for using paper workarounds were efficiency and ease of use, followed by paper's capabilities as a memory aid and its ability to alert others to new or important information. For example, a good use of paper was the issuing of pink index cards to newly arrived patients at a clinic who had high blood pressure. The information was entered into patients' electronic medical records, but the pink cards allowed physicians to quickly identify a patient's blood pressure status. Noting that electronic systems can alert clinicians reliably and consistently, the study recommends that designers of these systems consider reducing the overall number of alerts so healthcare workers do not ignore them due to information overload.

View Full Article

Blog: Can Computers Decipher a 5,000-Year-Old Language?

Can Computers Decipher a 5,000-Year-Old Language?
Smithsonian.com (07/20/09) Zax, David

One of the greatest mysteries of the ancient world is the meaning of the Indus civilization's language, and University of Washington, Seattle professor Rajesh Rao is attempting to crack the 5,000-year-old script using computational techniques. He and his colleagues postulated that such methods could reveal whether the Indus script did or did not encode language by measuring how predictable each symbol is given the symbols that precede it, a quantity known as conditional entropy. Rao's team employed a computer program to measure the script's conditional entropy, and then measured the conditional entropy of several natural languages, the artificial Fortran programming language, and non-linguistic systems such as DNA sequences. Comparing these measurements showed that the Indus script's conditional entropy bore the closest resemblance to that of the natural languages. Following the publication of the team's findings in the May edition of Science, Rao and colleagues are studying longer strings of characters than they previously examined. "If there are patterns, we could come up with grammatical rules," Rao says. "That would in turn give constraints to what kinds of language families" the Indus script might belong to.
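
For the curious, bigram conditional entropy can be computed in a few lines. The Python sketch below uses invented symbol sequences rather than real Indus, linguistic, or DNA data, and is only meant to show the quantity being compared.

# Toy computation of (bigram) conditional entropy: how unpredictable the
# next symbol is given the previous one.
from collections import Counter
from math import log2

def conditional_entropy(sequence):
    pair_counts = Counter(zip(sequence, sequence[1:]))
    prev_counts = Counter(sequence[:-1])
    total_pairs = sum(pair_counts.values())
    h = 0.0
    for (prev, cur), n in pair_counts.items():
        p_pair = n / total_pairs                  # P(prev, cur)
        p_cur_given_prev = n / prev_counts[prev]  # P(cur | prev)
        h -= p_pair * log2(p_cur_given_prev)
    return h

ordered = list("ABABABABABABABAB")     # rigidly ordered: conditional entropy is 0
random_ish = list("ABBABAABABBBAABA")  # less predictable: higher conditional entropy
print(conditional_entropy(ordered), conditional_entropy(random_ish))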

View Full Article

Blog: New Technology to Make Digital Data Self-Destruct

New Technology to Make Digital Data Self-Destruct
New York Times (07/20/09) Markoff, John

University of Washington computer scientists have developed software that enables electronic documents to automatically destroy themselves after a certain period of time. The researchers say the software, dubbed Vanish, will be needed as an increasing number of personal and business documents are moved from being stored on personal computers to centralized machines or servers as the cloud computing trend grows. The concept of having digital information disappear after a period of time is not new, but the researchers say they have developed a unique approach that relies on "shattering" an encryption key that is widely distributed across a peer-to-peer file sharing system. Vanish uses a key-based encryption system in a new way, allowing for a decrypted message to be automatically re-encrypted at a specific point in the future without fear that a third party will be able to access the key needed to read the message. The researchers say that pieces of the key "erode" over time as they are gradually used less and less. To make the keys erode, Vanish uses the structure of peer-to-peer file systems, which are based on millions of personal computers that join and leave the network at will, creating frequently changing Internet addresses, making it incredibly difficult for an eavesdropper to reassemble the key because it is never held in a single location. A major advantage of Vanish is that it does not rely on the integrity of third parties, as other systems do.
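
The key "erosion" described above rests on a threshold property: the key is split into many shares, and once too few shares remain reachable, the key, and therefore the data it protects, cannot be reconstructed. The toy Shamir-style secret-sharing sketch below (in Python, with invented parameters) illustrates that property; it is not the Vanish implementation.

# Toy threshold secret sharing over a prime field (requires Python 3.8+ for
# modular inverse via pow). Split a key into shares; with fewer than the
# threshold number of shares, the key is effectively gone.
import random

PRIME = 2**127 - 1  # a Mersenne prime; the toy key is any value below it

def split(secret, n_shares, threshold):
    """Return n_shares points of a random degree-(threshold-1) polynomial."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_shares + 1)]

def recover(shares):
    """Lagrange interpolation at x = 0 to rebuild the secret."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = (num * -xj) % PRIME
                den = (den * (xi - xj)) % PRIME
        secret = (secret + yi * num * pow(den, -1, PRIME)) % PRIME
    return secret

key = random.randrange(PRIME)
shares = split(key, n_shares=10, threshold=7)
print(recover(shares[:7]) == key)  # 7 surviving shares: key recoverable
print(recover(shares[:5]) == key)  # shares have "eroded": key is lost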

View Full Article
