Monday, January 31, 2011

Blog: Testable System Administration

Testable System Administration
by Mark Burgess | January 31, 2011
Topic: System Administration

Models of indeterminism are changing IT management.

The methods of system administration have changed little in the past 20 years. While core IT 
technologies have improved in a multitude of ways, for many if not most organizations system 
administration is still based on production-line build logistics (aka provisioning) and reactive 
incident handling—an industrial-age method using brute-force mechanization to amplify a manual 
process. ... 
    ... Experienced system practitioners know deep down that they cannot think of system 
administration as a simple process of reversible transactions to be administered by hand; yet it is 
easy to see how the belief stems from classical teachings. ... At least half of computer science stems 
from the culture of discrete modeling ... To put it quaintly, “systems” are raised in 
laboratory captivity under ideal conditions, and released into a wild of diverse and challenging 
circumstances. Today, system administration still assumes, for the most part, that the world is simple 
and deterministic, but that could not be further from the truth. 
    ... In a test-driven approach, system state is regulated by continual reappraisal at a microscopic level, like having a groundskeeper watch continuously over an estate, plucking the weeds or applying a lick 
of paint where needed. Such an approach required the conceptual leap to a computable notion of 
maintenance. Maintenance can be defined by referring to a policy or model for an ideal system state. 
If such a model could somehow be described in terms of predictable, actionable repairs, in spite of 
environmental indeterminism, then automating maintenance would become a simple reality. ...
    The term compliance is often used today for correctness of state with respect to a model. If a system 
deviates from its model, then with proper automation it self-repairs [2,4], somewhat like an autopilot 
that brings systems back on course. ... [T]hink about IT ... in terms of goals rather than “building projects” ...
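
To make the idea concrete, here is a minimal sketch of model-based, convergent maintenance in Python. The policy entry, path, and permissions are invented for illustration; tools such as CFEngine implement this pattern far more generally. Each check states a desired condition and an idempotent repair, and a maintenance loop continually reappraises the system against the model:

import os
import stat

def check_mode(path, want_mode):
    """Promise that `path` has permissions `want_mode`; repair any drift."""
    if not os.path.exists(path):
        return f"cannot verify {path}: missing"
    actual = stat.S_IMODE(os.stat(path).st_mode)
    if actual != want_mode:
        os.chmod(path, want_mode)  # actionable, repeatable repair
        return f"repaired {path}: {oct(actual)} -> {oct(want_mode)}"
    return None  # already compliant with the model

# The "model" is a list of desired-state assertions, not a build script.
policy = [lambda: check_mode("/tmp/app.conf", 0o644)]

def converge(policy):
    """One maintenance pass; meant to run continually, not once."""
    return [msg for check in policy if (msg := check())]

Run repeatedly, converge() behaves like the groundskeeper above: each pass either confirms compliance or makes a small, predictable repair, regardless of how the environment perturbed the system in between.
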
View Full Article

Blog: I, Algorithm: A New Dawn for Artificial Intelligence

I, Algorithm: A New Dawn for Artificial Intelligence
New Scientist (01/31/11) Anil Ananthaswamy

The field of artificial intelligence (AI) is undergoing a revival, spurred by probabilistic programming, which merges classic AI's logical principles with the power of statistics and probability. The key to probabilistic reasoning is a Bayesian network, a model composed of random variables, each with a probability distribution that depends on other variables in the network. Given observed values for one or more variables, the network enables the inference of the likely values of all the others. The development of algorithms for Bayesian networks that could use and learn from existing data began in the mid-1990s, and these new algorithms were capable of learning models of greater complexity and accuracy from far less data than artificial neural networks require. Still, Bayesian networks are insufficient for modern AI challenges on their own because they cannot build arbitrarily complex constructions out of simple components, which is where the incorporation of logic comes in. At the vanguard of probabilistic programming are computer languages that combine Bayesian networks with logic programming. In addition to developing fast and flexible inference algorithms, researchers face the challenge of improving AI systems' ability to learn, whether from existing data or from the real world using sensors.
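
As a concrete illustration of the inference a Bayesian network supports, here is a toy network in Python (the structure and probabilities are invented for illustration): Rain and Sprinkler both influence WetGrass, and observing WetGrass lets us infer how likely Rain is by enumerating the joint distribution.

from itertools import product

P_rain = {True: 0.2, False: 0.8}
P_sprinkler = {True: 0.1, False: 0.9}
P_wet = {  # P(WetGrass=True | Rain, Sprinkler)
    (True, True): 0.99, (True, False): 0.90,
    (False, True): 0.80, (False, False): 0.05,
}

def p_rain_given_wet():
    num = den = 0.0
    for rain, spr in product([True, False], repeat=2):
        joint = P_rain[rain] * P_sprinkler[spr] * P_wet[(rain, spr)]
        den += joint
        if rain:
            num += joint
    return num / den

print(round(p_rain_given_wet(), 2))  # ~0.65: up from the 0.2 prior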

View Full Article

Blog: How Watson Works: A Conversation With Eric Brown, IBM Research Manager

How Watson Works: A Conversation With Eric Brown, IBM Research Manager
KurzweilAI.net (01/31/11) Amara D. Angelica

IBM's deep Question Answering (DeepQA) system, codenamed Watson, will compete against human champions in a Jeopardy! tournament in February. IBM research manager Eric Brown says that "we're basically using Jeopardy! as a benchmark or a challenge problem to drive the development of [DeepQA] technology, and as a way to measure the progress of the technology." Brown says Watson can comprehend a broad diversity of questions posed in open-ended natural language and provide highly accurate answers, at least within the confines of the game. Developing a dialogue system that facilitates more natural interaction is an area the researchers want to investigate, especially as DeepQA is applied to other domains, Brown says. The researchers are taking DeepQA toward the automatic generation of hypotheses and the compilation of evidence to support or refute them; the evidence is then evaluated through an open, pluggable architecture of analytics, and the results are combined and weighed to assess the hypotheses and make recommendations. Brown says IBM's initial focus for applying DeepQA is the medical and health-care domain. Other applications currently under consideration include help desk, tech support, and business intelligence.
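
The generate/score/combine pattern Brown describes can be sketched abstractly. The following Python skeleton is purely illustrative: the candidate generators, scorers, weights, and threshold are stand-ins, not IBM's implementation.

def generate_hypotheses(question):
    """Produce candidate answers, e.g., from several search strategies."""
    return ["candidate A", "candidate B"]  # stand-ins

def evidence_scores(question, hypothesis):
    """Each pluggable analytic contributes an independent confidence feature."""
    scorers = [
        lambda q, h: 0.5,  # stand-in: passage-support scorer
        lambda q, h: 0.7,  # stand-in: answer-type scorer
    ]
    return [s(question, hypothesis) for s in scorers]

def answer(question, weights=(0.4, 0.6), threshold=0.5):
    ranked = []
    for h in generate_hypotheses(question):
        feats = evidence_scores(question, h)
        confidence = sum(w * f for w, f in zip(weights, feats))
        ranked.append((confidence, h))
    best = max(ranked)
    return best if best[0] >= threshold else (best[0], None)  # abstain

The open, pluggable part is the scorer list: new analytics can be added without touching the combination step, which only weighs their outputs.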

View Full Article

Friday, January 28, 2011

Blog: A Clearer Picture of Vision

A Clearer Picture of Vision
MIT News (01/28/11) Larry Hardesty

Massachusetts Institute of Technology researcher Ruth Rosenholtz recently presented a new mathematical model of how the brain summarizes the content of retinal images. The model can predict the visual system's failures on certain types of image-processing tasks, a sign that it captures some aspect of human cognition. Rosenholtz's model is designed to deal with the reduced accuracy of vision in the periphery, applying statistical formulas to "patchy" vision fields. It includes statistics on the orientation of features, feature size, brightness, and color. The technique is very efficient: it can store 1,000 statistics on each patch in the visual field using only one-ninetieth as many virtual neurons as the brain would need to store the same amount of raw data. In testing, the degree of difference between the statistics of different patches proved a good indicator of how quickly subjects could find a target object. Rosenholtz says the model is based on a group of statistics commonly used to describe visual data in computer vision research.
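
The gist of summarizing patches by pooled statistics can be sketched as follows (the patch size and particular statistics are chosen for illustration and are not Rosenholtz's actual model): each patch of a grayscale image is reduced to a few summary numbers, and patches are then compared by those numbers rather than by raw pixels.

import numpy as np

def patch_statistics(image, patch=16):
    """Summarize each patch of a 2-D grayscale array by pooled statistics."""
    h, w = image.shape
    stats = []
    for y in range(0, h - patch + 1, patch):
        for x in range(0, w - patch + 1, patch):
            p = image[y:y + patch, x:x + patch].astype(float)
            gy, gx = np.gradient(p)
            stats.append({
                "mean_brightness": p.mean(),
                "contrast": p.std(),
                "orientation": np.arctan2(gy.mean(), gx.mean()),
            })
    return stats

# Patches whose statistics differ strongly should be easy to tell apart
# in peripheral vision; similar ones should be hard to distinguish.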

View Full Article

Thursday, January 27, 2011

Blog: Review: My Amazon Kindle Single publishing experiment

Review: My Amazon Kindle Single publishing experiment

Amazon launched its Kindle Singles store, and my experiment in self-publishing a book went along for the ride. Here's a look at what you need to know.

READ FULL STORY

Monday, January 24, 2011

Blog: Cloud Robotics: Connected to the Cloud, Robots Get Smarter

Cloud Robotics: Connected to the Cloud, Robots Get Smarter
IEEE Spectrum (01/24/11) Erico Guizzo

Carnegie Mellon University (CMU) researchers are developing robots that use cloud computing to obtain new information and data. The method, known as cloud robotics, allows robots to offload compute-intensive tasks such as image processing and voice recognition. Cloud robotics could lead to lighter, cheaper, and faster robots, says CMU professor James Kuffner. He is working with Google to develop cloud robotics systems that involve "small mobile devices as Net-enabled brains for robots." In the future, cloud-enabled robots could become standardized, leading to an app store for robots, Kuffner says. "Coupling robotics and distributed computing could bring about big changes in robot autonomy," says Jean-Paul Laumond with France's Laboratory of Analysis and Architecture of Systems. Kuffner sees a future in which robots will feed data into a knowledge database, sharing their interactions with the world and learning about new objects, places, and behaviors.
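
The offloading pattern itself is simple; here is a minimal, hypothetical sketch in Python (the endpoint URL and response format are invented, and any RPC mechanism would serve): the robot sends a camera frame to a remote service instead of running a heavy vision model on its own processor.

import requests  # third-party HTTP client

CLOUD_ENDPOINT = "https://example.com/recognize"  # hypothetical service

def recognize(image_bytes):
    """Offload image recognition to the cloud; fall back gracefully offline."""
    try:
        resp = requests.post(CLOUD_ENDPOINT, data=image_bytes, timeout=2.0)
        resp.raise_for_status()
        return resp.json()["labels"]
    except requests.RequestException:
        return []  # degrade gracefully when the network is unavailable

The timeout and fallback matter as much as the call: a cloud-connected robot still has to act sensibly when the connection drops.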

View Full Article

Wednesday, January 19, 2011

Blog: Challenging the Limits of Learning [... language acquisition]

Challenging the Limits of Learning
American Friends of Tel Aviv University (01/19/11)

Researchers at Tel Aviv University have developed software that models the human mind to explore language acquisition, and they report that early results suggest people actually learn language rather than relying on extensive innate grammatical knowledge. The program learns basic grammar using a bare minimum of cognitive machinery, similar to what a child might have, says Tel Aviv's Roni Katzir. He used unsupervised learning to program his computer to learn simple grammar on its own; the machine-learning technique enables the program to take in raw data and conduct a random search for the best way to characterize what it sees. The computer looks for the simplest description of any data using the Minimum Description Length criterion. Katzir was able to explore what kinds of information the human mind can acquire and store unconsciously, and whether a computer can learn in a similar manner. He believes the research has applications in technologies such as voice-dialogue systems, or for teaching robots how to read visual images. "Many linguists today assume that there are severe limits on what is learnable," Katzir says. "I take a much more optimistic view about those limitations and the capacity of humans to learn."
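
The Minimum Description Length criterion can be illustrated with a toy comparison (the encoding costs and probabilities below are invented): a grammar is preferred when the bits needed to state the grammar plus the bits needed to encode the data under it are minimized.

import math

def description_length(grammar_bits, data, prob):
    """L(grammar) + L(data | grammar), both measured in bits."""
    data_bits = sum(-math.log2(prob(x)) for x in data)
    return grammar_bits + data_bits

data = ["ab"] * 50  # highly patterned input
# Naive grammar: cheap to state, but treats letters as i.i.d. over 4 symbols.
naive = description_length(2, data, lambda x: (1 / 4) ** len(x))
# Patterned grammar: costs more bits to state, but compresses "ab" well.
patterned = description_length(10, data, lambda x: 0.9 if x == "ab" else 0.001)
print(round(naive), round(patterned))  # 202 vs. 18: the pattern is worth learning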

View Full Article

Monday, January 17, 2011

Blog: Beating the Competition [a single new connection can dramatically enhance the size of a network ...]

Beating the Competition
Max Planck Gesellschaft (01/17/11) Birgit Krummheuer

Researchers from the Max Planck Institute for Dynamics and Self-Organization, the Bernstein Center for Computational Neuroscience Göttingen, and the University of Göttingen have mathematically described the influence of single additional links in a network. Using computer simulations, the researchers tracked the growth of networks link by link. Their study found that a single new connection can dramatically enhance the size of a network, whether the connection is an additional link in the Internet, a new acquaintance for a circle of friends, or a connection between two nerve cells in the brain. The researchers focused on the intermediate growth stage, when elements are beginning to sporadically connect into small groups, but before the entire system is linked. After a certain number of new links there is a sudden growth spurt, in which the size of the largest network in the system increases dramatically. The researchers would like to determine which forms of competition in natural systems from biology and physics imply this rapid growth, and to study the consequences of such growth spurts.
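
One well-studied form of competitive link addition that produces exactly this kind of abrupt growth spurt is an Achlioptas-style "product rule"; the sketch below is illustrative and may differ in detail from the study's actual model. Each step proposes two random links and keeps the one joining the smaller clusters, which suppresses growth for a long time and then releases it suddenly.

import random

def find(parent, x):
    while parent[x] != x:
        parent[x] = parent[parent[x]]  # path halving
        x = parent[x]
    return x

def grow(n=10000, steps=12000):
    parent, size = list(range(n)), [1] * n
    biggest, history = 1, []
    for _ in range(steps):
        cands = [(random.randrange(n), random.randrange(n)) for _ in range(2)]
        # product rule: keep the candidate link joining the smaller clusters
        a, b = min(cands, key=lambda e: size[find(parent, e[0])] * size[find(parent, e[1])])
        ra, rb = find(parent, a), find(parent, b)
        if ra != rb:
            parent[rb] = ra
            size[ra] += size[rb]
            biggest = max(biggest, size[ra])
        history.append(biggest)
    return history  # the largest cluster stays small for a long time, then jumps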

View Full Article

Thursday, January 13, 2011

Blog: IBM Computer Gets a Buzz on for Charity Jeopardy!

IBM Computer Gets a Buzz on for Charity Jeopardy!
Associated Press (01/13/11) Jim Fitzgerald

IBM's Watson computer beat former Jeopardy! champions Ken Jennings and Brad Rutter in a 15-question practice round in which the hardware and software system answered about half of the questions and got none of them wrong. Watson, which will compete in a charity event on Jeopardy! against Jennings and Rutter on Feb. 14-16, recently received a buzzer, the finishing touch to a system that represents a huge step in computing power. "Jeopardy! felt that in order for the game to be as fair as possible, just as a human has to physically hit a buzzer, the system also would have to do that," says IBM's Jennifer McTighe. Watson consists of 10 racks of IBM servers running the Linux operating system and has 15 terabytes of random-access memory. The system has access to the equivalent of 200 million pages of content, and can mimic the human ability to understand the nuances of human language, such as puns and riddles, and answer questions in natural language. The practice round was the first public demonstration of the computer system. IBM says Watson's technology could lead to systems that can quickly diagnose medical conditions and research legal cases, among other applications.

View Full Article

Blog: Fruit Fly Nervous System Provides New Solution to Fundamental Computer Network Problem

Fruit Fly Nervous System Provides New Solution to Fundamental Computer Network Problem
Carnegie Mellon News (PA) (01/13/11) Byron Spice

Researchers at Carnegie Mellon and Tel Aviv universities are drawing inspiration from a fruit fly's nervous system to develop models for distributed computer networks. A fruit fly's nervous system cells organize themselves so that a few cells act as leaders that connect the other nerve cells together. "It is such a simple and intuitive solution, I can't believe we did not think of this 25 years ago," says Tel Aviv's Noga Alon. The researchers found that the fly's nervous system offers an efficient design for networks in which the number and position of nodes are not known in advance, such as wireless sensor networks, environmental monitoring, and systems for controlling swarms of robots. In computing, developers have built distributed systems around a small set of processors, no two of which are directly linked, through which all of the other processors in the network can be reached--a group known as a maximal independent set (MIS). Computer scientists have long struggled with determining the best way to choose an MIS, but after studying the fly's nervous system, the researchers created a computer algorithm that provides a fast solution to the MIS problem. "The run time was slightly greater than current approaches, but the biological approach is efficient and more robust because it doesn't require so many assumptions," says Carnegie Mellon professor Ziv Bar-Joseph.
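
A round-based, fly-style MIS election can be sketched as follows (the probability schedule is invented for illustration, and the published algorithm differs in detail): in each round every undecided node "fires" at random; a node that fires while none of its fired neighbors do joins the MIS, and its neighbors withdraw from contention.

import random

def fly_mis(adj, rounds=200):
    undecided, mis = set(adj), set()
    for r in range(1, rounds + 1):
        p = min(0.9, 2 ** r / (len(adj) + 1))  # firing probability ramps up
        fired = {v for v in undecided if random.random() < p}
        joiners = {v for v in fired if not (fired & adj[v])}  # fired alone
        mis |= joiners
        for v in joiners:
            undecided.discard(v)
            undecided -= adj[v]  # neighbors withdraw from contention
        if not undecided:
            break
    return mis

# Example: the path 0-1-2-3; no two adjacent nodes ever both join.
adj = {0: {1}, 1: {0, 2}, 2: {1, 3}, 3: {2}}
print(fly_mis(adj))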

View Full Article

Thursday, January 6, 2011

Blog: CMU Research Finds Regional Dialects Are Alive and Well on Twitter

CMU Research Finds Regional Dialects Are Alive and Well on Twitter
Carnegie Mellon News (PA) (01/06/11) Byron Spice

Regional slang is evident in Twitter postings, but such dialects appear to be evolving in social media, as determined by a Twitter word-usage analysis method developed by Jacob Eisenstein and colleagues in Carnegie Mellon University's (CMU's) Machine Learning Department. Eisenstein says Twitter offers a new means of examining regional lexicons, because tweets are informal and conversational, and tweeters using cell phones can opt to tag their messages with global positioning system coordinates. The CMU researchers collected a week's worth of Twitter messages in March 2010 and selected geotagged messages from users who wrote at least 20 messages, yielding a database of 9,500 users and 380,000 messages. The team used a statistical model to identify regional variation in word use and topics, which could predict the geographical whereabouts of a microblogger in the continental United States with a median error of approximately 300 miles. The researchers can only speculate about the microbloggers' profiles, but Eisenstein says it is reasonable to assume that users who send many tweets via cell phone are younger than the average Twitter user--an assumption that appears to be mirrored by the subjects these users tend to discuss. Through automated analysis of Twitter message streams, linguists can observe the real-time evolution of regional dialects.
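
A toy version of predicting a region from word usage might look like the following (the regions, lexicons, and counts are invented, and the CMU model is a much richer latent-variable model): a simple multinomial Naive Bayes classifier over region-specific word counts.

import math
from collections import Counter

training = {
    "north": Counter({"wicked": 8, "cold": 5, "coffee": 3}),
    "south": Counter({"yall": 9, "sweet": 4, "coffee": 3}),
}

def predict_region(tweet_words):
    best = None
    for region, counts in training.items():
        total, vocab = sum(counts.values()), len(counts) + 1
        # add-one smoothing so unseen words don't zero out a region
        score = sum(math.log((counts[w] + 1) / (total + vocab))
                    for w in tweet_words)
        if best is None or score > best[0]:
            best = (score, region)
    return best[1]

print(predict_region(["yall", "coffee"]))  # -> "south"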

View Full Article

Wednesday, January 5, 2011

Blog: Apache Object-Oriented Data Project Goes Top-Level

Apache Object-Oriented Data Project Goes Top-Level
eWeek (01/05/11) Darryl K. Taft

The Apache Software Foundation (ASF) announced that its Object-Oriented Data Technology (OODT) has graduated from the Apache Incubator to become a top-level project (TLP). Apache OODT is middleware and metadata software for managing processing workflows, hardware, and files. The OODT system enables distributed computing and data resources to be searched by any user. The platform is used at the U.S. National Cancer Institute's Early Detection Research Network, as well as by several programs at the U.S. National Aeronautics and Space Administration (NASA). "OODT had been successfully operating within the [Jet Propulsion Laboratory] for years; the time had come to realize the benefits of open source in the broader, external community," says Apache OODT's Chris Mattmann. OODT is the first NASA-developed project to become an ASF TLP. "The Apache Software Foundation has a long history of software innovation through collaboration--the larger the pool of potential contributors, the more innovation we see," Mattmann says.

View Full Article

Tuesday, January 4, 2011

Blog: U.Va. Computer Scientists Look to Biological Evolution to Strengthen Computer Software

U.Va. Computer Scientists Look to Biological Evolution to Strengthen Computer Software
UVA Today (01/04/11) Zak Richards

Computer scientists at the universities of Virginia and New Mexico recently received a $3.2 million U.S. Defense Advanced Research Projects Agency grant to develop more resilient software systems based on the biological concepts of immunity and evolution, with the goal of stopping cyberattacks. The researchers say the technology could have applications in a wide range of products, including laptops, cell phones, anti-lock brakes, and artificial-heart pumps. "In biological systems, the skin and the immune system work together to fight off threats, and diverse populations mean that not every individual is vulnerable to the same disease," says Virginia professor Westley Weimer. The researchers are using genetic programming techniques to develop software that can defend against attacks and self-repair, and then pass those traits on to later generations of the software. The researchers want to ensure that the software can automatically diversify programs, which will improve resiliency. "With millions of people using the same programs, it's also easier for a single virus or invader to find just one attack surface and destroy everything," Weimer says. The researchers also want to develop adaptable software that can learn to fend off the new attacks that accompany newly created programs. The software also will use a distributed, decentralized search technique based on the behavior of ants, says New Mexico professor Melanie Moses.
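
The core repair loop of such genetic-programming systems can be sketched abstractly (the mutation operators and fitness function here are stand-ins; real systems mutate program syntax trees rather than opaque values): candidate variants are generated, scored against a test suite, and the fittest survive to seed the next generation.

import random

def fitness(program, tests):
    """Count how many test cases the candidate passes."""
    return sum(1 for t in tests if t(program))

def repair(program, tests, mutations, generations=100, pop_size=20):
    population = [program]
    for _ in range(generations):
        population += [random.choice(mutations)(random.choice(population))
                       for _ in range(pop_size)]
        population.sort(key=lambda p: fitness(p, tests), reverse=True)
        population = population[:pop_size]  # selection pressure
        if fitness(population[0], tests) == len(tests):
            return population[0]  # passes every test: candidate repair
    return None  # no repair found within the budget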

View Full Article

Monday, January 3, 2011

Blog: Mathematical Model Shows How Groups Split Into Factions

Mathematical Model Shows How Groups Split Into Factions
Cornell Chronicle (01/03/11) Bill Steele

A mathematical model of how social networks evolve into opposing factions under strain has been developed by Cornell researchers. Earlier models of structural balance showed that under suitable conditions a conflict drives a group to split into exactly two factions; the new model indicates how friendships and rivalries change over time and who ends up on each side. The model consists of a simple differential equation applied to a grid of numbers that can represent relationships between people, countries, or corporations. Cornell's Seth Marvel says people may forge alliances based on shared values, or may consider the social consequences of allying with a specific individual. "The model shows that the latter is sufficient to divide a group into two factions," Marvel says. The model traces the division of groups to unbalanced relationship triangles, which trigger changes that spread throughout the entire network. All too frequently the final state consists of two factions, each with entirely favorable links among its members and entirely negative connections to members of the opposing faction. The model also shows that if the average strength of ties across the entire system is positive, the whole system instead evolves into a single, all-positive network.
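
One simple differential equation on a grid of numbers that produces exactly this behavior is dX/dt = X^2, which matches the description above; the step size and random initial ties below are invented, and details of the published model may differ. X[i][j] is the friendliness between i and j, ties strengthen through mutual acquaintances, and the sign pattern near blow-up reveals the factions.

import numpy as np

rng = np.random.default_rng(0)
n = 6
X = rng.normal(0.0, 0.1, (n, n))
X = (X + X.T) / 2  # relationships are mutual, so X is symmetric

for _ in range(5000):
    X = X + 0.01 * (X @ X)  # forward-Euler step of dX/dt = X^2
    if np.abs(X).max() > 1e6:  # the solution blows up in finite time
        break

print(np.sign(X[0]).astype(int))  # +1: node 0's faction; -1: the opposition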

View Full Article

Blog: The Surprising Usefulness of Sloppy Arithmetic

The Surprising Usefulness of Sloppy Arithmetic
MIT News (01/03/11) Larry Hardesty

Massachusetts Institute of Technology (MIT) researchers have developed a computer chip that can perform thousands of calculations simultaneously using imprecise arithmetic circuits. MIT visiting professor Joseph Bates and graduate student George Shaw started by assessing an algorithm used for object-recognition systems that distinguishes foreground and background components in video frames. The researchers rewrote the algorithm so that the results were either raised or lowered by less than 1 percent. The researchers say the chip design works especially well for image and video processing applications. Currently, computer chips have four or eight cores, but the MIT chip has 1,000 smaller cores that do not produce precise results. The MIT chip also is 1,000 times more efficient than conventional chips because each core only communicates with its immediate neighbors. The researchers say the chip will be good at solving common computer science problems, such as the near-neighbor search, and computer analysis of protein folding. Intel's Bob Colwell says the chip's most promising application could be with human-computer interactions.
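
The tolerance for imprecision can be illustrated in software (the error model below is invented, echoing the under-1-percent perturbation described above): background subtraction barely notices when every multiplication is slightly wrong, because the decision threshold dwarfs the arithmetic error.

import random

def sloppy_mul(a, b):
    """Multiply, then perturb the result by less than 1 percent."""
    return a * b * (1 + random.uniform(-0.01, 0.01))

def foreground_mask(frame, background, threshold=30):
    """Flag pixels that differ from the background model."""
    return [abs(sloppy_mul(p, 1.0) - q) > threshold
            for p, q in zip(frame, background)]

background = [100] * 8
frame = [100, 101, 99, 180, 185, 100, 98, 100]  # an object enters mid-frame
print(foreground_mask(frame, background))
# imprecise pixels shift by <1%, far below the decision threshold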

View Full Article
