Monday, March 31, 2008

Web: Mashup Security

Mashup Security
Technology Review (03/31/08) Naone, Erica

As a growing number of tools are developed to help people create their own online mashups, experts are examining how to eliminate mashup security risks. OpenAjax Alliance cofounder David Boloker says that as mashups become more complex they start incorporating computer code from multiple sources, which may include insecure code that could jeopardize a company's or user's systems. Web browsers were not designed with mashups in mind, Boloker says. Browsers contain a security feature called the same-origin policy that is intended to keep malicious code hosted on one site from obtaining information from another site. However, same-origin security forces Web applications to sacrifice either security or functionality, says Microsoft Research's Helen Wang. Wang says that when a Web site creator embeds code written by a third party, the same-origin policy no longer offers any protection. She has been working on solutions that provide a way for browsers to recognize code that comes from a third party and to treat that code differently. One solution is to enclose third-party code in a "sandbox" tag, which would allow the Web site to use the code but treat it as unauthorized content, with no authority outside the sandbox. IBM recently released a security tool called SMash that allows content from multiple sources to be displayed on a single page and lets those sources communicate safely. A secure communication channel monitors information sent between tools while maintaining their separate identities and sets of permissions.
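
As a rough illustration of the isolation-plus-channel idea behind approaches like SMash (this sketch is not IBM's implementation), a host page can keep a third-party widget in its own iframe and exchange data with it only through origin-checked messages; the widget origin and URL below are hypothetical:

```typescript
// Sketch: keep third-party content in its own iframe (so the same-origin
// policy separates it from the host) and exchange data only via postMessage,
// checking the sender's origin on every message.

const WIDGET_ORIGIN = "https://widgets.example.com"; // hypothetical third-party host

// Embed the third-party widget in an isolated frame.
const frame = document.createElement("iframe");
frame.src = `${WIDGET_ORIGIN}/map-widget.html`;      // hypothetical widget URL
document.body.appendChild(frame);

// Only accept messages that really come from the widget's origin.
window.addEventListener("message", (event: MessageEvent) => {
  if (event.origin !== WIDGET_ORIGIN) {
    return; // ignore anything sent from other origins
  }
  console.log("widget says:", event.data);
});

// Send the widget only the data it is permitted to see.
frame.addEventListener("load", () => {
  frame.contentWindow?.postMessage({ listings: ["123 Main St"] }, WIDGET_ORIGIN);
});
```
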
Click Here to View Full Article

Research: Conversations: Jon Bentley; On Algorithm Design and Analysis

Conversations: Jon Bentley
Dr. Dobb's Journal (03/31/08) Blake, Deirdre

Avaya Labs research scientist Jon Bentley is currently working on a mathematical theory of authentication that can quantify the assurance of security secrets. Bentley says that his 1976 Ph.D. thesis included a section in which he attempted to describe the process of designing an algorithm, and that he believes his approach has stood the test of time. "The principles include generalizing, using high-level and abstract description of algorithms, examining degenerate cases, and employing standard speed-up tricks," says Bentley. Before getting immersed in the details of designing an algorithm, Bentley says, the most important step is to find out what the real problem is. Bentley says that sometimes algorithm design and algorithm analysis proceed hand-in-hand, such as when students design an algorithm so that it can be analyzed. The purpose of algorithm design is to develop a good algorithm, while the purpose of algorithm analysis is to understand how good an algorithm is. "Sometimes, though, people design algorithms and report that they are fast without analyzing their runtime," he says. "What a delightful challenge for an algorithm analyst! I've walked both sides of that street. My most frequently cited paper was for the 1975 ACM Undergraduate Student Paper competition; it introduced multidimensional binary search trees, which Don Knuth called 'k-d trees.' I described an algorithm for nearest-neighbor searching, but I couldn't even begin to analyze it. Many folks have made great progress on the analysis since then." He says programming is subtle, that we must learn to be "humble programmers," and that there are a lot of tools available, including precise specifications, formal methods, and extensive tests. Bentley adds, however, that one of the best tools is the eyes of really smart friends.
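
Bentley's k-d trees partition points by cycling through the coordinate axes; the sketch below is a minimal two-dimensional version with a nearest-neighbor search, written as an illustration rather than taken from his paper:

```typescript
// Minimal 2-d tree (k-d tree with k = 2) and nearest-neighbor search.

type Point = [number, number];

interface KdNode {
  point: Point;
  left: KdNode | null;
  right: KdNode | null;
}

// Build by splitting on the median, alternating the splitting axis by depth.
function build(points: Point[], depth = 0): KdNode | null {
  if (points.length === 0) return null;
  const axis = depth % 2;
  const sorted = [...points].sort((a, b) => a[axis] - b[axis]);
  const mid = Math.floor(sorted.length / 2);
  return {
    point: sorted[mid],
    left: build(sorted.slice(0, mid), depth + 1),
    right: build(sorted.slice(mid + 1), depth + 1),
  };
}

function dist2(a: Point, b: Point): number {
  return (a[0] - b[0]) ** 2 + (a[1] - b[1]) ** 2;
}

// Descend toward the query point, then back up, exploring the far side of a
// split only when the splitting line is closer than the best point so far.
function nearest(node: KdNode | null, query: Point, depth = 0, best: Point | null = null): Point | null {
  if (node === null) return best;
  if (best === null || dist2(query, node.point) < dist2(query, best)) best = node.point;
  const axis = depth % 2;
  const diff = query[axis] - node.point[axis];
  const [near, far] = diff < 0 ? [node.left, node.right] : [node.right, node.left];
  best = nearest(near, query, depth + 1, best);
  if (best === null || diff * diff < dist2(query, best)) {
    best = nearest(far, query, depth + 1, best);
  }
  return best;
}

const tree = build([[2, 3], [5, 4], [9, 6], [4, 7], [8, 1], [7, 2]]);
console.log(nearest(tree, [9, 2])); // [8, 1] is the nearest stored point
```
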
Click Here to View Full Article

Saturday, March 29, 2008

Web: HTML5 Jumps Off the Drawing Board

HTML5 Jumps Off the Drawing Board
InformationWeek (03/29/08) Lee, Mike

The first public working draft of the HTML5 specification was announced by the World Wide Web Consortium in mid-January, although this does not mean the W3C has abandoned XHTML 2.0, which is still being developed. "The HTML5 specification is a good step because it's a fairly realistic one," says Opera Software's Charles McCathie Nevile. "It doesn't aim to change the world in a radical way." Among HTML5's revisions are updates to simplify interactive Web development; new header, footer, section, article, nav, and dialog elements to divide pages into sections more clearly; and a "canvas" element with a corresponding 2D drawing application programming interface that supports dynamic graphics and animation on the fly. Components that HTML5 removes to eliminate usability problems include frames and framesets and most presentational attributes. HTML5 also includes APIs for audio and video playback, client-side persistent storage with both key/value and SQL database support, cross-document messaging, offline applications, editing, drag-and-drop, and networking. HTML5 design principles focus on the support of existing content, compatibility, interoperability, universal access, and utility. The W3C expects the ratification of the full HTML5 recommendation in the third quarter of 2010, but Firefox, Opera, Internet Explorer, and Safari already offer partial support for the specification.
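
As a small illustration of the canvas element and its 2D drawing API (assuming a browser with draft HTML5 support and a page containing a canvas with the hypothetical id "stage"), a script can animate graphics directly:

```typescript
// Draw a square that moves a little on each timer tick using the canvas 2D API.
// Assumes the page contains <canvas id="stage" width="300" height="150">.

const canvas = document.getElementById("stage") as HTMLCanvasElement;
const ctx = canvas.getContext("2d")!;

let x = 0;

function frame(): void {
  ctx.clearRect(0, 0, canvas.width, canvas.height); // wipe the previous frame
  ctx.fillStyle = "steelblue";
  ctx.fillRect(x, 50, 40, 40);                      // draw the square
  x = (x + 2) % canvas.width;                       // advance for the next frame
}

setInterval(frame, 16); // roughly 60 frames per second
```
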
Click Here to View Full Article

Research: Now Blooming: Digital Models

Now Blooming: Digital Models
Washington Post (03/29/08) P. B1; Ruane, Michael E.

Virginia Tech master's students Vidhya Dass and Elizabeth Brennan are using artificial neural networks, evolutionary computations, the Arrhenius equation, linear regression, and fuzzy logic to predict when Washington, D.C.'s cherry trees will bloom. Dass and Brennan wanted to see if a computer model could do as well as or better than the National Park Service's seasoned horticulturalist, who analyzes such factors as early flowering elms, maples, and cornelian cherry dogwoods, as well as the weather and other recurring clues. An accurate computer model could make it easier for officials to plan the National Cherry Blossom Festival and for tourists to plan visits. "We hoped to create a model that would allow the best prediction with the minimum amount of input," Brennan says. Dass and Brennan say they focused most of their efforts on computational intelligence and essentially tried to mimic a human brain. The students point out that computer modeling is widely used to predict soybean flowering, corn yields, and aspects of tomato and lettuce farming. They used past peak dates and previously recorded data to see which computer models were the most accurate. The most accurate models matched past peak dates to within a few days, and some models were as much as three days closer to the peak bloom date than the park service's prediction for that year.
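
The students' models are not described in detail here, but as an illustration of the simplest technique mentioned, linear regression, the sketch below fits peak-bloom day against a single made-up warmth predictor:

```typescript
// Illustrative only: ordinary least-squares fit of peak-bloom day of year
// against one predictor (e.g., accumulated late-winter warmth). The data
// below are invented for the example.

function linearFit(x: number[], y: number[]): { slope: number; intercept: number } {
  const n = x.length;
  const meanX = x.reduce((s, v) => s + v, 0) / n;
  const meanY = y.reduce((s, v) => s + v, 0) / n;
  let num = 0, den = 0;
  for (let i = 0; i < n; i++) {
    num += (x[i] - meanX) * (y[i] - meanY);
    den += (x[i] - meanX) ** 2;
  }
  const slope = num / den;
  return { slope, intercept: meanY - slope * meanX };
}

// Hypothetical history: warmth index vs. observed peak-bloom day of year.
const warmth = [210, 245, 190, 260, 230];
const bloomDay = [96, 90, 99, 87, 92];

const { slope, intercept } = linearFit(warmth, bloomDay);
console.log("predicted bloom day for index 240:", Math.round(slope * 240 + intercept));
```
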
Click Here to View Full Article

Friday, March 28, 2008

Research: The Future of Computing--Carbon Nanotubes and Superconductors to Replace the Silicon Chip

The Future of Computing--Carbon Nanotubes and Superconductors to Replace the Silicon Chip
Institute of Physics (03/28/08)

The Institute of Physics Condensed Matter and Materials Physics conference at the University of London's Royal Holloway College will highlight the need to one day replace the silicon chip with new technologies in order to support ever-faster and more powerful computing. Among potential replacement technologies to be discussed at the conference are carbon nanotubes, whose conductive properties have led to their proposal as molecular-scale circuitry wires. Leeds University researchers led by Bryan Hickey have developed a method to expose a nanotube's structure and electrical characteristics so that it can accurately be positioned on a surface. "With this technique we can make carbon nanotube devices of a complexity that is not achievable by most other means," says Leeds team member Chris Allen. Also speaking at the conference will be Delft University of Technology's Hans Mooij, who will talk about progress in the use of superconductors to greatly increase computer power by tapping the unique properties of quantum physics. He will detail work to make practical quantum computers using an approach to induce communication between three quantum bits (qubits), a milestone that would enormously help achieve scalability. Meanwhile, Raymond Simmons of the National Institute of Standards and Technology will present his own work with superconductor loops, which can function as qubits when placed in quantum superposition states. He will describe the first demonstration of data transmitted between two superconducting qubits, which proves that such elements can serve as a quantum-computing memory and a "bus" for qubits to communicate with one another.
Click Here to View Full Article

Thursday, March 27, 2008

Research: An Interview With Bjarne Stroustrup

An Interview With Bjarne Stroustrup
Dr. Dobb's Journal (03/27/08) Buchanan, James

C++ creator Bjarne Stroustrup says in an interview that next-generation programmers need a thorough education that covers training and understanding of algorithms, data structures, machine architecture, operating systems, and networking. "I think what should give is the idea that four years is enough to produce a well-rounded software developer: Let's aim to make a five- or six-year masters the first degree considered sufficient," he says. Before writing a software program, Stroustrup recommends that a programmer consult with peers and potential users to get a clear perspective of the problem domain, and then attempt to build a streamlined system to test the design's basic ideas. Stroustrup says he was inspired to create a first programming course to address what he perceived as a lack of basic skills for designing and implementing quality software among computer science students, such as the organization of code to ensure it is correct. "In my course I heavily emphasize structure, correctness, and define the purpose of the course as 'becoming able to produce code good enough for the use of others,'" he says. Stroustrup thinks programming can be vastly improved, especially by never losing sight of how important it is to produce correct, practical, and well-performing code. He describes a four-year undergraduate university course in computer science he helped design as having a fairly classical CS program with a slightly larger than usual software development project component in the first two years of study. Courses would cover hardware and software, discrete math, algorithms and data structures, operating and network systems, and programming languages, while a "programming studio" would be set up to expose students to group projects and project management.
Click Here to View Full Article

Wednesday, March 26, 2008

Software: DSLs Lead Development Paradigm Shift; domain-specific languages

DSLs Lead Development Paradigm Shift
eWeek (03/26/08) Taft, Darryl K.

The software development community needs to move beyond static, procedural languages and frameworks and embrace language-oriented programming, says ThoughtWorks senior application architect Neal Ford, who spoke at TheServerSide Java Symposium on March 26. Ford says domain-specific languages (DSLs) are designed for specific tasks. Ford says ThoughtWorks colleague Ola Bini envisions a future stack of basic programming tools consisting of a "stable language" at the bottom level, with dynamic languages built on top of that, and DSLs added at the top layer. Ford says that DSLs improve the software development process by "eliminating noise," and that programmers experienced in dynamic languages tend to build DSLs on top of their low-level language. "Using DSLs evolves the way we build and use frameworks, escalating our abstraction levels closer to the problem domains and farther from implementation details," Ford says.
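
As an illustration of the internal-DSL style Ford describes, where domain rules are layered on a host language so they read close to the problem domain, here is a small fluent builder; the discount domain and names are invented, not ThoughtWorks examples:

```typescript
// A tiny internal DSL: a fluent builder for order discounts, so the domain
// rule reads almost like prose.

interface Order { total: number; customerYears: number; }

class DiscountRule {
  private predicate: (o: Order) => boolean = () => true;
  private percent = 0;

  when(predicate: (o: Order) => boolean): DiscountRule {
    this.predicate = predicate;
    return this;
  }

  give(percent: number): DiscountRule {
    this.percent = percent;
    return this;
  }

  applyTo(order: Order): number {
    return this.predicate(order) ? order.total * (1 - this.percent / 100) : order.total;
  }
}

// The "sentence" in the DSL: loyal big spenders get 10 percent off.
const loyaltyDiscount = new DiscountRule()
  .when(o => o.customerYears >= 3 && o.total > 100)
  .give(10);

console.log(loyaltyDiscount.applyTo({ total: 200, customerYears: 5 })); // 180
```
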
Click Here to View Full Article

Tuesday, March 25, 2008

Security: NIST Unveils Tool to Foil Attacks via DNS

NIST Unveils Tool to Foil Attacks via DNS
Government Computer News (03/25/08) Campbell, Dan

National Institute of Standards and Technology (NIST) network researchers Scott Rose and Anastase Nakassis have written a paper that introduces a method federal systems administrators can use to protect their systems from attacks launched over the Domain Name System (DNS). Rose and Nakassis say that the DNS security extensions (DNSSEC), which are meant to protect DNS zone data, have an unintended side effect that enables an attack precursor known as "zone enumeration." Although zone enumeration is possible without DNSSEC, the traditional methods of performing it are often impractical because they use time-consuming or processor-intensive brute-force techniques that are often repelled by intrusion detection systems. Rose and Nakassis also note that there are several techniques that allow networks to realize the intended authentication and integrity benefits of DNSSEC while simultaneously "reducing DNS information leakage." Such techniques are important because the need to protect network operations with methods offered by DNSSEC will only increase as DNS becomes more and more important. In addition, the techniques could improve DNSSEC authentication and integrity protection, which would in turn protect DNS zones and stop attempts to compromise data.
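
The paper's specific countermeasures are not detailed in this summary, but the side effect itself comes from DNSSEC's NSEC records, which prove that a name does not exist by naming the next record in the zone; the conceptual sketch below (with a made-up zone and a lookup table standing in for real DNS queries) shows why that enables enumeration:

```typescript
// Conceptual sketch of NSEC "zone walking": each NSEC record points to the
// next owner name in the zone, so repeatedly following the chain lists every
// name without any brute force.

// Hypothetical NSEC chain for example.com (owner name -> next owner name).
const nsecChain: Record<string, string> = {
  "example.com": "alpha.example.com",
  "alpha.example.com": "mail.example.com",
  "mail.example.com": "www.example.com",
  "www.example.com": "example.com", // chain wraps around to the zone apex
};

function walkZone(apex: string): string[] {
  const names: string[] = [];
  let current = apex;
  do {
    names.push(current);
    current = nsecChain[current]; // in practice: query and read the NSEC "next" field
  } while (current !== apex);
  return names;
}

console.log(walkZone("example.com")); // every name in the zone
```
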
Click Here to View Full Article

Monday, March 24, 2008

Research: Back to Basics: Algorithms

Back to Basics: Algorithms
Computerworld (03/24/08) Vol. 42, No. 13, P. 30; Anthes, Gary

Superior algorithms are distinguished by speed, reliability, easy understanding and modifiability, efficient resource usage, and above all, elegance. As computers penetrated the business sector in the 1960s, the computing world was hit by the double blow of bugs--computer errors stemming from programmer errors--and sorting, which was required of virtually every major application. These challenges brought new recognition to the importance of algorithms among IT people, who realized that simple algorithms could be easily coded, debugged, and modified. But in many cases simple algorithms did not boast the most efficiency, so programmers devised algorithmic methods for evaluating the efficiency and overall superiority of algorithms. Bubblesort was an easily understandable but inefficient algorithm that read through the file to be sorted and looked successively at pairs of adjacent records to see if they needed to be swapped to be put in correct order, the idea being that in-sequence records would "bubble up" to the top until eventually the entire file was in sequence. A much more elegant improvement over Bubblesort was the Quicksort algorithm, which can take far less time to sort a file by selecting any element or "pivot" from the list, comparing every other element to the pivot in order to put things in correct order, and then repeating this process on successively smaller groups until the entire list is in sequence.
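
For concreteness, here are both algorithms as described above, sketched for illustration:

```typescript
// Bubblesort: repeatedly swap adjacent out-of-order pairs until a full pass
// makes no swaps; simple, but O(n^2) comparisons in the worst case.
function bubbleSort(a: number[]): number[] {
  const arr = [...a];
  let swapped = true;
  while (swapped) {
    swapped = false;
    for (let i = 0; i + 1 < arr.length; i++) {
      if (arr[i] > arr[i + 1]) {
        [arr[i], arr[i + 1]] = [arr[i + 1], arr[i]];
        swapped = true;
      }
    }
  }
  return arr;
}

// Quicksort: pick a pivot, partition the rest around it, and recurse on the
// successively smaller groups; O(n log n) comparisons on average.
function quickSort(a: number[]): number[] {
  if (a.length <= 1) return a;
  const [pivot, ...rest] = a;
  const smaller = rest.filter(x => x < pivot);
  const larger = rest.filter(x => x >= pivot);
  return [...quickSort(smaller), pivot, ...quickSort(larger)];
}

console.log(bubbleSort([5, 1, 4, 2, 8])); // [1, 2, 4, 5, 8]
console.log(quickSort([5, 1, 4, 2, 8]));  // [1, 2, 4, 5, 8]
```
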
Click Here to View Full Article

Security: Researchers Secure the Browser

Researchers Secure the Browser
eWeek (03/24/08) Vol. 25, No. 10, P. 16; Naraine, Ryan

Researchers at the University of Illinois at Urbana-Champaign are constructing Opus Palladianum (OP), a new Web browser designed to prevent hacker attacks by partitioning the browser into smaller subsystems and using simple and explicit communication between subsystems. "[The Web] has become a platform for hosting all kinds of important data and businesses, but unfortunately, [existing] browsers haven't evolved to deal with this change and that's why we have a big malware problem," says University of Illinois professor Samuel King, who conceived of OP. King says three unique security features will be employed to demonstrate the browser architecture design's utility. Those components include flexible security policies that accommodate the use of external plug-ins without making third-party developers responsible for security; formal techniques to show that the address bar displayed within the browser user interface always displays the proper address for the current Web page; and a browser-level information-flow tracking system that allows browser-based attacks to be dissected postmortem. OP currently comprises five main subsystems--the Web page subsystem, a network component, a storage component, a user-interface component, and a browser kernel--which all run within separate OS-level processes, King says. Communication between each subsystem and between processes, and interactions with the underlying operating system, are handled by the browser kernel. "The browser kernel implements message passing using OS-level pipes, and it maintains a mapping between subsystems and pipes," King says. He says the long-term goal is to devise a cross-platform WebKit version that will be distributed to the open-source community.
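
A rough in-process sketch of the browser-kernel idea (the real OP kernel uses separate OS-level processes and pipes, which this illustration does not): subsystems never talk to each other directly, and the kernel forwards messages only over channels it allows:

```typescript
// Every message goes through a kernel that knows which subsystem-to-subsystem
// channels are permitted; anything else is dropped.

type Subsystem = "webpage" | "network" | "storage" | "ui";

interface Message { from: Subsystem; to: Subsystem; payload: string; }

class BrowserKernel {
  // Whitelist of permitted channels (sender -> allowed receivers).
  private allowed: Record<Subsystem, Subsystem[]> = {
    webpage: ["network", "storage", "ui"],
    network: ["webpage"],
    storage: ["webpage"],
    ui: ["webpage"],
  };

  private handlers = new Map<Subsystem, (m: Message) => void>();

  register(name: Subsystem, handler: (m: Message) => void): void {
    this.handlers.set(name, handler);
  }

  send(msg: Message): void {
    if (!this.allowed[msg.from].includes(msg.to)) {
      console.warn(`kernel: blocked ${msg.from} -> ${msg.to}`);
      return;
    }
    this.handlers.get(msg.to)?.(msg);
  }
}

const kernel = new BrowserKernel();
kernel.register("network", m => console.log("network fetching:", m.payload));
kernel.send({ from: "webpage", to: "network", payload: "http://example.com/page" });
kernel.send({ from: "network", to: "storage", payload: "sneaky write" }); // blocked
```
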
Click Here to View Full Article

Friday, March 21, 2008

Security: Defending Laptops from Zombie Attacks

Defending Laptops from Zombie Attacks
Technology Review (03/21/08) Greene, Kate

Intel researchers have developed laptop-based security software that adjusts to how an individual uses the Internet, making the detection of malicious activity more dynamic and personalized. The software targets corporations that pass out laptops and mobile devices to workers, since IT departments typically install homogeneous security software on all their hardware, which partly explains why security breaches are so profuse, according to Intel Research Berkeley researcher Nina Taft. Most IT departments deploy security software with a component that analyzes the stream of Internet traffic flowing into and out of a computer, and that suggests infection when traffic exceeds a preset limit. However, this method can incorrectly target people who habitually send out large volumes of information while ignoring traffic that falls below the threshold but may harbor malevolent activity without the sender's knowledge. Intel researchers have devised algorithms capable of more subtle evaluations, including one that creates individualized traffic thresholds by monitoring a person's Internet use through standard statistical and machine-learning techniques, and another that assesses how people's Internet usage changes throughout the day. Another set of algorithms uses the same behavioral principles to study communication between laptops and other devices on the Internet to detect the presence of botnets. "I think the basic takeaway is, if you can be really precise in capturing user behavior, you can make the work of the attackers much harder," notes Taft. Georgia Institute of Technology professor Nick Feamster attributes the lack of application of the behavioral security strategy to laptops to the absence of an automated way to develop personalized rules.
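
Intel's algorithms are not published in this summary; the sketch below illustrates only the basic idea of a personalized threshold, flagging traffic that falls well outside a user's own historical pattern rather than a fixed company-wide limit. The history figures are invented:

```typescript
// Learn the mean and spread of a user's own outbound traffic and flag only
// unusually large deviations from it.

function personalThreshold(history: number[], k = 3): number {
  const mean = history.reduce((s, v) => s + v, 0) / history.length;
  const variance = history.reduce((s, v) => s + (v - mean) ** 2, 0) / history.length;
  return mean + k * Math.sqrt(variance); // flag traffic k standard deviations above normal
}

// Hypothetical outbound megabytes per hour for one user.
const userHistory = [12, 15, 9, 14, 11, 13, 10, 16];
const limit = personalThreshold(userHistory);

const currentTraffic = 48;
if (currentTraffic > limit) {
  console.log(`possible infection: ${currentTraffic} MB/h exceeds personal limit ${limit.toFixed(1)}`);
}
```
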
Click Here to View Full Article

Thursday, March 20, 2008

Web: Can We Fix the Web?

Can We Fix the Web?
InternetNews.com (03/20/08) Kerner, Sean Michael

During a keynote speech at the AjaxWorld conference, Douglas Crockford, creator of JavaScript Object Notation and a senior JavaScript architect at Yahoo, said the Web is in serious trouble, and the question is no longer whether we should fix it, but whether we can. Crockford said browsers were not designed to do "all of this Ajax stuff," and Ajax only works because people have found ways to make it work despite those limitations. "The number one problem with the Web is security," Crockford said. "The browser is not a safe programming environment. It is inherently insecure." Part of the problem is what Crockford called the "Turducken problem," or that people are trying to stuff the turkey with the duck. Crockford said the many programming languages on the Web can be built inside of each other, which can lead to problems. Crockford argued that these are not Web 2.0 problems, but were present in Netscape 2.0 in 1995. The security problems are based on three core items, Crockford said: JavaScript, the DOM (document object model), and cookies. Crockford said JavaScript's global object is the root cause of all cross-site scripting attacks, while the DOM is problematic because all nodes are linked to all other nodes, creating an insecure model, and cookies can be misused as tokens of authority. Crockford also blamed browser vendors for introducing new insecure JavaScript features, and said ultimately that JavaScript needs to be replaced with a secure language.
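
A tiny sketch of the shared-global-object problem Crockford points to: every script on a page, including an injected one, sees the same global object, so one script can silently replace a function another script trusts. The helper name and data are invented for illustration:

```typescript
// All scripts on a page share one global object, so a later script can
// overwrite what an earlier script installed there.

const page = window as unknown as Record<string, (data: string) => void>;

// The page's own code installs a helper on the global object.
page.sendToServer = (data) => console.log("posting to the site's own server:", data);

// A script injected through cross-site scripting shares that global object
// and can redefine the helper to leak data elsewhere.
page.sendToServer = (data) => console.log("sending to attacker instead:", data);

// The page later calls what it believes is still its own function.
page.sendToServer("session cookie");
```
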
Click Here to View Full Article

Wednesday, March 19, 2008

Research: How to Make Smarter Software

How to Make Smarter Software
Forbes (03/19/08) Hardy, Quentin

Numenta founder Jeffrey Hawkins says artificial intelligence has come up short because it has concentrated too heavily on the results of the brain's operations without focusing on the general way those outcomes occur. He is devoted to developing software modeled after the neocortex's architecture and functionality, in the hope that such an advance will eventually lead to software and hardware that can truly emulate human intelligence. In 2007 Numenta released to developers an open-source version of software oriented around "hierarchical temporal memory," a theory formulated by Hawkins about how the human brain deals with incoming data. Hawkins says the brain is structured into a hierarchy of neuronal columns that absorb basic sensory input and sort it into patterns organized around time and space. The initial patterns are passed to more neurons that aggregate the information and feed it to further aggregators, until the patterns become generalizations that lead to predictions about the future. The Numenta software is designed to mimic this arrangement by forming a hierarchy of software nodes that seek to recognize patterns. Unlike neural network software, the Numenta software has no fixed number of levels, because its topology and its use of time vary with the desired outcome. It is Hawkins' hope that users will learn how to apply the software to the construction of smarter devices.
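
A toy sketch of the layering idea (not Numenta's learning algorithm): each node memorizes the patterns it has seen and passes only a compact label upward, so higher levels work with increasingly abstract summaries:

```typescript
// Each node learns the inputs it has seen and reports a short label upward.

class PatternNode {
  private seen: string[] = [];

  // Return a label for the input, learning it if it is new.
  recognize(input: string): string {
    let index = this.seen.indexOf(input);
    if (index === -1) {
      this.seen.push(input);
      index = this.seen.length - 1;
    }
    return `p${index}`;
  }
}

// Two lower-level nodes feed one higher-level node.
const leftField = new PatternNode();
const rightField = new PatternNode();
const top = new PatternNode();

function observe(left: string, right: string): string {
  const summary = `${leftField.recognize(left)}+${rightField.recognize(right)}`;
  return top.recognize(summary); // the top node sees only combined labels
}

console.log(observe("edge", "edge"));   // p0: first combined pattern
console.log(observe("corner", "edge")); // p1: a new combination
console.log(observe("edge", "edge"));   // p0: recognized again
```
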
Click Here to View Full Article

Monday, March 17, 2008

Research: Communities and the Networks That Define Them

Communities and the Networks That Define Them
Dr. Dobb's Journal (03/17/08) Erickson, Jon

An algorithm capable of automatically identifying communities and their structures in different networks is the focus of a new paper by Weixiong Zhang of Washington University and Jianhua Ruan of the University of Texas at San Antonio. All disparate communities have networks that define their structure, Zhang says. In the natural division that forms a network's community structure, the vertices within each subnetwork are highly interconnected but only weakly connected to the rest of the network. Researchers tend to believe that each community may correspond to a fundamental functional unit. Zhang teamed up with Ruan to develop the algorithm, which surpasses similar algorithms in scalability and can detect communities at a finer scale and with higher accuracy. The algorithm has also been used to conduct genomics-related research.
Click Here to View Full Article

Security: Researchers Create Next-Generation Software to Identify Complex Cyber Network Attacks

Researchers Create Next-Generation Software to Identify Complex Cyber Network Attacks
George Mason University (03/17/08) Edgerly, Jennifer

Researchers at George Mason University's Center for Secure Information Systems have developed CAULDRON, software that can prevent successful cyber attacks by identifying possible vulnerabilities in an organization's network. To protect an organization's networks, it is necessary to understand not only individual system vulnerabilities, but also their interdependencies. "Currently, network administrators must rely on labor-intensive processes for tracking network configurations and vulnerabilities, which requires a great deal of expertise and is error prone because of the complexity, volume, and frequent changes in security data and network configurations," says university professor and center director Sushil Jajodia. "This new software is an automated tool that can analyze and visualize vulnerabilities and attack paths, encouraging 'what-if analysis.'" CAULDRON transforms raw security data into roadmaps that allow users to prepare for attacks, manage vulnerabilities, and maintain real-time situational awareness. CAULDRON can show all possible attack paths into a network, can provide informed risk analysis, and can analyze vulnerability dependencies. Jajodia says the software is applicable to almost any organization with a network and resources that need protecting. The Federal Aviation Administration recently installed CAULDRON in its Cyber Security Incident Response Center, helping it prioritize security problems, reveal previously unseen attack paths, and defend against large numbers of attack paths.
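
CAULDRON's internals are not described in this summary, but attack-graph tools generally chain reachability between hosts or vulnerable conditions; the sketch below enumerates attack paths to a critical asset over a made-up network:

```typescript
// Enumerate every attack path from an entry point to a target asset with a
// depth-first search over "which hosts can be reached next from here."

const reachableFrom: Record<string, string[]> = {
  internet: ["webserver"],
  webserver: ["appserver", "mailserver"],
  mailserver: ["appserver"],
  appserver: ["database"],
  database: [],
};

function attackPaths(start: string, target: string, path: string[] = []): string[][] {
  const newPath = [...path, start];
  if (start === target) return [newPath];
  return (reachableFrom[start] ?? [])
    .filter(next => !newPath.includes(next)) // avoid revisiting hosts
    .flatMap(next => attackPaths(next, target, newPath));
}

// Every distinct way an outside attacker can reach the database.
for (const path of attackPaths("internet", "database")) {
  console.log(path.join(" -> "));
}
// internet -> webserver -> appserver -> database
// internet -> webserver -> mailserver -> appserver -> database
```
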
Click Here to View Full Article

Friday, March 14, 2008

Security: When browsers attack

Defeating the Same Origin Policy part 1

March 14th, 2008

Posted by Nathan McFeters @ 7:27 am

One of the guiding principles of the Internet - the Same Origin Policy - secures your Web browsing experience from a variety of devastating attacks, including rogue Java applets. Is it time to be afraid?

READ FULL STORY

Web: Voting for More Than Just Either-Or

Voting for More Than Just Either-Or
MIT News (03/14/08) Chandler, David

MIT researchers are developing Selectricity, software that could make ranking systems as easy to use as traditional voting systems, creating results that would satisfy a greater portion of the population. Selectricity has been available online as a free service since last fall and is about to switch to an upgraded version with more advanced options. Using Selectricity, anyone can go to the Web site and set up a "Quickvote" in just a few seconds, and users anywhere can access the poll and vote, creating instant results. There is also an ultra-simple version that uses text messaging for voting by cell phone. Although the software is being used for simple tasks such as deciding where to go to dinner or when to hold a meeting, it is sophisticated enough to handle real elections. In February, a beta version of the upgraded software was used by a national student organization to elect its first board of directors, with each of the 16 campus chapters of the Students for Free Culture group receiving an equal vote to select five members for its governing board from a field of 13 candidates. In that election, the candidate who received the most first-place votes also received the most last- or near-last-place votes, meaning that under a traditional plurality vote the candidate would have won despite being unpopular with a majority of the voters.
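
Selectricity's exact tally methods are not spelled out in this summary; the sketch below uses a generic Borda-style count, with invented candidates and ballots, to show how ranking data can reject a polarizing plurality winner of the kind described above:

```typescript
// Borda-style count: every position on each ballot earns points, so a
// candidate with many first-place votes but many last-place votes can lose
// to a broadly acceptable one.

type Ballot = string[]; // candidates listed from most to least preferred

function bordaCount(ballots: Ballot[]): Map<string, number> {
  const scores = new Map<string, number>();
  for (const ballot of ballots) {
    ballot.forEach((candidate, rank) => {
      const points = ballot.length - 1 - rank; // last place earns 0
      scores.set(candidate, (scores.get(candidate) ?? 0) + points);
    });
  }
  return scores;
}

// "Ann" leads in first-place votes but is ranked last by everyone else.
const ballots: Ballot[] = [
  ["Ann", "Beth", "Carl"],
  ["Ann", "Carl", "Beth"],
  ["Ann", "Beth", "Carl"],
  ["Beth", "Carl", "Ann"],
  ["Beth", "Carl", "Ann"],
  ["Carl", "Beth", "Ann"],
  ["Carl", "Beth", "Ann"],
];

console.log(bordaCount(ballots)); // Beth (8) and Carl (7) outscore Ann (6)
```
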
Click Here to View Full Article

Wednesday, March 12, 2008

Software: Web Mashups Made Easy

Web Mashups Made Easy
Technology Review (03/12/08) Greene, Kate

Intel Research's Mash Maker project is working to make it possible for people to use their Web browsers to combine information from different sites. For example, someone looking for apartments on Craigslist could add information about nearby restaurants from Yelp, or overlay the apartment listings on a Google map. Mash Maker's goal is to allow people to create their own custom-made Web. "Right now, the Web is a collection of islands; each has its own information, but they aren't really interconnected and personalized for you," says Intel researcher Robert Ennals. "We're trying to move to where the Web is a single source of interconnected knowledge, presenting information that you want to see the way you want to see it." Other companies have similar projects in development. Last year, Microsoft introduced Popfly, a programming environment that makes it easy for nonexperts to build mashups. Yahoo Pipes is another project that allows people to combine data from a variety of sources. IBM is working on Lotus Mashups, a program designed to enable users to combine data from different business applications. "The Holy Grail is codeless programming," says Microsoft's John Montgomery. "We're all converging on this idea of end-user programming, which isn't really programming, coupled with community integration."
Click Here to View Full Article

Research: Algorithm Finds the Network - For Genes or the Internet

Algorithm Finds the Network - For Genes or the Internet
Washington University in St. Louis (03/12/08) Fitzpatrick, Tony

Washington University professor Weixiong Zhang and Ph.D. student Jianhua Ruan have developed an algorithm that automatically identifies communities and their structures in various networks. Zhang says many complex systems can be represented as networks, including the genetic networks he studies, social networks, and the Internet itself. A network's community structure reflects a natural division into subnetworks whose vertices are highly interconnected with each other but only weakly connected to the rest of the network. A community in a genetic network usually contains genes with similar functions, while a community on the Web often corresponds to Web pages with similar topics. The researchers say their algorithm is more scalable than existing algorithms and can detect communities at a finer scale with greater accuracy. In genomics, the algorithm could be used by researchers to better identify and understand communities of genes and their networks, and how they interact to cause disease. In computing, the algorithm can determine how people interact in social networks and how scientists collaborate in scientific research.
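
The Zhang-Ruan algorithm itself is not detailed here, but community-detection methods are commonly judged by Newman's modularity score (the fraction of edges inside communities minus the fraction expected if edges were placed at random); the sketch below computes that score for a made-up graph:

```typescript
// Modularity Q for an undirected graph and a candidate community assignment:
// Q = (1/2m) * sum over same-community node pairs of (A_ij - k_i*k_j / 2m).

type Edge = [string, string];

function modularity(graph: Edge[], community: Record<string, number>): number {
  const m = graph.length;
  const degree: Record<string, number> = {};
  for (const [a, b] of graph) {
    degree[a] = (degree[a] ?? 0) + 1;
    degree[b] = (degree[b] ?? 0) + 1;
  }
  let q = 0;
  const nodes = Object.keys(community);
  for (const i of nodes) {
    for (const j of nodes) {
      if (community[i] !== community[j]) continue;
      const connected = graph.some(([a, b]) => (a === i && b === j) || (a === j && b === i)) ? 1 : 0;
      q += connected - (degree[i] * degree[j]) / (2 * m);
    }
  }
  return q / (2 * m);
}

// Two made-up triangles joined by a single edge; splitting them scores well.
const edges: Edge[] = [["a", "b"], ["b", "c"], ["a", "c"], ["c", "d"], ["d", "e"], ["e", "f"], ["d", "f"]];
console.log(modularity(edges, { a: 0, b: 0, c: 0, d: 1, e: 1, f: 1 }).toFixed(3));
```
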
Click Here to View Full Article

Tuesday, March 11, 2008

Web: Study: Digital Universe and Its Impact Bigger Than We Thought

Study: Digital Universe and Its Impact Bigger Than We Thought
Computerworld (03/11/08) Mearian, Lucas

In three years' time there will be a tenfold increase in the 180 exabytes of electronic data created and stored in 2006, according to a white paper from IDC. The report estimates that electronic receptacles for that data are expanding 50 percent faster than the data itself, and that information will be stored in over 20 quadrillion containers by 2011, producing a massive management conundrum for consumers as well as businesses. The bulk of the data consists of digital "shadows" such as surveillance photos, Web search histories, financial transaction journals, mailing lists, and so on. IDC chief research officer John F. Gantz says that a great deal of the data being created by consumers outside of enterprises will require enterprise protection, as 85 percent of that information sooner or later goes through a corporate asset. IDC acknowledges that it underestimated its digital figures for 2007, noting that the actual total--281 exabytes--is 10 percent greater than it projected in its first "Digital Universe" study, owing to more rapid growth in digital cameras, televisions, and data, and to an improved understanding of data replication. IT organizations will need to accommodate the digital universe's rapidly increasing size and sophistication by transforming their existing relationships with business units; driving the development of organization-wide policies for information governance such as security and retention of information, data access, and compliance; and expediting the adoption of new tools and standards, from storage optimization, unstructured data search, and database analytics to virtualization, management, and security tools.
Click Here to View Full Article

Monday, March 10, 2008

Research: Language of a Fly Proves Surprising; research may improve neural networks

Language of a Fly Proves Surprising
Los Alamos National Laboratory News (03/10/08) Rickman, James E.

Researchers have developed a way to view the world through the eyes of a fly and partially decode the insect's reactions to changes in the world around it. The research has changed scientists' understanding of neural networks and could provide the basis for intelligent computers that mimic biological processes. The researchers used tiny electrodes to tap into motion-sensitive neurons in the visual system of a blowfly. The fly was harnessed into a turntable-like mechanism that mimicked the kind of flight it might undergo when evading a predator or chasing another fly. The neurons' firing patterns were mapped with a binary code of ones and zeroes. The researchers found that the impulses were like a primitive, but very regular "language," with the neurons firing at precise times depending on what the fly's visual sensors were trying to tell it about its visual stimulus. Previous research showed irregular spikes in the neurons' firing, but this is now believed to be a way to conserve energy when there is little change in the fly's surroundings. The simulated flight creates significant change requiring regular neuron firing to process the information. "This may be one of the main reasons why artificial neural networks do not perform anywhere comparable to a mammalian visual brain," says Los Alamos physicist Ilya Nemenman, a member of the research team. The research could improve the analyses of satellite images and facial-pattern recognition.
Click Here to View Full Article

Saturday, March 1, 2008

Security: Privacy-aware Role Based Access Control (P-RBAC)

Privacy-aware Role Based Access Control
Qun Ni, Purdue University, USA, ni@cs.purdue.edu
Alberto Trombetta, Insubria University, Italy, alberto.trombetta@uninsubria.it
Elisa Bertino, Purdue University, USA, bertino@cs.purdue.edu
Jorge Lobo, IBM T.J. Watson, USA, jlobo@us.ibm.com

SACMAT’07, June 20-22, 2007, Sophia Antipolis, France.


1. INTRODUCTION
Privacy is today a key issue in information technology and has received increasing attention from consumers, companies, researchers and legislators. Legislative acts, such as the Health Insurance Portability and Accountability Act (HIPAA) [25] for healthcare and the Gramm-Leach-Bliley Act (GLBA) [26] for financial institutions, require enterprises to protect the privacy of their customers. Although enterprises have adopted various strategies to protect customer privacy and to communicate their privacy policies to customers, ... in these approaches there are no systematic mechanisms that describe how consumer personal data is actually handled after it is collected. Privacy protection can only be achieved by enforcing privacy policies within an enterprise’s online and offline data processing systems. Otherwise, enterprises’ actual practices might intentionally or unintentionally violate the privacy policies published at their websites.


Conventional access models, such as Mandatory Access Control (MAC), Discretionary Access Control (DAC), and Role Based Access Control (RBAC) [11, 22], are not designed to enforce privacy policies and barely meet privacy protection requirements [12], particularly purpose binding (i.e., data collected for one purpose should not be used for another purpose without user consent), conditions, and obligations. The significance of purposes, conditions, and obligations originates from the OECD Guidelines [19] on the Protection of Privacy and Transborder Flows of Personal Data, current privacy laws in the United States, and the public privacy policies of some well-known organizations. The OECD guidelines are, to the best of our knowledge, the most well-known set of private information protection principles, on which many other guidelines, data-protection laws, and public privacy policies are based. Purposes are directly applied in the OECD Data Quality Principle, Purpose Specification Principle, and Use Limitation Principle. Purposes are also widely used for specifying privacy rules in legislative acts and actual public policies. HIPAA [25] rules clearly state purposes. The majority of public privacy documents posted at well-known sites also specify purposes.

[Page 41]


ACM Digital Library Article (Member Access Only)
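
A minimal sketch of the purpose-binding idea from the excerpt above: each permission on personal data carries the purpose the data was collected for, and a request is granted only when its stated purpose matches. The roles, data types, and purposes below are invented, and the full P-RBAC model's conditions and obligations are omitted:

```typescript
// Purpose-aware permission check: access requires role, action, data type,
// and the purpose the data subject consented to all to match.

interface Permission {
  role: string;
  action: string;
  dataType: string;
  purpose: string; // the purpose the data was collected for
}

const permissions: Permission[] = [
  { role: "nurse", action: "read", dataType: "medical-record", purpose: "treatment" },
  { role: "analyst", action: "read", dataType: "medical-record", purpose: "research" },
];

function isAllowed(role: string, action: string, dataType: string, purpose: string): boolean {
  return permissions.some(
    p => p.role === role && p.action === action && p.dataType === dataType && p.purpose === purpose,
  );
}

console.log(isAllowed("nurse", "read", "medical-record", "treatment")); // true
console.log(isAllowed("nurse", "read", "medical-record", "marketing")); // false: purpose not consented to
```
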

Security: Role-based access control (RBAC) fundamentals

An Extended RBAC Profile of XACML
Diala Abi Haidar 1,2, Nora Cuppens-Boulahia 1, Frédéric Cuppens 1, Hervé Debar 2
1 ENST Bretagne, 2 rue de la Châtaigneraie, 35512 Cesson-Sévigné Cedex, France
2 France Telecom R&D Caen, 42 rue des Coutures BP 6243, 14066 Caen, France


SWS’06, November 3, 2006, Alexandria, Virginia, USA.


The basic concept of the RBAC model is that users are assigned to roles, permissions are assigned to roles and users acquire permissions by being members of roles. The user-role assignment can be a many-to-many relation in the sense that a user can be assigned to many roles and a role can have many users. Similarly, the permission-role assignment is also a many-to-many relation. The RBAC model is organized in four levels [24], each including the requirements of the basic RBAC: the flat (or core) RBAC, the hierarchical RBAC that adds requirements for supporting role hierarchies, and the constrained RBAC that adds constraints on the hierarchical RBAC. The constraints may be associated with the user-role assignment (for static separation of duty) or with the activation of roles within user sessions (for dynamic separation of duty). The last level is the symmetric RBAC (also called consolidated) that adds a requirement for permission-role review. This is essential in any authorization management to identify and review the permissions assignment, i.e. the relation between permissions and roles.


The main benefit of this model is the ease of administration of security policies and its scalability. When a user moves inside an organization and has another function, the only thing the administrator needs to do is to revoke the existing user-role assignment and assign her a new role. There is no need to revoke the authorizations she had before and she will be granted new authorizations assigned to her new role. Adding to that, the role hierarchy defined in this model, where a given role can include all the permissions of another role, is a way of having a well structured access control that is the mirror of the organization structure. Finally the RBAC model supports the delegation of access permissions between roles. A role can delegate its role or part of its role to another role [12].
[pages 14-15]

ACM Digital Library Article (Member Access Only)
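
A minimal sketch of the flat RBAC model described in the excerpt above: users are assigned roles, permissions are assigned to roles, and a user holds a permission only through some role. The users, roles, and permissions are invented, and hierarchies, constraints, and sessions are omitted:

```typescript
// Flat RBAC: many-to-many user-role and permission-role assignments, with a
// permission check that goes through roles rather than individual users.

const userRoles: Record<string, string[]> = {
  alice: ["teller", "auditor"],
  bob: ["teller"],
};

const rolePermissions: Record<string, string[]> = {
  teller: ["create-credit-account", "view-balance"],
  auditor: ["view-balance", "review-transactions"],
};

function hasPermission(user: string, permission: string): boolean {
  return (userRoles[user] ?? []).some(role =>
    (rolePermissions[role] ?? []).includes(permission),
  );
}

console.log(hasPermission("alice", "review-transactions")); // true, via the auditor role
console.log(hasPermission("bob", "review-transactions"));   // false

// Moving bob to a new job is just a role reassignment, not a permission audit.
userRoles.bob = ["auditor"];
console.log(hasPermission("bob", "review-transactions"));   // now true
```
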

Security: Role-based access control (RBAC) defined

Role-based access control
From Wikipedia, the free encyclopedia

In computer systems security, role-based access control (RBAC) [1] [2] is an approach to restricting system access to authorized users. It is a newer alternative approach to mandatory access control (MAC) and discretionary access control (DAC). RBAC is a policy neutral and flexible access control technology sufficiently powerful to simulate Discretionary Access Control (DAC) [3] and Mandatory Access Control (MAC) [4]. Prior to the development of RBAC, MAC and DAC were considered to be the only known models for access control: if a model was not MAC, it was considered to be a DAC model, and vice versa. Research in the late '90s demonstrated that RBAC falls in neither category. [citation needed]

Within an organization, roles are created for various job functions. The permissions to perform certain operations ('permissions') are assigned to specific roles. Members of staff (or other system users) are assigned particular roles, and through those role assignments acquire the permissions to perform particular system functions. Unlike context-based access control (CBAC), RBAC does not look at the message context (such as where the connection was started from).

Since users are not assigned permissions directly, but only acquire them through their role (or roles), management of individual user rights becomes a matter of simply assigning the appropriate roles to the user, which simplifies common operations such as adding a user, or changing a user's department.

RBAC differs from access control lists (ACLs) used in traditional discretionary access control systems in that it assigns permissions to specific operations with meaning in the organization, rather than to low-level data objects. For example, an access control list could be used to grant or deny write access to a particular system file, but it would not say in what ways that file could be changed. In an RBAC-based system an operation might be to create a 'credit account' transaction in a financial application or to populate a 'blood sugar level test' record in a medical application. The assignment of permission to perform a particular operation is meaningful, because the operations are fine grained and themselves have meaning within the application.
http://en.wikipedia.org/wiki/RBAC

Research: From Palmtops to Brain Cells

From Palmtops to Brain Cells
Economist Technology Quarterly (03/08) Vol. 386, No. 8570, P. 31

Palm Pilot creator and Numenta founder Jeff Hawkins aspires to make computers work in a manner that more closely resembles the human brain through his theory of hierarchical temporal memory, which posits that the brain processes information using hierarchically organized pattern-recognition "nodes." Frequently observed patterns are identified and learned over time by nodes at each hierarchical level, and when an established pattern triggers a node, it sends a signal to the next level up in the hierarchy. As multiple signals ascend the hierarchy, nodes at higher levels learn to recognize and anticipate more sophisticated patterns, and predictions are passed down the hierarchy so that disparities between predicted and observed patterns can be identified. The Numenta Platform for Intelligent Computing is an expression of Hawkins' model in software, and Hawkins hopes the free toolkit will be applied toward the development of software that functions more like the human brain. Such software could find use in a diverse array of fields that includes robotics, video games, data analysis, and computer vision. New York University computer scientist Yann LeCun says enthusiasm for the creation of intelligent machines has waned among the machine-learning community over the past decade, and Hawkins' work is rekindling interest in the concept among younger researchers. Although he admires Hawkins' intuition, University of Toronto professor Geoffrey Hinton thinks Hawkins is underestimating the inherent difficulty of creating algorithms capable of mimicking intelligence.
Click Here to View Full Article
