Showing posts with label development. Show all posts

Wednesday, April 18, 2012

Blog: New Julia Language Seeks to Be the C for Scientists

New Julia Language Seeks to Be the C for Scientists
InfoWorld (04/18/12) Paul Krill

Massachusetts Institute of Technology (MIT) researchers have developed Julia, a programming language designed for building technical applications. Julia has already been used for image analysis and linear algebra research. MIT developer Stefan Karpinski notes that Julia is a dynamic language, which he says makes it easier to program because it has a very simple programming model. "One of our goals explicitly is to have sufficiently good performance in Julia that you'd never have to drop down into C," Karpinski adds. Julia is also designed for cloud computing and parallelism, according to the Julia Web page, and it provides a simpler model for building large parallel applications via a global distributed address space, Karpinski says. Julia could also be well suited to predictive analysis, modeling, and graph analysis problems. "Julia's LLVM-based just-in-time compiler, combined with the language's design, allows it to approach and often match the performance of C and C++," according to the Julia Web page.

Monday, April 16, 2012

Blog: Fast Data hits the Big Data fast lane

Fast Data hits the Big Data fast lane

By Andrew Brust | April 16, 2012, 6:00am PDT
Summary: Fast Data, long used in large enterprises for highly specialized needs, has become more affordable and available to the mainstream, just when corporations need it most.
This guest post comes courtesy of Tony Baer’s OnStrategies blog. Tony is a principal analyst at Ovum.

By Tony Baer

Of the 3 “V’s” of Big Data – volume, variety, velocity (we’d add “Value” as the 4th V) – velocity has been the unsung ‘V.’ With the spotlight on Hadoop, the popular image of Big Data is large petabyte data stores of unstructured data (which are the first two V’s). While Big Data has been thought of as large stores of data at rest, it can also be about data in motion.
“Fast Data” refers to processes that require lower latencies than would otherwise be possible with optimized disk-based storage. Fast Data is not a single technology, but a spectrum of approaches that process data that might or might not be stored. It could encompass event processing, in-memory databases, or hybrid data stores that optimize cache with disk.

Tuesday, April 3, 2012

Blog: Programming Computers to Help Computer Programmers

Programming Computers to Help Computer Programmers
Rice University (04/03/12) Jade Boyd

Computer scientists from Rice University will participate in a project to create intelligent software agents that help people write code faster and with fewer errors. The Rice team will focus on robotic applications and how to verify that synthetic, computer-generated code is safe and effective, as part of the effort to develop automated program-synthesis tools for a variety of uses. "Programming is now done by experts only, and this needs to change if we are to use robots as helpers for humans," says Rice professor Lydia Kavraki. She also stresses that safety is critical. "You can only have robots help humans in a task--any task, whether mundane, dangerous, precise, or expensive--if you can guarantee that the behavior of the robot is going to be the expected one." The U.S. National Science Foundation is providing a $10 million grant to fund the five-year initiative, which is based at the University of Pennsylvania. Computer scientists at Rice and Penn have proposed a grand challenge robotic scenario of providing hospital staff with an automated program-synthesis tool for programming mobile robots to go from room to room, turn off lights, distribute medications, and remove medical waste.

Wednesday, March 28, 2012

Blog: Google Launches Go Programming Language 1.0

Google Launches Go Programming Language 1.0
eWeek (03/28/12) Darryl K. Taft

Google has released version 1.0 of its Go programming language, which was initially introduced as an experimental language in 2009. Google has described Go as an attempt to combine the development speed of working in a dynamic language such as Python with the performance and safety of a compiled language such as C or C++. "We're announcing Go version 1, or Go 1 for short, which defines a language and a set of core libraries to provide a stable foundation for creating reliable products, projects, and publications," says Google's Andrew Gerrand. He notes that Go 1 is the first release of Go to be available as supported binary distributions, for Linux, FreeBSD, Mac OS X, and Windows. Stability for users was the driving motivation for Go 1, and much of the work needed to bring programs up to the Go 1 standard can be automated with the go fix tool. A complete list of changes to the language and the standard library, documented in the Go 1 release notes, will be an essential reference for programmers migrating code from earlier versions of Go. There is also a new release of the Google App Engine SDK.

Tuesday, March 27, 2012

Blog: Google Working on Advanced Web Engineering

Google Working on Advanced Web Engineering
InfoWorld (03/27/12) Joab Jackson

Google is developing several advanced programming technologies to ease complex Web application development. "We're getting to the place where the Web is turning into a runtime integration platform for real components," says Google researcher Alex Russell. He says one major shortcoming of the Web is that technologies do not have a common component model, which slows code testing and reuse. Google wants to introduce low-level control elements without making the Web stack more confusing for novices. Google's efforts include creating a unified component model, adding classes to JavaScript, and creating a new language for Web applications. By developing a unified component model for Web technologies, Google is setting the stage for developers to "create new instances of an element and do things with it," Russell says. Google engineers also are developing a proposal to add classes to the next version of JavaScript. "We're getting to the place where we're adding shared language for things we're already doing in the platform itself," Russell says. Google also is developing a new language called Dart, which aims to provide an easy way to create small Web applications while providing the support for large, complex applications as well, says Google's Dan Rubel.

Thursday, March 15, 2012

Blog: 'Big Data' Emerges as Key Theme at South by Southwest Interactive

'Big Data' Emerges as Key Theme at South by Southwest Interactive
Chronicle of Higher Education (03/15/12) Jeffrey R. Young

Several panels and speakers at this year's South by Southwest Interactive festival discussed the growing ability to use data-mining techniques to analyze big data to shape political campaigns, advertising, and education. For example, panelist and Microsoft researcher Jaron Lanier says companies that rely on selling information about their users' behavior to advertisers should find a way to compensate people for their posts. A panel on education discussed the potential of Twitter and Facebook to better connect with students and detect signs that students might be struggling with certain subjects. "We need to be looking at engagement in this new spectrum, and we haven't," says South Dakota State University social-media researcher Greg Heiberger. Some panels examined the role of big data in the latest presidential campaigns. Although recent presidential campaigns have focused on demographic subgroups, future campaigns may design their messages even more narrowly. "They’re actually going to try targeting groups of individuals so that political campaigns become about data mining" rather than any kind of broad policy message, says University of Texas at Dallas professor David Parry.

Wednesday, March 14, 2012

Blog: Hopkins Researchers Aim to Uncover Which Mobile Health Applications Work

Hopkins Researchers Aim to Uncover Which Mobile Health Applications Work
Baltimore Sun (03/14/12) Meredith Cohn

Johns Hopkins University has 49 mobile health studies underway around the world as part of its Global mHealth Initiative. The initiative aims to evaluate which mobile strategies can aid doctors, community health workers, and consumers in ways equal to traditional methods. Pew Internet & American Life Project's Susannah Fox notes that more than 80 percent of Internet users have looked online for health information. Many of the 40,000 applications already available have practical purposes, such as helping patients adhere to drug regimens, helping people change harmful behaviors, and aiding in weight loss through texts about specific goals and behaviors. There also are pill bottles that send text messages when a person forgets to take their medicine. Meanwhile, mHealth researchers have developed software to help educate medical students, doctors, and other workers about how to care for burn victims. The researchers also have developed apps to train health workers caring for those with HIV and AIDS and to screen and support victims of domestic abuse. "What they all have in common is they increase how often individuals think about their health," says mHealth director Alain B. Labrique. "There is evidence that suggests some apps can have an impact."

Tuesday, March 6, 2012

Blog: W3C CEO Calls HTML5 as Transformative as Early Web

W3C CEO Calls HTML5 as Transformative as Early Web
Computerworld Canada (03/06/12) Shane Schick

World Wide Web Consortium CEO Jeff Jaffe says HTML5 will be among the most disruptive elements to hit organizations since the early days of the Internet. "We’re about to experience a generational change in Web technology, and just as the Web transformed every business, [HTML5] will lead to another transformation," Jaffe says. HTML5 features cross-browser capability, improved data integration, and a better way of handling video. Jaffe says HTML5 makes Web pages "more beautiful [and] intelligent," and also provides for improved accessibility for disabled users. “It won’t really be a standard until 2014, but in the Web ecosystem, nobody waits,” he says. “They’ll make minor adjustments once the standard is done.” For example, TeamLab recently launched the TeamLab Document Editor, an online word processing program. Document Editor uses Canvas, a part of HTML5 that allows for dynamic, scriptable rendering of two-dimensional shapes and bitmap images. Jaffe says HTML5 could benefit a range of industries, including retail, air travel, and the automotive industry.

Wednesday, December 28, 2011

Blog: Five Open Source Technologies for 2012

Five Open Source Technologies for 2012
IDG News Service (12/28/11) Joab Jackson

Five open source projects could become the basis for new businesses and industries in 2012. Nginx, a Web server program, could become more popular due to its ability to easily handle high-volume traffic. Nginx is already used on highly trafficked Web sites, and the next release, due in 2012, will be better suited to shared hosting environments. The OpenStack cloud computing platform has gained support from several technology firms due to its scalability. "We're not talking about [using OpenStack to run a] cloud of 100 servers or even 1,000 servers, but tens of thousands of servers," says OpenStack Project Policy Board's Jonathan Bryce. Stig was designed for the unique workloads of social networking sites, according to its developers. The data store's architecture allows for inferential searching, enabling users and applications to look for connections between disparate pieces of information. Linux Mint was designed specifically for users who want a desktop operating system and do not want to learn more about how Linux works. Linux Mint is now the fourth most popular desktop operating system in the world. GlusterFS is one of the fastest growing storage software systems on the market, with downloads up 300 percent in the last year.

Tuesday, December 6, 2011

Blog: System Would Monitor Feds for Signs They're 'Breaking Bad'

System Would Monitor Feds for Signs They're 'Breaking Bad'
Government Computer News (12/06/11) Kevin McCaney

Georgia Tech researchers, in collaboration with researchers at Oregon State University, the University of Massachusetts, and Carnegie Mellon University, are developing the Proactive Discovery of Insider Threats Using Graph Analysis and Learning (PRODIGAL) system. PRODIGAL is designed to scan up to 250 million text messages, emails, and file transfers to identify insider threats--employees who are about to turn against the organization. The system will integrate graph processing, anomaly detection, and relational machine learning to create a prototype Anomaly Detection at Multiple Scales system. PRODIGAL, which initially would be used to monitor communications in civilian, government, and military organizations in which employees have agreed to be monitored, is intended to identify rogue individuals, according to the researchers. "Our goal is to develop a system that will provide analysts for the first time a very short, ranked list of unexplained events that should be further investigated," says Georgia Tech professor David Bader.

Friday, November 11, 2011

Blog: HTML5: A Look Behind the Technology Changing the Web

HTML5: A Look Behind the Technology Changing the Web
Wall Street Journal (11/11/11) Don Clark

HTML5 is catching on as the online community embraces it. The Web standard allows data to be stored on a user's computer or mobile device so that Web apps can function without an Internet link. HTML5 also enables Web pages to boast jazzier images and effects, with objects that move on the page and respond to cursor movements. Audio plays without a plug-in under HTML5, and interactive three-dimensional effects can be created using a computer's graphics processor via WebGL technology. In addition, video can be embedded in a Web page without a plug-in, and interactive games can run in a Web browser alone, without other software or plug-ins. Silicon Valley investor Roger McNamee projects that HTML5 will enable artists, media firms, and advertisers to differentiate their Web offerings in ways that were previously impractical. Binvisions.com reports that about one-third of the 100 most popular Web sites used HTML5 in the quarter that ended in September. Google, Microsoft, the Mozilla Foundation, and Opera Software are adding momentum to HTML5 by building support for the standard into their latest Web browsers.

Monday, October 10, 2011

Blog: Google Launches Dart as a JavaScript Killer

Google Launches Dart as a JavaScript Killer
IDG News Service (10/10/11) Joab Jackson

Google announced the launch of a preview version of Dart, an object-oriented Web programming language with capabilities that resemble those of JavaScript but that also addresses some of JavaScript's scalability and organizational shortcomings. Google software engineer Lars Bak describes Dart as "a structured yet flexible language for Web programming." Dart is designed both for quickly cobbling together small projects and for developing larger-scale Web applications. Programmers can declare variables without specifying a data type, or add type annotations where they want them. The preview version includes a compiler, a virtual machine, and a set of basic libraries. Initially, programmers will have to compile their Dart creations to JavaScript, using a tool included in the Dart package, to get them to run on browsers. However, Google would like future Web browsers to include a native Dart virtual machine for running Dart programs.
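The optional typing described above lets the same code start out untyped and gain annotations as it grows. A minimal sketch of that idea, written in TypeScript as an analogy rather than in Dart itself (function names are invented for illustration):

```typescript
// The same function in an untyped style (quick scripting, any value
// accepted) and a typed style (annotations let tools catch mistakes
// before the program runs). TypeScript stands in for Dart here.

// Untyped style: the parameter and result are left open.
function greetLoose(name: any): any {
  return "Hello, " + name;
}

// Typed style: annotations document and enforce the contract.
function greetStrict(name: string): string {
  return "Hello, " + name;
}

console.log(greetLoose("Dart"));  // Hello, Dart
console.log(greetStrict("Dart")); // Hello, Dart
```

Both styles produce the same runtime behavior; the annotations only change what tooling can check ahead of time, which is the scalability benefit the paragraph refers to.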

Thursday, September 15, 2011

Blog: Intel Code Lights Road to Many-Core Future

Intel Code Lights Road to Many-Core Future
EE Times (09/15/11) Rick Merritt

Intel's release of open source code for a data-parallel version of JavaScript seeks to help mainstream programmers who use scripting languages tap the power of multicore processors. Intel's Justin Rattner says in an interview that there will be multiple programming models, and Parallel JS encompasses one such model. The language enhances performance for data-intensive, browser-based apps such as photo and video editing and three-dimensional gaming running on Intel chips. Rattner describes Parallel JS as "a pretty important step that gets us beyond the prevailing view that once you are beyond a few cores, multicore chips are only for technical apps." A later iteration of Parallel JS will also exploit the graphics cores incorporated into Intel's latest processors. In addition, Intel is working on ways to enhance modern data-parallel tools that run general-purpose programs on graphics processors, and those tools could be issued next year, Rattner says. He notes that beyond that, data-parallel methods require a more basic change to boost power efficiency by becoming more asynchronous.
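The data-parallel model described above centers on applying a side-effect-free kernel function across every element of an array, which is what lets a runtime farm the work out to multiple cores. The sketch below is a toy, sequential stand-in written in TypeScript; `ParallelArraySketch` and its methods are invented for illustration and are not Intel's actual API.

```typescript
// Toy stand-in for a data-parallel array API in the spirit of Parallel JS.
// A real implementation could run the kernel on many elements at once;
// here map simply loops, to show the programming model only.
class ParallelArraySketch {
  constructor(private readonly data: readonly number[]) {}

  // The kernel must be free of side effects: that independence between
  // elements is what makes concurrent execution safe.
  map(kernel: (x: number) => number): ParallelArraySketch {
    return new ParallelArraySketch(this.data.map(kernel));
  }

  toArray(): number[] {
    return [...this.data];
  }
}

// Example: brighten pixel values, the kind of photo-editing kernel
// the article cites as a target workload.
const pixels = new ParallelArraySketch([10, 20, 30]);
const brightened = pixels.map((p) => Math.min(255, p * 2)).toArray();
console.log(brightened); // [ 20, 40, 60 ]
```

Because each call of the kernel touches only its own element, the same code could in principle be dispatched across cores or graphics units without changing the program's meaning.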

Friday, September 9, 2011

Blog: Google to Unveil 'Dart' Programming Language

Google to Unveil 'Dart' Programming Language
eWeek (09/09/11) Darryl K. Taft

Google plans to introduce a new programming language called Dart at the upcoming Goto conference. Dart is described as a structured Web programming language, and Google engineers Lars Bak and Gilad Bracha are scheduled to present it at Goto, which takes place Oct. 10-12 in Aarhus, Denmark. Bracha is the creator of the Newspeak programming language, co-author of the Java Language Specification, and a researcher in the area of object-oriented programming languages. Bak has designed and implemented object-oriented virtual machines, and has worked on Beta, Self, Strongtalk, Sun's HotSpot, OOVM Smalltalk, and Google's V8 engine for the Chrome browser. In 2009, Google introduced the experimental language Go in an attempt to combine the development speed of working in a dynamic language, such as Python, with the performance and safety of a compiled language such as C or C++.

Tuesday, July 5, 2011

Blog: A Futures Market for Computer Security

A Futures Market for Computer Security
Technology Review (07/05/11) Brian Krebs

Information security researchers from academia, industry, and the U.S. intelligence community are developing a pilot prediction market intended to forecast major information security incidents before they occur, with the aim of supplying actionable data, says Greg Shannon of Carnegie Mellon University's Software Engineering Institute. "If you're Verizon, and you're trying to pre-position resources, you might want to have some visibility over the horizon about the projected prevalence of mobile malware," he says. "That's something they'd like to have an informed opinion about by leveraging the wisdom of the security community." Consensus Point CEO Linda Rebrovick says the project's objective is to draw a network of approximately 250 experts. Prediction markets have a substantial inherent bias--respondents to questions are not surveyed randomly--but there is also an incentive for respondents to answer only those queries they feel confident in answering accurately. "People tend to speak up only when they're reasonably sure they know the answer," says Consensus Point chief scientist Robin Hanson. Even lukewarm responses to questions can be useful, notes Dan Geer, chief information security officer at In-Q-Tel, the U.S. Central Intelligence Agency's venture capital arm.


Thursday, June 30, 2011

Blog: UCL Researchers Develop 'Darwinian' Software to Test Car Computers

UCL Researchers Develop 'Darwinian' Software to Test Car Computers
Science Business (06/30/11)

Researchers from University College London's Center for Research on Evolution, Search, and Testing are collaborating with a team from Berner & Mattner on improving search-based testing techniques. The researchers are using Darwinian evolution to breed scenarios for testing automotive software. The car-testing scenarios then compete with each other in a virtual world. The idea is to breed the strongest and most demanding testing scenarios for the task of reading through the up to 10 million lines of software code that modern cars now contain, to make sure everything in the vehicle works correctly. "Search-based testing techniques have the potential to fully automate testing of embedded systems," says Berner & Mattner's Joachim Wegener. "This will allow significant cost savings and increased product quality." Further development of the evolutionary testing techniques would enable their use in industrial practice.


Thursday, June 23, 2011

Blog: CERN Experiments Generating One Petabyte of Data Every Second

CERN Experiments Generating One Petabyte of Data Every Second
V3.co.uk (06/23/11) Dan Worth

CERN researchers generate a petabyte of data every second as they work to discover the origins of the universe by smashing particles together at close to the speed of light. However, the researchers, led by Francois Briard, store only about 25 petabytes every year because they use filters to save just the results they are interested in. "To analyze this amount of data you need the equivalent of 100,000 of the world's fastest PC processors," says CERN's Jean-Michel Jouanigot. "CERN provides around 20 percent of this capability in our data centers, but it's not enough to handle this data." The researchers worked with the European Commission to develop the Grid, which provides access to computing resources from around the world. CERN draws on data center capacity from 11 different providers on the Grid, including providers in the United States, Canada, Italy, France, and Britain. The data comes from four detectors on the Large Hadron Collider in which the particle collisions are monitored, which transmit data at 320 Mbps, 100 Mbps, 220 Mbps, and 500 Mbps, respectively, to the CERN computer center.


Tuesday, June 21, 2011

Blog: Carnegie Mellon Methods Keep Bugs Out of Software for Self-Driving Cars

Carnegie Mellon Methods Keep Bugs Out of Software for Self-Driving Cars
Carnegie Mellon University (06/21/11) Byron Spice

Carnegie Mellon University (CMU) researchers have developed a method to verify the safety of driver assistance technologies, such as adaptive cruise control and automatic braking. The researchers developed a model of a car-control system in which computers and sensors in each car combine to control acceleration, braking, and lane changes, and used mathematical algorithms to formally verify that the system would keep cars from crashing into each other. "The system we created is in many ways one of the most complicated cyber-physical systems that has ever been fully verified formally," says CMU professor Andre Platzer. The safety verification systems must take into account both physical laws and the capabilities of the system's hardware and software. The researchers showed that they could verify the safety of their adaptive cruise control system by breaking the problem into modular pieces and organizing the pieces in a hierarchy. Platzer says that automated driving systems have the potential to save many lives and billions of dollars by preventing accidents, but developers must be certain that they are safe. "The dynamics of these systems have been beyond the scope of previous formal verification techniques, but we've had success with a modular approach to detecting design errors in them," he says.


Thursday, March 31, 2011

Blog: New Tool Makes Programs More Efficient Without Sacrificing Safety Functions

New Tool Makes Programs More Efficient Without Sacrificing Safety Functions
NCSU News (03/31/11) Matt Shipman

North Carolina State University (NCSU) researchers have developed software that helps programs run more efficiently on multicore chips without sacrificing safety features. The tool will help programmers who might otherwise leave out safety features because of how much they slow down a program's functions, says NCSU professor James Tuck. "Leaving out those features can mean that you don't identify a problem as soon as you could or should, which can be important--particularly if it's a problem that puts your system at risk from attack," Tuck says. In the past, the safety features were embedded directly into a program's code and run through the same core, which slows the program down. The researchers' tool utilizes multicore chips by running the safety features on a separate core in the same chip, which enables the main program to run at close-to-normal operating speed. "Utilizing our software tool, we were able to incorporate safety metafunctions, while only slowing the program down by approximately 25 percent," Tuck says.


Friday, March 4, 2011

Blog: Armies of Expensive Lawyers, Replaced by Cheaper Software [the Enron Corpus]

Armies of Expensive Lawyers, Replaced by Cheaper Software
New York Times (03/04/11) John Markoff

The automation of high-level jobs is becoming more common due to progress in computer science and linguistics. Recent advances in artificial intelligence have enabled software to inexpensively analyze documents in a fraction of the time it used to take highly trained experts to complete the same work. Some programs can even extract relevant concepts and specific terms, and identify patterns in huge volumes of information. "We're at the beginning of a 10-year period where we're going to transition from computers that can't understand language to a point where computers can understand quite a bit about language," says Carnegie Mellon University's Tom Mitchell. The most basic linguistic approaches use specific search words to find and sort relevant documents, while more sophisticated programs filter documents through large webs of word and phrase definitions. For example, one company has developed software designed to visualize a chain of events and search for digital anomalies. Meanwhile, another company has developed software that uses language analysis to study documents and find concepts instead of keywords. These tools and others are based on the Enron Corpus, an email database that includes more than five million messages from the Enron prosecution and was made public for scientific and technological research by University of Massachusetts, Amherst computer scientist Andrew McCallum.
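The "most basic linguistic approach" the article describes can be sketched in a few lines: score each document by how many of the search terms it contains, then sort by that score. A toy TypeScript example with invented documents and terms:

```typescript
// Keyword-based document ranking: the simplest form of the e-discovery
// search described above. Documents and terms are made up for illustration.
function rankByKeywords(docs: string[], terms: string[]): string[] {
  const score = (doc: string): number => {
    const text = doc.toLowerCase();
    // Count how many distinct search terms appear in the document.
    return terms.filter((t) => text.includes(t.toLowerCase())).length;
  };
  // Sort a copy so the input array is left untouched.
  return [...docs].sort((a, b) => score(b) - score(a));
}

const docs = [
  "Quarterly revenue memo",
  "Energy trading risk and revenue exposure",
  "Lunch menu",
];
const ranked = rankByKeywords(docs, ["revenue", "risk", "trading"]);
console.log(ranked[0]); // Energy trading risk and revenue exposure
```

The more sophisticated concept-based tools the article mentions go beyond this by matching related words and phrases rather than exact terms, which simple substring matching cannot do.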

