Showing posts with label complexity.

Wednesday, April 25, 2012

Blog: Algorithmic Incentives

Algorithmic Incentives
MIT News (04/25/12) Larry Hardesty

Massachusetts Institute of Technology (MIT) professor Silvio Micali and graduate student Pablo Azar have developed a type of mathematical game called a rational proof, a variant of the interactive proof that adds an economic component. Rational proofs could have implications for cryptography, and they also could suggest new ways to structure incentives in contracts. Research on both interactive proofs and rational proofs falls under computational-complexity theory, which classifies computational problems according to how hard they are to solve. Whereas interactive proofs can require millions of rounds of questioning, a rational proof lets the verifier get by with a single round. With rational proofs, "we have yet another twist, where, if you assign some game-theoretical rationality to the prover, then the proof is yet another thing that we didn't think of in the past," says Weizmann Institute of Science professor Moni Naor. Rational-proof systems that describe simple interactions also could have applications in crowdsourcing, Micali says. He notes that research on rational proofs is just getting started. "Right now, we've developed it for problems that are very, very hard," Micali says. "But how about problems that are very, very simple?"
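
One way to get the flavor of a single-round rational proof is through a proper scoring rule, which pays a prover whose best strategy is to report a probability truthfully. The sketch below is only an illustration under invented assumptions (a toy two-variable formula, Brier's scoring rule as the payment, arbitrary sample counts); it is not the Azar-Micali protocol itself.

    import random

    def brier_reward(reported_p, outcome):
        # Brier's proper scoring rule: expected reward is maximized when
        # reported_p equals the true probability that outcome == 1.
        return 1.0 - (outcome - reported_p) ** 2

    def verifier_round(formula, n_vars, reported_p, rng=random):
        # The verifier draws one random assignment, checks it itself, and
        # pays the prover according to the scoring rule -- a single round.
        assignment = [rng.random() < 0.5 for _ in range(n_vars)]
        outcome = 1 if formula(assignment) else 0
        return brier_reward(reported_p, outcome)

    # Toy formula x0 AND x1: exactly 1/4 of random assignments satisfy it.
    formula = lambda a: a[0] and a[1]

    trials = 100_000
    honest = sum(verifier_round(formula, 2, 0.25) for _ in range(trials)) / trials
    greedy = sum(verifier_round(formula, 2, 0.90) for _ in range(trials)) / trials
    print(honest, greedy)  # the truthful report earns more on average

A rational prover therefore reports the true satisfying fraction, from which the verifier can read off the answer to a counting problem after a single question.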

Wednesday, August 10, 2011

Blog: How Computational Complexity Will Revolutionize Philosophy

How Computational Complexity Will Revolutionize Philosophy
Technology Review (08/10/11)

Massachusetts Institute of Technology computer scientist Scott Aaronson argues that computational complexity theory will have a transformative effect on philosophical thinking about a broad spectrum of topics, such as the challenge of artificial intelligence (AI). The theory focuses on how the resources required to solve a problem scale with the problem's size, and on the fact that problems typically scale either reasonably slowly or unreasonably rapidly. Aaronson raises the issue of AI and whether computers can ever become capable of human-like thinking. He contends that computability theory cannot provide a fundamental impediment to computers passing the Turing test. A more productive strategy is to consider the problem's computational complexity, Aaronson says. He cites the possibility of a computer that records all the human-to-human conversations it hears, accruing over time a database with which it can hold a conversation simply by looking up human answers to the questions it is asked. Aaronson says that although this strategy works in principle, it demands computational resources that grow exponentially with the length of the conversation. This, in turn, leads to a new way of thinking about the AI problem: by this reasoning, the difference between humans and machines is essentially one of computational complexity.
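
To get a feel for that exponential blowup, here is a rough, assumption-laden calculation; the vocabulary size and words-per-turn figures are arbitrary illustrative choices, not numbers from Aaronson's argument.

    # Back-of-the-envelope size of a lookup table storing a canned reply for
    # every possible conversation prefix (all figures are illustrative).
    VOCABULARY = 10_000      # assumed working vocabulary
    WORDS_PER_TURN = 10      # assumed words per conversational turn

    def table_entries(turns):
        # Each turn can be any of VOCABULARY ** WORDS_PER_TURN word sequences,
        # so the number of distinct prefixes is exponential in the turn count.
        return (VOCABULARY ** WORDS_PER_TURN) ** turns

    for turns in (1, 2, 5):
        exponent = len(str(table_entries(turns))) - 1
        print(f"{turns} turn(s): about 10^{exponent} table entries")

Even at one turn the table dwarfs any physically realizable storage, which is the sense in which the lookup-table "solution" fails on complexity rather than computability grounds.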

View Full Article

Thursday, October 8, 2009

Blog: Prizes Aside, the P-NP Puzzler Has Consequences

Prizes Aside, the P-NP Puzzler Has Consequences
New York Times (10/08/09) Markoff, John

The frenzy of downloading that accompanied the September cover article in the Communications of the ACM when it was posted online reflects the intense interest in the article's subject--the progress, or lack thereof, on solving the P vs. NP problem. P represents the class of problems that can be solved in polynomial time, while NP represents the class of problems whose solutions can be verified in polynomial time. It is theorized that if P equals NP, some of the most complex real-world computing problems, such as optimizing the layout of transistors on a computer chip, could be solved efficiently, triggering an acceleration of technological and economic productivity. Cracking computer codes poses a similar challenge. Such tasks share the characteristic that, with the best algorithms known, an increase in the size of a problem is accompanied by an exponential increase in the computer time needed to solve it. Progress on the P vs. NP question has been slow, but Lance Fortnow of the McCormick School of Engineering at Northwestern University believes it might eventually be settled through algebraic geometry.
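
The gap between solving and verifying is easy to see in a toy NP problem such as subset sum; the sketch below uses invented numbers purely for illustration.

    from itertools import combinations

    def verify(numbers, target, certificate):
        # Verification is cheap: given a proposed subset (the "certificate"),
        # checking it takes time polynomial in the input size.
        return sum(certificate) == target and all(x in numbers for x in certificate)

    def solve(numbers, target):
        # No polynomial-time algorithm is known: brute force tries every
        # subset, and the number of subsets doubles with each added element.
        for r in range(len(numbers) + 1):
            for subset in combinations(numbers, r):
                if sum(subset) == target:
                    return list(subset)
        return None

    nums = [3, 9, 8, 4, 5, 7]
    cert = solve(nums, 15)                 # exponential-time search
    print(cert, verify(nums, 15, cert))    # polynomial-time check

If P equaled NP, every problem whose answers can be checked this quickly could also be solved comparably quickly.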

View Full Article

Thursday, May 14, 2009

Blog: Constantinos Daskalakis Wins ACM Award for Advances in Analyzing Behavior in Conflict Situations; "The Complexity of Nash Equilibria"

Constantinos Daskalakis Wins ACM Award for Advances in Analyzing Behavior in Conflict Situations
Association for Computing Machinery (05/14/09) Gold, Virginia

ACM has awarded Constantinos Daskalakis the 2008 Doctoral Dissertation Award for advancing the understanding of behavior in complex networks of interacting individuals, such as those created by the Internet. Daskalakis's dissertation, "The Complexity of Nash Equilibria," provides a novel, algorithmic perspective on game theory and the Nash equilibrium. Daskalakis, a postdoctoral researcher at Microsoft Research New England, was nominated by the University of California, Berkeley, and will receive the award at the annual ACM Awards Banquet on June 27 in San Diego. The dissertation explores whether rational, self-interested individuals can arrive, through their interactions, at a state in which no single one of them would be better off switching strategies unless others switched as well. This state is called a Nash equilibrium, and it is traditionally used in game theory to predict the behavior of people in conflict situations. Daskalakis demonstrated that in complex systems the Nash equilibrium can be computationally unachievable, which suggests that it may not be an accurate prediction of behavior in all situations. The result underscores the need for new, computationally meaningful methods for modeling strategic behavior in complex systems, such as those found in financial markets, online systems, and social networks.
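
For readers new to the concept, the sketch below checks the equilibrium condition in a toy two-player game (the Prisoner's Dilemma, chosen purely for illustration); Daskalakis's result concerns how hard such equilibria are to compute once games grow large.

    import itertools

    # Payoffs are (row player, column player); the game is illustrative.
    PAYOFFS = {
        ("cooperate", "cooperate"): (3, 3),
        ("cooperate", "defect"):    (0, 5),
        ("defect",    "cooperate"): (5, 0),
        ("defect",    "defect"):    (1, 1),
    }
    STRATEGIES = ("cooperate", "defect")

    def is_nash(row, col):
        # A profile is a (pure) Nash equilibrium if neither player can gain
        # by unilaterally switching strategies.
        row_ok = all(PAYOFFS[(r, col)][0] <= PAYOFFS[(row, col)][0] for r in STRATEGIES)
        col_ok = all(PAYOFFS[(row, c)][1] <= PAYOFFS[(row, col)][1] for c in STRATEGIES)
        return row_ok and col_ok

    for profile in itertools.product(STRATEGIES, STRATEGIES):
        print(profile, is_nash(*profile))   # only ("defect", "defect") qualifies

Checking a given profile is easy; the hardness Daskalakis established lies in finding an equilibrium (in general, a mixed one) when the game is large.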

View Full Article

Monday, February 16, 2009

Blog: The Computer as a Road Map to Unknowable Territory; modeling complex behavior

The Computer as a Road Map to Unknowable Territory
Washington Post (02/16/09) P. A7; Vedantam, Shankar

Scientist Yaneer Bar-Yam has developed a computational model of the economy that populates a virtual world with generic actors, rather than digital representations of specific individuals, companies, and brokers, enabling researchers to change how the actors behave and study how those changes affect the economic ecosystem. Bar-Yam says the principle behind the model is that humans regularly solve problems by imagining how certain behaviors will affect specific outcomes, but in a complex system such as the economy, which can be affected by fear, rumors, and misinformation, the ability to forecast accurately is severely reduced. He wanted to understand why the economy was so turbulent, and his model offers a novel explanation for the instability. In July 2007, the Securities and Exchange Commission eliminated a 69-year-old regulation known as the uptick rule. The rule was designed to prevent bear raids, in which a powerful investor suddenly sells a large number of shares in a company, creating a temporary situation in which supply exceeds demand, causing prices to fall and allowing the investor to buy back the shares at a lower price. Bar-Yam's model suggests that eliminating the uptick rule destabilized the market in much the way removing a support destabilizes a house, allowing the housing crisis to cripple the economy. The model, created at Bar-Yam's New England Complex Systems Institute, is one of many computational models developed recently to gain a more thorough understanding of complex systems. Others include two University of Maryland models, one used to predict how different situations could amplify the likelihood of violence in the Middle East, and one showing that infant mortality predicts political instability in a country better than any other single measurement.
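
As a rough illustration of the kind of experiment such models permit, the sketch below toggles one behavioral rule on and off and compares the resulting price paths; it is a toy invented for this summary, not Bar-Yam's model, and the allow_bear_raids switch and all parameters are made up.

    import random

    def simulate(steps=250, n_agents=100, allow_bear_raids=True, seed=1):
        rng = random.Random(seed)
        price, history = 100.0, []
        for _ in range(steps):
            demand = 0
            for _ in range(n_agents):
                if allow_bear_raids and rng.random() < 0.02:
                    demand -= 20                   # a large coordinated sell-off
                else:
                    demand += rng.choice((-1, 1))  # ordinary noise trading
            price = max(1.0, price + 0.05 * demand)
            history.append(price)
        return history

    with_raids = simulate(allow_bear_raids=True)
    without_raids = simulate(allow_bear_raids=False)
    print(min(with_raids), min(without_raids))  # compare worst-case drawdowns

Changing a single rule for the virtual actors and rerunning the simulation is exactly the kind of "what if" question such models are built to answer.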

Tuesday, December 16, 2008

Blog: Dartmouth Researchers Develop Computational Tool to Untangle Complex Data

Dartmouth Researchers Develop Computational Tool to Untangle Complex Data
Dartmouth News (12/16/08) Knapp, Susan

Dartmouth College researchers have developed the Partition Decoupling Method (PDM), a mathematical tool that can be used to untangle the underlying structure of time-dependent, interrelated, complex data. "With respect to the equities market, we created a map that illustrated a generalized notion of sector and industry, as well as the interactions between them, reflecting the different levels of capital flow, among and between companies, industries, sectors, and so forth," says Dartmouth professor Daniel Rockmore, who led the development effort. "In fact, it is this idea of flow, be it capital, oxygenated blood, or political orientation, that we are capturing." Capturing flow patterns is critical to understanding the subtle interdependencies among the components of complex systems. PDM examines the network of correlations among the data using spectral analysis, combined with statistical learning tools, to uncover regions where the flow circulates more than would be expected at random. The result is a detailed analysis of the interrelations along with a broad view of the coarse-scale flow as a whole. Rockmore says PDM takes a different approach than similar programs designed to discover how complex systems behave, and because it is not strictly hierarchical, PDM does not constrain interconnectivity.
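
In spirit, the approach resembles the following sketch, which builds a correlation network from synthetic time series and applies spectral analysis; it is only an illustration of that general recipe, with invented data, and not the published PDM algorithm.

    import numpy as np

    rng = np.random.default_rng(0)
    # Illustrative data: 20 series of 500 observations, the first ten driven
    # by one common factor and the last ten by another.
    common = rng.standard_normal((2, 500))
    series = np.vstack([
        common[0] + 0.3 * rng.standard_normal((10, 500)),   # tightly linked group
        common[1] + 0.8 * rng.standard_normal((10, 500)),   # loosely linked group
    ])

    corr = np.corrcoef(series)               # the network of correlations
    eigvals, eigvecs = np.linalg.eigh(corr)  # spectral analysis
    top_two = eigvecs[:, -2:]                # directions of the two largest eigenvalues

    # Each leading eigenvector concentrates its weight on one of the two
    # groups -- a crude stand-in for the structure PDM extracts at multiple scales.
    print(np.round(top_two.T, 2))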

View Full Article

Monday, November 17, 2008

Blog: 'Six Degrees of Kevin Bacon' Game Provides Clue to Efficiency of Complex Networks

'Six Degrees of Kevin Bacon' Game Provides Clue to Efficiency of Complex Networks
University of California, San Diego (11/17/08) Froelich, Warren; Zverina, Jan

The small-world paradigm, demonstrated by sociologist Stanley Milgram in the 1960s and popularized by the "Six Degrees of Kevin Bacon" game, has become a source of inspiration for researchers studying the Internet as a global complex network. A study published in Nature Physics describes a previously unexplored mathematical construct called a "hidden metric space," which could explain the small-world phenomenon and its relationship to both man-made and natural networks, including human language, gene regulation, and the neural networks that connect neurons to organs and muscles within our bodies. The concept of an underlying hidden space also may be of interest to researchers working to remove mounting bottlenecks within the Internet that threaten the smooth flow of digital information. "Internet experts are worried that the existing Internet routing architecture may not sustain even another decade," says Dmitri Krioukov, a researcher at the Cooperative Association for Internet Data Analysis (CAIDA), based at the University of California, San Diego (UCSD). CAIDA director and UCSD professor Kimberly Claffy says the discovery of a metric space hidden beneath the Internet could point toward architectural innovations that would remove this bottleneck, and Krioukov says reconstructing the hidden metric spaces underlying a variety of real complex networks could have other practical applications. For example, hidden spaces in social or communications networks could lead to new, efficient strategies for finding specific content or individuals.
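
The routing idea can be sketched as greedy forwarding over coordinates in an assumed hidden space: each node passes a message to whichever neighbor is closest to the destination in that space, with no global routing table. The graph construction and all parameters below are invented for illustration and are not the model from the Nature Physics study.

    import math
    import random

    random.seed(0)
    N = 200
    # Assumed hidden coordinates for each node (points in the unit square).
    coords = {i: (random.random(), random.random()) for i in range(N)}

    def dist(a, b):
        (x1, y1), (x2, y2) = coords[a], coords[b]
        return math.hypot(x1 - x2, y1 - y2)

    # Illustrative topology: link each node to its six nearest nodes in the space.
    neighbors = {i: sorted((j for j in range(N) if j != i),
                           key=lambda j: dist(i, j))[:6] for i in range(N)}

    def greedy_route(src, dst, max_hops=50):
        path = [src]
        while path[-1] != dst and len(path) <= max_hops:
            here = path[-1]
            nxt = min(neighbors[here], key=lambda j: dist(j, dst))
            if dist(nxt, dst) >= dist(here, dst):
                return None            # stuck in a local minimum; routing fails
            path.append(nxt)
        return path

    print(greedy_route(0, 57))

In the study's picture, an Internet topology congruent enough with such a hidden space could support greedy forwarding of this kind and relieve the routing bottleneck the researchers describe.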

View Full Article

Blog: Burned Once, Intel Prepares New Chip Fortified by Constant Tests (formal methods)

Burned Once, Intel Prepares New Chip Fortified by Constant Tests
The New York Times (11/17/08) P. B3; Markoff, John

Despite rigorous stress testing on dozens of computers, Intel's John Barton is still nervous about the upcoming release of Intel's new Core i7 microprocessor. Even after months of testing, Barton knows it is impossible to predict exactly how the chip will function once it is installed in thousands of computers running tens of thousands of programs. The new chip, which has 731 million transistors, was designed for use in desktop computers, but the company hopes it will eventually be used in everything from powerful servers to laptops. The design and testing of an advanced microprocessor is among the most complex endeavors humans have undertaken. Intel now spends $500 million annually to test its chips before selling them. Even so, it is impossible to test more than a fraction of the total number of "states" the new Core i7 chip can be put into. "Now we are hitting systemic complexity," says Synopsys CEO Aart de Geus. "Things that came from different angles that used to be independent have become interdependent." In an effort to produce error-free chips, Intel in the 1990s turned to a group of mathematical theoreticians in computer science who had developed advanced techniques, known as formal methods, for verifying hardware and software. To further minimize chip errors, the Core i7 also contains software that can be changed after the microprocessors are shipped, giving Intel the ability to correct flaws after the product's release.
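
The contrast the article draws between sampling a few states and reasoning about all of them can be seen at toy scale; the 4-bit adder below is an invented example in which exhaustive verification is still possible, precisely what stops being feasible for a 731-million-transistor chip.

    from itertools import product

    WIDTH = 4

    def reference_add(a, b):
        # Specification: addition modulo 2**WIDTH.
        return (a + b) % (2 ** WIDTH)

    def ripple_carry_add(a, b):
        # Gate-level-style implementation: full adders chained by a carry bit.
        result, carry = 0, 0
        for i in range(WIDTH):
            x, y = (a >> i) & 1, (b >> i) & 1
            s = x ^ y ^ carry
            carry = (x & y) | (carry & (x ^ y))
            result |= s << i
        return result

    # Exhaustive check over every one of the 2**WIDTH * 2**WIDTH input states.
    assert all(ripple_carry_add(a, b) == reference_add(a, b)
               for a, b in product(range(2 ** WIDTH), repeat=2))
    print("all", (2 ** WIDTH) ** 2, "input states verified")

Formal methods aim for the same total coverage through symbolic reasoning rather than enumeration, since enumerating the states of a full microprocessor is out of the question.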

View Full Article
