
Randomized algorithms as zero-sum games

Randomized algorithms are algorithms that employ a degree of randomness as part of their logic. They can give good average-case results (complexity-wise) for problems that are hard to solve deterministically or that have poor worst-case complexity. An algorithmic game-theoretic approach can help explain why, in the average case, randomized algorithms may work better than deterministic algorithms.

Formalizing the game

Consider a zero-sum game between player A, whose strategies are deterministic algorithms, and player B, whose strategies are inputs for A's algorithms. The cost of a strategy profile is the running time of A's chosen algorithm on B's chosen input. Therefore, player A tries to minimize the cost, and player B tries to maximize it. In the world of pure strategies, for every algorithm that A chooses, B may choose the most costly input – this is the worst-case scenario, and can be found using standard complexity analysis.

But in the real world, inputs are normally not selected by an ‘evil opponent’ – rather, they come from some distribution over inputs. Since this is the case, if we allow the algorithms to also be drawn from some distribution, we may look at the game as one that allows mixed strategies. That is, each player chooses a distribution over its strategies.
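
For concreteness, the game can be pictured as a cost matrix whose rows are deterministic algorithms and whose columns are inputs. The sketch below uses a made-up 2×2 matrix (the running times are illustrative assumptions, not measurements) to contrast the worst case of each pure strategy with the worst-case expected cost of one mixed strategy:

```python
# A toy zero-sum game: rows are deterministic algorithms (player A),
# columns are inputs (player B); the entries are hypothetical running times.
cost = [
    [1, 9],  # algorithm A1: fast on input x1, slow on input x2
    [9, 1],  # algorithm A2: slow on input x1, fast on input x2
]

# Pure strategies: for each algorithm, B can pick its most costly input.
print([max(row) for row in cost])  # [9, 9] -- every deterministic choice can be forced to cost 9

# Mixed strategy for A: run A1 or A2 with probability 1/2 each.
mix = [0.5, 0.5]
expected_cost = [sum(p * cost[a][x] for a, p in enumerate(mix))
                 for x in range(len(cost[0]))]
print(expected_cost)       # [5.0, 5.0]
print(max(expected_cost))  # 5.0 -- better than the worst case of either pure strategy
```

Against the uniform mixture, every input costs 5 in expectation, whereas each deterministic algorithm can be forced to cost 9; the value of this toy game is 5.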

Analysis

Incorporating mixed strategies into the game allows us to use von Neumann's minimax theorem:

$$\min_{R}\,\max_{D}\, T(A, D) \;=\; \max_{D}\,\min_{A}\, T(A, D)$$

where R is a distribution over the algorithms, D is a distribution over inputs, A is a single deterministic algorithm, and T(A, D) is the average running time of algorithm A on inputs drawn from D. More specifically:

$$T(A, D) = \underset{x \sim D}{\operatorname{E}}\,[T(A, x)]$$

If we limit the set of algorithms to a specific family (for instance, all deterministic choices for pivots in the quick sort algorithm), choosing an algorithm A from R is equivalent to running a randomized algorithm (for instance, running quick sort and randomly choosing the pivots at each step).
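
A minimal sketch of the quicksort example (illustrative code, not taken from the article): choosing the pivot uniformly at random at each step is the same as drawing one deterministic pivot-selection rule from a distribution over such rules.

```python
import random

def randomized_quicksort(items):
    """Quicksort with pivots chosen uniformly at random; equivalent to
    sampling a deterministic pivot-selection rule from a distribution."""
    if len(items) <= 1:
        return list(items)
    pivot = random.choice(items)
    less = [x for x in items if x < pivot]
    equal = [x for x in items if x == pivot]
    greater = [x for x in items if x > pivot]
    return randomized_quicksort(less) + equal + randomized_quicksort(greater)

print(randomized_quicksort([3, 1, 4, 1, 5, 9, 2, 6]))  # [1, 1, 2, 3, 4, 5, 6, 9]
```

The expected running time of this randomized algorithm is O(n log n) on every input, whereas simple fixed pivot rules such as "always pick the first element" have inputs that force quadratic time.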

This gives us insight into Yao's principle, which states that the expected cost of any randomized algorithm for solving a given problem, on the worst-case input for that algorithm, can be no better than the expected cost, for a worst-case random probability distribution on the inputs, of the deterministic algorithm that performs best against that distribution.
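
In the notation used above, Yao's principle can be written as follows: for every distribution R over deterministic algorithms and every distribution D over inputs (x below ranges over individual inputs),

$$\max_{x}\ \underset{A \sim R}{\operatorname{E}}\,[T(A, x)] \;\geq\; \min_{A}\ \underset{x \sim D}{\operatorname{E}}\,[T(A, x)],$$

so the expected cost of the best deterministic algorithm against any fixed input distribution D gives a lower bound on the worst-case expected cost of every randomized algorithm.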
