
Pascal's mugging

In philosophy, Pascal's mugging is a thought experiment demonstrating a problem in expected utility maximization. A rational agent should choose actions whose outcomes, when weighed by their probability, have higher utility. But some very unlikely outcomes may have very great utilities, and these utilities can grow faster than the probability diminishes. Hence the agent should focus more on vastly improbable cases with implausibly high rewards; this leads first to counter-intuitive choices, and then to incoherence as the utility of every choice becomes unbounded.

The name refers to Pascal's Wager, but unlike the wager, it does not require infinite rewards.[1] This sidesteps many objections to the Pascal's Wager dilemma that are based on the nature of infinity.[2]

Problem statement

The term "Pascal's mugging" to refer to this problem was originally coined by Eliezer Yudkowsky in the LessWrong forum.[3][2] Philosopher Nick Bostrom later elaborated the thought experiment in the form of a fictional dialogue.[2] Subsequently, other authors published their own sequels to the events of this first dialogue, adopting the same literary style.[4][5]

In Bostrom's description,[2] Blaise Pascal is accosted by a mugger who has forgotten their weapon. However, the mugger proposes a deal: the philosopher gives them his wallet, and in exchange the mugger will return twice the amount of money tomorrow. Pascal declines, pointing out that it is unlikely the deal will be honoured. The mugger then continues naming ever-higher rewards, pointing out that even if there is just one chance in 1,000 that they will be honourable, it would make sense for Pascal to make a deal for a 2,000-fold return. Pascal responds that the probability of such a high return is even lower than one in 1,000. The mugger argues back that for any low but strictly positive probability of being able to pay back a large amount of money (or pure utility), there exists a finite amount that makes it rational to take the bet. In one example, the mugger succeeds by promising Pascal 1,000 quadrillion happy days of life. Convinced by the argument, Pascal gives the mugger the wallet.

In one of Yudkowsky's examples, the mugger succeeds by saying "give me five dollars, or I'll use my magic powers from outside the Matrix to run a Turing machine that simulates and kills 3↑↑↑↑3 people". Here, the number 3↑↑↑↑3 uses Knuth's up-arrow notation; writing the number out in base 10 would require enormously more writing material than there are atoms in the known universe.[3]
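To give a sense of the notation, the hyperoperation behind Knuth's up-arrows can be sketched for very small arguments (an illustrative sketch, not from the sources; the function name is invented, and anything beyond two arrows is computationally intractable):

```python
def up_arrow(a, n, b):
    """Compute a ↑^n b (a followed by n up-arrows, then b).

    One arrow is exponentiation; each extra arrow iterates the
    previous operation. Only feasible for tiny arguments.
    """
    if n == 1:
        return a ** b
    if b == 0:
        return 1
    return up_arrow(a, n - 1, up_arrow(a, n, b - 1))

print(up_arrow(3, 1, 3))  # 3↑3  = 3**3 = 27
print(up_arrow(3, 2, 3))  # 3↑↑3 = 3**3**3 = 7625597484987
# 3↑↑↑3 = 3↑↑(3↑↑3) is a power tower of 3s more than 7.6 trillion
# levels high, far too large to compute; 3↑↑↑↑3 is vastly larger still.
```

Even the three-arrow case is hopelessly beyond any physical computation, which is what makes the mugger's promised number so extreme.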

The supposed paradox results from two inconsistent views. On one side is an expected-utility calculation: let the loss of five dollars be valued at f, the loss of a life at l, and the probability that the mugger is telling the truth at t. The solution is then to give the money if and only if 3↑↑↑↑3 × t × l > f. Assuming that l is higher than f, it is rational to pay the mugger so long as t is higher than 1/(3↑↑↑↑3), which is assumed to be true.[note 1] On the other side of the argument, paying the mugger is intuitively irrational because it is exploitable: if the person being mugged accepts this line of reasoning, they can be exploited repeatedly for all of their money, resulting in a Dutch book, which is typically considered irrational. Views differ on which of these arguments is logically correct.[3]
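The expected-utility side of the argument can be sketched with hypothetical numbers (the dollar values and probabilities below are invented for illustration, not taken from the sources):

```python
# Sketch of the mugger's expected-utility argument (hypothetical values).
# Pay iff (probability the mugger is truthful) x (lives at stake) x
# (assumed dollar value of a life) exceeds the $5 cost of paying.
def should_pay(t, lives, value_per_life=1_000_000, cost=5):
    return t * lives * value_per_life > cost

# Even an absurdly small probability is overwhelmed by a large enough claim:
print(should_pay(t=1e-30, lives=10 ** 40))  # True
print(should_pay(t=1e-30, lives=10 ** 20))  # False
```

The point of the mugging is that for any fixed t > 0, the mugger can always name a number of lives large enough to flip the answer to True.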

Moreover, in many reasonable-seeming decision systems, Pascal's mugging causes the expected utility of any action to fail to converge, as an unlimited chain of successively dire scenarios similar to Pascal's mugging would need to be factored in.[7][8]
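A toy illustration of this non-convergence, with assumed numbers: if scenario i is assigned probability 2^-i but promised utility 3^i, each term contributes (3/2)^i to the expected utility, so the partial sums grow without bound:

```python
# Toy illustration (assumed numbers, not from the sources): scenario i has
# probability 2**-i but promised utility 3**i. Each term is (3/2)**i, so
# the expected-utility series diverges.
def partial_expected_utility(n):
    return sum((0.5 ** i) * (3 ** i) for i in range(1, n + 1))

print(partial_expected_utility(10))  # the partial sums keep growing...
print(partial_expected_utility(50))  # ...without bound
```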

Some of the arguments concerning this paradox affect not only the expected utility maximization theory, but may also apply to other theoretical systems, such as consequentialist ethics.[note 2]

Consequences and remedies

Philosopher Nick Bostrom argues that Pascal's mugging, like Pascal's wager, suggests that giving a superintelligent artificial intelligence a flawed decision theory could be disastrous.[10] Pascal's mugging may also be relevant when considering low-probability, high-stakes events such as existential risk or charitable interventions with a low probability of success but extremely high rewards. Common sense seems to suggest that spending effort on such vastly unlikely scenarios is irrational.

One advocated remedy is to use only bounded utility functions: rewards cannot be arbitrarily large.[7][11] Another approach is to use Bayesian reasoning to (qualitatively) judge the quality of evidence and probability estimates rather than naively calculate expectations.[6] Other approaches are to penalize the prior probability of hypotheses that argue that we are in a surprisingly unique position to affect large numbers of other people who cannot symmetrically affect us,[note 3] to reject providing the probability of a payout first,[15] or to abandon quantitative decision procedures in the presence of extremely large risks.[8]
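The bounded-utility remedy can be sketched as follows (the cap value is an arbitrary assumption for illustration):

```python
# Sketch of the bounded-utility remedy (hypothetical cap): utility is
# clipped so that no promised reward, however large, can dominate the
# expected-value calculation.
def bounded_utility(raw_utility, cap=1_000_000):
    return min(raw_utility, cap)

# With the cap, the expected gain t * U is at most t * cap, so for a
# sufficiently small probability t the mugger's offer is unattractive
# no matter what is promised:
t = 1e-12
print(t * bounded_utility(10 ** 100))  # roughly 1e-06, far below a $5 cost
```

Under a bounded utility function the mugger can no longer escalate the promised reward indefinitely, which blocks the escalation step of the dialogue.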

See also

  Decision theory
  Expected utility
  Scope neglect
  St. Petersburg paradox

Notes

  1. ^ While it may seem very intuitive that the probability t cannot be as small as 1/(3↑↑↑↑3), this is not necessarily true. As a notable example, it is false in Bayesian models for this problem:[6] there, the prior probability of the mugger's claim being correct decreases with the extraordinariness of the claim. If the claim is as extraordinary as in this case, its probability will also be extraordinarily small in a Bayesian model. Furthermore, a frequentist may estimate the probability of the mugger's threat being realized as equal to 0, since no realization of such an extraordinary threat has ever been observed.
  2. ^ A 'consequentialist version' of Pascal's mugging was proposed by Bradley Monton as follows: a strange person hands you a baby and asks you to torture it, telling you that the torture will prevent significant unjust suffering of a large number of sentient creatures in some distant galaxy. The probability that torturing the baby prevents the suffering is very low. Nevertheless, as long as the strange person makes the number of claimed distant-galaxy creatures high enough, consequentialism implies that you should torture the baby, provided the probabilistic model is one in which the probability that the torture prevents the suffering does not decrease as the number of galaxies increases. (Notably, Bayesian models do assign decreasing probability to increasing numbers of galaxies, because the prior necessarily concentrates around some fixed number of galaxies.)[9]
  3. ^ This 'leverage penalty' was first proposed by Robin Hanson in a comment on Yudkowsky's original statement of the problem;[12][13] Yudkowsky noted that it would mean refusing to believe theories that imply we can affect vastly many others, even in the face of what might otherwise look like overwhelming observational evidence for those theories.[14]

References

Citations

  1. ^ Häggström 2016, p. 82.
  2. ^ a b c d Bostrom 2009.
  3. ^ a b c Yudkowsky 2007.
  4. ^ Balfour 2021.
  5. ^ Russell 2022.
  6. ^ a b Karnofsky, Holden (August 18, 2011). "Why We Can't Take Expected Value Estimates Literally (Even When They're Unbiased)". GiveWell Blog. http://blog.givewell.org/2011/08/18/why-we-cant-take-expected-value-estimates-literally-even-when-theyre-unbiased/
  7. ^ a b De Blanc, Peter (2007). "Convergence of Expected Utilities with Algorithmic Probability Distributions". arXiv:0712.4318.
  8. ^ a b Marray, Kieran (May 2016). "Dealing With Uncertainty in Ethical Calculations of Existential Risk". Presented at the Economic and Social Research Council Climate Ethics and Climate Economics Workshop Series: Workshop Five - Risk and the Culture of Science. http://www.nottingham.ac.uk/climateethicseconomics/documents/papers-workshop-5/marray.pdf
  9. ^ Monton, Bradley (2019). "How to Avoid Maximizing Expected Utility". Philosophers' Imprint. 19 (18): 1–25. hdl:2027/spo.3521354.0019.018.
  10. ^ Bostrom, Nick (2014). "Choosing the Criteria for Choosing". Superintelligence: Paths, Dangers, Strategies. Oxford: Oxford University Press. ISBN 978-0199678112. "Decision Theory" section.
  11. ^ Cowen, Tyler; High, Jack (1988). "Time, Bounded Utility, and the St. Petersburg Paradox". Theory and Decision. 25 (3): 219–223. doi:10.1007/BF00133163. S2CID 120584258.
  12. ^ Hanson, Robin (21 October 2007), comment on Eliezer Yudkowsky's "Pascal's Mugging: Tiny Probabilities of Vast Utilities", LessWrong: "People have been talking about assuming that states with many people hurt have a low (prior) probability. It might be more promising to assume that states with many people hurt have a low correlation with what any random person claims to be able to effect."
  13. ^ Tomasik, Brian (June 2016). "How the Simulation Argument Dampens Future Fanaticism" (PDF). Center on Long-Term Risk. pp. 3–4. Archived (PDF) from the original on 2021-11-23.
  14. ^ Yudkowsky, Eliezer (2013-05-08). "Pascal's Muggle: Infinitesimal Priors and Strong Evidence". LessWrong. Archived from the original on 2016-06-27.
  15. ^ Baumann, Peter (2009). "Counting on numbers" (PDF). Analysis. 69 (3): 446–448. doi:10.1093/analys/anp061. JSTOR 40607656. Archived (PDF) from the original on 2019-11-21.

Sources

  Balfour, Dylan (2021). "Pascal's Mugger Strikes Again". Utilitas. 33 (1): 118–124. doi:10.1017/s0953820820000357. S2CID 229475903.
  Bostrom, Nick (2009). "Pascal's mugging" (PDF). Analysis. 69 (3): 443–445. doi:10.1093/analys/anp062. JSTOR 40607655.
  Häggström, Olle (2016). Here Be Dragons: Science, Technology and the Future of Humanity. Oxford University Press. doi:10.1093/acprof:oso/9780198723547.001.0001. ISBN 978-0-19-872354-7.
  Russell, Jeffrey Sanford (December 2022). "Planning for Pascal's Mugging" (PDF). PhilPapers.
  Yudkowsky, Eliezer (19 October 2007). "Pascal's Mugging: Tiny Probabilities of Vast Utilities". LessWrong.