Stochastic parrot

In machine learning, the term stochastic parrot expresses the view that large language models, although adept at generating convincing language, do not actually understand the meaning of the language they process.[1][2] The term was coined by Emily M. Bender[2][3] in the 2021 artificial intelligence research paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell.[4]

Definition and implications

A stochastic parrot, according to Bender, is an entity "for haphazardly stitching together sequences of linguistic forms … according to probabilistic information about how they combine, but without any reference to meaning."[3] (A stochastic process is one whose outcome is random.)

More formally, the term refers to "large language models that are impressive in their ability to generate realistic-sounding language but ultimately do not truly understand the meaning of the language they are processing."[2]
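The intuition behind the term can be illustrated with a deliberately simple sketch. The following Python snippet (not taken from the paper; the bigram model and toy corpus are illustrative assumptions) generates text purely from probabilistic information about which word tends to follow which, with no representation of meaning at all:

  import random
  from collections import defaultdict

  def train_bigram_model(text):
      """Count how often each word in the training text is followed by each other word."""
      words = text.split()
      counts = defaultdict(lambda: defaultdict(int))
      for current, nxt in zip(words, words[1:]):
          counts[current][nxt] += 1
      return counts

  def generate(counts, start, length=20):
      """Stitch together a word sequence using only the learned co-occurrence counts."""
      word = start
      output = [word]
      for _ in range(length - 1):
          followers = counts.get(word)
          if not followers:
              break  # no continuation was ever observed for this word
          candidates, weights = zip(*followers.items())
          word = random.choices(candidates, weights=weights)[0]
          output.append(word)
      return " ".join(output)

  # Toy corpus, purely for illustration
  corpus = "the parrot repeats the phrase and the parrot repeats the sound"
  model = train_bigram_model(corpus)
  print(generate(model, "the"))

Large language models are vastly bigger and condition on much longer contexts, but the "stochastic parrot" critique holds that their outputs are likewise driven by learned statistics of linguistic form rather than by reference to meaning.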

According to Lindholm et al., the analogy highlights two vital limitations:[1]

  1. The predictions made by a learning machine are essentially repeating back the contents of the data, with some added noise (or stochasticity) caused by the limitations of the model.
  2. The machine learning algorithm does not understand the problem it has learnt. It can't know when it is repeating something incorrect, out of context, or socially inappropriate.

They go on to note that because of these limitations, a learning machine might produce results which are "dangerously wrong".[1]

Origin

The term was first used in the paper "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" by Bender, Timnit Gebru, Angelina McMillan-Major, and Margaret Mitchell (using the pseudonym "Shmargaret Shmitchell").[4] The paper covered the risks of very large language models, including their environmental and financial costs, their inscrutability (which can conceal unknown dangerous biases), their inability to understand the concepts underlying what they learn, and their potential for being used to deceive people.[5] The paper and the events surrounding it led to Gebru and Mitchell losing their jobs at Google and prompted a protest by Google employees.[6][7]

Subsequent usage

In July 2021, the Alan Turing Institute hosted a keynote and panel discussion on the paper.[8] As of May 2023, the paper has been cited in 1,529 publications.[9] The term has been used in publications in the fields of law,[10] grammar,[11] narrative,[12] and humanities.[13] The authors continue to maintain their concerns about the dangers of chatbots based on large language models, such as GPT-4.[14]

See also

  • 1 the Road – AI-generated novel
  • Chinese room
  • Criticism of artificial neural networks
  • Criticism of deep learning
  • Criticism of Google
  • Cut-up technique
  • Infinite monkey theorem
  • Generative AI
  • List of important publications in computer science
  • Markov text
  • Stochastic parsing

References

  1. ^ a b c Lindholm et al. 2022, pp. 322–3.
  2. ^ a b c Uddin, Muhammad Saad (April 20, 2023). "Stochastic Parrots: A Novel Look at Large Language Models and Their Limitations". Towards AI. Retrieved 2023-05-12.
  3. ^ a b Weil, Elizabeth (March 1, 2023). "You Are Not a Parrot". New York. Retrieved 2023-05-12.
  4. ^ a b Bender, Emily M.; Gebru, Timnit; McMillan-Major, Angelina; Shmitchell, Shmargaret (2021-03-01). "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜". Proceedings of the 2021 ACM Conference on Fairness, Accountability, and Transparency. FAccT '21. New York, NY, USA: Association for Computing Machinery. pp. 610–623. doi:10.1145/3442188.3445922. ISBN 978-1-4503-8309-7. S2CID 232040593.
  5. ^ Hao, Karen (4 December 2020). "We read the paper that forced Timnit Gebru out of Google. Here's what it says". MIT Technology Review. Archived from the original on 6 October 2021. Retrieved 19 January 2022.
  6. ^ Lyons, Kim (5 December 2020). "Timnit Gebru's actual paper may explain why Google ejected her". The Verge.
  7. ^ Taylor, Paul (2021-02-12). "Stochastic Parrots". London Review of Books. Retrieved 2023-05-09.
  8. ^ Weller (2021).
  9. ^ "Bender: On the Dangers of Stochastic Parrots". Google Scholar. Retrieved 2023-05-12.
  10. ^ Arnaudo, Luca (April 20, 2023). "Artificial Intelligence, Capabilities, Liabilities: Interactions in the Shadows of Regulation, Antitrust – And Family Law". SSRN. doi:10.2139/ssrn.4424363. S2CID 258636427.
  11. ^ Bleackley, Pete; BLOOM (2023). "In the Cage with the Stochastic Parrot". Speculative Grammarian. CXCII (3). Retrieved 2023-05-13.
  12. ^ Gáti, Daniella (2023). "Theorizing Mathematical Narrative through Machine Learning". Journal of Narrative Theory. Project MUSE. 53 (1): 139–165. doi:10.1353/jnt.2023.0003. S2CID 257207529.
  13. ^ Rees, Tobias (2022). "Non-Human Words: On GPT-3 as a Philosophical Laboratory". Daedalus. 151 (2): 168–82. doi:10.1162/daed_a_01908. JSTOR 48662034. S2CID 248377889.
  14. ^ Goldman, Sharon (March 20, 2023). "With GPT-4, dangers of 'Stochastic Parrots' remain, say researchers. No wonder OpenAI CEO is a 'bit scared'". VentureBeat. Retrieved 2023-05-09.

Works cited

  • Lindholm, A.; Wahlström, N.; Lindsten, F.; Schön, T. B. (2022). Machine Learning: A First Course for Engineers and Scientists. Cambridge University Press. ISBN 978-1108843607.
  • Weller, Adrian (July 13, 2021). On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜 (video). Alan Turing Institute. Keynote by Emily Bender. The presentation was followed by a panel discussion.

Further reading

  • Bogost, Ian (December 7, 2022). "ChatGPT Is Dumber Than You Think: Treat it like a toy, not a tool". The Atlantic. Retrieved 2024-01-17.
  • Chomsky, Noam (March 8, 2023). "The False Promise of ChatGPT". The New York Times. Retrieved 2024-01-17.
  • Glenberg, Arthur; Jones, Cameron Robert (April 6, 2023). "It takes a body to understand the world – why ChatGPT and other language AIs don't know what they're saying". The Conversation. Retrieved 2024-01-17.
  • McQuillan, D. (2022). Resisting AI: An Anti-fascist Approach to Artificial Intelligence. Bristol University Press. ISBN 978-1529213508.
  • Thompson, E. (2022). Escape from Model Land: How Mathematical Models Can Lead Us Astray and What We Can Do about It. Basic Books. ISBN 978-1541600980.
  • Zhong, Qihuang; Ding, Liang; Liu, Juhua; Du, Bo; Tao, Dacheng (2023). "Can ChatGPT Understand Too? A Comparative Study on ChatGPT and Fine-tuned BERT". arXiv:2302.10198 [cs.CL].

External links

  • "On the Dangers of Stochastic Parrots: Can Language Models Be Too Big? 🦜" at Wikimedia Commons
