
Superintelligence

For the book by Nick Bostrom, see Superintelligence: Paths, Dangers, Strategies. For the 2020 film, see Superintelligence (film).

A superintelligence is a hypothetical agent that possesses intelligence far surpassing that of the brightest and most gifted human minds. "Superintelligence" may also refer to a property of problem-solving systems (e.g., superintelligent language translators or engineering assistants) whether or not these high-level intellectual competencies are embodied in agents that act in the world. A superintelligence may or may not be created by an intelligence explosion and associated with a technological singularity.

University of Oxford philosopher Nick Bostrom defines superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest".[1] The program Fritz falls short of this conception of superintelligence—even though it is much better than humans at chess—because Fritz cannot outperform humans in other tasks.[2] Following Hutter and Legg, Bostrom treats superintelligence as general dominance at goal-oriented behavior, leaving open whether an artificial or human superintelligence would possess capacities such as intentionality (cf. the Chinese room argument) or first-person consciousness (cf. the hard problem of consciousness).

Technological researchers disagree about how likely present-day human intelligence is to be surpassed. Some argue that advances in artificial intelligence (AI) will probably result in general reasoning systems that lack human cognitive limitations. Others believe that humans will evolve or directly modify their biology so as to achieve radically greater intelligence.[3][4] A number of futures studies scenarios combine elements from both of these possibilities, suggesting that humans are likely to interface with computers, or upload their minds to computers, in a way that enables substantial intelligence amplification.

Some researchers believe that superintelligence will likely follow shortly after the development of artificial general intelligence. The first generally intelligent machines are likely to immediately hold an enormous advantage in at least some forms of mental capability, including the capacity for perfect recall, a vastly superior knowledge base, and the ability to multitask in ways not possible for biological entities. This may give them the opportunity to—either as a single being or as a new species—become much more powerful than humans, and to displace them.[1]

A number of scientists and forecasters argue for prioritizing early research into the possible benefits and risks of human and machine cognitive enhancement, because of the potential social impact of such technologies.[5]

Feasibility of artificial superintelligence

 
[Figure] Progress in machine classification of images: the error rate of AI by year; the red line represents the error rate of a trained human.

Philosopher David Chalmers argues that artificial general intelligence is a very likely path to artificial superintelligence (ASI). Chalmers breaks this claim down into an argument that AI can achieve equivalence to human intelligence, that it can be extended to surpass human intelligence, and that it can be further amplified to completely dominate humans across arbitrary tasks.[6]

Concerning human-level equivalence, Chalmers argues that the human brain is a mechanical system, and therefore ought to be emulatable by synthetic materials.[7] He also notes that human intelligence was able to biologically evolve, making it more likely that human engineers will be able to recapitulate this invention. Evolutionary algorithms in particular should be able to produce human-level AI.[8] Concerning intelligence extension and amplification, Chalmers argues that new AI technologies can generally be improved on, and that this is particularly likely when the invention can assist in designing new technologies.[9]

An AI system capable of self-improvement could enhance its own intelligence, thereby becoming more efficient at improving itself. This cycle of "recursive self-improvement" might cause an intelligence explosion, resulting in the creation of a superintelligence.[10]
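
The qualitative dynamics of such a feedback loop can be illustrated with a toy model. The sketch below is purely illustrative: the growth law dI/dt = r·I^k and all parameter values are assumptions, not properties of any real system. With k = 1 capability grows exponentially; with k > 1 the compounding is super-linear and diverges in finite time, a cartoon of an "intelligence explosion".

```python
# Toy model of recursive self-improvement (illustrative assumptions only:
# the growth law dI/dt = r * I**k and every parameter value are made up).
# k = 1 gives ordinary exponential growth; k > 1 diverges in finite time,
# a crude cartoon of an "intelligence explosion".

def simulate(k: float, r: float = 0.1, i0: float = 1.0,
             dt: float = 0.01, t_max: float = 50.0) -> list[tuple[float, float]]:
    """Euler-integrate dI/dt = r * I**k; return (time, capability) samples."""
    t, intel, out = 0.0, i0, []
    while t < t_max and intel < 1e12:  # stop once capability "explodes"
        out.append((t, intel))
        intel += r * intel**k * dt
        t += dt
    return out

for k in (1.0, 1.5):
    t_end, i_end = simulate(k)[-1]
    print(f"k={k}: capability {i_end:.3g} at t={t_end:.1f}")
```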

Computer components already greatly surpass human performance in speed. Bostrom writes, "Biological neurons operate at a peak speed of about 200 Hz, a full seven orders of magnitude slower than a modern microprocessor (~2 GHz)."[11] Moreover, neurons transmit spike signals across axons at no greater than 120 m/s, "whereas existing electronic processing cores can communicate optically at the speed of light". Thus, the simplest example of a superintelligence may be an emulated human mind run on much faster hardware than the brain. A human-like reasoner that could think millions of times faster than current humans would have a dominant advantage in most reasoning tasks, particularly ones that require haste or long strings of actions.
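
The "seven orders of magnitude" figure follows directly from the two quoted rates; a one-line check using the article's own numbers:

```python
import math

neuron_hz = 200  # peak neuron firing rate quoted by Bostrom
cpu_hz = 2e9     # ~2 GHz clock of a modern microprocessor

ratio = cpu_hz / neuron_hz
print(ratio)              # 10000000.0
print(math.log10(ratio))  # 7.0, i.e. "seven orders of magnitude"
```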

Another advantage of computers is modularity: their size and computational capacity can be increased. A non-human (or modified human) brain could, like many supercomputers, grow far larger than a present-day human brain. Bostrom also raises the possibility of collective superintelligence: a large enough number of separate reasoning systems, if they communicated and coordinated well enough, could act in aggregate with far greater capabilities than any sub-agent.

There may also be ways to qualitatively improve on human reasoning and decision-making.[12] Humans outperform non-human animals in large part because of new or enhanced reasoning capacities, such as long-term planning and language use. (See evolution of human intelligence and primate cognition.) If there are other possible improvements to reasoning that would have a similarly large impact, this makes it more likely that an agent can be built that outperforms humans in the same fashion humans outperform chimpanzees.[13]

The above advantages hold for artificial superintelligence, but it is not clear how many hold for biological superintelligence. Physiological constraints limit the speed and size of biological brains in many ways that are inapplicable to machine intelligence. As such, writers on superintelligence have devoted much more attention to superintelligent AI scenarios.[14]

Feasibility of biological superintelligence

Carl Sagan suggested that the advent of Caesarean sections and in vitro fertilization may permit humans to evolve larger heads, resulting in improvements via natural selection in the heritable component of human intelligence.[15] By contrast, Gerald Crabtree has argued that decreased selection pressure is resulting in a slow, centuries-long reduction in human intelligence, and that this decline is likely to continue into the future. There is no scientific consensus concerning either possibility, and in both cases the biological change would be slow, especially relative to rates of cultural change.

Selective breeding, nootropics, epigenetic modulation, and genetic engineering could improve human intelligence more rapidly. Bostrom writes that if we come to understand the genetic component of intelligence, pre-implantation genetic diagnosis could be used to select for embryos with as much as 4 points of IQ gain (if one embryo is selected out of two), or with larger gains (e.g., up to 24.3 IQ points gained if one embryo is selected out of 1000). If this process were iterated over many generations, the gains could compound to an order-of-magnitude improvement. Bostrom suggests that deriving new gametes from embryonic stem cells could be used to iterate the selection process rapidly.[16] This notion, Iterated Embryo Selection, has received wide treatment from other authors.[17] A well-organized society of high-intelligence humans of this sort could potentially achieve collective superintelligence.[18]
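
Bostrom's figures are consistent with elementary order statistics: keeping the best of n embryos yields an expected gain of about E[max of n standard normals] × σ, where σ is the spread of heritable IQ variation among sibling embryos. The Monte Carlo sketch below is illustrative only; the value of σ is an assumption chosen to reproduce Bostrom's numbers, not an empirical estimate.

```python
import random
import statistics

def expected_gain(n: int, sigma: float, trials: int = 2_000) -> float:
    """Average IQ gain from keeping the best of n embryos, assuming embryo
    IQ potentials are i.i.d. normal around the parental mean with SD sigma."""
    return statistics.fmean(
        max(random.gauss(0.0, sigma) for _ in range(n)) for _ in range(trials)
    )

SIGMA = 7.5  # assumed SD of heritable IQ variation among embryos (illustrative)
print(f"best of 2:    ~{expected_gain(2, SIGMA):.1f} IQ points")     # roughly 4
print(f"best of 1000: ~{expected_gain(1000, SIGMA):.1f} IQ points")  # roughly 24
```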

Alternatively, collective intelligence might be constructible by better organizing humans at present levels of individual intelligence. A number of writers have suggested that human civilization, or some aspect of it (e.g., the Internet, or the economy), is coming to function like a global brain with capacities far exceeding its component agents. If this systems-based superintelligence relies heavily on artificial components, however, it may qualify as an AI rather than as a biology-based superorganism.[19] A prediction market is sometimes considered as an example of a working collective intelligence system, consisting of humans only (assuming algorithms are not used to inform decisions).[20]
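
A crude stand-in for how such a system pools judgments is averaging participants' log-odds; the sketch below is illustrative only, since real prediction markets aggregate through prices rather than any explicit formula.

```python
import math

def pool(probs: list[float]) -> float:
    """Pool several forecasters' probabilities by averaging their log-odds."""
    mean_logodds = sum(math.log(p / (1 - p)) for p in probs) / len(probs)
    return 1 / (1 + math.exp(-mean_logodds))

print(round(pool([0.6, 0.7, 0.8]), 2))  # pooled estimate ~0.71
```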

A final method of intelligence amplification would be to directly enhance individual humans, as opposed to enhancing their social or reproductive dynamics. This could be achieved using nootropics, somatic gene therapy, or brain–computer interfaces. However, Bostrom expresses skepticism about the scalability of the first two approaches, and argues that designing a superintelligent cyborg interface is an AI-complete problem.[21]

Forecasts

Most surveyed AI researchers expect machines to eventually be able to rival humans in intelligence, though there is little consensus on when this will likely happen. At the 2006 AI@50 conference, 18% of attendees reported expecting machines to be able "to simulate learning and every other aspect of human intelligence" by 2056; 41% of attendees expected this to happen sometime after 2056; and 41% expected machines to never reach that milestone.[22]

In a survey of the 100 most cited authors in AI (as of May 2013, according to Microsoft academic search), the median year by which respondents expected machines "that can carry out most human professions at least as well as a typical human" (assuming no global catastrophe occurs) with 10% confidence is 2024 (mean 2034, st. dev. 33 years), with 50% confidence is 2050 (mean 2072, st. dev. 110 years), and with 90% confidence is 2070 (mean 2168, st. dev. 342 years). These estimates exclude the 1.2% of respondents who said no year would ever reach 10% confidence, the 4.1% who said 'never' for 50% confidence, and the 16.5% who said 'never' for 90% confidence. Respondents assigned a median 50% probability to the possibility that machine superintelligence will be invented within 30 years of the invention of approximately human-level machine intelligence.[23]
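
Note that the reported medians are computed only over respondents who named a year. A minimal sketch of that convention, using invented responses rather than the actual survey data:

```python
import statistics

# Hypothetical responses: a forecast year, or None meaning "never"
# (invented data for illustration, not the survey's actual responses).
responses = [2030, 2040, 2045, 2050, 2070, 2100, None, 2055, None]

named = [y for y in responses if y is not None]
print(f"'never' answers excluded: {responses.count(None)}")
print(f"median forecast year: {statistics.median(named)}")  # 2050
```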

In a 2022 survey, the median year by which respondents expected "High-level machine intelligence" with 50% confidence is 2061. The survey defined the achievement of high-level machine intelligence as when unaided machines can accomplish every task better and more cheaply than human workers.[24]

In 2023, OpenAI leaders published recommendations for the governance of superintelligence, which they believe may happen in less than 10 years.[25]

Design considerations

Bostrom expressed concern about what values a superintelligence should be designed to have. He compared several proposals:[26]

  • The coherent extrapolated volition (CEV) proposal is that it should have the values upon which humans would converge.
  • The moral rightness (MR) proposal is that it should value moral rightness.
  • The moral permissibility (MP) proposal is that it should value staying within the bounds of moral permissibility (and otherwise have CEV values).

Bostrom clarifies these terms:

instead of implementing humanity's coherent extrapolated volition, one could try to build an AI with the goal of doing what is morally right, relying on the AI's superior cognitive capacities to figure out just which actions fit that description. We can call this proposal "moral rightness" (MR) ... MR would also appear to have some disadvantages. It relies on the notion of "morally right," a notoriously difficult concept, one with which philosophers have grappled since antiquity without yet attaining consensus as to its analysis. Picking an erroneous explication of "moral rightness" could result in outcomes that would be morally very wrong ... The path to endowing an AI with any of these [moral] concepts might involve giving it general linguistic ability (comparable, at least, to that of a normal human adult). Such a general ability to understand natural language could then be used to understand what is meant by "morally right." If the AI could grasp the meaning, it could search for actions that fit ...[26]

One might try to preserve the basic idea of the MR model while reducing its demandingness by focusing on moral permissibility: the idea being that we could let the AI pursue humanity's CEV so long as it did not act in ways that are morally impermissible.[26]

Potential threat to humanity

Main articles: Existential risk from artificial general intelligence, AI alignment, and AI safety

It has been suggested that if AI systems rapidly become superintelligent, they may take unforeseen actions or out-compete humanity.[27] Researchers have argued that, by way of an "intelligence explosion," a self-improving AI could become so powerful as to be unstoppable by humans.[28]

Concerning human extinction scenarios, Bostrom (2002) identifies superintelligence as a possible cause:

When we create the first superintelligent entity, we might make a mistake and give it goals that lead it to annihilate humankind, assuming its enormous intellectual advantage gives it the power to do so. For example, we could mistakenly elevate a subgoal to the status of a supergoal. We tell it to solve a mathematical problem, and it complies by turning all the matter in the solar system into a giant calculating device, in the process killing the person who asked the question.

In theory, since a superintelligent AI would be able to bring about almost any possible outcome and to thwart any attempt to prevent the implementation of its goals, many uncontrolled, unintended consequences could arise. It could kill off all other agents, persuade them to change their behavior, or block their attempts at interference.[29] Eliezer Yudkowsky illustrates such instrumental convergence as follows: "The AI does not hate you, nor does it love you, but you are made out of atoms which it can use for something else."[30]

This presents the AI control problem: how to build an intelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators. The danger of not designing control right "the first time" is that a superintelligence may be able to seize power over its environment and prevent humans from shutting it down, in order to accomplish its goals.[31] Potential AI control strategies include "capability control" (limiting an AI's ability to influence the world) and "motivational control" (building an AI whose goals are aligned with human values).[32]

See also

  • AI safety
  • AI takeover
  • Artificial brain
  • Artificial intelligence arms race
  • Effective altruism
  • Ethics of artificial intelligence
  • Existential risk
  • Friendly artificial intelligence
  • Future of Humanity Institute
  • Intelligent agent
  • Machine ethics
  • Machine Intelligence Research Institute
  • Machine learning
  • Neural scaling law
  • Noosphere
  • Outline of artificial intelligence
  • Posthumanism
  • Self-replicating machine
  • Superintelligence: Paths, Dangers, Strategies

References

  1. ^ a b Bostrom 2014, Chapter 2.
  2. ^ Bostrom 2014, p. 22.
  3. ^ Pearce, David (2012), Eden, Amnon H.; Moor, James H.; Søraker, Johnny H.; Steinhart, Eric (eds.), "The Biointelligence Explosion: How Recursively Self-Improving Organic Robots will Modify their Own Source Code and Bootstrap Our Way to Full-Spectrum Superintelligence", Singularity Hypotheses, The Frontiers Collection, Berlin, Heidelberg: Springer Berlin Heidelberg, pp. 199–238, doi:10.1007/978-3-642-32560-1_11, ISBN 978-3-642-32559-5, retrieved 2022-01-16
  4. ^ Gouveia, Steven S., ed. (2020). "ch. 4, "Humans and Intelligent Machines: Co-evolution, Fusion or Replacement?", David Pearce". The Age of Artificial Intelligence: An Exploration. Vernon Press. ISBN 978-1-62273-872-4.
  5. ^ Legg 2008, pp. 135–137.
  6. ^ Chalmers 2010, p. 7.
  7. ^ Chalmers 2010, pp. 7–9.
  8. ^ Chalmers 2010, pp. 10–11.
  9. ^ Chalmers 2010, pp. 11–13.
  10. ^ "Clever cogs". The Economist. ISSN 0013-0613. Retrieved 2023-08-10.
  11. ^ Bostrom 2014, p. 59.
  12. ^ Yudkowsky, Eliezer (2013). Intelligence Explosion Microeconomics (PDF) (Technical report). Machine Intelligence Research Institute. p. 35. 2013-1.
  13. ^ Bostrom 2014, pp. 56–57.
  14. ^ Bostrom 2014, pp. 52, 59–61.
  15. ^ Sagan, Carl (1977). The Dragons of Eden. Random House.
  16. ^ Bostrom 2014, pp. 37–39.
  17. ^ Anomaly, Jonathan; Jones, Garett (2020). "Cognitive Enhancement and Network Effects: How Individual Prosperity Depends on Group Traits". Philosophia. 48 (5): 1753–1768. doi:10.1007/s11406-020-00189-3. S2CID 255167542.
  18. ^ Bostrom 2014, p. 39.
  19. ^ Bostrom 2014, pp. 48–49.
  20. ^ Watkins, Jennifer H. (2007), Prediction Markets as an Aggregation Mechanism for Collective Intelligence
  21. ^ Bostrom 2014, pp. 36–37, 42, 47.
  22. ^ Maker, Meg Houston (July 13, 2006). "AI@50 – First Poll". Archived from the original on 2014-05-13.
  23. ^ Müller & Bostrom 2016, pp. 3–4, 6, 9–12.
  24. ^ "AI timelines: What do experts in artificial intelligence expect for the future?". Our World in Data. Retrieved 2023-08-09.
  25. ^ "Governance of superintelligence". openai.com. Retrieved 2023-05-30.
  26. ^ a b c Bostrom 2014, pp. 209–221.
  27. ^ Joy, Bill (April 1, 2000). "Why the future doesn't need us". Wired. See also technological singularity. Nick Bostrom, 2002 Ethical Issues in Advanced Artificial Intelligence.
  28. ^ Muehlhauser, Luke, and Louie Helm. 2012. "Intelligence Explosion and Machine Ethics." In Singularity Hypotheses: A Scientific and Philosophical Assessment, edited by Amnon Eden, Johnny Søraker, James H. Moor, and Eric Steinhart. Berlin, Germany: Springer.
  29. ^ Bostrom, Nick. 2003. "Ethical Issues in Advanced Artificial Intelligence." In Cognitive, Emotive and Ethical Aspects of Decision Making in Humans and in Artificial Intelligence, edited by Iva Smit and George E. Lasker, pp. 12–17. Vol. 2. Windsor, Ontario, Canada: International Institute for Advanced Studies in Systems Research / Cybernetics.
  30. ^ Eliezer Yudkowsky (2008) in Artificial Intelligence as a Positive and Negative Factor in Global Risk.
  31. ^ Russell, Stuart (2016-05-17). "Should We Fear Supersmart Robots?". Scientific American. 314 (6): 58–59. Bibcode:2016SciAm.314f..58R. doi:10.1038/scientificamerican0616-58. ISSN 0036-8733. PMID 27196844.
  32. ^ Bostrom 2014, pp. 129–143.

Papers

  • Bostrom, Nick (2002), "Existential Risks", Journal of Evolution and Technology, 9, retrieved 2007-08-07.
  • Chalmers, David (2010). "The Singularity: A Philosophical Analysis" (PDF). Journal of Consciousness Studies. 17: 7–65.
  • Legg, Shane (2008). Machine Super Intelligence (PDF) (PhD). Department of Informatics, University of Lugano. Retrieved September 19, 2014.
  • Müller, Vincent C.; Bostrom, Nick (2016). "Future Progress in Artificial Intelligence: A Survey of Expert Opinion". In Müller, Vincent C. (ed.). Fundamental Issues of Artificial Intelligence. Springer. pp. 553–571.
  • Santos-Lang, Christopher (2014). "Our responsibility to manage evaluative diversity" (PDF). ACM SIGCAS Computers & Society. 44 (2): 16–19. doi:10.1145/2656870.2656874. S2CID 5649158.

Books

  • Hibbard, Bill (2002). Super-Intelligent Machines. Kluwer Academic/Plenum Publishers.
  • Bostrom, Nick (2014). Superintelligence: Paths, Dangers, Strategies. Oxford University Press.
  • Tegmark, Max (2018). Life 3.0: Being Human in the Age of Artificial Intelligence. London, England. ISBN 978-0-14-198180-2. OCLC 1018461467.
  • Russell, Stuart J. (2019). Human Compatible: Artificial Intelligence and the Problem of Control. New York. ISBN 978-0-525-55861-3. OCLC 1113410915.
  • Sanders, Nada R.; Wood, John D. (2020). The Humachine: Humankind, Machines, and the Future of Enterprise (First ed.). New York. ISBN 978-0-429-00117-8. OCLC 1119391268.

External links

  • Bill Gates Joins Stephen Hawking in Fears of a Coming Threat from "Superintelligence"
  • Will Superintelligent Machines Destroy Humanity?
  • Apple Co-founder Has Sense of Foreboding About Artificial Superintelligence
