fbpx
Wikipedia

AI takeover

An AI takeover is a hypothetical scenario in which an artificial intelligence (AI) becomes the dominant form of intelligence on Earth, as computer programs or robots effectively take the control of the planet away from the human species. Possible scenarios include replacement of the entire human workforce, takeover by a superintelligent AI, and the popular notion of a robot uprising. Some public figures, such as Stephen Hawking and Elon Musk, have advocated research into precautionary measures to ensure future superintelligent machines remain under human control.[1]

Robots revolt in R.U.R., a 1920 play

Types

Automation of the economy

The traditional consensus among economists has been that technological progress does not cause long-term unemployment. However, recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete, leaving people in various sectors without jobs to earn a living, leading to an economic crisis.[2][3][4][5] Many small and medium size businesses may also be driven out of business if they will not be able to afford or licence the latest robotic and AI technology, and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology.[6]

Technologies that may displace workers

AI technologies have been widely adopted in recent years, and this trend will only continue to gain popularity given the digital transformation efforts from companies across the world. While these technologies have replaced many traditional workers, they also create new opportunities. Industries that are most susceptible to experience AI takeover include transportation, retail, and military. AI military technologies, for example, allow soldiers to work remotely without any risk of injury. Author Dave Bond argues that as AI technologies continue to develop and expand, the relationship between humans and robots will change; they will become closely integrated in several aspects of life. Overall, it is safe to assume that AI will displace some workers while creating opportunities for new jobs in other sectors, especially in fields where tasks are repeatable.[7][8]

Computer-integrated manufacturing

Computer-integrated manufacturing is the manufacturing approach of using computers to control the entire production process. This integration allows individual processes to exchange information with each other and initiate actions. Although manufacturing can be faster and less error-prone by the integration of computers, the main advantage is the ability to create automated manufacturing processes. Computer-integrated manufacturing is used in automotive, aviation, space, and ship building industries.

White-collar machines

The 21st century has seen a variety of skilled tasks partially taken over by machines, including translation, legal research and even low level journalism. Care work, entertainment, and other tasks requiring empathy, previously thought safe from automation, have also begun to be performed by robots.[9][10][11][12]

Autonomous cars

An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input. Many such vehicles are being developed, but as of May 2017 automated cars permitted on public roads are not yet fully autonomous. They all require a human driver at the wheel who is ready at a moment's notice to take control of the vehicle. Among the main obstacles to widespread adoption of autonomous vehicles, are concerns about the resulting loss of driving-related jobs in the road transport industry. On March 18, 2018, the first human was killed by an autonomous vehicle in Tempe, Arizona by an Uber self-driving car.[13]

Eradication

Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible, stating "there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains".[14][15] Scholars like Nick Bostrom debate how far off superhuman intelligence is, and whether it would actually pose a risk to mankind. According to Bostrom, a superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals; taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine's plans. As an oversimplified example, a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world's resources to create as many paperclips as possible, and, additionally, prevent humans from shutting it down or using those resources on things other than paperclips.[16]

In fiction

AI takeover is a common theme in science fiction. Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans, as opposed to the researchers' concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals.[17] The idea is seen in Karel Čapek's R.U.R., which introduced the word robot to the global lexicon in 1921,[18] and can even be glimpsed in Mary Shelley's Frankenstein (published in 1818), as Victor ponders whether, if he grants his monster's request and makes him a wife, they would reproduce and their kind would destroy humanity.[19]

The word "robot" from R.U.R. comes from the Czech word, robota, meaning laborer or serf. The 1920 play was a protest against the rapid growth of technology, featuring manufactured "robots" with increasing capabilities who eventually revolt.[20] HAL 9000 (1968) and the original Terminator (1984) are two iconic examples of hostile AI in pop culture.[21]

Contributing factors

Advantages of superhuman intelligence over humans

Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence. If its self-reprogramming leads to its getting even better at being able to reprogram itself, the result could be a recursive intelligence explosion where it would rapidly leave human intelligence far behind. Bostrom defines a superintelligence as "any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest", and enumerates some advantages a superintelligence would have if it chose to compete against humans:[17][22]

  • Technology research: A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology.
  • Strategizing: A superintelligence might be able to simply outwit human opposition.
  • Social manipulation: A superintelligence might be able to recruit human support,[17] or covertly incite a war between humans.[23]
  • Economic productivity: As long as a copy of the AI could produce more economic wealth than the cost of its hardware, individual humans would have an incentive to voluntarily allow the Artificial General Intelligence (AGI) to run a copy of itself on their systems.
  • Hacking: A superintelligence could find new exploits in computers connected to the Internet, and spread copies of itself onto those systems, or might steal money to finance its plans.

Sources of AI advantage

According to Bostrom, a computer program that faithfully emulates a human brain, or that otherwise runs algorithms that are equally powerful as the human brain's algorithms, could still become a "speed superintelligence" if it can think many orders of magnitude faster than a human, due to being made of silicon rather than flesh, or due to optimization focusing on increasing the speed of the AGI. Biological neurons operate at about 200 Hz, whereas a modern microprocessor operates at a speed of about 2,000,000,000 Hz. Human axons carry action potentials at around 120 m/s, whereas computer signals travel near the speed of light.[17]

A network of human-level intelligences designed to network together and share complex thoughts and memories seamlessly, able to collectively work as a giant unified team without friction, or consisting of trillions of human-level intelligences, would become a "collective superintelligence".[17]

More broadly, any number of qualitative improvements to a human-level AGI could result in a "quality superintelligence", perhaps resulting in an AGI as far above us in intelligence as humans are above non-human apes. The number of neurons in a human brain is limited by cranial volume and metabolic constraints, while the number of processors in a supercomputer can be indefinitely expanded. An AGI need not be limited by human constraints on working memory, and might therefore be able to intuitively grasp more complex relationships than humans can. An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields, compared with humans who evolved no specialized mental modules to specifically deal with those domains. Unlike humans, an AGI can spawn copies of itself and tinker with its copies' source code to attempt to further improve its algorithms.[17]

Possibility of unfriendly AI preceding friendly AI

Is strong AI inherently dangerous?

A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI. While both require large advances in recursive optimisation process design, friendly AI also requires the ability to make goal structures invariant under self-improvement (or the AI could transform itself into something unfriendly) and a goal structure that aligns with human values and does not undergo instrumental convergence in ways that may automatically destroy the entire human race. An unfriendly AI, on the other hand, can optimize for an arbitrary goal structure, which does not need to be invariant under self-modification.[24]

The sheer complexity of human value systems makes it very difficult to make AI's motivations human-friendly.[17][25] Unless moral philosophy provides us with a flawless ethical theory, an AI's utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not "common sense". According to Eliezer Yudkowsky, there is little reason to suppose that an artificially designed mind would have such an adaptation.[26]

Odds of conflict

Many scholars, including evolutionary psychologist Steven Pinker, argue that a superintelligent machine is likely to coexist peacefully with humans.[27]

The fear of cybernetic revolt is often based on interpretations of humanity's history, which is rife with incidents of enslavement and genocide. Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being's goal system. However, such human competitiveness stems from the evolutionary background to our intelligence, where the survival and reproduction of genes in the face of human and non-human competitors was the central goal.[28] According to AI researcher Steve Omohundro, an arbitrary intelligence could have arbitrary goals: there is no particular reason that an artificially intelligent machine (not sharing humanity's evolutionary context) would be hostile—or friendly—unless its creator programs it to be such and it is not inclined or capable of modifying its programming. But the question remains: what would happen if AI systems could interact and evolve (evolution in this context means self-modification or selection and reproduction) and need to compete over resources—would that create goals of self-preservation? AI's goal of self-preservation could be in conflict with some goals of humans.[29]

Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix, arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it. Pinker acknowledges the possibility of deliberate "bad actors", but states that in the absence of bad actors, unanticipated accidents are not a significant threat; Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence.[27] In contrast, Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well-being (as in the film I, Robot and in the short story "The Evitable Conflict"). Omohundro suggests that present-day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions (say, playing chess at all costs), leading them to seek self-preservation and elimination of obstacles, including humans who might turn them off.[30]

Precautions

The AI control problem is the issue of how to build a superintelligent agent that will aid its creators, while avoiding inadvertently building a superintelligence that will harm its creators.[31] Some scholars argue that solutions to the control problem might also find applications in existing non-superintelligent AI.[32]

Major approaches to the control problem include alignment, which aims to align AI goal systems with human values, and capability control, which aims to reduce an AI system's capacity to harm humans or gain control. An example of "capability control" is to research whether a superintelligence AI could be successfully confined in an "AI box". According to Bostrom, such capability control proposals are not reliable or sufficient to solve the control problem in the long term, but may potentially act as valuable supplements to alignment efforts.[17]

Warnings

Physicist Stephen Hawking, Microsoft founder Bill Gates, and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it, with Hawking theorizing that this could "spell the end of the human race".[33] Stephen Hawking said in 2014 that "Success in creating AI would be the biggest event in human history. Unfortunately, it might also be the last, unless we learn how to avoid the risks." Hawking believed that in the coming decades, AI could offer "incalculable benefits and risks" such as "technology outsmarting financial markets, out-inventing human researchers, out-manipulating human leaders, and developing weapons we cannot even understand." In January 2015, Nick Bostrom joined Stephen Hawking, Max Tegmark, Elon Musk, Lord Martin Rees, Jaan Tallinn, and numerous AI researchers in signing the Future of Life Institute's open letter speaking to the potential risks and benefits associated with artificial intelligence. The signatories "believe that research on how to make AI systems robust and beneficial is both important and timely, and that there are concrete research directions that can be pursued today."[34][35]

Prevention through AI alignment

In the field of artificial intelligence (AI), AI alignment research aims to steer AI systems towards their designers’ intended goals and interests.[a] An aligned AI system advances the intended objective; a misaligned AI system is competent at advancing some objective, but not the intended one.[b]

See also

Notes

  1. ^ Other definitions of AI alignment require that the AI system advances more general goals such as human values, other ethical principles, or the intentions its designers would have if they were more informed and enlightened.[36]
  2. ^ See the textbook: Russel & Norvig, Artificial Intelligence: A Modern Approach.[37] The distinction between misaligned AI and incompetent AI has been formalized in certain contexts.[38]

References

  1. ^ Lewis, Tanya (2015-01-12). "Don't Let Artificial Intelligence Take Over, Top Scientists Warn". LiveScience. Purch. Retrieved October 20, 2015. Stephen Hawking, Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence (AI).
  2. ^ Lee, Kai-Fu (2017-06-24). "The Real Threat of Artificial Intelligence". The New York Times. Retrieved 2017-08-15. These tools can outperform human beings at a given task. This kind of A.I. is spreading to thousands of domains, and as it does, it will eliminate many jobs.
  3. ^ Larson, Nina (2017-06-08). "AI 'good for the world'... says ultra-lifelike robot". Phys.org. Retrieved 2017-08-15. Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies.
  4. ^ Santini, Jean-Louis (2016-02-14). "Intelligent robots threaten millions of jobs". Phys.org. Retrieved 2017-08-15. "We are approaching a time when machines will be able to outperform humans at almost any task," said Moshe Vardi, director of the Institute for Information Technology at Rice University in Texas.
  5. ^ Williams-Grut, Oscar (2016-02-15). "Robots will steal your job: How AI could increase unemployment and inequality". Businessinsider.com. Business Insider. Retrieved 2017-08-15. Top computer scientists in the US warned that the rise of artificial intelligence (AI) and robots in the workplace could cause mass unemployment and dislocated economies, rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports.
  6. ^ . LeanStaff. 2017-10-17. Archived from the original on 2017-10-18. Retrieved 2017-10-17.
  7. ^ Frank, Morgan (2019-03-25). "Toward understanding the impact of artificial intelligence on labor". Proceedings of the National Academy of Sciences of the United States of America. 116 (14): 6531–6539. doi:10.1073/pnas.1900949116. PMC 6452673. PMID 30910965.
  8. ^ Bond, Dave (2017). Artificial Intelligence. pp. 67–69.
  9. ^ Skidelsky, Robert (2013-02-19). "Rise of the robots: what will the future of work look like?". The Guardian. London. Retrieved 14 July 2015.
  10. ^ Bria, Francesca (February 2016). "The robot economy may already have arrived". openDemocracy. Retrieved 20 May 2016.
  11. ^ Srnicek, Nick (March 2016). . novara wire. Archived from the original on 25 June 2016. Retrieved 20 May 2016.
  12. ^ Brynjolfsson, Erik; McAfee, Andrew (2014). "passim, see esp Chpt. 9". The Second Machine Age: Work, Progress, and Prosperity in a Time of Brilliant Technologies. W. W. Norton & Company. ISBN 978-0393239355.
  13. ^ Wakabayashi, Daisuke (March 19, 2018). "Self-Driving Uber Car Kills Pedestrian in Arizona, Where Robots Roam". New York Times.
  14. ^ Hawking, Stephen; Stuart Russell; Max Tegmark; Frank Wilczek (1 May 2014). "Stephen Hawking: 'Transcendence looks at the implications of artificial intelligence - but are we taking AI seriously enough?'". The Independent. from the original on 2015-10-02. Retrieved 1 April 2016.
  15. ^ Müller, Vincent C.; Bostrom, Nick (2016). "Future Progress in Artificial Intelligence: A Survey of Expert Opinion" (PDF). Fundamental Issues of Artificial Intelligence. Springer. pp. 555–572. doi:10.1007/978-3-319-26485-1_33. ISBN 978-3-319-26483-7. AI systems will... reach overall human ability... very likely (with 90% probability) by 2075. From reaching human ability, it will move on to superintelligence within 30 years (75%)... So, (most of the AI experts responding to the surveys) think that superintelligence is likely to come in a few decades...
  16. ^ Bostrom, Nick (2012). "The Superintelligent Will: Motivation and Instrumental Rationality in Advanced Artificial Agents" (PDF). Minds and Machines. Springer. 22 (2): 71–85. doi:10.1007/s11023-012-9281-3.
  17. ^ a b c d e f g h Bostrom, Nick. Superintelligence: Paths, Dangers, Strategies.
  18. ^ "The Origin Of The Word 'Robot'". Science Friday (public radio). 22 April 2011. Retrieved 30 April 2020.
  19. ^ Botkin-Kowacki, Eva (28 October 2016). "A female Frankenstein would lead to humanity's extinction, say scientists". Christian Science Monitor. Retrieved 30 April 2020.
  20. ^ Hockstein, N. G.; Gourin, C. G.; Faust, R. A.; Terris, D. J. (17 March 2007). "A history of robots: from science fiction to surgical robotics". Journal of Robotic Surgery. 1 (2): 113–118. doi:10.1007/s11701-007-0021-2. PMC 4247417. PMID 25484946.
  21. ^ Hellmann, Melissa (21 September 2019). "AI 101: What is artificial intelligence and where is it going?". The Seattle Times. Retrieved 30 April 2020.
  22. ^ Babcock, James; Krámar, János; Yampolskiy, Roman V. (2019). "Guidelines for Artificial Intelligence Containment". Next-Generation Ethics. pp. 90–112. arXiv:1707.08476. doi:10.1017/9781108616188.008. ISBN 9781108616188. S2CID 22007028.
  23. ^ Baraniuk, Chris (23 May 2016). "Checklist of worst-case scenarios could help prepare for evil AI". New Scientist. Retrieved 21 September 2016.
  24. ^ Yudkowsky, Eliezer S. (May 2004). . Singularity Institute for Artificial Intelligence. Archived from the original on 2012-06-15.
  25. ^ Muehlhauser, Luke; Helm, Louie (2012). "Intelligence Explosion and Machine Ethics" (PDF). Singularity Hypotheses: A Scientific and Philosophical Assessment. Springer.
  26. ^ Yudkowsky, Eliezer (2011). "Complex Value Systems in Friendly AI". Artificial General Intelligence. Lecture Notes in Computer Science. Vol. 6830. pp. 388–393. doi:10.1007/978-3-642-22887-2_48. ISBN 978-3-642-22886-5. ISSN 0302-9743.
  27. ^ a b Pinker, Steven (13 February 2018). "We're told to fear robots. But why do we think they'll turn on us?". Popular Science. Retrieved 8 June 2020.
  28. ^ Creating a New Intelligent Species: Choices and Responsibilities for Artificial Intelligence Designers February 6, 2007, at the Wayback Machine - Singularity Institute for Artificial Intelligence, 2005
  29. ^ Omohundro, Stephen M. (June 2008). The basic AI drives (PDF). Artificial General Intelligence 2008. pp. 483–492.
  30. ^ Tucker, Patrick (17 Apr 2014). "Why There Will Be A Robot Uprising". Defense One. Retrieved 15 July 2014.
  31. ^ Russell, Stuart J. (8 October 2019). Human compatible : artificial intelligence and the problem of control. ISBN 978-0-525-55862-0. OCLC 1237420037.
  32. ^ "Google developing kill switch for AI". BBC News. 8 June 2016. Retrieved 7 June 2020.
  33. ^ Rawlinson, Kevin (29 January 2015). "Microsoft's Bill Gates insists AI is a threat". BBC News. Retrieved 30 January 2015.
  34. ^ "The Future of Life Institute Open Letter". The Future of Life Institute. 28 October 2015. Retrieved 29 March 2019.
  35. ^ Bradshaw, Tim (11 January 2015). "Scientists and investors warn on AI". The Financial Times. Retrieved 4 March 2015.
  36. ^ Gabriel, Iason (2020-09-01). "Artificial Intelligence, Values, and Alignment". Minds and Machines. 30 (3): 411–437. doi:10.1007/s11023-020-09539-2. ISSN 1572-8641. S2CID 210920551. Retrieved 2022-07-23.
  37. ^ Russell, Stuart J.; Norvig, Peter (2020). Artificial intelligence: A modern approach (4th ed.). Pearson. pp. 31–34. ISBN 978-1-292-40113-3. OCLC 1303900751.
  38. ^ Langosco, Lauro Langosco Di; Koch, Jack; Sharkey, Lee D; Pfau, Jacob; Krueger, David (2022-07-17). "Goal misgeneralization in deep reinforcement learning". International Conference on Machine Learning. Vol. 162. PMLR. pp. 12004–12019.

External links

  • Automation, not domination: How robots will take over our world (a positive outlook of robot and AI integration into society)
  • Machine Intelligence Research Institute: official MIRI (formerly Singularity Institute for Artificial Intelligence) website
  • Lifeboat Foundation AIShield (To protect against unfriendly AI)
  • Ted talk: Can we build AI without losing control over it?

takeover, hypothetical, scenario, which, artificial, intelligence, becomes, dominant, form, intelligence, earth, computer, programs, robots, effectively, take, control, planet, away, from, human, species, possible, scenarios, include, replacement, entire, huma. An AI takeover is a hypothetical scenario in which an artificial intelligence AI becomes the dominant form of intelligence on Earth as computer programs or robots effectively take the control of the planet away from the human species Possible scenarios include replacement of the entire human workforce takeover by a superintelligent AI and the popular notion of a robot uprising Some public figures such as Stephen Hawking and Elon Musk have advocated research into precautionary measures to ensure future superintelligent machines remain under human control 1 Robots revolt in R U R a 1920 play Contents 1 Types 1 1 Automation of the economy 1 1 1 Technologies that may displace workers 1 1 2 Computer integrated manufacturing 1 1 3 White collar machines 1 1 4 Autonomous cars 1 2 Eradication 2 In fiction 3 Contributing factors 3 1 Advantages of superhuman intelligence over humans 3 1 1 Sources of AI advantage 3 2 Possibility of unfriendly AI preceding friendly AI 3 2 1 Is strong AI inherently dangerous 3 2 2 Odds of conflict 3 2 3 Precautions 4 Warnings 5 Prevention through AI alignment 6 See also 7 Notes 8 References 9 External linksTypes EditAutomation of the economy Edit Main article Technological unemployment The traditional consensus among economists has been that technological progress does not cause long term unemployment However recent innovation in the fields of robotics and artificial intelligence has raised worries that human labor will become obsolete leaving people in various sectors without jobs to earn a living leading to an economic crisis 2 3 4 5 Many small and medium size businesses may also be driven out of business if they will not be able to afford or licence the latest robotic and AI technology and may need to focus on areas or services that cannot easily be replaced for continued viability in the face of such technology 6 Technologies that may displace workers Edit AI technologies have been widely adopted in recent years and this trend will only continue to gain popularity given the digital transformation efforts from companies across the world While these technologies have replaced many traditional workers they also create new opportunities Industries that are most susceptible to experience AI takeover include transportation retail and military AI military technologies for example allow soldiers to work remotely without any risk of injury Author Dave Bond argues that as AI technologies continue to develop and expand the relationship between humans and robots will change they will become closely integrated in several aspects of life Overall it is safe to assume that AI will displace some workers while creating opportunities for new jobs in other sectors especially in fields where tasks are repeatable 7 8 Computer integrated manufacturing Edit See also Artificial intelligence in industry Computer integrated manufacturing is the manufacturing approach of using computers to control the entire production process This integration allows individual processes to exchange information with each other and initiate actions Although manufacturing can be faster and less error prone by the integration of computers the main advantage is the ability to create automated manufacturing processes Computer integrated manufacturing is used in automotive aviation space and ship building industries White collar machines Edit See also White collar worker The 21st century has seen a variety of skilled tasks partially taken over by machines including translation legal research and even low level journalism Care work entertainment and other tasks requiring empathy previously thought safe from automation have also begun to be performed by robots 9 10 11 12 Autonomous cars Edit An autonomous car is a vehicle that is capable of sensing its environment and navigating without human input Many such vehicles are being developed but as of May 2017 automated cars permitted on public roads are not yet fully autonomous They all require a human driver at the wheel who is ready at a moment s notice to take control of the vehicle Among the main obstacles to widespread adoption of autonomous vehicles are concerns about the resulting loss of driving related jobs in the road transport industry On March 18 2018 the first human was killed by an autonomous vehicle in Tempe Arizona by an Uber self driving car 13 Eradication Edit Main article Existential risk from artificial general intelligence Scientists such as Stephen Hawking are confident that superhuman artificial intelligence is physically possible stating there is no physical law precluding particles from being organised in ways that perform even more advanced computations than the arrangements of particles in human brains 14 15 Scholars like Nick Bostrom debate how far off superhuman intelligence is and whether it would actually pose a risk to mankind According to Bostrom a superintelligent machine would not necessarily be motivated by the same emotional desire to collect power that often drives human beings but might rather treat power as a means toward attaining its ultimate goals taking over the world would both increase its access to resources and help to prevent other agents from stopping the machine s plans As an oversimplified example a paperclip maximizer designed solely to create as many paperclips as possible would want to take over the world so that it can use all of the world s resources to create as many paperclips as possible and additionally prevent humans from shutting it down or using those resources on things other than paperclips 16 In fiction EditMain article AI takeovers in popular culture See also Artificial intelligence in fiction and Self replicating machines in fiction AI takeover is a common theme in science fiction Fictional scenarios typically differ vastly from those hypothesized by researchers in that they involve an active conflict between humans and an AI or robots with anthropomorphic motives who see them as a threat or otherwise have active desire to fight humans as opposed to the researchers concern of an AI that rapidly exterminates humans as a byproduct of pursuing arbitrary goals 17 The idea is seen in Karel Capek s R U R which introduced the word robot to the global lexicon in 1921 18 and can even be glimpsed in Mary Shelley s Frankenstein published in 1818 as Victor ponders whether if he grants his monster s request and makes him a wife they would reproduce and their kind would destroy humanity 19 The word robot from R U R comes from the Czech word robota meaning laborer or serf The 1920 play was a protest against the rapid growth of technology featuring manufactured robots with increasing capabilities who eventually revolt 20 HAL 9000 1968 and the original Terminator 1984 are two iconic examples of hostile AI in pop culture 21 Contributing factors EditAdvantages of superhuman intelligence over humans Edit Nick Bostrom and others have expressed concern that an AI with the abilities of a competent artificial intelligence researcher would be able to modify its own source code and increase its own intelligence If its self reprogramming leads to its getting even better at being able to reprogram itself the result could be a recursive intelligence explosion where it would rapidly leave human intelligence far behind Bostrom defines a superintelligence as any intellect that greatly exceeds the cognitive performance of humans in virtually all domains of interest and enumerates some advantages a superintelligence would have if it chose to compete against humans 17 22 Technology research A machine with superhuman scientific research abilities would be able to beat the human research community to milestones such as nanotechnology or advanced biotechnology Strategizing A superintelligence might be able to simply outwit human opposition Social manipulation A superintelligence might be able to recruit human support 17 or covertly incite a war between humans 23 Economic productivity As long as a copy of the AI could produce more economic wealth than the cost of its hardware individual humans would have an incentive to voluntarily allow the Artificial General Intelligence AGI to run a copy of itself on their systems Hacking A superintelligence could find new exploits in computers connected to the Internet and spread copies of itself onto those systems or might steal money to finance its plans Sources of AI advantage Edit According to Bostrom a computer program that faithfully emulates a human brain or that otherwise runs algorithms that are equally powerful as the human brain s algorithms could still become a speed superintelligence if it can think many orders of magnitude faster than a human due to being made of silicon rather than flesh or due to optimization focusing on increasing the speed of the AGI Biological neurons operate at about 200 Hz whereas a modern microprocessor operates at a speed of about 2 000 000 000 Hz Human axons carry action potentials at around 120 m s whereas computer signals travel near the speed of light 17 A network of human level intelligences designed to network together and share complex thoughts and memories seamlessly able to collectively work as a giant unified team without friction or consisting of trillions of human level intelligences would become a collective superintelligence 17 More broadly any number of qualitative improvements to a human level AGI could result in a quality superintelligence perhaps resulting in an AGI as far above us in intelligence as humans are above non human apes The number of neurons in a human brain is limited by cranial volume and metabolic constraints while the number of processors in a supercomputer can be indefinitely expanded An AGI need not be limited by human constraints on working memory and might therefore be able to intuitively grasp more complex relationships than humans can An AGI with specialized cognitive support for engineering or computer programming would have an advantage in these fields compared with humans who evolved no specialized mental modules to specifically deal with those domains Unlike humans an AGI can spawn copies of itself and tinker with its copies source code to attempt to further improve its algorithms 17 Possibility of unfriendly AI preceding friendly AI Edit Is strong AI inherently dangerous Edit Main article AI control problem A significant problem is that unfriendly artificial intelligence is likely to be much easier to create than friendly AI While both require large advances in recursive optimisation process design friendly AI also requires the ability to make goal structures invariant under self improvement or the AI could transform itself into something unfriendly and a goal structure that aligns with human values and does not undergo instrumental convergence in ways that may automatically destroy the entire human race An unfriendly AI on the other hand can optimize for an arbitrary goal structure which does not need to be invariant under self modification 24 The sheer complexity of human value systems makes it very difficult to make AI s motivations human friendly 17 25 Unless moral philosophy provides us with a flawless ethical theory an AI s utility function could allow for many potentially harmful scenarios that conform with a given ethical framework but not common sense According to Eliezer Yudkowsky there is little reason to suppose that an artificially designed mind would have such an adaptation 26 Odds of conflict Edit Many scholars including evolutionary psychologist Steven Pinker argue that a superintelligent machine is likely to coexist peacefully with humans 27 The fear of cybernetic revolt is often based on interpretations of humanity s history which is rife with incidents of enslavement and genocide Such fears stem from a belief that competitiveness and aggression are necessary in any intelligent being s goal system However such human competitiveness stems from the evolutionary background to our intelligence where the survival and reproduction of genes in the face of human and non human competitors was the central goal 28 According to AI researcher Steve Omohundro an arbitrary intelligence could have arbitrary goals there is no particular reason that an artificially intelligent machine not sharing humanity s evolutionary context would be hostile or friendly unless its creator programs it to be such and it is not inclined or capable of modifying its programming But the question remains what would happen if AI systems could interact and evolve evolution in this context means self modification or selection and reproduction and need to compete over resources would that create goals of self preservation AI s goal of self preservation could be in conflict with some goals of humans 29 Many scholars dispute the likelihood of unanticipated cybernetic revolt as depicted in science fiction such as The Matrix arguing that it is more likely that any artificial intelligence powerful enough to threaten humanity would probably be programmed not to attack it Pinker acknowledges the possibility of deliberate bad actors but states that in the absence of bad actors unanticipated accidents are not a significant threat Pinker argues that a culture of engineering safety will prevent AI researchers from accidentally unleashing malign superintelligence 27 In contrast Yudkowsky argues that humanity is less likely to be threatened by deliberately aggressive AIs than by AIs which were programmed such that their goals are unintentionally incompatible with human survival or well being as in the film I Robot and in the short story The Evitable Conflict Omohundro suggests that present day automation systems are not designed for safety and that AIs may blindly optimize narrow utility functions say playing chess at all costs leading them to seek self preservation and elimination of obstacles including humans who might turn them off 30 Precautions Edit Main article AI control problem The AI control problem is the issue of how to build a superintelligent agent that will aid its creators while avoiding inadvertently building a superintelligence that will harm its creators 31 Some scholars argue that solutions to the control problem might also find applications in existing non superintelligent AI 32 Major approaches to the control problem include alignment which aims to align AI goal systems with human values and capability control which aims to reduce an AI system s capacity to harm humans or gain control An example of capability control is to research whether a superintelligence AI could be successfully confined in an AI box According to Bostrom such capability control proposals are not reliable or sufficient to solve the control problem in the long term but may potentially act as valuable supplements to alignment efforts 17 Warnings EditPhysicist Stephen Hawking Microsoft founder Bill Gates and SpaceX founder Elon Musk have expressed concerns about the possibility that AI could develop to the point that humans could not control it with Hawking theorizing that this could spell the end of the human race 33 Stephen Hawking said in 2014 that Success in creating AI would be the biggest event in human history Unfortunately it might also be the last unless we learn how to avoid the risks Hawking believed that in the coming decades AI could offer incalculable benefits and risks such as technology outsmarting financial markets out inventing human researchers out manipulating human leaders and developing weapons we cannot even understand In January 2015 Nick Bostrom joined Stephen Hawking Max Tegmark Elon Musk Lord Martin Rees Jaan Tallinn and numerous AI researchers in signing the Future of Life Institute s open letter speaking to the potential risks and benefits associated with artificial intelligence The signatories believe that research on how to make AI systems robust and beneficial is both important and timely and that there are concrete research directions that can be pursued today 34 35 Prevention through AI alignment EditThis paragraph is an excerpt from AI alignment edit In the field of artificial intelligence AI AI alignment research aims to steer AI systems towards their designers intended goals and interests a An aligned AI system advances the intended objective a misaligned AI system is competent at advancing some objective but not the intended one b See also EditArtificial philosophy Artificial intelligence arms race Autonomous robot Industrial robot Mobile robot Self replicating machine Cyberocracy Effective altruism Existential risk from artificial general intelligence Future of Humanity Institute Global catastrophic risk existential risk Government by algorithm Human extinction Machine ethics Machine learning Deep learning Outline of transhumanism Self replication Technophobia Technological singularity Intelligence explosion Superintelligence Superintelligence Paths Dangers StrategiesNotes Edit Other definitions of AI alignment require that the AI system advances more general goals such as human values other ethical principles or the intentions its designers would have if they were more informed and enlightened 36 See the textbook Russel amp Norvig Artificial Intelligence A Modern Approach 37 The distinction between misaligned AI and incompetent AI has been formalized in certain contexts 38 References Edit Lewis Tanya 2015 01 12 Don t Let Artificial Intelligence Take Over Top Scientists Warn LiveScience Purch Retrieved October 20 2015 Stephen Hawking Elon Musk and dozens of other top scientists and technology leaders have signed a letter warning of the potential dangers of developing artificial intelligence AI Lee Kai Fu 2017 06 24 The Real Threat of Artificial Intelligence The New York Times Retrieved 2017 08 15 These tools can outperform human beings at a given task This kind of A I is spreading to thousands of domains and as it does it will eliminate many jobs Larson Nina 2017 06 08 AI good for the world says ultra lifelike robot Phys org Retrieved 2017 08 15 Among the feared consequences of the rise of the robots is the growing impact they will have on human jobs and economies Santini Jean Louis 2016 02 14 Intelligent robots threaten millions of jobs Phys org Retrieved 2017 08 15 We are approaching a time when machines will be able to outperform humans at almost any task said Moshe Vardi director of the Institute for Information Technology at Rice University in Texas Williams Grut Oscar 2016 02 15 Robots will steal your job How AI could increase unemployment and inequality Businessinsider com Business Insider Retrieved 2017 08 15 Top computer scientists in the US warned that the rise of artificial intelligence AI and robots in the workplace could cause mass unemployment and dislocated economies rather than simply unlocking productivity gains and freeing us all up to watch TV and play sports How can SMEs prepare for the rise of the robots LeanStaff 2017 10 17 Archived from the original on 2017 10 18 Retrieved 2017 10 17 Frank Morgan 2019 03 25 Toward understanding the impact of artificial intelligence on labor Proceedings of the National Academy of Sciences of the United States of America 116 14 6531 6539 doi 10 1073 pnas 1900949116 PMC 6452673 PMID 30910965 Bond Dave 2017 Artificial Intelligence pp 67 69 Skidelsky Robert 2013 02 19 Rise of the robots what will the future of work look like The Guardian London Retrieved 14 July 2015 Bria Francesca February 2016 The robot economy may already have arrived openDemocracy Retrieved 20 May 2016 Srnicek Nick March 2016 4 Reasons Why Technological Unemployment Might Really Be Different This Time novara wire Archived from the original on 25 June 2016 Retrieved 20 May 2016 Brynjolfsson Erik McAfee Andrew 2014 passim see esp Chpt 9 The Second Machine Age Work Progress and Prosperity in a Time of Brilliant Technologies W W Norton amp Company ISBN 978 0393239355 Wakabayashi Daisuke March 19 2018 Self Driving Uber Car Kills Pedestrian in Arizona Where Robots Roam New York Times Hawking Stephen Stuart Russell Max Tegmark Frank Wilczek 1 May 2014 Stephen Hawking Transcendence looks at the implications of artificial intelligence but are we taking AI seriously enough The Independent Archived from the original on 2015 10 02 Retrieved 1 April 2016 Muller Vincent C Bostrom Nick 2016 Future Progress in Artificial Intelligence A Survey of Expert Opinion PDF Fundamental Issues of Artificial Intelligence Springer pp 555 572 doi 10 1007 978 3 319 26485 1 33 ISBN 978 3 319 26483 7 AI systems will reach overall human ability very likely with 90 probability by 2075 From reaching human ability it will move on to superintelligence within 30 years 75 So most of the AI experts responding to the surveys think that superintelligence is likely to come in a few decades Bostrom Nick 2012 The Superintelligent Will Motivation and Instrumental Rationality in Advanced Artificial Agents PDF Minds and Machines Springer 22 2 71 85 doi 10 1007 s11023 012 9281 3 a b c d e f g h Bostrom Nick Superintelligence Paths Dangers Strategies The Origin Of The Word Robot Science Friday public radio 22 April 2011 Retrieved 30 April 2020 Botkin Kowacki Eva 28 October 2016 A female Frankenstein would lead to humanity s extinction say scientists Christian Science Monitor Retrieved 30 April 2020 Hockstein N G Gourin C G Faust R A Terris D J 17 March 2007 A history of robots from science fiction to surgical robotics Journal of Robotic Surgery 1 2 113 118 doi 10 1007 s11701 007 0021 2 PMC 4247417 PMID 25484946 Hellmann Melissa 21 September 2019 AI 101 What is artificial intelligence and where is it going The Seattle Times Retrieved 30 April 2020 Babcock James Kramar Janos Yampolskiy Roman V 2019 Guidelines for Artificial Intelligence Containment Next Generation Ethics pp 90 112 arXiv 1707 08476 doi 10 1017 9781108616188 008 ISBN 9781108616188 S2CID 22007028 Baraniuk Chris 23 May 2016 Checklist of worst case scenarios could help prepare for evil AI New Scientist Retrieved 21 September 2016 Yudkowsky Eliezer S May 2004 Coherent Extrapolated Volition Singularity Institute for Artificial Intelligence Archived from the original on 2012 06 15 Muehlhauser Luke Helm Louie 2012 Intelligence Explosion and Machine Ethics PDF Singularity Hypotheses A Scientific and Philosophical Assessment Springer Yudkowsky Eliezer 2011 Complex Value Systems in Friendly AI Artificial General Intelligence Lecture Notes in Computer Science Vol 6830 pp 388 393 doi 10 1007 978 3 642 22887 2 48 ISBN 978 3 642 22886 5 ISSN 0302 9743 a b Pinker Steven 13 February 2018 We re told to fear robots But why do we think they ll turn on us Popular Science Retrieved 8 June 2020 Creating a New Intelligent Species Choices and Responsibilities for Artificial Intelligence Designers Archived February 6 2007 at the Wayback Machine Singularity Institute for Artificial Intelligence 2005 Omohundro Stephen M June 2008 The basic AI drives PDF Artificial General Intelligence 2008 pp 483 492 Tucker Patrick 17 Apr 2014 Why There Will Be A Robot Uprising Defense One Retrieved 15 July 2014 Russell Stuart J 8 October 2019 Human compatible artificial intelligence and the problem of control ISBN 978 0 525 55862 0 OCLC 1237420037 Google developing kill switch for AI BBC News 8 June 2016 Retrieved 7 June 2020 Rawlinson Kevin 29 January 2015 Microsoft s Bill Gates insists AI is a threat BBC News Retrieved 30 January 2015 The Future of Life Institute Open Letter The Future of Life Institute 28 October 2015 Retrieved 29 March 2019 Bradshaw Tim 11 January 2015 Scientists and investors warn on AI The Financial Times Retrieved 4 March 2015 Gabriel Iason 2020 09 01 Artificial Intelligence Values and Alignment Minds and Machines 30 3 411 437 doi 10 1007 s11023 020 09539 2 ISSN 1572 8641 S2CID 210920551 Retrieved 2022 07 23 Russell Stuart J Norvig Peter 2020 Artificial intelligence A modern approach 4th ed Pearson pp 31 34 ISBN 978 1 292 40113 3 OCLC 1303900751 Langosco Lauro Langosco Di Koch Jack Sharkey Lee D Pfau Jacob Krueger David 2022 07 17 Goal misgeneralization in deep reinforcement learning International Conference on Machine Learning Vol 162 PMLR pp 12004 12019 External links EditAutomation not domination How robots will take over our world a positive outlook of robot and AI integration into society Machine Intelligence Research Institute official MIRI formerly Singularity Institute for Artificial Intelligence website Lifeboat Foundation AIShield To protect against unfriendly AI Ted talk Can we build AI without losing control over it Retrieved from https en wikipedia org w index php title AI takeover amp oldid 1134653922, wikipedia, wiki, book, books, library,

article

, read, download, free, free download, mp3, video, mp4, 3gp, jpg, jpeg, gif, png, picture, music, song, movie, book, game, games.