History of artificial intelligence

The history of artificial intelligence (AI) began in antiquity, with myths, stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen. The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols. This work culminated in the invention of the programmable digital computer in the 1940s, a machine based on the abstract essence of mathematical reasoning. This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain.

Alan Turing was the first person to carry out substantial research in the field that he called Machine Intelligence.[1] The field of AI research was founded at a workshop held on the campus of Dartmouth College, USA during the summer of 1956.[2] Those who attended would become the leaders of AI research for decades. Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation, and they were given millions of dollars to make this vision come true.[3]

Eventually, it became obvious that researchers had grossly underestimated the difficulty of the project.[4] In 1974, in response to criticism from James Lighthill and ongoing pressure from the US Congress, the U.S. and British governments stopped funding undirected research into artificial intelligence, and the difficult years that followed would later be known as an "AI winter". Seven years later, a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars, but by the late 1980s the investors became disillusioned and withdrew funding again.

Investment and interest in AI boomed in the first decades of the 21st century, when machine learning was successfully applied to many problems in academia and industry due to new methods, the application of powerful computer hardware, and the collection of immense data sets.

Precursors

Mythical, fictional, and speculative precursors

Myth and legend

In Greek mythology, Talos was a giant constructed of bronze who acted as guardian for the island of Crete. He would throw boulders at the ships of invaders and would complete 3 circuits around the island's perimeter daily.[5] According to pseudo-Apollodorus' Bibliotheke, Hephaestus forged Talos with the aid of a cyclops and presented the automaton as a gift to Minos.[6] In the Argonautica, Jason and the Argonauts defeated him by way of a single plug near his foot which, once removed, allowed the vital ichor to flow out from his body and left him inanimate.[7]

Pygmalion was a legendary king and sculptor of Greek mythology, famously represented in Ovid's Metamorphoses. In the 10th book of Ovid's narrative poem, Pygmalion becomes disgusted with women when he witnesses the way in which the Propoetides prostitute themselves.[8] Despite this, he makes offerings at the temple of Venus asking the goddess to bring to him a woman just like a statue he carved.

Medieval legends of artificial beings

 
[Image: Depiction of a homunculus from Goethe's Faust]

In Of the Nature of Things, the Swiss alchemist Paracelsus describes a procedure that he claims can fabricate an "artificial man". By placing the "sperm of a man" in horse dung and feeding it the "Arcanum of Mans blood", after 40 days the concoction will become a living infant.[9]

The earliest written account regarding golem-making is found in the writings of Eleazar ben Judah of Worms in the early 13th century.[10][11] During the Middle Ages, it was believed that the animation of a Golem could be achieved by insertion of a piece of paper with any of God’s names on it, into the mouth of the clay figure.[12] Unlike legendary automata like Brazen Heads,[13] a Golem was unable to speak.[14]

Takwin, the artificial creation of life, was a frequent topic of Ismaili alchemical manuscripts, especially those attributed to Jabir ibn Hayyan. Islamic alchemists attempted to create a broad range of life through their work, ranging from plants to animals.[15]

In Faust: The Second Part of the Tragedy by Johann Wolfgang von Goethe, an alchemically fabricated homunculus, destined to live forever in the flask in which he was made, endeavors to be born into a full human body. Upon the initiation of this transformation, however, the flask shatters and the homunculus dies.[16]

Modern fiction

In the 19th and early 20th centuries, ideas about artificial men and thinking machines were developed in fiction, as in Mary Shelley's Frankenstein or Karel Čapek's R.U.R. (Rossum's Universal Robots),[17] in speculation, such as Samuel Butler's "Darwin among the Machines",[18] and in real-world instances, including Edgar Allan Poe's "Maelzel's Chess Player".[19] AI has remained a common topic in science fiction through the present.[20]

Automata

 
[Image: Al-Jazari's programmable automata (1206 CE)]

Realistic humanoid automata were built by craftsmen from every civilization, including Yan Shi,[21] Hero of Alexandria,[22] Al-Jazari,[23] Pierre Jaquet-Droz, and Wolfgang von Kempelen.[24][25]

The oldest known automata were the sacred statues of ancient Egypt and Greece.[26] The faithful believed that craftsmen had imbued these figures with very real minds, capable of wisdom and emotion—Hermes Trismegistus wrote that "by discovering the true nature of the gods, man has been able to reproduce it".[27][28] English scholar Alexander Neckham asserted that the Ancient Roman poet Virgil had built a palace with automaton statues.[29]

During the early modern period, these legendary automata were said to possess the magical ability to answer questions put to them. The late medieval alchemist and proto-protestant Roger Bacon was purported to have fabricated a brazen head; a legend had grown up of his having been a wizard.[30][31] These legends were similar to the Norse myth of the Head of Mímir. According to legend, Mímir was known for his intellect and wisdom, and was beheaded in the Æsir-Vanir War. Odin is said to have "embalmed" the head with herbs and spoken incantations over it, so that Mímir's head remained able to speak wisdom to Odin. Odin then kept the head near him for counsel.[32]

Formal reasoning

Artificial intelligence is based on the assumption that the process of human thought can be mechanized. The study of mechanical—or "formal"—reasoning has a long history. Chinese, Indian, and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE. Their ideas were developed over the centuries by philosophers such as Aristotle (who gave a formal analysis of the syllogism), Euclid (whose Elements was a model of formal reasoning), al-Khwārizmī (who developed algebra and gave his name to "algorithm") and European scholastic philosophers such as William of Ockham and Duns Scotus.[33]

Spanish philosopher Ramon Llull (1232–1315) developed several logical machines devoted to the production of knowledge by logical means;[34] Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations, produced by the machine by mechanical means, in such ways as to produce all the possible knowledge.[35] Llull's work had a great influence on Gottfried Leibniz, who redeveloped his ideas.[36]

 
[Image: Gottfried Leibniz, who speculated that human reason could be reduced to mechanical calculation]

In the 17th century, Leibniz, Thomas Hobbes and René Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry.[37] Hobbes famously wrote in Leviathan: "reason is nothing but reckoning".[38] Leibniz envisioned a universal language of reasoning, the characteristica universalis, which would reduce argumentation to calculation so that "there would be no more need of disputation between two philosophers than between two accountants. For it would suffice to take their pencils in hand, to sit down to their slates, and to say to each other (with a friend as witness, if they liked): Let us calculate."[39] These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research.

In the 20th century, the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible. The foundations had been set by such works as Boole's The Laws of Thought and Frege's Begriffsschrift. Building on Frege's system, Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece, the Principia Mathematica in 1913. Inspired by Russell's success, David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question: "can all of mathematical reasoning be formalized?"[33] His question was answered by Gödel's incompleteness proof, Turing's machine and Church's Lambda calculus.[33][40]

 
[Image: US Army photo of the ENIAC at the Moore School of Electrical Engineering][41]

Their answer was surprising in two ways. First, they proved that there were, in fact, limits to what mathematical logic could accomplish. But second (and more important for AI) their work suggested that, within these limits, any form of mathematical reasoning could be mechanized. The Church-Turing thesis implied that a mechanical device, shuffling symbols as simple as 0 and 1, could imitate any conceivable process of mathematical deduction. The key insight was the Turing machine—a simple theoretical construct that captured the essence of abstract symbol manipulation.[42] This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines.[33][43]

Computer science

Calculating machines were designed or built in antiquity and throughout history by many people, including Gottfried Leibniz,[44] Joseph Marie Jacquard,[45] Charles Babbage,[46] Percy Ludgate,[47] Leonardo Torres Quevedo,[48] Vannevar Bush,[49] and others. Ada Lovelace speculated that Babbage's machine was "a thinking or ... reasoning machine", but warned "It is desirable to guard against the possibility of exaggerated ideas that might arise as to the powers" of the machine.[50][51]

The first modern computers were the massive machines of the Second World War (such as Konrad Zuse's Z3, Alan Turing's Heath Robinson and Colossus, Atanasoff and Berry's ABC, and ENIAC at the University of Pennsylvania).[52] ENIAC was based on the theoretical foundation laid by Alan Turing and developed by John von Neumann,[53] and proved to be the most influential.[52]

Birth of Machine Intelligence (Before 1956)

 
[Image: The IBM 702, a computer used by the first generation of AI researchers]

In the 1940s and 50s, a handful of scientists from a variety of fields (mathematics, psychology, engineering, economics and political science) began to discuss the possibility of creating an artificial brain. Alan Turing was the first person to carry out substantial research in the field that he called Machine Intelligence.[1] The field of artificial intelligence research was founded as an academic discipline in 1956.[54]

Cybernetics and early neural networks

The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s, 1940s, and early 1950s. Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all-or-nothing pulses. Norbert Wiener's cybernetics described control and stability in electrical networks. Claude Shannon's information theory described digital signals (i.e., all-or-nothing signals). Alan Turing's theory of computation showed that any form of computation could be described digitally. The close relationship between these ideas suggested that it might be possible to construct an "electronic brain".[55] Experimental robots such as W. Grey Walter's turtles and the Johns Hopkins Beast were built in the 1950s. These machines did not use computers, digital electronics or symbolic reasoning; they were controlled entirely by analog circuitry.[56]

Alan Turing was thinking about machine intelligence at least as early as 1941, when he circulated a paper on machine intelligence which may be the earliest paper in the field of AI, though it is now lost.[1] In 1943, Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions.[57][58] They were the first to describe what later researchers would call a neural network.[59] Their paper was influenced by Turing's 1936 paper 'On Computable Numbers', which used similar two-state Boolean 'neurons', but was the first to apply them to neuronal function.[1] One of the students inspired by Pitts and McCulloch was Marvin Minsky, then a 24-year-old graduate student. In 1951 (with Dean Edmonds) he built the first neural net machine, the SNARC.[60] Minsky was to become one of the most important leaders and innovators in AI.

Turing Test

During his lifetime, Alan Turing used the term 'Machine Intelligence' for what, after his death in 1954, came to be called 'Artificial Intelligence'. In 1950 Turing published 'Computing Machinery and Intelligence', the best known of his papers, in which he speculated about the possibility of creating machines that think and introduced to the general public the concept now known as the Turing test.[61] Because "thinking" is difficult to define, Turing replaced the question with a practical test:[62] if a machine could carry on a conversation (over a teleprinter) that was indistinguishable from a conversation with a human being, then it was reasonable to say that the machine was "thinking". This simplified version of the problem allowed Turing to argue convincingly that a "thinking machine" was at least plausible, and the paper answered all the most common objections to the proposition.[63] The Turing Test was the first serious proposal in the philosophy of artificial intelligence. Turing followed the paper with three radio broadcasts on AI: the lectures 'Intelligent Machinery, A Heretical Theory' and 'Can Digital Computers Think?', and the panel discussion 'Can Automatic Calculating Machines be Said to Think?'. By 1956 computer intelligence had been actively pursued for more than a decade in Britain; the earliest AI programmes were written there in 1951–52.[1]

Game AI

In 1951, using the Ferranti Mark 1 machine of the University of Manchester, Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess.[64] Arthur Samuel's checkers program, the subject of his 1959 paper "Some Studies in Machine Learning Using the Game of Checkers", eventually achieved sufficient skill to challenge a respectable amateur.[65] Game AI would continue to be used as a measure of progress in AI throughout its history.

Symbolic reasoning and the Logic Theorist

When access to digital computers became possible in the middle fifties, a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought. This was a new approach to creating thinking machines.[66]

In 1955, Allen Newell and (future Nobel Laureate) Herbert A. Simon created the "Logic Theorist" (with help from J. C. Shaw). The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead's Principia Mathematica, and find new and more elegant proofs for some.[67] Simon said that they had "solved the venerable mind/body problem, explaining how a system composed of matter can have the properties of mind."[68] (This was an early statement of the philosophical position John Searle would later call "Strong AI": that machines can contain minds just as human bodies do.)[69]

Birth of Artificial Intelligence (1956–1974)

The term "Artificial Intelligence" was introduced by John McCarthy in 1956 at the Dartmouth Workshop, the event that marked the formal inception of AI as an academic discipline. The workshop's objective was to explore the possibility of building machines capable of simulating human intelligence.[70]

The Dartmouth workshop of 1956[71] was organized by Marvin Minsky, John McCarthy and two senior scientists: Claude Shannon and Nathaniel Rochester of IBM. The proposal for the conference included this assertion: "every aspect of learning or any other feature of intelligence can be so precisely described that a machine can be made to simulate it".[72] The participants included Ray Solomonoff, Oliver Selfridge, Trenchard More, Arthur Samuel, Allen Newell and Herbert A. Simon, all of whom would create important programs during the first decades of AI research.[73] At the workshop Newell and Simon debuted the "Logic Theorist" and McCarthy persuaded the attendees to accept "Artificial Intelligence" as the name of the field.[74] (The term "Artificial Intelligence" was chosen by McCarthy to avoid associations with cybernetics and the influence of Norbert Wiener.)[75] The 1956 Dartmouth workshop was the moment that AI gained its name, its mission, its first success and its major players, and is widely considered the birth of AI.[76]

The programs developed in the years after the Dartmouth Workshop were, to most people, simply "astonishing":[77] computers were solving algebra word problems, proving theorems in geometry and learning to speak English. Few at the time would have believed that such "intelligent" behavior by machines was possible at all.[78] Researchers expressed an intense optimism in private and in print, predicting that a fully intelligent machine would be built in less than 20 years.[79] Government agencies like DARPA poured money into the new field.[80] Artificial intelligence laboratories were set up at a number of British and US universities in the late 1950s and early 1960s.[1]

Approaches

There were many successful programs and new directions in the late 50s and 1960s. Among the most influential were these:

Reasoning as search

Many early AI programs used the same basic algorithm. To achieve some goal (like winning a game or proving a theorem), they proceeded step by step towards it (by making a move or a deduction) as if searching through a maze, backtracking whenever they reached a dead end. This paradigm was called "reasoning as search".[81]

The principal difficulty was that, for many problems, the number of possible paths through the "maze" was simply astronomical (a situation known as a "combinatorial explosion"). Researchers would reduce the search space by using heuristics or "rules of thumb" that would eliminate those paths that were unlikely to lead to a solution.[82]
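
The basic pattern can be illustrated with a short sketch in modern Python; the toy problem, move generator, and heuristic below are invented for illustration and do not correspond to any particular historical program.

```python
# Minimal sketch of "reasoning as search": depth-first search with
# backtracking, ordering candidate moves by a heuristic. The goal test,
# move generator, and heuristic are placeholders, not a historical system.

def solve(state, is_goal, moves, heuristic, visited=None):
    """Return a list of states from `state` to a goal, or None."""
    if visited is None:
        visited = set()
    if is_goal(state):
        return [state]
    visited.add(state)
    # Try the most promising successors first (lower heuristic = better).
    for nxt in sorted(moves(state), key=heuristic):
        if nxt in visited:
            continue
        path = solve(nxt, is_goal, moves, heuristic, visited)
        if path is not None:          # success: extend the path
            return [state] + path
    return None                        # dead end: backtrack


# Toy example: reach 0 from 11 by subtracting 1 or halving even numbers.
if __name__ == "__main__":
    print(solve(
        11,
        is_goal=lambda s: s == 0,
        moves=lambda s: [s - 1] + ([s // 2] if s % 2 == 0 else []),
        heuristic=lambda s: s,         # smaller numbers look closer to 0
    ))
```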

Newell and Simon tried to capture a general version of this algorithm in a program called the "General Problem Solver".[83] Other "searching" programs were able to accomplish impressive tasks like solving problems in geometry and algebra, such as Herbert Gelernter's Geometry Theorem Prover (1958) and the Symbolic Automatic Integrator (SAINT), written by Minsky's student James Slagle (1961).[84] Other programs searched through goals and subgoals to plan actions, like the STRIPS system developed at the Stanford Research Institute (SRI) to control the behavior of its robot Shakey.[85]

Neural networks

The McCulloch and Pitts paper (1943) inspired efforts to realize the neural network approach to AI in hardware. The most influential was the effort led by Frank Rosenblatt on building Perceptron machines (1957-1962) of up to four layers, funded primarily by the Office of Naval Research.[86] Bernard Widrow and his student Ted Hoff built ADALINE (1960) and MADALINE (1962), which had up to 1000 adjustable weights.[87] A group at Stanford Research Institute led by Charles A. Rosen and Alfred E. (Ted) Brain built two neural network machines named MINOS I (1960) and II (1963), mainly funded by the U.S. Army Signal Corps. MINOS II[88] had 6600 adjustable weights,[89] and was controlled with an SDS 910 computer in a configuration named MINOS III (1968), which could classify symbols on army maps, and recognize hand-printed characters on Fortran coding sheets.[90][91][92]

Most neural network research during this early period involved building and using bespoke hardware, rather than simulation on digital computers. The hardware diversity was particularly clear in the different technologies used in implementing the adjustable weights. The perceptron machines and the SNARC used potentiometers moved by electric motors. ADALINE used memistors adjusted by electroplating, though they also used simulations on an IBM 1620. The MINOS machines used ferrite cores with multiple holes in them that could be individually blocked, with the degree of blockage representing the weights.[93]

Though there were multi-layered neural networks, most neural networks during this period had only one layer of adjustable weights. There were empirical attempts at training more than a single layer, but they were unsuccessful. Backpropagation did not become prevalent for neural network training until the 1980s.[93]
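
In modern terms, the single layer of adjustable weights that these machines implemented corresponds to a perceptron trained with a simple error-correction rule. The sketch below is a software illustration only; the toy AND-gate data and learning rate are assumptions, and the original machines adjusted their weights in hardware rather than in code.

```python
# Sketch of a single-layer perceptron with the classic error-correction
# update rule; the AND-gate data and learning rate are illustrative only.

def train_perceptron(samples, epochs=10, lr=0.1):
    n = len(samples[0][0])
    w = [0.0] * n          # one adjustable weight per input (a potentiometer
    b = 0.0                # in Rosenblatt's machines), plus a bias term
    for _ in range(epochs):
        for x, target in samples:
            out = 1 if sum(wi * xi for wi, xi in zip(w, x)) + b > 0 else 0
            err = target - out
            w = [wi + lr * err * xi for wi, xi in zip(w, x)]
            b += lr * err
    return w, b

# Learn a 2-input AND function.
data = [((0, 0), 0), ((0, 1), 0), ((1, 0), 0), ((1, 1), 1)]
weights, bias = train_perceptron(data)
print(weights, bias)
```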

 
[Image: An example of a semantic network]

Natural language

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems.[94]

A semantic net represents concepts (e.g. "house", "door") as nodes and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian[95] and the most successful (and controversial) version was Roger Schank's Conceptual dependency theory.[96]
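
In data-structure terms, a semantic net is a labeled graph. The following minimal sketch, with invented concepts and relations rather than Quillian's actual data, shows the idea:

```python
# Tiny semantic network: concepts are nodes, labeled relations are edges.
# The concepts and relations below are illustrative, not Quillian's data.

network = {
    ("house", "has-a"): ["door", "roof"],
    ("house", "is-a"): ["building"],
    ("building", "is-a"): ["object"],
    ("door", "is-a"): ["object"],
}

def related(concept, relation):
    """Follow one labeled link from a concept."""
    return network.get((concept, relation), [])

def is_a(concept, category):
    """Walk 'is-a' links transitively."""
    frontier = [concept]
    while frontier:
        node = frontier.pop()
        if node == category:
            return True
        frontier.extend(related(node, "is-a"))
    return False

print(related("house", "has-a"))     # ['door', 'roof']
print(is_a("house", "object"))       # True
```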

Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program (See ELIZA effect). But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[97]
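
ELIZA's mechanism of keyword matching, pronoun reflection, and canned response templates can be sketched in a few lines; the patterns and replies below are invented for illustration and are not Weizenbaum's DOCTOR script.

```python
import random
import re

# A toy ELIZA-style responder: match a keyword pattern, then echo part of
# the user's sentence back inside a canned template.

RULES = [
    (re.compile(r"\bI need (.*)", re.IGNORECASE),
     ["Why do you need {0}?", "Would it really help you to get {0}?"]),
    (re.compile(r"\bI am (.*)", re.IGNORECASE),
     ["How long have you been {0}?", "Why do you think you are {0}?"]),
]
DEFAULTS = ["Please tell me more.", "I see. Go on."]

# Simple pronoun swaps used when echoing the user's words back.
SWAPS = {"my": "your", "me": "you", "i": "you", "am": "are"}

def reflect(fragment):
    return " ".join(SWAPS.get(w.lower(), w) for w in fragment.split())

def respond(sentence):
    for pattern, templates in RULES:
        m = pattern.search(sentence)
        if m:
            return random.choice(templates).format(reflect(m.group(1)))
    return random.choice(DEFAULTS)

print(respond("I need a holiday from my job"))
```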

Micro-worlds

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world," which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[98]

This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzman, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[99]

Automata

In Japan, Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the world's first full-scale "intelligent" humanoid robot,[100][101] or android. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors, artificial eyes and ears. And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.[102][103][104]

Optimism

The first generation of AI researchers made these predictions about their work:

  • 1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem."[105]
  • 1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do."[106]
  • 1967, Marvin Minsky: "Within a generation ... the problem of creating 'artificial intelligence' will substantially be solved."[107]
  • 1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being."[108]

Financing

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[109] DARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[110] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[111] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[112]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects!" and allowed researchers to pursue whatever directions might interest them.[113] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[114] but this "hands off" approach would not last.

First AI winter (1974–1980)

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[115] At the same time, the exploration of simple, single-layer artificial neural networks was shut down almost completely for a decade partially due to Marvin Minsky's book emphasizing the limits of what perceptrons can do.[116] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[117]

Problems

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys".[118] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[119]

  • Limited computer power: There was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory.[120] Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold it is impossible, but as power increases it could eventually become easy.[121] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10⁹ operations/second (1000 MIPS).[122] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, the Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.
  • Intractability and the combinatorial explosion. In 1972 Richard Karp (building on Stephen Cook's 1971 theorem) showed there are many problems that can probably only be solved in exponential time (in the size of the inputs). Finding optimal solutions to these problems requires unimaginable amounts of computer time except when the problems are trivial. This almost certainly meant that many of the "toy" solutions used by AI would probably never scale up into useful systems.[123]
  • Commonsense knowledge and reasoning. Many important artificial intelligence applications like vision or natural language require simply enormous amounts of information about the world: the program needs to have some idea of what it might be looking at or what it is talking about. This requires that the program know most of the same things about the world that a child does. Researchers soon discovered that this was a truly vast amount of information. No one in 1970 could build a database so large and no one knew how a program might learn so much information.[124]
  • Moravec's paradox: Proving theorems and solving geometry problems is comparatively easy for computers, but a supposedly simple task like recognizing a face or crossing a room without bumping into anything is extremely difficult. This helps explain why research into vision and robotics had made so little progress by the middle 1970s.[125]
  • The frame and qualification problems. AI researchers (like John McCarthy) who used logic discovered that they could not represent ordinary deductions that involved planning or default reasoning without making changes to the structure of logic itself. They developed new logics (like non-monotonic logics and modal logics) to try to solve the problems.[126]

End of funding

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[127] In 1973, the Lighthill report on the state of AI research in the UK criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country.[128] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.)[129] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[130] By 1974, funding for AI projects was hard to find.

The end of funding occurred even earlier for neural network research, partly due to lack of results and partly due to competition from symbolic AI research. The MINOS project ran out of funding in 1966. Rosenblatt failed to secure continued funding in the 1960s.[93]

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues. "Many researchers were caught up in a web of increasing exaggeration."[131] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research, rather than basic undirected research". Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[132]

Critiques from across campus

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[133] Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know how".[134][135] John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine can not be described as "thinking".[136]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle "they misunderstand, and should be ignored."[137] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me."[138] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish.[139] Although he was an outspoken critic of Dreyfus' positions, he "deliberately made it plain that theirs was not the way to treat a human being."[140]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote a "computer program which can conduct psychotherapeutic dialogue" based on ELIZA.[141] Weizenbaum was disturbed that Colby saw a mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason which argued that the misuse of artificial intelligence has the potential to devalue human life.[142]

Perceptrons and the attack on connectionism

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that "perceptron may eventually be able to learn, make decisions, and translate languages." An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was funded in connectionism for 10 years.

Among the main efforts toward neural networks, Rosenblatt attempted to gather funds for building larger perceptron machines, but died in a boating accident in 1971. Minsky (of SNARC) became a staunch objector to pure connectionist AI. Widrow (of ADALINE) turned to adaptive signal processing, using techniques based on the LMS algorithm. The SRI group (of MINOS) turned to symbolic AI and robotics. The main issues were lack of funding and the inability to train multilayered networks (backpropagation was unknown). The competition for government funding ended with the victory of symbolic AI approaches.[92][93]

Logic at Stanford, CMU and Edinburgh

Logic was introduced into AI research as early as 1959, by John McCarthy in his Advice Taker proposal.[143] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers, the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[144] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel who created the successful logic programming language Prolog.[145] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permit tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[146]

Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[147] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems—not machines that think as people do.[148]

MIT's "anti-logic" approach

Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant" they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[149] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[150]

In 1975, in a seminal paper, Minsky noted that many of his fellow researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[151]

Boom (1980–1987)

In the 1980s a form of AI program called "expert systems" was adopted by corporations around the world and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.[152]

Rise of expert systems

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[153]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem) and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[154]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[155] Corporations around the world began to develop and deploy expert systems and by 1985 they were spending over a billion dollars on AI, most of it to in-house AI departments.[156] An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[157]

Knowledge revolution

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect—reluctantly, for it violated the scientific canon of parsimony—that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,"[158] writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay".[159] Knowledge based systems and knowledge engineering became a major focus of AI research in the 1980s.[160]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[161]

Chess playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought development paved the way for Deep Blue.[162]

Money returns: Fifth Generation project

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[163] Much to the chagrin of scruffies, they chose Prolog as the primary computer language for the project.[164]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large scale projects in AI and information technology.[165][166] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[167]

 
[Image: A Hopfield net with four nodes]

Revival of neural networks

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information, and is guaranteed to converge to a stable state given enough time. It was a breakthrough, as it was previously thought that nonlinear networks would, in general, evolve chaotically.[168]
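
The core of the model is simple to state: patterns are stored in a symmetric weight matrix by a Hebbian outer-product rule, and units are updated one at a time until the network settles into a stable state. The following sketch, with an arbitrary toy pattern, is an illustration of that idea rather than Hopfield's original formulation.

```python
import numpy as np

# Minimal Hopfield network sketch: store one +/-1 pattern with the Hebbian
# outer-product rule, then update units asynchronously until a fixed point.
# The stored pattern and the corrupted probe are arbitrary illustrations.

pattern = np.array([1, -1, 1, 1, -1, -1, 1, -1])
n = len(pattern)

W = np.outer(pattern, pattern).astype(float)   # Hebbian weights
np.fill_diagonal(W, 0.0)                       # no self-connections

state = pattern.copy()
state[:3] *= -1                                # corrupt the first three units

for _ in range(5):                             # a few asynchronous sweeps
    for i in range(n):                         # update one unit at a time
        state[i] = 1 if W[i] @ state >= 0 else -1

print("recovered:", state, "matches:", np.array_equal(state, pattern))
```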

Around the same time, Geoffrey Hinton and David Rumelhart popularized a method for training neural networks called "backpropagation", a form of the reverse mode of automatic differentiation published by Seppo Linnainmaa (1970) and first applied to neural networks by Paul Werbos. These two discoveries helped to revive the exploration of artificial neural networks.[166][169]

Starting with the 1986 publication of Parallel Distributed Processing, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland, neural network research gained new momentum and would become commercially successful in the 1990s, applied to optical character recognition and speech recognition.[166][170]

The development of metal–oxide–semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network technology in the 1980s.

A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[171]

Bust: second AI winter (1987–1993)

The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. As dozens of companies failed, the perception was that the technology was not viable.[172] However, the field continued to make advances despite the criticism. Numerous researchers, including robotics developers Rodney Brooks and Hans Moravec, argued for an entirely new approach to artificial intelligence.

AI winter

The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[173] Their fears were well founded: in the late 1980s and early 1990s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[174]

Eventually the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[175]

In the late 1980s, the Strategic Computing Initiative cut funding to AI "deeply and brutally". New leadership at DARPA had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results.[176]

By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation" had not been met by 2010.[177] As with other AI projects, expectations had run much higher than what was actually possible.[177][178]

Over 300 AI companies had shut down, gone bankrupt, or been acquired by the end of 1993, effectively ending the first commercial wave of AI.[179] In 1994, HP Newquist stated in The Brain Makers that "The immediate future of artificial intelligence—in its commercial form—seems to rest in part on the continued success of neural networks."[179]

Nouvelle AI and embodied reason

In the late 1980s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[180] They believed that, to show real intelligence, a machine needs to have a body — it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher level skills like commonsense reasoning and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up."[181]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.)[182]

In his 1990 paper "Elephants Don't Play Chess,"[183] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough."[184] In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the embodied mind thesis.[185]

AI (1993–2011)

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine.[186] Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence".[187] AI was both more cautious and more successful than it had ever been.

Milestones and Moore's law

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[188] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost), reportedly 200,000,000 moves per second.[189]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[190] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[191] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[192]

These successes were not due to some revolutionary new paradigm, but mostly to the tedious application of engineering skill and to the tremendous increase in the speed and capacity of computers by the 90s.[193] In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[194] This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years, as a result of metal–oxide–semiconductor (MOS) transistor counts doubling every two years. The fundamental problem of "raw computer power" was slowly being overcome.

Intelligent agents

A new paradigm called "intelligent agents" became widely accepted during the 1990s.[195] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[196] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, Leslie P. Kaelbling, and others brought concepts from decision theory and economics into the study of AI.[197] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[198]
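
In programming terms, the paradigm reduces to a small interface: anything that maps percepts to actions in pursuit of some objective counts as an agent. The thermostat-style example below is invented purely to illustrate the abstraction; it is not a system from the literature.

```python
# Sketch of the intelligent-agent abstraction: an agent maps percepts to
# actions chosen to further some measure of success.

from typing import Protocol


class Agent(Protocol):
    def act(self, percept: float) -> str:
        """Choose an action given the current percept."""


class ThermostatAgent:
    """A trivially simple agent for a temperature-control task."""

    def __init__(self, target: float):
        self.target = target

    def act(self, percept: float) -> str:
        if percept < self.target - 1:
            return "heat"
        if percept > self.target + 1:
            return "cool"
        return "idle"


def run(agent: Agent, temperatures: list[float]) -> list[str]:
    """Drive the agent with a sequence of percepts from its environment."""
    return [agent.act(t) for t in temperatures]


print(run(ThermostatAgent(target=20.0), [17.5, 19.8, 23.1]))
# ['heat', 'idle', 'cool']
```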

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[197][199]

Probabilistic reasoning and greater rigor

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[200] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, electrical engineering, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more rigorous "scientific" discipline.

Judea Pearl's influential 1988 book[201] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for "computational intelligence" paradigms like neural networks and evolutionary algorithms.[202]

AI behind the scenes

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems[203] and these solutions proved to be useful throughout the technology industry,[204] such as data mining, industrial robotics, logistics,[205] speech recognition,[206] banking software,[207] medical diagnosis[207] and Google's search engine.[208]

The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[209] Nick Bostrom explains "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[210]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may have been because they considered their field to be fundamentally different from AI, but the new names also helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[211][212][213][214]

Deep learning, big data and artificial general intelligence: 2011–present

In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. In fact, McKinsey Global Institute estimated in their famous paper "Big data: The next frontier for innovation, competition, and productivity" that "by 2009, nearly all sectors in the US economy had at least an average of 200 terabytes of stored data".

By 2016, the market for AI-related products, hardware, and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a "frenzy".[215] The applications of big data began to reach into other fields as well, such as training models in ecology[216] and for various applications in economics.[217] Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[218]

The first global AI Safety Summit was held at Bletchley Park in November 2023 to discuss the near and far term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[219] Twenty-eight countries, including the United States and China, together with the European Union, issued a declaration at the start of the summit calling for international co-operation to manage the challenges and risks of artificial intelligence.[220][221]

Deep learning

Deep learning is a branch of machine learning that models high level abstractions in data by using a deep graph with many processing layers.[218] According to the universal approximation theorem, depth is not necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, there are many problems that are common to shallow networks (such as overfitting) that deep networks help avoid.[222] As such, deep neural networks are able to learn much more complex models than their shallow counterparts.

However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem: the gradients passed back between layers gradually shrink and effectively disappear as they are rounded off to zero. Many methods have been developed to address this problem, such as long short-term memory (LSTM) units.
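
The effect can be illustrated numerically: backpropagation multiplies a gradient by one factor per layer, and when those factors are typically smaller than one, the gradient shrinks roughly geometrically with depth. The sketch below uses randomly chosen weights and ignores nonlinearities; both are illustrative assumptions, not part of any real training procedure.

```python
# Minimal sketch of the vanishing gradient effect: the gradient norm shrinks
# layer by layer as it is repeatedly multiplied by small per-layer factors.
import numpy as np

def gradient_norms_through_layers(num_layers=30, dim=8, weight_scale=0.5, seed=0):
    rng = np.random.default_rng(seed)
    grad = np.ones(dim)                       # gradient arriving at the top layer
    norms = []
    for _ in range(num_layers):
        W = rng.normal(scale=weight_scale / np.sqrt(dim), size=(dim, dim))
        grad = W.T @ grad                     # one backpropagation step (linear part only)
        norms.append(float(np.linalg.norm(grad)))
    return norms

norms = gradient_norms_through_layers()
print(norms[0], norms[14], norms[-1])         # the norm decays toward zero with depth
```

LSTM units counter this by routing information through an additive memory cell whose gates keep the relevant per-step factors close to one.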

State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on benchmarks such as the MNIST handwritten-digit database and traffic sign recognition.[223]

Language processing engines powered by smart search engines, such as IBM's Watson, can easily beat humans at answering general trivia questions, and recent developments in deep learning have produced astounding results in competitions with humans, in games such as Go and Doom (which, being a first-person shooter, has sparked some controversy).[224][225][226][227]

Big Data

Big data refers to collections of data that cannot be captured, managed, and processed by conventional software tools within an acceptable time frame. Exploiting it for decision-making, insight, and process optimization requires new processing models. In The Big Data Era, written by Viktor Mayer-Schönberger and Kenneth Cukier, big data means that instead of random analysis (sample surveys), all of the data is used for analysis. The "5V" characteristics of big data, proposed by IBM, are Volume, Velocity, Variety,[228] Value[229] and Veracity.[230]

The strategic significance of big data technology lies not in holding huge amounts of data but in extracting meaning from it. In other words, if big data is likened to an industry, the key to profitability is increasing the "processing capability" of the data and realizing its "added value" through processing.

Large language models

In 2017, the transformer architecture was proposed by Google researchers. It exploits an attention mechanism and later became widely used in large language models.[231]
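
At the core of the transformer is scaled dot-product attention, in which each position's output is a weighted average of value vectors, with weights given by a softmax over query–key similarities. A minimal single-head sketch in NumPy follows; the toy shapes and random inputs are assumptions for illustration, and real transformers add learned projections, multiple heads, and masking.

```python
# Minimal sketch of scaled dot-product attention: softmax(Q K^T / sqrt(d_k)) V.
import numpy as np

def attention(Q, K, V):
    d_k = Q.shape[-1]
    scores = Q @ K.T / np.sqrt(d_k)                   # query-key similarities
    scores -= scores.max(axis=-1, keepdims=True)      # numerical stability
    weights = np.exp(scores)
    weights /= weights.sum(axis=-1, keepdims=True)    # softmax over keys
    return weights @ V                                # weighted sum of values

rng = np.random.default_rng(0)
seq_len, d_model = 5, 8
Q = K = V = rng.normal(size=(seq_len, d_model))       # self-attention over one toy sequence
print(attention(Q, K, V).shape)                       # (5, 8)
```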

Foundation models, which are large language models trained on vast quantities of unlabeled data that can be adapted to a wide range of downstream tasks, began to be developed in 2018.

Models such as GPT-3, released by OpenAI in 2020, and Gato, released by DeepMind in 2022, have been described as important achievements of machine learning.

In 2023, Microsoft Research tested the GPT-4 large language model with a large variety of tasks, and concluded that "it could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system".[232]

See also

Notes

  1. ^ a b c d e f Copeland, J (Ed.) (2004). The Essential Turing: the ideas that gave birth to the computer age. Oxford: Clarendon Press. ISBN 0-19-825079-7.
  2. ^ Kaplan, Andreas; Haenlein, Michael (2019). "Siri, Siri, in my hand: Who's the fairest in the land? On the interpretations, illustrations, and implications of artificial intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. S2CID 158433736.
  3. ^ Newquist 1994, pp. 143–156.
  4. ^ Newquist 1994, pp. 144–152.
  5. ^ The Talos episode in Argonautica 4
  6. ^ Bibliotheke 1.9.26
  7. ^ Rhodios, Apollonios (2007). The Argonautika : Expanded Edition. University of California Press. p. 355. ISBN 978-0-520-93439-9. OCLC 811491744.
  8. ^ Morford, Mark (2007). Classical mythology. Oxford. p. 184. ISBN 978-0-19-085164-4. OCLC 1102437035.{{cite book}}: CS1 maint: location missing publisher (link)
  9. ^ Linden, Stanton J. (2003). The alchemy reader : from Hermes Trismegistus to Isaac Newton. New York: Cambridge University Press. pp. Ch. 18. ISBN 0-521-79234-7. OCLC 51210362.
  10. ^ Kressel, Matthew (1 October 2015). "36 Days of Judaic Myth: Day 24, The Golem of Prague". Matthew Kressel. Retrieved 15 March 2020.
  11. ^ Newquist 1994, p. [page needed].
  12. ^ "GOLEM". www.jewishencyclopedia.com. Retrieved 15 March 2020.
  13. ^ Newquist 1994, p. 38.
  14. ^ "Sanhedrin 65b". www.sefaria.org. Retrieved 15 March 2020.
  15. ^ O'Connor, Kathleen Malone (1994). "The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam". Dissertations Available from ProQuest: 1–435.
  16. ^ Goethe, Johann Wolfgang von (1890). Faust; a tragedy. Translated, in the original metres ... by Bayard Taylor. Authorised ed., published by special arrangement with Mrs. Bayard Taylor. With a biographical introd. London Ward, Lock.
  17. ^ McCorduck 2004, pp. 17–25.
  18. ^ Butler 1863.
  19. ^ Newquist 1994, p. 65.
  20. ^ Cave, Stephen; Dihal, Kanta (2019). "Hopes and fears for intelligent machines in fiction and reality". Nature Machine Intelligence. 1 (2): 74–78. doi:10.1038/s42256-019-0020-9. ISSN 2522-5839. S2CID 150700981.
  21. ^ Needham 1986, p. 53.
  22. ^ McCorduck 2004, p. 6.
  23. ^ Nick 2005.
  24. ^ McCorduck 2004, p. 17.
  25. ^ Levitt 2000.
  26. ^ Newquist 1994, p. 30.
  27. ^ Quoted in McCorduck 2004, p. 8. Crevier 1993, p. 1 and McCorduck 2004, pp. 6–9 discusses sacred statues.
  28. ^ Other important automata were built by Haroun al-Rashid (McCorduck 2004, p. 10), Jacques de Vaucanson (Newquist 1994, p. 40), (McCorduck 2004, p. 16) and Leonardo Torres y Quevedo (McCorduck 2004, pp. 59–62)
  29. ^ Cave, S.; Dihal, K.; Dillon, S. (2020). AI Narratives: A History of Imaginative Thinking about Intelligent Machines. Oxford University Press. p. 56. ISBN 978-0-19-884666-6. Retrieved 2 May 2023.
  30. ^ Butler, E. M. (Eliza Marian) (1948). The myth of the magus. London: Cambridge University Press. ISBN 0-521-22564-7. OCLC 5063114.
  31. ^ Porterfield, A. (2006). The Protestant Experience in America. American religious experience. Greenwood Press. p. 136. ISBN 978-0-313-32801-5. Retrieved 15 May 2023.
  32. ^ Hollander, Lee M. (1964). Heimskringla; history of the kings of Norway. Austin: Published for the American-Scandinavian Foundation by the University of Texas Press. ISBN 0-292-73061-6. OCLC 638953.
  33. ^ a b c d Berlinski 2000.
  34. ^ Cfr. Carreras Artau, Tomás y Joaquín. Historia de la filosofía española. Filosofía cristiana de los siglos XIII al XV. Madrid, 1939, Volume I
  35. ^ Bonner, Anthonny, The Art and Logic of Ramón Llull: A User's Guide, Brill, 2007.
  36. ^ Anthony Bonner (ed.), Doctor Illuminatus. A Ramon Llull Reader (Princeton University 1985). Vid. "Llull's Influence: The History of Lullism" at 57–71
  37. ^ 17th century mechanism and AI:
  38. ^ Hobbes and AI:
  39. ^ Leibniz and AI:
  40. ^ The Lambda calculus was especially important to AI, since it was an inspiration for Lisp (the most important programming language used in AI). (Crevier 1993, pp. 190–196, 61)
  41. ^ The original photo can be seen in the article: Rose, Allen (April 1946). "Lightning Strikes Mathematics". Popular Science: 83–86. Retrieved 15 April 2012.
  42. ^ Newquist 1994, p. 56.
  43. ^ The Turing machine: McCorduck 2004, pp. 63–64, Crevier 1993, pp. 22–24, Russell & Norvig 2003, p. 8 and see Turing 1936–37
  44. ^ Couturat 1901.
  45. ^ Russell & Norvig 2021, p. 15.
  46. ^ Russell & Norvig (2021, p. 15); Newquist (1994, p. 67)
  47. ^ Randall (1982, p. 4–5); Byrne (2012); Mulvihill (2012)
  48. ^ Randall (1982, p. 6, 11–13); Quevedo (1914); Quevedo (1915)
  49. ^ Randall 1982, pp. 13, 16–17.
  50. ^ Quoted in Russell & Norvig (2021, p. 15)
  51. ^ Menabrea & Lovelace 1843.
  52. ^ a b Russell & Norvig 2021, p. 14.
  53. ^ McCorduck 2004, pp. 76–80.
  54. ^ Kaplan, Andreas. "Artificial Intelligence, Business and Civilization - Our Fate Made in Machines". Retrieved 11 March 2022.
  55. ^ McCorduck 2004, pp. 51–57, 80–107, Crevier 1993, pp. 27–32, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3, Cordeschi 2002, Chap. 5.
  56. ^ McCorduck 2004, p. 98, Crevier 1993, pp. 27–28, Russell & Norvig 2003, pp. 15, 940, Moravec 1988, p. 3, Cordeschi 2002, Chap. 5.
  57. ^ McCulloch, Warren S.; Pitts, Walter (1 December 1943). "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics. 5 (4): 115–133. doi:10.1007/BF02478259. ISSN 1522-9602.
  58. ^ Piccinini, Gualtiero (1 August 2004). "The First Computational Theory of Mind and Brain: A Close Look at Mcculloch and Pitts's "Logical Calculus of Ideas Immanent in Nervous Activity"". Synthese. 141 (2): 175–215. doi:10.1023/B:SYNT.0000043018.52445.3e. ISSN 1573-0964. S2CID 10442035.
  59. ^ McCorduck 2004, pp. 51–57, 88–94, Crevier 1993, p. 30, Russell & Norvig 2003, p. 15−16, Cordeschi 2002, Chap. 5 and see also McCullough & Pitts 1943
  60. ^ McCorduck 2004, p. 102, Crevier 1993, pp. 34–35 and Russell & Norvig 2003, p. 17
  61. ^ McCorduck 2004, pp. 70–72, Crevier 1993, p. 22−25, Russell & Norvig 2003, pp. 2–3 and 948, Haugeland 1985, pp. 6–9, Cordeschi 2002, pp. 170–176. See also Turing 1950
  62. ^ Newquist 1994, pp. 92–98.
  63. ^ Russell & Norvig (2003, p. 948) claim that Turing answered all the major objections to AI that have been offered in the years since the paper appeared.
  64. ^ See "A Brief History of Computing" at AlanTuring.net.
  65. ^ Schaeffer, Jonathan. One Jump Ahead: Challenging Human Supremacy in Checkers, 1997, 2009, Springer, ISBN 978-0-387-76575-4. Chapter 6.
  66. ^ McCorduck 2004, pp. 137–170, Crevier 1993, pp. 44–47
  67. ^ McCorduck 2004, pp. 123–125, Crevier 1993, pp. 44–46 and Russell & Norvig 2003, p. 17
  68. ^ Quoted in Crevier 1993, p. 46 and Russell & Norvig 2003, p. 17
  69. ^ Russell & Norvig 2003, p. 947,952
  70. ^ Chatterjee, Sheshadri; N.S., Sreenivasulu; Hussain, Zahid (1 January 2021). "Evolution of artificial intelligence and its impact on human rights: from sociolegal perspective". International Journal of Law and Management. 64 (2): 184–205. doi:10.1108/IJLMA-06-2021-0156. ISSN 1754-243X.
  71. ^ McCorduck 2004, pp. 111–136, Crevier 1993, pp. 49–51 and Russell & Norvig 2003, p. 17 Newquist 1994, pp. 91–112
  72. ^ See McCarthy et al. 1955. Also, see Crevier 1993, p. 48 where Crevier states "[the proposal] later became known as the 'physical symbol systems hypothesis'". The physical symbol system hypothesis was articulated and named by Newell and Simon in their paper on GPS. (Newell & Simon 1963) It includes a more specific definition of a "machine" as an agent that manipulates symbols. See the philosophy of artificial intelligence.
  73. ^ McCorduck (2004, pp. 129–130) discusses how the Dartmouth conference alumni dominated the first two decades of AI research, calling them the "invisible college".
  74. ^ "I won't swear and I hadn't seen it before," McCarthy told Pamela McCorduck in 1979. (McCorduck 2004, p. 114) However, McCarthy also stated unequivocally "I came up with the term" in a CNET interview. (Skillings 2006)
  75. ^ McCarthy, John (1988). "Review of The Question of Artificial Intelligence". Annals of the History of Computing. 10 (3): 224–229., collected in McCarthy, John (1996). "10. Review of The Question of Artificial Intelligence". Defending AI Research: A Collection of Essays and Reviews. CSLI., p. 73 "[O]ne of the reasons for inventing the term "artificial intelligence" was to escape association with "cybernetics". Its concentration on analog feedback seemed misguided, and I wished to avoid having either to accept Norbert (not Robert) Wiener as a guru or having to argue with him."
  76. ^ Crevier (1993, pp. 49) writes "the conference is generally recognized as the official birthdate of the new science."
  77. ^ Russell and Norvig write "it was astonishing whenever a computer did anything remotely clever." Russell & Norvig 2003, p. 18
  78. ^ Crevier 1993, pp. 52–107, Moravec 1988, p. 9 and Russell & Norvig 2003, p. 18−21
  79. ^ McCorduck 2004, p. 218, Newquist 1994, pp. 91–112, Crevier 1993, pp. 108–109 and Russell & Norvig 2003, p. 21
  80. ^ Crevier 1993, pp. 52–107, Moravec 1988, p. 9
  81. ^ Means-ends analysis, reasoning as search: McCorduck 2004, pp. 247–248. Russell & Norvig 2003, pp. 59–61
  82. ^ Heuristic: McCorduck 2004, p. 246, Russell & Norvig 2003, pp. 21–22
  83. ^ GPS: McCorduck 2004, pp. 245–250, Crevier 1993, p. GPS?, Russell & Norvig 2003, p. GPS?
  84. ^ Crevier 1993, pp. 51–58, 65–66 and Russell & Norvig 2003, pp. 18–19
  85. ^ McCorduck 2004, pp. 268–271, Crevier 1993, pp. 95–96, Newquist 1994, pp. 148–156, Moravec 1988, pp. 14–15
  86. ^ Rosenblatt, Frank. Principles of neurodynamics: Perceptrons and the theory of brain mechanisms. Vol. 55. Washington, DC: Spartan books, 1962.
  87. ^ Widrow, B.; Lehr, M.A. (September 1990). "30 years of adaptive neural networks: perceptron, Madaline, and backpropagation". Proceedings of the IEEE. 78 (9): 1415–1442. doi:10.1109/5.58323.
  88. ^ Rosen, Charles A., Nils J. Nilsson, and Milton B. Adams. "" Proposal for Research SRI No. ESU 65-1, 8 January 1965.
  89. ^ Nilsson, Nils J. . Artificial Intelligence Center, SRI International, 1984.
  90. ^ Hart, Peter E.; Nilsson, Nils J.; Perrault, Ray; Mitchell, Tom; Kulikowski, Casimir A.; Leake, David B. (15 March 2003). "In Memoriam: Charles Rosen, Norman Nielsen, and Saul Amarel". AI Magazine. 24 (1): 6–6. doi:10.1609/aimag.v24i1.1683. ISSN 2371-9621.
  91. ^ Nilsson, Nils J. (2009). "Section 4.2: Neural Networks". The Quest for Artificial Intelligence. Cambridge: Cambridge University Press. doi:10.1017/cbo9780511819346. ISBN 978-0-521-11639-8.
  92. ^ a b Nielson, Donald L. (1 January 2005). "Chapter 4: The Life and Times of a Successful SRI Laboratory: Artificial Intelligence and Robotics" (PDF). A HERITAGE OF INNOVATION SRI's First Half Century (1st ed.). SRI International. ISBN 978-0-9745208-0-3.
  93. ^ a b c d Olazaran Rodriguez, Jose Miguel. . PhD Dissertation. University of Edinburgh, 1991. See especially Chapter 2 and 3.
  94. ^ McCorduck 2004, p. 286, Crevier 1993, pp. 76–79, Russell & Norvig 2003, p. 19
  95. ^ Crevier 1993, pp. 79–83
  96. ^ Crevier 1993, pp. 164–172
  97. ^ McCorduck 2004, pp. 291–296, Crevier 1993, pp. 134–139
  98. ^ McCorduck 2004, pp. 299–305, Crevier 1993, pp. 83–102, Russell & Norvig 2003, p. 19 and Copeland 2000
  99. ^ McCorduck 2004, pp. 300–305, Crevier 1993, pp. 84–102, Russell & Norvig 2003, p. 19
  100. ^ "Humanoid History -WABOT-".
  101. ^ Zeghloul, Saïd; Laribi, Med Amine; Gazeau, Jean-Pierre (21 September 2015). Robotics and Mechatronics: Proceedings of the 4th IFToMM International Symposium on Robotics and Mechatronics. Springer. ISBN 9783319223681 – via Google Books.
  102. ^ "Historical Android Projects". androidworld.com.
  103. ^ Robots: From Science Fiction to Technological Revolution, page 130
  104. ^ Duffy, Vincent G. (19 April 2016). Handbook of Digital Human Modeling: Research for Applied Ergonomics and Human Factors Engineering. CRC Press. ISBN 9781420063523 – via Google Books.
  105. ^ Simon & Newell 1958, p. 7−8 quoted in Crevier 1993, p. 108. See also Russell & Norvig 2003, p. 21
  106. ^ Simon 1965, p. 96 quoted in Crevier 1993, p. 109
  107. ^ Minsky 1967, p. 2 quoted in Crevier 1993, p. 109
  108. ^ Minsky strongly believes he was misquoted. See McCorduck 2004, pp. 272–274, Crevier 1993, p. 96 and Darrach 1970.
  109. ^ Crevier 1993, pp. 64–65
  110. ^ Crevier 1993, p. 94
  111. ^ Howe 1994
  112. ^ McCorduck 2004, p. 131, Crevier 1993, p. 51. McCorduck also notes that funding was mostly under the direction of alumni of the Dartmouth workshop of 1956.
  113. ^ Crevier 1993, p. 65
  114. ^ Crevier 1993, pp. 68–71 and Turkle 1984
  115. ^ Crevier 1993, pp. 100–144 and Russell & Norvig 2003, pp. 21–22
  116. ^ McCorduck 2004, pp. 104–107, Crevier 1993, pp. 102–105, Russell & Norvig 2003, p. 22
  117. ^ Crevier 1993, pp. 163–196
  118. ^ Crevier 1993, p. 146
  119. ^ Russell & Norvig 2003, pp. 20–21; Newquist 1994, p. 336
  120. ^ Crevier 1993, pp. 146–148, see also Buchanan 2005, p. 56: "Early programs were necessarily limited in scope by the size and speed of memory"
  121. ^ Moravec 1976. McCarthy has always disagreed with Moravec, back to their early days together at SAIL. He states "I would say that 50 years ago, the machine capability was much too small, but by 30 years ago, machine capability wasn't the real problem." in a CNET interview. (Skillings 2006)
  122. ^ Hans Moravec, ROBOT: Mere Machine to Transcendent Mind
  123. ^ Russell & Norvig 2003, pp. 9, 21–22 and Lighthill 1973
  124. ^ McCorduck 2004, pp. 300 & 421; Crevier 1993, pp. 113–114; Moravec 1988, p. 13; Lenat & Guha 1989, (Introduction); Russell & Norvig 2003, p. 21
  125. ^ McCorduck 2004, p. 456, Moravec 1988, pp. 15–16
  126. ^ McCarthy & Hayes 1969, Crevier 1993, pp. 117–119
  127. ^ McCorduck 2004, pp. 280–281, Crevier 1993, p. 110, Russell & Norvig 2003, p. 21 and NRC 1999 under "Success in Speech Recognition".
  128. ^ Crevier 1993, p. 117, Russell & Norvig 2003, p. 22, Howe 1994 and see also Lighthill 1973.
  129. ^ Russell & Norvig 2003, p. 22, Lighthill 1973, John McCarthy wrote in response that "the combinatorial explosion problem has been recognized in AI from the beginning" in Review of Lighthill report
  130. ^ Crevier 1993, pp. 115–116 (on whom this account is based). Other views include McCorduck 2004, pp. 306–313 and NRC 1999 under "Success in Speech Recognition".
  131. ^ Crevier 1993, p. 115. Moravec explains, "Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."
  132. ^ NRC 1999 under "Shift to Applied Research Increases Investment." While the autonomous tank was a failure, the battle management system (called "DART") proved to be enormously successful, saving billions in the first Gulf War, repaying the investment and justifying DARPA's pragmatic policy, at least as far as DARPA was concerned.
  133. ^ Lucas and Penrose's critique of AI: Crevier 1993, p. 22, Russell & Norvig 2003, pp. 949–950, Hofstadter 1999, pp. 471–477 and see Lucas 1961
  134. ^ "Know-how" is Dreyfus' term. (Dreyfus makes a distinction between "knowing how" and "knowing that", a modern version of Heidegger's distinction of ready-to-hand and present-at-hand.) (Dreyfus & Dreyfus 1986)
  135. ^ Dreyfus' critique of artificial intelligence: McCorduck 2004, pp. 211–239, Crevier 1993, pp. 120–132, Russell & Norvig 2003, pp. 950–952 and see Dreyfus 1965, Dreyfus 1972, Dreyfus & Dreyfus 1986
  136. ^ Searle's critique of AI: McCorduck 2004, pp. 443–445, Crevier 1993, pp. 269–271, Russell & Norvig 2003, pp. 958–960 and see Searle 1980
  137. ^ Quoted in Crevier 1993, p. 143
  138. ^ Quoted in Crevier 1993, p. 122
  139. ^ Newquist 1994, pp. 276
  140. ^ "I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being." Joseph Weizenbaum, quoted in Crevier 1993, p. 123.
  141. ^ Colby, Watt & Gilbert 1966, p. 148. Weizenbaum referred to this text in Weizenbaum 1976, pp. 5, 6. Colby and his colleagues later also developed chatterbot-like "computer simulations of paranoid processes (PARRY)" to "make intelligible paranoid processes in explicit symbol processing terms." (Colby 1974, p. 6)
  142. ^ Weizenbaum's critique of AI: McCorduck 2004, pp. 356–373, Crevier 1993, pp. 132–144, Russell & Norvig 2003, p. 961 and see Weizenbaum 1976
  143. ^ McCorduck 2004, p. 51, Russell & Norvig 2003, pp. 19, 23
  144. ^ McCorduck 2004, p. 51, Crevier 1993, pp. 190–192
  145. ^ Crevier 1993, pp. 193–196
  146. ^ Crevier 1993, pp. 145–149, 258–63
  147. ^ Wason & Shapiro (1966) showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves. (See Wason selection task) Kahneman, Slovic & Tversky (1982) have shown that people are terrible at elementary problems that involve uncertain reasoning. (See list of cognitive biases for several examples). Eleanor Rosch's work is described in Lakoff 1987
  148. ^ An early example of McCarthy's position was in the journal Science, where he said "This is AI, so we don't care if it's psychologically real" (Kolata 1982), and he reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).
  149. ^ Crevier 1993, pp. 175
  150. ^ Neat vs. scruffy: McCorduck 2004, pp. 421–424 (who picks up the state of the debate in 1984). Crevier 1993, pp. 168 (who documents Schank's original use of the term). Another aspect of the conflict was called "the procedural/declarative distinction" but did not prove to be influential in later AI research.
  151. ^ McCorduck 2004, pp. 305–306, Crevier 1993, pp. 170–173, 246 and Russell & Norvig 2003, p. 24. Minsky's frame paper: Minsky 1974.
  152. ^ Newquist 1994, pp. 189–192
  153. ^ McCorduck 2004, pp. 327–335 (Dendral), Crevier 1993, pp. 148–159, Newquist 1994, p. 271, Russell & Norvig 2003, pp. 22–23
  154. ^ Crevier 1993, pp. 158–159 and Russell & Norvig 2003, p. 23−24
  155. ^ Crevier 1993, p. 198
  156. ^ Newquist 1994, pp. 259
  157. ^ McCorduck 2004, pp. 434–435, Crevier 1993, pp. 161–162, 197–203, Newquist 1994, pp. 275 and Russell & Norvig 2003, p. 24
  158. ^ McCorduck 2004, p. 299
  159. ^ McCorduck 2004, pp. 421
  160. ^ Knowledge revolution: McCorduck 2004, pp. 266–276, 298–300, 314, 421, Newquist 1994, pp. 255–267, Russell & Norvig 2003, pp. 22–23
  161. ^ Cyc: McCorduck 2004, p. 489, Crevier 1993, pp. 239–243, Newquist 1994, pp. 431–455, Russell & Norvig 2003, p. 363−365 and Lenat & Guha 1989
  162. ^ (PDF). Archived from the original (PDF) on 8 October 2007. Retrieved 1 September 2007.
  163. ^ McCorduck 2004, pp. 436–441, Newquist 1994, pp. 231–240, Crevier 1993, pp. 211, Russell & Norvig 2003, p. 24 and see also Feigenbaum & McCorduck 1983
  164. ^ Crevier 1993, pp. 195
  165. ^ Crevier 1993, pp. 240.
  166. ^ a b c Russell & Norvig 2003, p. 25
  167. ^ McCorduck 2004, pp. 426–432, NRC 1999 under "Shift to Applied Research Increases Investment"
  168. ^ Sejnowski, Terrence J. (23 October 2018). The Deep Learning Revolution (1st ed.). Cambridge, Massachusetts London, England: The MIT Press. pp. 93–94. ISBN 978-0-262-03803-4.
  169. ^ Crevier 1993, pp. 214–215.
  170. ^ Crevier 1993, pp. 215–216.
  171. ^ Mead, Carver A.; Ismail, Mohammed (8 May 1989). Analog VLSI Implementation of Neural Systems (PDF). The Kluwer International Series in Engineering and Computer Science. Vol. 80. Norwell, MA: Kluwer Academic Publishers. doi:10.1007/978-1-4613-1639-8. ISBN 978-1-4613-1639-8.
  172. ^ Newquist 1994, pp. 501
  173. ^ Crevier 1993, pp. 203. AI winter was first used as the title of a seminar on the subject for the Association for the Advancement of Artificial Intelligence.
  174. ^ Newquist 1994, pp. 359–379, McCorduck 2004, p. 435, Crevier 1993, pp. 209–210
  175. ^ McCorduck 2004, p. 435 (who cites institutional reasons for their ultimate failure), Newquist 1994, pp. 258–283 (who cites limited deployment within corporations), Crevier 1993, pp. 204–208 (who cites the difficulty of truth maintenance, i.e., learning and updating), Lenat & Guha 1989, Introduction (who emphasizes the brittleness and the inability to handle excessive qualification.)
  176. ^ McCorduck 2004, pp. 430–431
  177. ^ a b McCorduck 2004, p. 441, Crevier 1993, p. 212. McCorduck writes "Two and a half decades later, we can see that the Japanese didn't quite meet all of those ambitious goals."
  178. ^ Newquist 1994, pp. 476
  179. ^ a b Newquist 1994, pp. 440
  180. ^ McCorduck 2004, pp. 454–462
  181. ^ Moravec (1988, p. 20) writes: "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."
  182. ^ Crevier 1993, pp. 183–190.
  183. ^ Brooks, Robert A. (1990). "Elephants Don't Play Chess" (PDF). Robotics and Autonomous Systems. 6 (1–2): 3–15. doi:10.1016/S0921-8890(05)80025-9.
  184. ^ Brooks 1990, p. 3
  185. ^ See, for example, Lakoff & Johnson 1999
  186. ^ Newquist 1994, pp. 511
  187. ^ McCorduck (2004, p. 424) discusses the fragmentation and the abandonment of AI's original goals.
  188. ^ McCorduck 2004, pp. 480–483
  189. ^ "Deep Blue". IBM Research. Retrieved 10 September 2010.
  190. ^ . Archived from the original on 31 October 2007.
  191. ^ . Archived from the original on 5 March 2014. Retrieved 25 October 2011.
  192. ^ Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times.
  193. ^ Kurzweil 2005, p. 274 writes that the improvement in computer chess, "according to common wisdom, is governed only by the brute force expansion of computer hardware."
  194. ^ Cycle time of Ferranti Mark 1 was 1.2 milliseconds, which is arguably equivalent to about 833 flops. Deep Blue ran at 11.38 gigaflops (and this does not even take into account Deep Blue's special-purpose hardware for chess). Very approximately, these differ by a factor of 10⁷.
  195. ^ McCorduck 2004, pp. 471–478, Russell & Norvig 2003, p. 55, where they write: "The whole-agent view is now widely accepted in the field". The intelligent agent paradigm is discussed in major AI textbooks, such as: Russell & Norvig 2003, pp. 32–58, 968–972, Poole, Mackworth & Goebel 1998, pp. 7–21, Luger & Stubblefield 2004, pp. 235–240
  196. ^ Carl Hewitt's Actor model anticipated the modern definition of intelligent agents. (Hewitt, Bishop & Steiger 1973) Both John Doyle (Doyle 1983) and Marvin Minsky's popular classic The Society of Mind (Minsky 1986) used the word "agent". Other "modular" proposals included Rodney Brook's subsumption architecture, object-oriented programming and others.
  197. ^ a b Russell & Norvig 2003, pp. 27, 55
  198. ^ This is how the most widely accepted textbooks of the 21st century define artificial intelligence. See Russell & Norvig 2003, p. 32 and Poole, Mackworth & Goebel 1998, p. 1
  199. ^ McCorduck 2004, p. 478
  200. ^ McCorduck 2004, pp. 486–487, Russell & Norvig 2003, pp. 25–26
  201. ^ Pearl 1988
  202. ^ Russell & Norvig 2003, p. 25−26
  203. ^ See Applications of artificial intelligence § Computer science
  204. ^ NRC 1999 under "Artificial Intelligence in the 90s", and Kurzweil 2005, p. 264
  205. ^ Russell & Norvig 2003, p. 28
  206. ^ For the new state of the art in AI based speech recognition, see The Economist (2007)
  207. ^ a b "AI-inspired systems were already integral to many everyday technologies such as internet search engines, bank software for processing transactions and in medical diagnosis." Nick Bostrom, quoted in CNN 2006
  208. ^ Olsen (2004),Olsen (2006)
  209. ^ McCorduck 2004, p. 423, Kurzweil 2005, p. 265, Hofstadter 1999, p. 601, Newquist 1994, p. 445
  210. ^ CNN 2006
  211. ^ Markoff 2005
  212. ^ The Economist 2007
  213. ^ Tascarella 2006
  214. ^ Newquist 1994, pp. 532
  215. ^ Steve Lohr (17 October 2016), "IBM Is Counting on Its Bet on Watson, and Paying Big Money for It", New York Times
  216. ^ Hampton, Stephanie E; Strasser, Carly A; Tewksbury, Joshua J; Gram, Wendy K; Budden, Amber E; Batcheller, Archer L; Duke, Clifford S; Porter, John H (1 April 2013). "Big data and the future of ecology". Frontiers in Ecology and the Environment. 11 (3): 156–162. doi:10.1890/120103. ISSN 1540-9309.
  217. ^ . bfi.uchicago.edu. Archived from the original on 18 June 2018. Retrieved 9 June 2017.
  218. ^ a b LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442. S2CID 3074096.
  219. ^ Milmo, Dan (3 November 2023). "Hope or Horror? The great AI debate dividing its pioneers". The Guardian Weekly. pp. 10–12.
  220. ^ . GOV.UK. 1 November 2023. Archived from the original on 1 November 2023. Retrieved 2 November 2023.
  221. ^ "Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration". GOV.UK (Press release). from the original on 1 November 2023. Retrieved 1 November 2023.
  222. ^ Baral, Chitta; Fuentes, Olac; Kreinovich, Vladik (June 2015). "Why Deep Neural Networks: A Possible Theoretical Explanation". Departmental Technical Reports (Cs). Retrieved 9 June 2017.
  223. ^ Ciregan, D.; Meier, U.; Schmidhuber, J. (June 2012). "Multi-column deep neural networks for image classification". 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3642–3649. arXiv:1202.2745. Bibcode:2012arXiv1202.2745C. CiteSeerX 10.1.1.300.3283. doi:10.1109/cvpr.2012.6248110. ISBN 978-1-4673-1228-8. S2CID 2161592.
  224. ^ Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times. ISSN 0362-4331. Retrieved 10 June 2017.
  225. ^ "AlphaGo: Mastering the ancient game of Go with Machine Learning". Research Blog. Retrieved 10 June 2017.
  226. ^ "Innovations of AlphaGo | DeepMind". DeepMind. Retrieved 10 June 2017.
  227. ^ University, Carnegie Mellon. "Computer Out-Plays Humans in "Doom"-CMU News - Carnegie Mellon University". www.cmu.edu. Retrieved 10 June 2017.
  228. ^ Laney, Doug (2001). "3D data management: Controlling data volume, velocity and variety". META Group Research Note. 6 (70).
  229. ^ Marr, Bernard (6 March 2014). "Big Data: The 5 Vs Everyone Must Know".
  230. ^ Goes, Paulo B. (2014). "Design science research in top information systems journals". MIS Quarterly: Management Information Systems. 38 (1).
  231. ^ Murgia, Madhumita (23 July 2023). "Transformers: the Google scientists who pioneered an AI revolution". www.ft.com. Retrieved 10 December 2023.
  232. ^ Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (22 March 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].

References

  • Russell, Stuart J.; Norvig, Peter. (2021). Artificial Intelligence: A Modern Approach (4th ed.). Hoboken: Pearson. ISBN 978-0134610993. LCCN 20190474.
  • Couturat, Louis (1901), La Logique de Leibniz
  • Byrne, J. G. (8 December 2012). (PDF). Archived from the original on 16 April 2019. Retrieved 8 August 2019.
  • Mulvihill, Mary (17 October 2012). "Ingenious Ireland".
  • Quevedo, L. Torres Quevedo (1914), "Revista de la Academia de Ciencias Exacta", Ensayos sobre Automática – Su definicion. Extension teórica de sus aplicaciones, vol. 12, pp. 391–418
  • Quevedo, L. Torres Quevedo (1915), "Revue Génerale des Sciences Pures et Appliquées", Essais sur l'Automatique - Sa définition. Etendue théorique de ses applications, vol. 2, pp. 601–611
  • Randall, Brian (1982), "From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush", fano.co.uk, retrieved 29 October 2018
  • Berlinski, David (2000), The Advent of the Algorithm, Harcourt Books, ISBN 978-0-15-601391-8, OCLC 46890682.
  • Buchanan, Bruce G. (Winter 2005), "A (Very) Brief History of Artificial Intelligence" (PDF), AI Magazine, pp. 53–60, archived from the original (PDF) on 26 September 2007, retrieved 30 August 2007.
  • Butler, Samuel (13 June 1863), "Darwin Among the Machines", The Press, Christchurch, New Zealand, retrieved 10 October 2008.
  • Colby, Kenneth M.; Watt, James B.; Gilbert, John P. (1966), "A Computer Method of Psychotherapy: Preliminary Communication", The Journal of Nervous and Mental Disease, vol. 142, no. 2, pp. 148–152, doi:10.1097/00005053-196602000-00005, PMID 5936301, S2CID 36947398.
  • Colby, Kenneth M. (September 1974), Ten Criticisms of Parry (PDF), Stanford Artificial Intelligence Laboratory, REPORT NO. STAN-CS-74-457, retrieved 17 June 2018.
  • "AI set to exceed human brain power", CNN.com, 26 July 2006, retrieved 16 October 2007.
  • Copeland, Jack (2000), Micro-World AI, retrieved 8 October 2008.
  • Cordeschi, Roberto (2002), The Discovery of the Artificial, Dordrecht: Kluwer..
  • Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
  • Darrach, Brad (20 November 1970), "Meet Shaky, the First Electronic Person", Life Magazine, pp. 58–68.
  • Doyle, J. (1983), "What is rational psychology? Toward a modern mental philosophy", AI Magazine, vol. 4, no. 3, pp. 50–53.
  • Dreyfus, Hubert (1965), Alchemy and AI, RAND Corporation Memo.
  • Dreyfus, Hubert (1972), What Computers Can't Do, New York: MIT Press, ISBN 978-0-06-090613-9, OCLC 5056816.
  • Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford, UK: Blackwell. ISBN 978-0-02-908060-3. Retrieved 22 August 2020.
  • The Economist (7 June 2007), "Are You Talking to Me?", The Economist, retrieved 16 October 2008.
  • Feigenbaum, Edward A.; McCorduck, Pamela (1983), The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World, Michael Joseph, ISBN 978-0-7181-2401-4.
  • Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press. ISBN 978-0-262-08153-5.
  • Hawkins, Jeff; Blakeslee, Sandra (2004), On Intelligence, New York, NY: Owl Books, ISBN 978-0-8050-7853-4, OCLC 61273290.
  • Hebb, D.O. (1949), The Organization of Behavior, New York: Wiley, ISBN 978-0-8058-4300-2, OCLC 48871099.
  • Hewitt, Carl; Bishop, Peter; Steiger, Richard (1973), "A Universal Modular Actor Formalism for Artificial Intelligence" (PDF), IJCAI, archived from the original (PDF) on 29 December 2009
  • Hobbes, Thomas (1651), Leviathan.
  • Hofstadter, Douglas (1999) [1979], Gödel, Escher, Bach: an Eternal Golden Braid, Basic Books, ISBN 978-0-465-02656-2, OCLC 225590743.
  • Howe, J. (November 1994), Artificial Intelligence at Edinburgh University: a Perspective, retrieved 30 August 2007.
  • Kahneman, Daniel; Slovic, D.; Tversky, Amos (1982). "Judgment under uncertainty: Heuristics and biases". Science. New York: Cambridge University Press. 185 (4157): 1124–1131. Bibcode:1974Sci...185.1124T. doi:10.1126/science.185.4157.1124. ISBN 978-0-521-28414-1. PMID 17835457. S2CID 143452957.
  • Kaplan, Andreas; Haenlein, Michael (2018), "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence", Business Horizons, 62: 15–25, doi:10.1016/j.bushor.2018.08.004, S2CID 158433736.
  • Kolata, G. (1982), "How can computers get common sense?", Science, 217 (4566): 1237–1238, Bibcode:1982Sci...217.1237K, doi:10.1126/science.217.4566.1237, PMID 17837639.
  • Kurzweil, Ray (2005), The Singularity is Near, Viking Press, ISBN 978-0-14-303788-0, OCLC 71826177.
  • Lakoff, George (1987), Women, Fire, and Dangerous Things: What Categories Reveal About the Mind, University of Chicago Press., ISBN 978-0-226-46804-4.
  • Lakoff G, Johnson M (1999). Philosophy in the flesh: The embodied mind and its challenge to western thought. Basic Books. ISBN 978-0-465-05674-3.
  • Lenat, Douglas; Guha, R. V. (1989), Building Large Knowledge-Based Systems, Addison-Wesley, ISBN 978-0-201-51752-1, OCLC 19981533.
  • Levitt, Gerald M. (2000), The Turk, Chess Automaton, Jefferson, N.C.: McFarland, ISBN 978-0-7864-0778-1.
  • Lighthill, Professor Sir James (1973), "Artificial Intelligence: A General Survey", Artificial Intelligence: a paper symposium, Science Research Council
  • Lucas, John (1961), "Minds, Machines and Gödel", Philosophy, 36 (XXXVI): 112–127, doi:10.1017/S0031819100057983, S2CID 55408480, archived from the original on 19 August 2007, retrieved 15 October 2008
  • Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.). Benjamin/Cummings. ISBN 978-0-8053-4780-7. Retrieved 17 December 2019.
  • Maker, Meg Houston (2006), , Dartmouth College, archived from the original on 8 October 2008, retrieved 16 October 2008{{citation}}: CS1 maint: location missing publisher (link)
  • Markoff, John (14 October 2005), "Behind Artificial Intelligence, a Squadron of Bright Real People", The New York Times, retrieved 16 October 2008
  • McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (31 August 1955), A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence, archived from the original on 30 September 2008, retrieved 16 October 2008
  • McCarthy, John; Hayes, P. J. (1969), "Some philosophical problems from the standpoint of artificial intelligence", in Meltzer, B. J.; Mitchie, Donald (eds.), Machine Intelligence 4, Edinburgh University Press, pp. 463–502, retrieved 16 October 2008
  • McCorduck, Pamela (2004), Machines Who Think (2nd ed.), Natick, MA: A. K. Peters, Ltd., ISBN 978-1-56881-205-2, OCLC 52197627.
  • McCullough, W. S.; Pitts, W. (1943), "A logical calculus of the ideas immanent in nervous activity", Bulletin of Mathematical Biophysics, 5 (4): 115–127, doi:10.1007/BF02478259
  • Menabrea, Luigi Federico; Lovelace, Ada (1843), "Sketch of the Analytical Engine Invented by Charles Babbage", Scientific Memoirs, 3, retrieved 29 August 2008 With notes upon the Memoir by the Translator
  • Minsky, Marvin (1967), Computation: Finite and Infinite Machines, Englewood Cliffs, N.J.: Prentice-Hall
  • Minsky, Marvin; Papert, Seymour (1969), Perceptrons: An Introduction to Computational Geometry, The MIT Press, ISBN 978-0-262-63111-2, OCLC 16924756
  • Minsky, Marvin (1974), A Framework for Representing Knowledge, retrieved 16 October 2008
  • Minsky, Marvin (1986), The Society of Mind, Simon and Schuster, ISBN 978-0-671-65713-0, OCLC 223353010
  • Minsky, Marvin (2001), It's 2001. Where Is HAL?, Dr. Dobb's Technetcast, retrieved 8 August 2009
  • Moravec, Hans (1976), The Role of Raw Power in Intelligence, archived from the original on 3 March 2016, retrieved 16 October 2008
  • Moravec, Hans (1988), Mind Children, Harvard University Press, ISBN 978-0-674-57618-6, OCLC 245755104
  • Needham, Joseph (1986). Science and Civilization in China: Volume 2. Taipei: Caves Books Ltd.
  • NRC (1999), "Developments in Artificial Intelligence", Funding a Revolution: Government Support for Computing Research, National Academy Press, ISBN 978-0-309-06278-7, OCLC 246584055
  • Newell, Allen; Simon, H. A. (1963), "GPS: A Program that Simulates Human Thought", in Feigenbaum, E.A.; Feldman, J. (eds.), Computers and Thought, New York: McGraw-Hill, ISBN 978-0-262-56092-4, OCLC 246968117
  • Newquist, HP (1994), The Brain Makers: Genius, Ego, And Greed in the Quest For Machines That Think, New York: Macmillan/SAMS, ISBN 978-0-9885937-1-8, OCLC 313139906
  • Nick, Martin (2005), Al Jazari: The Ingenious 13th Century Muslim Mechanic, Al Shindagah, retrieved 16 October 2008.
  • O'Connor, Kathleen Malone (1994), The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam, University of Pennsylvania, pp. 1–435, retrieved 10 January 2007
  • Olsen, Stefanie (10 May 2004), Newsmaker: Google's man behind the curtain, CNET, retrieved 17 October 2008.
  • Olsen, Stefanie (18 August 2006), Spying an intelligent search engine, CNET, retrieved 17 October 2008.
  • Pearl, J. (1988), Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference, San Mateo, California: Morgan Kaufmann, ISBN 978-1-55860-479-7, OCLC 249625842.
  • Russell, Stuart J.; Norvig, Peter (2003), Artificial Intelligence: A Modern Approach (2nd ed.), Upper Saddle River, New Jersey: Prentice Hall, ISBN 0-13-790395-2.
  • Poole, David; Mackworth, Alan; Goebel, Randy (1998), Computational Intelligence: A Logical Approach, Oxford University Press., ISBN 978-0-19-510270-3.
  • Samuel, Arthur L. (July 1959), "Some studies in machine learning using the game of checkers", IBM Journal of Research and Development, 3 (3): 210–219, CiteSeerX 10.1.1.368.2254, doi:10.1147/rd.33.0210, S2CID 2126705, retrieved 20 August 2007.
  • Searle, John (1980), "Minds, Brains, and Programs", Behavioral and Brain Sciences, 3 (3): 417–457, doi:10.1017/S0140525X00005756, archived from the original on 10 December 2007, retrieved 13 May 2009.
  • Simon, H. A.; Newell, Allen (1958), "Heuristic Problem Solving: The Next Advance in Operations Research", Operations Research, 6: 1–10, doi:10.1287/opre.6.1.1.
  • Simon, H. A. (1965), The Shape of Automation for Men and Management, New York: Harper & Row.
  • Skillings, Jonathan (2006), Newsmaker: Getting machines to think like us, CNET, retrieved 8 October 2008.
  • Tascarella, Patty (14 August 2006), "Robotics firms find fundraising struggle, with venture capital shy", Pittsburgh Business Times, retrieved 15 March 2016.
  • Turing, Alan (1936–37), "On Computable Numbers, with an Application to the Entscheidungsproblem", Proceedings of the London Mathematical Society, 2, s2-42 (42): 230–265, doi:10.1112/plms/s2-42.1.230, S2CID 73712, retrieved 8 October 2008.
  • Turing, Alan (October 1950), "Computing Machinery and Intelligence", Mind, LIX (236): 433–460, doi:10.1093/mind/LIX.236.433, ISSN 0026-4423.
  • Turkle, Sherry (1984). The second self: computers and the human spirit. Simon and Schuster. ISBN 978-0-671-46848-4. OCLC 895659909.
  • Wason, P. C.; Shapiro, D. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons in psychology. Harmondsworth: Penguin. Retrieved 18 November 2019.
  • Weizenbaum, Joseph (1976), Computer Power and Human Reason, W.H. Freeman & Company, ISBN 978-0-14-022535-8, OCLC 10952283.

history, artificial, intelligence, also, timeline, artificial, intelligence, progress, artificial, intelligence, history, artificial, intelligence, began, antiquity, with, myths, stories, rumors, artificial, beings, endowed, with, intelligence, consciousness, . See also Timeline of artificial intelligence and Progress in artificial intelligence The history of artificial intelligence AI began in antiquity with myths stories and rumors of artificial beings endowed with intelligence or consciousness by master craftsmen The seeds of modern AI were planted by philosophers who attempted to describe the process of human thinking as the mechanical manipulation of symbols This work culminated in the invention of the programmable digital computer in the 1940s a machine based on the abstract essence of mathematical reasoning This device and the ideas behind it inspired a handful of scientists to begin seriously discussing the possibility of building an electronic brain Alan Turing was the first person to carry out substantial research in the field that he called Machine Intelligence 1 The field of AI research was founded at a workshop held on the campus of Dartmouth College USA during the summer of 1956 2 Those who attended would become the leaders of AI research for decades Many of them predicted that a machine as intelligent as a human being would exist in no more than a generation and they were given millions of dollars to make this vision come true 3 Eventually it became obvious that researchers had grossly underestimated the difficulty of the project 4 In 1974 in response to the criticism from James Lighthill and ongoing pressure from congress the U S and British Governments stopped funding undirected research into artificial intelligence and the difficult years that followed would later be known as an AI winter Seven years later a visionary initiative by the Japanese Government inspired governments and industry to provide AI with billions of dollars but by the late 1980s the investors became disillusioned and withdrew funding again Investment and interest in AI boomed in the 2020s when machine learning was successfully applied to many problems in academia and industry due to new methods the application of powerful computer hardware and the collection of immense data sets Contents 1 Precursors 1 1 Mythical fictional and speculative precursors 1 1 1 Myth and legend 1 1 2 Medieval legends of artificial beings 1 1 3 Modern fiction 1 1 4 Automata 1 2 Formal reasoning 1 3 Computer science 2 Birth of Machine Intelligence Before 1956 2 1 Cybernetics and early neural networks 2 2 Turing Test 2 3 Game AI 2 4 Symbolic reasoning and the Logic Theorist 3 Birth of Artificial Intelligence 1956 1974 3 1 Approaches 3 1 1 Reasoning as search 3 1 2 Neural networks 3 1 3 Natural language 3 1 4 Micro worlds 3 1 5 Automata 3 2 Optimism 3 3 Financing 4 First AI winter 1974 1980 4 1 Problems 4 2 End of funding 4 3 Critiques from across campus 4 4 Perceptrons and the attack on connectionism 4 5 Logic at Stanford CMU and Edinburgh 4 6 MIT s anti logic approach 5 Boom 1980 1987 5 1 Rise of expert systems 5 2 Knowledge revolution 5 3 Money returns Fifth Generation project 5 4 Revival of neural networks 6 Bust second AI winter 1987 1993 6 1 AI winter 6 2 Nouvelle AI and embodied reason 7 AI 1993 2011 7 1 Milestones and Moore s law 7 2 Intelligent agents 7 3 Probabilistic reasoning and greater rigor 7 4 AI behind the scenes 8 Deep learning big data and artificial general intelligence 2011 present 8 1 Deep learning 8 2 Big 
Data 8 3 Large language models 9 See also 10 Notes 11 ReferencesPrecursors editMythical fictional and speculative precursors edit Myth and legend edit In Greek mythology Talos was a giant constructed of bronze who acted as guardian for the island of Crete He would throw boulders at the ships of invaders and would complete 3 circuits around the island s perimeter daily 5 According to pseudo Apollodorus Bibliotheke Hephaestus forged Talos with the aid of a cyclops and presented the automaton as a gift to Minos 6 In the Argonautica Jason and the Argonauts defeated him by way of a single plug near his foot which once removed allowed the vital ichor to flow out from his body and left him inanimate 7 Pygmalion was a legendary king and sculptor of Greek mythology famously represented in Ovid s Metamorphoses In the 10th book of Ovid s narrative poem Pygmalion becomes disgusted with women when he witnesses the way in which the Propoetides prostitute themselves 8 Despite this he makes offerings at the temple of Venus asking the goddess to bring to him a woman just like a statue he carved Medieval legends of artificial beings edit nbsp Depiction of a homunculus from Goethe s FaustIn Of the Nature of Things written by the Swiss alchemist Paracelsus he describes a procedure that he claims can fabricate an artificial man By placing the sperm of a man in horse dung and feeding it the Arcanum of Mans blood after 40 days the concoction will become a living infant 9 The earliest written account regarding golem making is found in the writings of Eleazar ben Judah of Worms in the early 13th century 10 11 During the Middle Ages it was believed that the animation of a Golem could be achieved by insertion of a piece of paper with any of God s names on it into the mouth of the clay figure 12 Unlike legendary automata like Brazen Heads 13 a Golem was unable to speak 14 Takwin the artificial creation of life was a frequent topic of Ismaili alchemical manuscripts especially those attributed to Jabir ibn Hayyan Islamic alchemists attempted to create a broad range of life through their work ranging from plants to animals 15 In Faust The Second Part of the Tragedy by Johann Wolfgang von Goethe an alchemically fabricated homunculus destined to live forever in the flask in which he was made endeavors to be born into a full human body Upon the initiation of this transformation however the flask shatters and the homunculus dies 16 Modern fiction edit Main article Artificial intelligence in fiction By the 19th century ideas about artificial men and thinking machines were developed in fiction as in Mary Shelley s Frankenstein or Karel Capek s R U R Rossum s Universal Robots 17 and speculation such as Samuel Butler s Darwin among the Machines 18 and in real world instances including Edgar Allan Poe s Maelzel s Chess Player 19 AI is common topic in science fiction through the present 20 Automata edit Main article Automaton nbsp Al Jazari s programmable automata 1206 CE Realistic humanoid automata were built by craftsman from every civilization including Yan Shi 21 Hero of Alexandria 22 Al Jazari 23 Pierre Jaquet Droz and Wolfgang von Kempelen 24 25 The oldest known automata were the sacred statues of ancient Egypt and Greece 26 The faithful believed that craftsman had imbued these figures with very real minds capable of wisdom and emotion Hermes Trismegistus wrote that by discovering the true nature of the gods man has been able to reproduce it 27 28 English scholar Alexander Neckham asserted that the Ancient Roman poet Virgil 
had built a palace with automaton statues 29 During the early modern period these legendary automata were said to possess the magical ability to answer questions put to them The late medieval alchemist and proto protestant Roger Bacon was purported to have fabricated a brazen head having developed a legend of having been a wizard 30 31 These legends were similar to the Norse myth of the Head of Mimir According to legend Mimir was known for his intellect and wisdom and was beheaded in the AEsir Vanir War Odin is said to have embalmed the head with herbs and spoke incantations over it such that Mimir s head remained able to speak wisdom to Odin Odin then kept the head near him for counsel 32 Formal reasoning edit Artificial intelligence is based on the assumption that the process of human thought can be mechanized The study of mechanical or formal reasoning has a long history Chinese Indian and Greek philosophers all developed structured methods of formal deduction in the first millennium BCE Their ideas were developed over the centuries by philosophers such as Aristotle who gave a formal analysis of the syllogism Euclid whose Elements was a model of formal reasoning al Khwarizmi who developed algebra and gave his name to algorithm and European scholastic philosophers such as William of Ockham and Duns Scotus 33 Spanish philosopher Ramon Llull 1232 1315 developed several logical machines devoted to the production of knowledge by logical means 34 Llull described his machines as mechanical entities that could combine basic and undeniable truths by simple logical operations produced by the machine by mechanical meanings in such ways as to produce all the possible knowledge 35 Llull s work had a great influence on Gottfried Leibniz who redeveloped his ideas 36 nbsp Gottfried Leibniz who speculated that human reason could be reduced to mechanical calculationIn the 17th century Leibniz Thomas Hobbes and Rene Descartes explored the possibility that all rational thought could be made as systematic as algebra or geometry 37 Hobbes famously wrote in Leviathan reason is nothing but reckoning 38 Leibniz envisioned a universal language of reasoning the characteristica universalis which would reduce argumentation to calculation so that there would be no more need of disputation between two philosophers than between two accountants For it would suffice to take their pencils in hand down to their slates and to say each other with a friend as witness if they liked Let us calculate 39 These philosophers had begun to articulate the physical symbol system hypothesis that would become the guiding faith of AI research In the 20th century the study of mathematical logic provided the essential breakthrough that made artificial intelligence seem plausible The foundations had been set by such works as Boole s The Laws of Thought and Frege s Begriffsschrift Building on Frege s system Russell and Whitehead presented a formal treatment of the foundations of mathematics in their masterpiece the Principia Mathematica in 1913 Inspired by Russell s success David Hilbert challenged mathematicians of the 1920s and 30s to answer this fundamental question can all of mathematical reasoning be formalized 33 His question was answered by Godel s incompleteness proof Turing s machine and Church s Lambda calculus 33 40 nbsp US Army photo of the ENIAC at the Moore School of Electrical Engineering 41 Their answer was surprising in two ways First they proved that there were in fact limits to what mathematical logic could accomplish But 
second and more important for AI their work suggested that within these limits any form of mathematical reasoning could be mechanized The Church Turing thesis implied that a mechanical device shuffling symbols as simple as 0 and 1 could imitate any conceivable process of mathematical deduction The key insight was the Turing machine a simple theoretical construct that captured the essence of abstract symbol manipulation 42 This invention would inspire a handful of scientists to begin discussing the possibility of thinking machines 33 43 Computer science edit Main articles History of computer hardware and History of computer science Calculating machines were designed or built in antiquity and throughout history by many people including Gottfried Leibniz 44 Joseph Marie Jacquard 45 Charles Babbage 46 Percy Ludgate 47 Leonardo Torres Quevedo 48 Vannevar Bush 49 and others Ada Lovelace speculated that Babbage s machine was a thinking or reasoning machine but warned It is desirable to guard against the possibility of exaggerated ideas that arise as to the powers of the machine 50 51 The first modern computers were the massive machines of the Second World War such as Konrad Zuse s Z3 Alan Turing s Heath Robinson and Colossus Atanasoff and Berry s and ABC and ENIAC at the University of Pennsylvania 52 ENIAC was based on the theoretical foundation laid by Alan Turing and developed by John von Neumann 53 and proved to be the most influential 52 Birth of Machine Intelligence Before 1956 edit nbsp The IBM 702 a computer used by the first generation of AI researchers In the 1940s and 50s a handful of scientists from a variety of fields mathematics psychology engineering economics and political science began to discuss the possibility of creating an artificial brain Alan Turing was the first person to carry out substantial research in the field that he called Machine Intelligence 1 The field of artificial intelligence research was founded as an academic discipline in 1956 54 Cybernetics and early neural networks edit The earliest research into thinking machines was inspired by a confluence of ideas that became prevalent in the late 1930s 1940s and early 1950s Recent research in neurology had shown that the brain was an electrical network of neurons that fired in all or nothing pulses Norbert Wiener s cybernetics described control and stability in electrical networks Claude Shannon s information theory described digital signals i e all or nothing signals Alan Turing s theory of computation showed that any form of computation could be described digitally The close relationship between these ideas suggested that it might be possible to construct an electronic brain 55 Experimental robots such as W Grey Walter s turtles and the Johns Hopkins Beast were built in the 1950s These machines did not use computers digital electronics or symbolic reasoning they were controlled entirely by analog circuitry 56 Alan Turing was thinking about machine intelligence at least as early as 1941 when he circulated a paper on machine intelligence which could be the earliest paper in the field of AI though it is now lost 1 Walter Pitts and Warren McCulloch analyzed networks of idealized artificial neurons and showed how they might perform simple logical functions in 1943 57 58 They were the first to describe what later researchers would call a neural network 59 The paper was influenced by Turing s earlier paper On Computable Numbers from 1936 using similar two state boolean neurons but was the first to apply it to neuronal 
function 1 One of the students inspired by Pitts and McCulloch was a young Marvin Minsky then a 24 year old graduate student In 1951 with Dean Edmonds he built the first neural net machine the SNARC 60 Minsky was to become one of the most important leaders and innovators in AI Turing Test edit The term Machine Intelligence was used by Alan Turing during his life which was later often referred to as Artificial Intelligence after his death in 1954 In 1950 Turing published a landmark paper and the best known of his papers Computing Machinery and Intelligence in which he speculated about the possibility of creating machines that think and the paper introduced his concept of what is now known as the Turing test to the general public 61 He noted that thinking is difficult to define and devised his famous Turing Test 62 If a machine could carry on a conversation over a teleprinter that was indistinguishable from a conversation with a human being then it was reasonable to say that the machine was thinking This simplified version of the problem allowed Turing to argue convincingly that a thinking machine was at least plausible and the paper answered all the most common objections to the proposition 63 The Turing Test was the first serious proposal in the philosophy of artificial intelligence Then followed three radio broadcasts on AI by Turing the lectures Intelligent Machinery A Heretical Theory Can Digital Computers Think and the panel discussion Can Automatic Calculating Machines be Said to Think By 1956 computer intelligence had been actively pursued for more than a decade in Britain the earliest AI programmes were written there in 1951 52 1 Game AI edit In 1951 using the Ferranti Mark 1 machine of the University of Manchester Christopher Strachey wrote a checkers program and Dietrich Prinz wrote one for chess 64 Arthur Samuel s checkers program the subject of his 1959 paper Some Studies in Machine Learning Using the Game of Checkers eventually achieved sufficient skill to challenge a respectable amateur 65 Game AI would continue to be used as a measure of progress in AI throughout its history Symbolic reasoning and the Logic Theorist edit When access to digital computers became possible in the middle fifties a few scientists instinctively recognized that a machine that could manipulate numbers could also manipulate symbols and that the manipulation of symbols could well be the essence of human thought This was a new approach to creating thinking machines 66 In 1955 Allen Newell and future Nobel Laureate Herbert A Simon created the Logic Theorist with help from J C Shaw The program would eventually prove 38 of the first 52 theorems in Russell and Whitehead s Principia Mathematica and find new and more elegant proofs for some 67 Simon said that they had solved the venerable mind body problem explaining how a system composed of matter can have the properties of mind 68 This was an early statement of the philosophical position John Searle would later call Strong AI that machines can contain minds just as human bodies do 69 Birth of Artificial Intelligence 1956 1974 editThe term Artificial Intelligence itself was officially introduced by John McCarthy in 1956 during the Dartmouth Workshop a pivotal event that marked the formal inception of AI as an academic discipline The primary objective of this workshop was to delve into the possibilities of creating machines capable of simulating human intelligence marking the commencement of a focused exploration into the realm of AI 70 The Dartmouth workshop 
Neural networks

The McCulloch and Pitts paper (1943) inspired efforts to realize the neural network approach to AI in computing hardware. The most influential was the effort led by Frank Rosenblatt on building Perceptron machines (1957–1962), with up to four layers. He was primarily funded by the Office of Naval Research.[86] Bernard Widrow and his student Ted Hoff built ADALINE (1960) and MADALINE (1962), which had up to 1000 adjustable weights.[87] A group at Stanford Research Institute, led by Charles A. Rosen and Alfred E. (Ted) Brain, built two neural network machines named MINOS I (1960) and II (1963), mainly funded by the U.S. Army Signal Corps. MINOS II[88] had 6600 adjustable weights,[89] and was controlled with an SDS 910 computer in a configuration named MINOS III (1968), which could classify symbols on army maps and recognize hand-printed characters on Fortran coding sheets.[90][91][92] Most neural network research during this early period involved building and using bespoke hardware, rather than simulation on digital computers. The hardware diversity was particularly clear in the different technologies used to implement the adjustable weights: the perceptron machines and the SNARC used potentiometers moved by electric motors; ADALINE used memistors adjusted by electroplating (though they also used simulations on an IBM 1620); and the MINOS machines used ferrite cores with multiple holes in them that could be individually blocked, with the degree of blockage representing the weights.[93]

Though there were multilayered neural networks, most neural networks during this period had only one layer of adjustable weights. There were empirical attempts at training more than a single layer, but they were unsuccessful. Backpropagation did not become prevalent for neural network training until the 1980s.[93]
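For illustration, a small sketch of the kind of single layer of adjustable weights discussed above, trained with Rosenblatt's error-correction rule (the data, learning rate and epoch count here are invented for the example):

```python
# Illustrative perceptron: one layer of adjustable weights plus a bias,
# updated by the error-correction (perceptron) learning rule.

def train_perceptron(samples, epochs=20, lr=0.1):
    """samples: list of (inputs, target) pairs with target in {0, 1}."""
    n = len(samples[0][0])
    w = [0.0] * n                  # one adjustable weight per input
    b = 0.0                        # bias (threshold)
    for _ in range(epochs):
        for x, target in samples:
            activation = sum(wi * xi for wi, xi in zip(w, x)) + b
            output = 1 if activation > 0 else 0
            error = target - output                     # +1, 0 or -1
            w = [wi + lr * error * xi for wi, xi in zip(w, x)]
            b += lr * error
    return w, b

# Linearly separable toy problem (logical AND); a single layer suffices here,
# whereas a problem like XOR cannot be solved without a second layer.
data = [([0, 0], 0), ([0, 1], 0), ([1, 0], 0), ([1, 1], 1)]
weights, bias = train_perceptron(data)
for x, t in data:
    print(x, t, 1 if sum(w * xi for w, xi in zip(weights, x)) + bias > 0 else 0)
```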
Natural language

An example of a semantic network

An important goal of AI research is to allow computers to communicate in natural languages like English. An early success was Daniel Bobrow's program STUDENT, which could solve high school algebra word problems.[94]

A semantic net represents concepts (e.g. "house", "door") as nodes, and relations among concepts (e.g. "has-a") as links between the nodes. The first AI program to use a semantic net was written by Ross Quillian,[95] and the most successful (and controversial) version was Roger Schank's Conceptual dependency theory.[96]

Joseph Weizenbaum's ELIZA could carry out conversations that were so realistic that users occasionally were fooled into thinking they were communicating with a human being and not a program (see ELIZA effect). But in fact, ELIZA had no idea what she was talking about. She simply gave a canned response, or repeated back what was said to her, rephrasing her response with a few grammar rules. ELIZA was the first chatterbot.[97]
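To make the mechanism just described concrete, here is a minimal, hypothetical sketch of ELIZA-style processing: pattern matching that returns canned responses and reflects the user's own words back with a few rewrite rules. The patterns and pronoun table are invented for illustration and are far simpler than the original program.

```python
# ELIZA-style sketch: match the input against simple patterns, then echo the
# matched fragment back with pronouns swapped, or fall back to a canned reply.
import re

REFLECTIONS = {"i": "you", "my": "your", "am": "are", "me": "you", "you": "I"}
RULES = [
    (r"i feel (.*)", "Why do you feel {0}?"),
    (r"i am (.*)",   "How long have you been {0}?"),
    (r"my (.*)",     "Tell me more about your {0}."),
]

def reflect(fragment: str) -> str:
    """Swap pronouns so the user's words can be echoed back ("my" -> "your")."""
    return " ".join(REFLECTIONS.get(w, w) for w in fragment.lower().split())

def respond(sentence: str) -> str:
    for pattern, template in RULES:
        match = re.match(pattern, sentence.lower().strip())
        if match:
            return template.format(*(reflect(g) for g in match.groups()))
    return "Please go on."            # canned fallback, used when nothing matches

print(respond("I feel ignored by my brother"))  # Why do you feel ignored by your brother?
print(respond("My job is stressful"))           # Tell me more about your job is stressful.
```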
Micro-worlds

In the late 60s, Marvin Minsky and Seymour Papert of the MIT AI Laboratory proposed that AI research should focus on artificially simple situations known as micro-worlds. They pointed out that in successful sciences like physics, basic principles were often best understood using simplified models like frictionless planes or perfectly rigid bodies. Much of the research focused on a "blocks world", which consists of colored blocks of various shapes and sizes arrayed on a flat surface.[98] This paradigm led to innovative work in machine vision by Gerald Sussman (who led the team), Adolfo Guzmán, David Waltz (who invented "constraint propagation"), and especially Patrick Winston. At the same time, Minsky and Papert built a robot arm that could stack blocks, bringing the blocks world to life. The crowning achievement of the micro-world program was Terry Winograd's SHRDLU. It could communicate in ordinary English sentences, plan operations and execute them.[99]

Automata

In Japan, Waseda University initiated the WABOT project in 1967, and in 1972 completed the WABOT-1, the world's first full-scale "intelligent" humanoid robot,[100][101] or android. Its limb control system allowed it to walk with the lower limbs, and to grip and transport objects with hands, using tactile sensors. Its vision system allowed it to measure distances and directions to objects using external receptors (artificial eyes and ears). And its conversation system allowed it to communicate with a person in Japanese, with an artificial mouth.[102][103][104]

Optimism

The first generation of AI researchers made these predictions about their work:

1958, H. A. Simon and Allen Newell: "within ten years a digital computer will be the world's chess champion" and "within ten years a digital computer will discover and prove an important new mathematical theorem".[105]
1965, H. A. Simon: "machines will be capable, within twenty years, of doing any work a man can do".[106]
1967, Marvin Minsky: "Within a generation the problem of creating 'artificial intelligence' will substantially be solved".[107]
1970, Marvin Minsky (in Life Magazine): "In from three to eight years we will have a machine with the general intelligence of an average human being".[108]

Financing

In June 1963, MIT received a $2.2 million grant from the newly created Advanced Research Projects Agency (later known as DARPA). The money was used to fund project MAC, which subsumed the "AI Group" founded by Minsky and McCarthy five years earlier. DARPA continued to provide three million dollars a year until the 70s.[109] DARPA made similar grants to Newell and Simon's program at CMU and to the Stanford AI Project (founded by John McCarthy in 1963).[110] Another important AI laboratory was established at Edinburgh University by Donald Michie in 1965.[111] These four institutions would continue to be the main centers of AI research (and funding) in academia for many years.[112]

The money was proffered with few strings attached: J. C. R. Licklider, then the director of ARPA, believed that his organization should "fund people, not projects" and allowed researchers to pursue whatever directions might interest them.[113] This created a freewheeling atmosphere at MIT that gave birth to the hacker culture,[114] but this "hands off" approach would not last.
First AI winter (1974–1980)

In the 1970s, AI was subject to critiques and financial setbacks. AI researchers had failed to appreciate the difficulty of the problems they faced. Their tremendous optimism had raised expectations impossibly high, and when the promised results failed to materialize, funding for AI disappeared.[115] At the same time, the exploration of simple, single-layer artificial neural networks was shut down almost completely for a decade, partially due to Marvin Minsky's book emphasizing the limits of what perceptrons can do.[116] Despite the difficulties with public perception of AI in the late 70s, new ideas were explored in logic programming, commonsense reasoning and many other areas.[117]

Problems

In the early seventies, the capabilities of AI programs were limited. Even the most impressive could only handle trivial versions of the problems they were supposed to solve; all the programs were, in some sense, "toys".[118] AI researchers had begun to run into several fundamental limits that could not be overcome in the 1970s. Although some of these limits would be conquered in later decades, others still stymie the field to this day.[119]

Limited computer power: There was not enough memory or processing speed to accomplish anything truly useful. For example, Ross Quillian's successful work on natural language was demonstrated with a vocabulary of only twenty words, because that was all that would fit in memory.[120] Hans Moravec argued in 1976 that computers were still millions of times too weak to exhibit intelligence. He suggested an analogy: artificial intelligence requires computer power in the same way that aircraft require horsepower. Below a certain threshold, it is impossible, but, as power increases, eventually it could become easy.[121] With regard to computer vision, Moravec estimated that simply matching the edge and motion detection capabilities of the human retina in real time would require a general-purpose computer capable of 10^9 operations/second (1000 MIPS).[122] As of 2011, practical computer vision applications require 10,000 to 1,000,000 MIPS. By comparison, the fastest supercomputer in 1976, Cray-1 (retailing at $5 million to $8 million), was only capable of around 80 to 130 MIPS, and a typical desktop computer at the time achieved less than 1 MIPS.

Intractability and the combinatorial explosion: In 1972, Richard Karp (building on Stephen Cook's 1971 theorem) showed there are many problems that can probably only be solved in exponential time (in the size of the inputs). Finding optimal solutions to these problems requires unimaginable amounts of computer time, except when the problems are trivial. This almost certainly meant that many of the "toy" solutions used by AI would probably never scale up into useful systems.[123]

Commonsense knowledge and reasoning: Many important artificial intelligence applications, like vision or natural language, require simply enormous amounts of information about the world: the program needs to have some idea of what it might be looking at or what it is talking about. This requires that the program know most of the same things about the world that a child does. Researchers soon discovered that this was a truly vast amount of information. No one in 1970 could build a database so large, and no one knew how a program might learn so much information.[124]

Moravec's paradox: Proving theorems and solving geometry problems is comparatively easy for computers, but a supposedly simple task like recognizing a face or crossing a room without bumping into anything is extremely difficult. This helps explain why research into vision and robotics had made so little progress by the middle 1970s.[125]

The frame and qualification problems: AI researchers (like John McCarthy) who used logic discovered that they could not represent ordinary deductions that involved planning or default reasoning without making changes to the structure of logic itself. They developed new logics (like non-monotonic logics and modal logics) to try to solve the problems.[126]

End of funding

See also: AI winter

The agencies which funded AI research (such as the British government, DARPA and NRC) became frustrated with the lack of progress and eventually cut off almost all funding for undirected research into AI. The pattern began as early as 1966 when the ALPAC report appeared, criticizing machine translation efforts. After spending 20 million dollars, the NRC ended all support.[127] In 1973, the Lighthill report on the state of AI research in the UK criticized the utter failure of AI to achieve its "grandiose objectives" and led to the dismantling of AI research in that country.[128] (The report specifically mentioned the combinatorial explosion problem as a reason for AI's failings.)[129] DARPA was deeply disappointed with researchers working on the Speech Understanding Research program at CMU and canceled an annual grant of three million dollars.[130] By 1974, funding for AI projects was hard to find.

The end of funding occurred even earlier for neural network research, partly due to lack of results and partly due to competition from symbolic AI research. The MINOS project ran out of funding in 1966. Rosenblatt failed to secure continued funding in the 1960s.[93]

Hans Moravec blamed the crisis on the unrealistic predictions of his colleagues: "Many researchers were caught up in a web of increasing exaggeration."[131] However, there was another issue: since the passage of the Mansfield Amendment in 1969, DARPA had been under increasing pressure to fund "mission-oriented direct research" rather than basic undirected research. Funding for the creative, freewheeling exploration that had gone on in the 60s would not come from DARPA. Instead, the money was directed at specific projects with clear objectives, such as autonomous tanks and battle management systems.[132]
Critiques from across campus

See also: Philosophy of artificial intelligence

Several philosophers had strong objections to the claims being made by AI researchers. One of the earliest was John Lucas, who argued that Gödel's incompleteness theorem showed that a formal system (such as a computer program) could never see the truth of certain statements, while a human being could.[133] Hubert Dreyfus ridiculed the broken promises of the 1960s and critiqued the assumptions of AI, arguing that human reasoning actually involved very little "symbol processing" and a great deal of embodied, instinctive, unconscious "know-how".[134][135] John Searle's Chinese Room argument, presented in 1980, attempted to show that a program could not be said to "understand" the symbols that it uses (a quality called "intentionality"). If the symbols have no meaning for the machine, Searle argued, then the machine cannot be described as "thinking".[136]

These critiques were not taken seriously by AI researchers, often because they seemed so far off the point. Problems like intractability and commonsense knowledge seemed much more immediate and serious. It was unclear what difference "know-how" or "intentionality" made to an actual computer program. Minsky said of Dreyfus and Searle: "they misunderstand, and should be ignored."[137] Dreyfus, who taught at MIT, was given a cold shoulder: he later said that AI researchers "dared not be seen having lunch with me."[138] Joseph Weizenbaum, the author of ELIZA, felt his colleagues' treatment of Dreyfus was unprofessional and childish.[139] Although he was an outspoken critic of Dreyfus's positions, he "deliberately made it plain that theirs was not the way to treat a human being."[140]

Weizenbaum began to have serious ethical doubts about AI when Kenneth Colby wrote a "computer program which can conduct psychotherapeutic dialogue" based on ELIZA.[141] Weizenbaum was disturbed that Colby saw a mindless program as a serious therapeutic tool. A feud began, and the situation was not helped when Colby did not credit Weizenbaum for his contribution to the program. In 1976, Weizenbaum published Computer Power and Human Reason, which argued that the misuse of artificial intelligence has the potential to devalue human life.[142]

Perceptrons and the attack on connectionism

A perceptron was a form of neural network introduced in 1958 by Frank Rosenblatt, who had been a schoolmate of Marvin Minsky at the Bronx High School of Science. Like most AI researchers, he was optimistic about their power, predicting that a perceptron "may eventually be able to learn, make decisions, and translate languages". An active research program into the paradigm was carried out throughout the 1960s but came to a sudden halt with the publication of Minsky and Papert's 1969 book Perceptrons. It suggested that there were severe limitations to what perceptrons could do and that Frank Rosenblatt's predictions had been grossly exaggerated. The effect of the book was devastating: virtually no research at all was funded in connectionism for 10 years.

Of the main efforts towards neural networks: Rosenblatt attempted to gather funds for building larger perceptron machines, but died in a boating accident in 1971; Minsky (of SNARC) became a staunch objector to pure connectionist AI; Widrow (of ADALINE) turned to adaptive signal processing, using techniques based on the LMS algorithm; and the SRI group (of MINOS) turned to symbolic AI and robotics. The main issues were the lack of funding and the inability to train multilayered networks (backpropagation was unknown). The competition for government funding ended with the victory of symbolic AI approaches.[92][93]

Logic at Stanford, CMU and Edinburgh

Logic was introduced into AI research as early as 1959, by John McCarthy in his Advice Taker proposal.[143] In 1963, J. Alan Robinson had discovered a simple method to implement deduction on computers: the resolution and unification algorithm. However, straightforward implementations, like those attempted by McCarthy and his students in the late 1960s, were especially intractable: the programs required astronomical numbers of steps to prove simple theorems.[144] A more fruitful approach to logic was developed in the 1970s by Robert Kowalski at the University of Edinburgh, and soon this led to the collaboration with French researchers Alain Colmerauer and Philippe Roussel, who created the successful logic programming language Prolog.[145] Prolog uses a subset of logic (Horn clauses, closely related to "rules" and "production rules") that permits tractable computation. Rules would continue to be influential, providing a foundation for Edward Feigenbaum's expert systems and the continuing work by Allen Newell and Herbert A. Simon that would lead to Soar and their unified theories of cognition.[146]
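To illustrate why Horn-clause "rules" of this kind permit tractable computation, here is a small sketch (with invented facts and rules, not taken from Prolog or any historical system): each rule has a single conclusion and a conjunction of premises, so new facts can be derived by simple forward chaining until nothing more follows.

```python
# Forward chaining over Horn-clause-style rules, written as plain Python data.
facts = {("parent", "tom", "bob"), ("parent", "bob", "ann"), ("male", "tom")}

# Each rule is (premises, conclusion); variables are strings starting with "?".
rules = [
    ([("parent", "?x", "?y"), ("male", "?x")], ("father", "?x", "?y")),
    ([("parent", "?x", "?y"), ("parent", "?y", "?z")], ("grandparent", "?x", "?z")),
]

def substitute(term, bindings):
    return tuple(bindings.get(t, t) for t in term)

def match(pattern, fact, bindings):
    """Try to unify one premise against one fact, extending the variable bindings."""
    new = dict(bindings)
    for p, f in zip(pattern, fact):
        if p.startswith("?"):
            if p in new and new[p] != f:
                return None
            new[p] = f
        elif p != f:
            return None
    return new

def forward_chain(facts, rules):
    changed = True
    while changed:
        changed = False
        for premises, conclusion in rules:
            candidates = [dict()]                       # try every combination of facts
            for premise in premises:
                candidates = [b2 for b in candidates for fact in facts
                              if (b2 := match(premise, fact, b)) is not None]
            for bindings in candidates:
                new_fact = substitute(conclusion, bindings)
                if new_fact not in facts:
                    facts.add(new_fact)
                    changed = True
    return facts

print(forward_chain(set(facts), rules))
# derives ("father", "tom", "bob") and ("grandparent", "tom", "ann")
```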
Critics of the logical approach noted, as Dreyfus had, that human beings rarely used logic when they solved problems. Experiments by psychologists like Peter Wason, Eleanor Rosch, Amos Tversky, Daniel Kahneman and others provided proof.[147] McCarthy responded that what people do is irrelevant. He argued that what is really needed are machines that can solve problems, not machines that think as people do.[148]

MIT's "anti-logic" approach

Among the critics of McCarthy's approach were his colleagues across the country at MIT. Marvin Minsky, Seymour Papert and Roger Schank were trying to solve problems like "story understanding" and "object recognition" that required a machine to think like a person. In order to use ordinary concepts like "chair" or "restaurant", they had to make all the same illogical assumptions that people normally made. Unfortunately, imprecise concepts like these are hard to represent in logic. Gerald Sussman observed that "using precise language to describe essentially imprecise concepts doesn't make them any more precise."[149] Schank described their "anti-logic" approaches as "scruffy", as opposed to the "neat" paradigms used by McCarthy, Kowalski, Feigenbaum, Newell and Simon.[150]

In 1975, in a seminal paper, Minsky noted that many of his fellow researchers were using the same kind of tool: a framework that captures all our common sense assumptions about something. For example, if we use the concept of a bird, there is a constellation of facts that immediately come to mind: we might assume that it flies, eats worms and so on. We know these facts are not always true, and that deductions using these facts will not be "logical", but these structured sets of assumptions are part of the context of everything we say and think. He called these structures "frames". Schank used a version of frames he called "scripts" to successfully answer questions about short stories in English.[151]
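A small sketch may help make the "frame" idea concrete: structured sets of default assumptions that can be inherited and overridden. The class, slot names and bird example below are illustrative only, not Minsky's notation.

```python
# Frames as defaults with inheritance: a slot lookup falls back to the parent
# frame's assumptions unless a more specific frame overrides them.

class Frame:
    def __init__(self, name, parent=None, **slots):
        self.name, self.parent, self.slots = name, parent, slots

    def get(self, slot):
        """Look up a slot, falling back to the parent frame's default assumptions."""
        if slot in self.slots:
            return self.slots[slot]
        if self.parent is not None:
            return self.parent.get(slot)
        return None

bird = Frame("bird", flies=True, eats="worms")         # default assumptions
penguin = Frame("penguin", parent=bird, flies=False)   # an exception overrides the default
tweety = Frame("tweety", parent=bird)

print(tweety.get("flies"), tweety.get("eats"))    # True worms  (inherited defaults)
print(penguin.get("flies"), penguin.get("eats"))  # False worms (overridden / inherited)
```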
Boom (1980–1987)

In the 1980s, a form of AI program called "expert systems" was adopted by corporations around the world, and knowledge became the focus of mainstream AI research. In those same years, the Japanese government aggressively funded AI with its fifth generation computer project. Another encouraging event in the early 1980s was the revival of connectionism in the work of John Hopfield and David Rumelhart. Once again, AI had achieved success.[152]

Rise of expert systems

An expert system is a program that answers questions or solves problems about a specific domain of knowledge, using logical rules that are derived from the knowledge of experts. The earliest examples were developed by Edward Feigenbaum and his students. Dendral, begun in 1965, identified compounds from spectrometer readings. MYCIN, developed in 1972, diagnosed infectious blood diseases. They demonstrated the feasibility of the approach.[153]

Expert systems restricted themselves to a small domain of specific knowledge (thus avoiding the commonsense knowledge problem), and their simple design made it relatively easy for programs to be built and then modified once they were in place. All in all, the programs proved to be useful: something that AI had not been able to achieve up to this point.[154]

In 1980, an expert system called XCON was completed at CMU for the Digital Equipment Corporation. It was an enormous success: it was saving the company 40 million dollars annually by 1986.[155] Corporations around the world began to develop and deploy expert systems, and by 1985 they were spending over a billion dollars on AI, most of it on in-house AI departments.[156] An industry grew up to support them, including hardware companies like Symbolics and Lisp Machines and software companies such as IntelliCorp and Aion.[157]
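As a toy sketch of the rule-based design described at the start of this section (the rules and facts are invented and deliberately simplistic, not drawn from MYCIN, XCON or any real system), an expert system can be reduced to a rule base evaluated by backward chaining: working from a hypothesis back to the observed facts that support it.

```python
# Backward chaining over a tiny, invented rule base for a narrow domain.
RULES = {
    "infection":      ["fever", "high_white_cell_count"],
    "needs_referral": ["infection", "not_responding_to_antibiotics"],
}

def established(goal, facts, rules):
    """A goal holds if it is an observed fact, or if some rule's premises all hold."""
    if goal in facts:
        return True
    premises = rules.get(goal)
    return premises is not None and all(established(p, facts, rules) for p in premises)

observed = {"fever", "high_white_cell_count", "not_responding_to_antibiotics"}
print(established("needs_referral", observed, RULES))   # True
```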
Knowledge revolution

The power of expert systems came from the expert knowledge they contained. They were part of a new direction in AI research that had been gaining ground throughout the 70s. "AI researchers were beginning to suspect (reluctantly, for it violated the scientific canon of parsimony) that intelligence might very well be based on the ability to use large amounts of diverse knowledge in different ways,"[158] writes Pamela McCorduck. "[T]he great lesson from the 1970s was that intelligent behavior depended very much on dealing with knowledge, sometimes quite detailed knowledge, of a domain where a given task lay."[159] Knowledge-based systems and knowledge engineering became a major focus of AI research in the 1980s.[160]

The 1980s also saw the birth of Cyc, the first attempt to attack the commonsense knowledge problem directly, by creating a massive database that would contain all the mundane facts that the average person knows. Douglas Lenat, who started and led the project, argued that there is no shortcut: the only way for machines to know the meaning of human concepts is to teach them, one concept at a time, by hand. The project was not expected to be completed for many decades.[161]

Chess-playing programs HiTech and Deep Thought defeated chess masters in 1989. Both were developed by Carnegie Mellon University; Deep Thought development paved the way for Deep Blue.[162]

Money returns: Fifth Generation project

In 1981, the Japanese Ministry of International Trade and Industry set aside $850 million for the Fifth Generation computer project. Their objectives were to write programs and build machines that could carry on conversations, translate languages, interpret pictures, and reason like human beings.[163] Much to the chagrin of "scruffies", they chose Prolog as the primary computer language for the project.[164]

Other countries responded with new programs of their own. The UK began the £350 million Alvey project. A consortium of American companies formed the Microelectronics and Computer Technology Corporation (or "MCC") to fund large-scale projects in AI and information technology.[165][166] DARPA responded as well, founding the Strategic Computing Initiative and tripling its investment in AI between 1984 and 1988.[167]

Revival of neural networks

A Hopfield net with four nodes

In 1982, physicist John Hopfield was able to prove that a form of neural network (now called a "Hopfield net") could learn and process information, and provably converges after enough time under any fixed condition. It was a breakthrough, as it was previously thought that nonlinear networks would, in general, evolve chaotically.[168] Around the same time, Geoffrey Hinton and David Rumelhart popularized a method for training neural networks called "backpropagation", also known as the reverse mode of automatic differentiation, published by Seppo Linnainmaa (1970) and applied to neural networks by Paul Werbos. These two discoveries helped to revive the exploration of artificial neural networks.[166][169]
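For illustration, a compact, toy-sized sketch of a binary Hopfield net of the kind just described (the stored pattern and noise level are invented): weights are set from a stored pattern, and repeated asynchronous updates converge to a stable state, here recovering the memorized pattern from a corrupted copy.

```python
# Toy Hopfield net: Hebbian weights from one stored pattern, asynchronous updates.
import numpy as np

stored = np.array([1, -1, 1, -1, 1, -1, 1, -1])        # pattern to memorize
W = np.outer(stored, stored).astype(float)             # Hebbian weight matrix
np.fill_diagonal(W, 0)                                  # no self-connections

def recall(state, steps=50):
    state = state.copy()
    for _ in range(steps):
        i = np.random.randint(len(state))               # pick one unit at random
        state[i] = 1 if W[i] @ state >= 0 else -1       # each flip never raises the energy
    return state

noisy = stored.copy()
noisy[:2] *= -1                                         # corrupt two units
print(recall(noisy))                                    # usually recovers `stored`
```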
Starting with the 1986 publication of Parallel Distributed Processing, a two-volume collection of papers edited by Rumelhart and psychologist James McClelland, neural networks research gained new momentum and would become commercially successful in the 1990s, applied to optical character recognition and speech recognition.[166][170]

The development of metal-oxide-semiconductor (MOS) very-large-scale integration (VLSI), in the form of complementary MOS (CMOS) technology, enabled the development of practical artificial neural network technology in the 1980s. A landmark publication in the field was the 1989 book Analog VLSI Implementation of Neural Systems by Carver A. Mead and Mohammed Ismail.[171]

Bust: second AI winter (1987–1993)

The business community's fascination with AI rose and fell in the 1980s in the classic pattern of an economic bubble. As dozens of companies failed, the perception was that the technology was not viable.[172] However, the field continued to make advances despite the criticism. Numerous researchers, including robotics developers Rodney Brooks and Hans Moravec, argued for an entirely new approach to artificial intelligence.

AI winter

The term "AI winter" was coined by researchers who had survived the funding cuts of 1974 when they became concerned that enthusiasm for expert systems had spiraled out of control and that disappointment would certainly follow.[173] Their fears were well founded: in the late 1980s and early 1990s, AI suffered a series of financial setbacks.

The first indication of a change in weather was the sudden collapse of the market for specialized AI hardware in 1987. Desktop computers from Apple and IBM had been steadily gaining speed and power, and in 1987 they became more powerful than the more expensive Lisp machines made by Symbolics and others. There was no longer a good reason to buy them. An entire industry worth half a billion dollars was demolished overnight.[174]

Eventually, the earliest successful expert systems, such as XCON, proved too expensive to maintain. They were difficult to update, they could not learn, they were "brittle" (i.e., they could make grotesque mistakes when given unusual inputs), and they fell prey to problems (such as the qualification problem) that had been identified years earlier. Expert systems proved useful, but only in a few special contexts.[175]

In the late 1980s, the Strategic Computing Initiative cut funding to AI "deeply and brutally". New leadership at DARPA had decided that AI was not "the next wave" and directed funds towards projects that seemed more likely to produce immediate results.[176]

By 1991, the impressive list of goals penned in 1981 for Japan's Fifth Generation Project had not been met. Indeed, some of them, like "carry on a casual conversation", had not been met by 2010.[177] As with other AI projects, expectations had run much higher than what was actually possible.[177][178]

Over 300 AI companies had shut down, gone bankrupt or been acquired by the end of 1993, effectively ending the first commercial wave of AI.[179] In 1994, HP Newquist stated in The Brain Makers that "the immediate future of artificial intelligence, in its commercial form, seems to rest in part on the continued success of neural networks."[179]

Nouvelle AI and embodied reason

Main articles: Nouvelle AI, behavior-based AI, situated and embodied cognitive science

In the late 1980s, several researchers advocated a completely new approach to artificial intelligence, based on robotics.[180] They believed that, to show real intelligence, a machine needs to have a body: it needs to perceive, move, survive and deal with the world. They argued that these sensorimotor skills are essential to higher-level skills like commonsense reasoning, and that abstract reasoning was actually the least interesting or important human skill (see Moravec's paradox). They advocated building intelligence "from the bottom up".[181]

The approach revived ideas from cybernetics and control theory that had been unpopular since the sixties. Another precursor was David Marr, who had come to MIT in the late 1970s from a successful background in theoretical neuroscience to lead the group studying vision. He rejected all symbolic approaches (both McCarthy's logic and Minsky's frames), arguing that AI needed to understand the physical machinery of vision from the bottom up before any symbolic processing took place. (Marr's work would be cut short by leukemia in 1980.)[182]

In his 1990 paper "Elephants Don't Play Chess,"[183] robotics researcher Rodney Brooks took direct aim at the physical symbol system hypothesis, arguing that symbols are not always necessary since "the world is its own best model. It is always exactly up to date. It always has every detail there is to be known. The trick is to sense it appropriately and often enough."[184] In the 1980s and 1990s, many cognitive scientists also rejected the symbol processing model of the mind and argued that the body was essential for reasoning, a theory called the "embodied mind thesis".[185]

AI (1993–2011)

The field of AI, now more than half a century old, finally achieved some of its oldest goals. It began to be used successfully throughout the technology industry, although somewhat behind the scenes. Some of the success was due to increasing computer power and some was achieved by focusing on specific isolated problems and pursuing them with the highest standards of scientific accountability. Still, the reputation of AI, in the business world at least, was less than pristine.[186] Inside the field there was little agreement on the reasons for AI's failure to fulfill the dream of human-level intelligence that had captured the imagination of the world in the 1960s. Together, all these factors helped to fragment AI into competing subfields focused on particular problems or approaches, sometimes even under new names that disguised the tarnished pedigree of "artificial intelligence".[187] AI was both more cautious and more successful than it had ever been.

Milestones and Moore's law

On 11 May 1997, Deep Blue became the first computer chess-playing system to beat a reigning world chess champion, Garry Kasparov.[188] The supercomputer was a specialized version of a framework produced by IBM, and was capable of processing twice as many moves per second as it had during the first match (which Deep Blue had lost): reportedly 200,000,000 moves per second.[189]

In 2005, a Stanford robot won the DARPA Grand Challenge by driving autonomously for 131 miles along an unrehearsed desert trail.[190] Two years later, a team from CMU won the DARPA Urban Challenge by autonomously navigating 55 miles in an urban environment while responding to traffic hazards and adhering to all traffic laws.[191] In February 2011, in a Jeopardy! quiz show exhibition match, IBM's question answering system, Watson, defeated the two greatest Jeopardy! champions, Brad Rutter and Ken Jennings, by a significant margin.[192]

These successes were not due to some revolutionary new paradigm, but mostly due to the tedious application of engineering skill and to the tremendous increase in the speed and capacity of computers by the 90s.[193] In fact, Deep Blue's computer was 10 million times faster than the Ferranti Mark 1 that Christopher Strachey taught to play chess in 1951.[194] This dramatic increase is measured by Moore's law, which predicts that the speed and memory capacity of computers doubles every two years, as a result of metal-oxide-semiconductor (MOS) transistor counts doubling every two years. The fundamental problem of "raw computer power" was slowly being overcome.

Intelligent agents

A new paradigm called "intelligent agents" became widely accepted during the 1990s.[195] Although earlier researchers had proposed modular "divide and conquer" approaches to AI,[196] the intelligent agent did not reach its modern form until Judea Pearl, Allen Newell, Leslie P. Kaelbling and others brought concepts from decision theory and economics into the study of AI.[197] When the economist's definition of a rational agent was married to computer science's definition of an object or module, the intelligent agent paradigm was complete.

An intelligent agent is a system that perceives its environment and takes actions which maximize its chances of success. By this definition, simple programs that solve specific problems are "intelligent agents", as are human beings and organizations of human beings, such as firms. The intelligent agent paradigm defines AI research as "the study of intelligent agents". This is a generalization of some earlier definitions of AI: it goes beyond studying human intelligence; it studies all kinds of intelligence.[198]

The paradigm gave researchers license to study isolated problems and find solutions that were both verifiable and useful. It provided a common language to describe problems and share their solutions with each other, and with other fields that also used concepts of abstract agents, like economics and control theory. It was hoped that a complete agent architecture (like Newell's SOAR) would one day allow researchers to build more versatile and intelligent systems out of interacting intelligent agents.[197][199]

Probabilistic reasoning and greater rigor

AI researchers began to develop and use sophisticated mathematical tools more than they ever had in the past.[200] There was a widespread realization that many of the problems that AI needed to solve were already being worked on by researchers in fields like mathematics, electrical engineering, economics or operations research. The shared mathematical language allowed both a higher level of collaboration with more established and successful fields and the achievement of results which were measurable and provable; AI had become a more "rigorous" scientific discipline.

Judea Pearl's influential 1988 book[201] brought probability and decision theory into AI. Among the many new tools in use were Bayesian networks, hidden Markov models, information theory, stochastic modeling and classical optimization. Precise mathematical descriptions were also developed for "computational intelligence" paradigms like neural networks and evolutionary algorithms.[202]
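As a small, invented example of the kind of probabilistic tool just mentioned, a two-node Bayesian network (Disease influencing Test) can be queried directly with Bayes' rule; the probabilities below are made up for illustration.

```python
# Inference in a minimal two-node Bayesian network via Bayes' rule.
p_disease = 0.01                       # prior P(disease)
p_pos_given_disease = 0.95             # test sensitivity
p_pos_given_healthy = 0.05             # false positive rate

p_pos = (p_pos_given_disease * p_disease
         + p_pos_given_healthy * (1 - p_disease))          # marginal P(positive)
p_disease_given_pos = p_pos_given_disease * p_disease / p_pos

print(round(p_disease_given_pos, 3))   # about 0.161: a positive test alone is weak evidence
```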
AI behind the scenes

Algorithms originally developed by AI researchers began to appear as parts of larger systems. AI had solved a lot of very difficult problems,[203] and their solutions proved to be useful throughout the technology industry,[204] such as data mining, industrial robotics, logistics,[205] speech recognition,[206] banking software,[207] medical diagnosis[207] and Google's search engine.[208]

The field of AI received little or no credit for these successes in the 1990s and early 2000s. Many of AI's greatest innovations have been reduced to the status of just another item in the tool chest of computer science.[209] Nick Bostrom explains: "A lot of cutting edge AI has filtered into general applications, often without being called AI because once something becomes useful enough and common enough it's not labeled AI anymore."[210]

Many researchers in AI in the 1990s deliberately called their work by other names, such as informatics, knowledge-based systems, cognitive systems or computational intelligence. In part, this may have been because they considered their field to be fundamentally different from AI, but also the new names helped to procure funding. In the commercial world at least, the failed promises of the AI Winter continued to haunt AI research into the 2000s, as the New York Times reported in 2005: "Computer scientists and software engineers avoided the term artificial intelligence for fear of being viewed as wild-eyed dreamers."[211][212][213][214]

Deep learning, big data and artificial general intelligence (2011–present)

In the first decades of the 21st century, access to large amounts of data (known as "big data"), cheaper and faster computers and advanced machine learning techniques were successfully applied to many problems throughout the economy. In fact, McKinsey Global Institute estimated in their paper "Big data: The next frontier for innovation, competition, and productivity" that by 2009 nearly all sectors in the US economy had at least an average of 200 terabytes of stored data.

By 2016, the market for AI-related products, hardware and software reached more than 8 billion dollars, and the New York Times reported that interest in AI had reached a "frenzy".[215] The applications of big data began to reach into other fields as well, such as training models in ecology[216] and for various applications in economics.[217] Advances in deep learning (particularly deep convolutional neural networks and recurrent neural networks) drove progress and research in image and video processing, text analysis, and even speech recognition.[218]

The first global AI Safety Summit was held in Bletchley Park in November 2023 to discuss the near- and far-term risks of AI and the possibility of mandatory and voluntary regulatory frameworks.[219] 28 countries, including the United States, China and the European Union, issued a declaration at the start of the summit, calling for international co-operation to manage the challenges and risks of artificial intelligence.[220][221]

Deep learning

Main article: Deep learning

Deep learning is a branch of machine learning that models high-level abstractions in data by using a deep graph with many processing layers.[218] According to the universal approximation theorem, depth is not necessary for a neural network to be able to approximate arbitrary continuous functions. Even so, there are many problems that are common to shallow networks (such as overfitting) that deep networks help avoid.[222] As such, deep neural networks are able to realistically generate much more complex models as compared to their shallow counterparts.

However, deep learning has problems of its own. A common problem for recurrent neural networks is the vanishing gradient problem, in which gradients passed between layers gradually shrink and effectively disappear as they are rounded off to zero. Many methods have been developed to approach this problem, such as long short-term memory (LSTM) units.
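A tiny numerical illustration of the vanishing gradient problem just described (synthetic and deliberately simplified, not drawn from any cited system): backpropagating through many sigmoid layers multiplies the gradient by a derivative that is at most 0.25, so the signal shrinks toward zero as depth grows.

```python
# Repeatedly apply the chain rule through sigmoid layers and watch the gradient shrink.
import math

def sigmoid(x):
    return 1.0 / (1.0 + math.exp(-x))

gradient = 1.0
activation = 0.5                              # kept fixed here purely for simplicity
for layer in range(30):                       # pretend there are 30 stacked sigmoid layers
    s = sigmoid(activation)
    gradient *= s * (1 - s)                   # one factor of the chain rule per layer
    if layer in (0, 9, 19, 29):
        print(f"after layer {layer + 1:2d}: gradient ~ {gradient:.2e}")
# The gradient reaching the earliest layers is vanishingly small, which is one
# reason architectures such as LSTMs (with additive gating paths) were introduced.
```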
State-of-the-art deep neural network architectures can sometimes even rival human accuracy in fields like computer vision, specifically on tasks like the MNIST database and traffic sign recognition.[223] Language processing engines powered by smart search engines can easily beat humans at answering general trivia questions (such as IBM Watson), and recent developments in deep learning have produced astounding results in competing with humans, in games like Go and Doom (which, being a first-person shooter game, has sparked some controversy).[224][225][226][227]

Big Data

Main article: Big Data

Big data refers to a collection of data that cannot be captured, managed and processed by conventional software tools within a certain time frame. It is a massive volume of data whose decision-making, insight and process-optimization capabilities require new processing models. In The Big Data Era, written by Viktor Mayer-Schönberger and Kenneth Cukier, big data means that instead of random analysis (sample surveys), all data is used for analysis. The 5V characteristics of big data were proposed by IBM: Volume, Velocity, Variety,[228] Value,[229] Veracity.[230] The strategic significance of big data technology is not to master huge data information, but to specialize in these meaningful data. In other words, if big data is likened to an industry, the key to realizing profitability in this industry is to increase the processing capability of the data and realize the "value added" of the data through processing.

Large language models

Main article: Large language models

In 2017, the transformer architecture was proposed by Google researchers. It exploits an attention mechanism and later became widely used in large language models.[231]

Foundation models, which are large language models trained on vast quantities of unlabeled data that can be adapted to a wide range of downstream tasks, began to be developed in 2018.

Models such as GPT-3 (released by OpenAI in 2020) and Gato (released by DeepMind in 2022) have been described as important achievements of machine learning.

In 2023, Microsoft Research tested the GPT-4 large language model with a large variety of tasks and concluded that it "could reasonably be viewed as an early (yet still incomplete) version of an artificial general intelligence (AGI) system".[232]
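As a minimal sketch of the scaled dot-product attention at the heart of the transformer architecture mentioned above (the shapes and data are invented; real models add multiple heads, masking and learned projections):

```python
# Scaled dot-product attention: each output row is a weighted average of the
# value vectors, weighted by the similarity between its query and every key.
import numpy as np

def attention(Q, K, V):
    scores = Q @ K.T / np.sqrt(K.shape[-1])                    # similarity, scaled by sqrt(d_k)
    weights = np.exp(scores - scores.max(axis=-1, keepdims=True))
    weights = weights / weights.sum(axis=-1, keepdims=True)    # softmax over the keys
    return weights @ V

rng = np.random.default_rng(0)
Q = rng.normal(size=(3, 4))       # 3 query positions, dimension 4
K = rng.normal(size=(5, 4))       # 5 key positions
V = rng.normal(size=(5, 4))       # one value vector per key
print(attention(Q, K, V).shape)   # (3, 4)
```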
See also

History of artificial neural networks
History of knowledge representation and reasoning
History of natural language
processing Outline of artificial intelligence Progress in artificial intelligence Timeline of artificial intelligence Timeline of machine learningNotes edit a b c d e f Copeland J Ed 2004 The Essential Turing the ideas that gave birth to the computer age Oxford Clarendon Press ISBN 0 19 825079 7 Kaplan Andreas Haenlein Michael 2019 Siri Siri in my hand Who s the fairest in the land On the interpretations illustrations and implications of artificial intelligence Business Horizons 62 15 25 doi 10 1016 j bushor 2018 08 004 S2CID 158433736 Newquist 1994 pp 143 156 Newquist 1994 pp 144 152 The Talos episode in Argonautica 4 Bibliotheke 1 9 26 Rhodios Apollonios 2007 The Argonautika Expanded Edition University of California Press p 355 ISBN 978 0 520 93439 9 OCLC 811491744 Morford Mark 2007 Classical mythology Oxford p 184 ISBN 978 0 19 085164 4 OCLC 1102437035 a href Template Cite book html title Template Cite book cite book a CS1 maint location missing publisher link Linden Stanton J 2003 The alchemy reader from Hermes Trismegistus to Isaac Newton New York Cambridge University Press pp Ch 18 ISBN 0 521 79234 7 OCLC 51210362 Kressel Matthew 1 October 2015 36 Days of Judaic Myth Day 24 The Golem of Prague Matthew Kressel Retrieved 15 March 2020 Newquist 1994 p page needed GOLEM www jewishencyclopedia com Retrieved 15 March 2020 Newquist 1994 p 38 Sanhedrin 65b www sefaria org Retrieved 15 March 2020 O Connor Kathleen Malone 1994 The alchemical creation of life takwin and other concepts of Genesis in medieval Islam Dissertations Available from ProQuest 1 435 Goethe Johann Wolfgang von 1890 Faust a tragedy Translated in the original metres by Bayard Taylor Authorised ed published by special arrangement with Mrs Bayard Taylor With a biographical introd London Ward Lock McCorduck 2004 pp 17 25 Butler 1863 Newquist 1994 p 65 Cave Stephen Dihal Kanta 2019 Hopes and fears for intelligent machines in fiction and reality Nature Machine Intelligence 1 2 74 78 doi 10 1038 s42256 019 0020 9 ISSN 2522 5839 S2CID 150700981 Needham 1986 p 53 McCorduck 2004 p 6 Nick 2005 McCorduck 2004 p 17 Levitt 2000 Newquist 1994 p 30 Quoted in McCorduck 2004 p 8 Crevier 1993 p 1 and McCorduck 2004 pp 6 9 discusses sacred statues Other important automata were built by Haroun al Rashid McCorduck 2004 p 10 Jacques de Vaucanson Newquist 1994 p 40 McCorduck 2004 p 16 and Leonardo Torres y Quevedo McCorduck 2004 pp 59 62 Cave S Dihal K Dillon S 2020 AI Narratives A History of Imaginative Thinking about Intelligent Machines Oxford University Press p 56 ISBN 978 0 19 884666 6 Retrieved 2 May 2023 Butler E M Eliza Marian 1948 The myth of the magus London Cambridge University Press ISBN 0 521 22564 7 OCLC 5063114 Porterfield A 2006 The Protestant Experience in America American religious experience Greenwood Press p 136 ISBN 978 0 313 32801 5 Retrieved 15 May 2023 Hollander Lee M 1964 Heimskringla history of the kings of Norway Austin Published for the American Scandinavian Foundation by the University of Texas Press ISBN 0 292 73061 6 OCLC 638953 a b c d Berlinski 2000 Cfr Carreras Artau Tomas y Joaquin Historia de la filosofia espanola Filosofia cristiana de los siglos XIII al XV Madrid 1939 Volume I Bonner Anthonny The Art and Logic of Ramon Llull A User s Guide Brill 2007 Anthony Bonner ed Doctor Illuminatus A Ramon Llull Reader Princeton University 1985 Vid Llull s Influence The History of Lullism at 57 71 17th century mechanism and AI McCorduck 2004 pp 37 46 Russell amp Norvig 2003 p 6 Buchanan 2005 p 53 Hobbes and AI McCorduck 
2004 p 42 Hobbes 1651 chapter 5 Leibniz and AI McCorduck 2004 p 41 Russell amp Norvig 2003 p 6 Berlinski 2000 p 12 Buchanan 2005 p 53 The Lambda calculus was especially important to AI since it was an inspiration for Lisp the most important programming language used in AI Crevier 1993 pp 190 196 61 The original photo can be seen in the article Rose Allen April 1946 Lightning Strikes Mathematics Popular Science 83 86 Retrieved 15 April 2012 Newquist 1994 p 56 The Turing machine McCorduck 2004 pp 63 64 Crevier 1993 pp 22 24 Russell amp Norvig 2003 p 8 and see Turing 1936 37 Couturat 1901 Russell amp Norvig 2021 p 15 Russell amp Norvig 2021 p 15 Newquist 1994 p 67 Randall 1982 p 4 5 Byrne 2012 Mulvihill 2012 Randall 1982 p 6 11 13 Quevedo 1914 Quevedo 1915 Randall 1982 pp 13 16 17 Quoted in Russell amp Norvig 2021 p 15 Menabrea amp Lovelace 1843 a b Russell amp Norvig 2021 p 14 McCorduck 2004 pp 76 80 Kaplan Andreas Artificial Intelligence Business and Civilization Our Fate Made in Machines Retrieved 11 March 2022 McCorduck 2004 pp 51 57 80 107 Crevier 1993 pp 27 32 Russell amp Norvig 2003 pp 15 940 Moravec 1988 p 3 Cordeschi 2002 Chap 5 McCorduck 2004 p 98 Crevier 1993 pp 27 28 Russell amp Norvig 2003 pp 15 940 Moravec 1988 p 3 Cordeschi 2002 Chap 5 McCulloch Warren S Pitts Walter 1 December 1943 A logical calculus of the ideas immanent in nervous activity Bulletin of Mathematical Biophysics 5 4 115 133 doi 10 1007 BF02478259 ISSN 1522 9602 Piccinini Gualtiero 1 August 2004 The First Computational Theory of Mind and Brain A Close Look at Mcculloch and Pitts s Logical Calculus of Ideas Immanent in Nervous Activity Synthese 141 2 175 215 doi 10 1023 B SYNT 0000043018 52445 3e ISSN 1573 0964 S2CID 10442035 McCorduck 2004 pp 51 57 88 94 Crevier 1993 p 30 Russell amp Norvig 2003 p 15 16 Cordeschi 2002 Chap 5 and see also McCullough amp Pitts 1943 McCorduck 2004 p 102 Crevier 1993 pp 34 35 and Russell amp Norvig 2003 p 17 McCorduck 2004 pp 70 72 Crevier 1993 p 22 25 Russell amp Norvig 2003 pp 2 3 and 948 Haugeland 1985 pp 6 9 Cordeschi 2002 pp 170 176 See also Turing 1950 Newquist 1994 pp 92 98 Russell amp Norvig 2003 p 948 claim that Turing answered all the major objections to AI that have been offered in the years since the paper appeared See A Brief History of Computing at AlanTuring net Schaeffer Jonathan One Jump Ahead Challenging Human Supremacy in Checkers 1997 2009 Springer ISBN 978 0 387 76575 4 Chapter 6 McCorduck 2004 pp 137 170 Crevier 1993 pp 44 47 McCorduck 2004 pp 123 125 Crevier 1993 pp 44 46 and Russell amp Norvig 2003 p 17 Quoted in Crevier 1993 p 46 and Russell amp Norvig 2003 p 17 Russell amp Norvig 2003 p 947 952 Chatterjee Sheshadri N S Sreenivasulu Hussain Zahid 1 January 2021 Evolution of artificial intelligence and its impact on human rights from sociolegal perspective International Journal of Law and Management 64 2 184 205 doi 10 1108 IJLMA 06 2021 0156 ISSN 1754 243X McCorduck 2004 pp 111 136 Crevier 1993 pp 49 51 and Russell amp Norvig 2003 p 17 Newquist 1994 pp 91 112 See McCarthy et al 1955 Also see Crevier 1993 p 48 where Crevier states the proposal later became known as the physical symbol systems hypothesis The physical symbol system hypothesis was articulated and named by Newell and Simon in their paper on GPS Newell amp Simon 1963 It includes a more specific definition of a machine as an agent that manipulates symbols See the philosophy of artificial intelligence McCorduck 2004 pp 129 130 discusses how the Dartmouth conference alumni dominated the first two 
decades of AI research calling them the invisible college I won t swear and I hadn t seen it before McCarthy told Pamela McCorduck in 1979 McCorduck 2004 p 114 However McCarthy also stated unequivocally I came up with the term in a CNET interview Skillings 2006 McCarthy John 1988 Review of The Question of Artificial Intelligence Annals of the History of Computing 10 3 224 229 collected in McCarthy John 1996 10 Review of The Question of Artificial Intelligence Defending AI Research A Collection of Essays and Reviews CSLI p 73 O ne of the reasons for inventing the term artificial intelligence was to escape association with cybernetics Its concentration on analog feedback seemed misguided and I wished to avoid having either to accept Norbert not Robert Wiener as a guru or having to argue with him Crevier 1993 pp 49 writes the conference is generally recognized as the official birthdate of the new science Russell and Norvig write it was astonishing whenever a computer did anything remotely clever Russell amp Norvig 2003 p 18 Crevier 1993 pp 52 107 Moravec 1988 p 9 and Russell amp Norvig 2003 p 18 21 McCorduck 2004 p 218 Newquist 1994 pp 91 112 Crevier 1993 pp 108 109 and Russell amp Norvig 2003 p 21 Crevier 1993 pp 52 107 Moravec 1988 p 9 Means ends analysis reasoning as search McCorduck 2004 pp 247 248 Russell amp Norvig 2003 pp 59 61 Heuristic McCorduck 2004 p 246 Russell amp Norvig 2003 pp 21 22 GPS McCorduck 2004 pp 245 250 Crevier 1993 p GPS Russell amp Norvig 2003 p GPS Crevier 1993 pp 51 58 65 66 and Russell amp Norvig 2003 pp 18 19 McCorduck 2004 pp 268 271 Crevier 1993 pp 95 96 Newquist 1994 pp 148 156 Moravec 1988 pp 14 15 Rosenblatt Frank Principles of neurodynamics Perceptrons and the theory of brain mechanisms Vol 55 Washington DC Spartan books 1962 Widrow B Lehr M A September 1990 30 years of adaptive neural networks perceptron Madaline and backpropagation Proceedings of the IEEE 78 9 1415 1442 doi 10 1109 5 58323 Rosen Charles A Nils J Nilsson and Milton B Adams A research and development program in applications of intelligent automata to reconnaissance phase I Proposal for Research SRI No ESU 65 1 8 January 1965 Nilsson Nils J The SRI Artificial Intelligence Center A Brief History Artificial Intelligence Center SRI International 1984 Hart Peter E Nilsson Nils J Perrault Ray Mitchell Tom Kulikowski Casimir A Leake David B 15 March 2003 In Memoriam Charles Rosen Norman Nielsen and Saul Amarel AI Magazine 24 1 6 6 doi 10 1609 aimag v24i1 1683 ISSN 2371 9621 Nilsson Nils J 2009 Section 4 2 Neural Networks The Quest for Artificial Intelligence Cambridge Cambridge University Press doi 10 1017 cbo9780511819346 ISBN 978 0 521 11639 8 a b Nielson Donald L 1 January 2005 Chapter 4 The Life and Times of a Successful SRI Laboratory Artificial Intelligence and Robotics PDF A HERITAGE OF INNOVATION SRI s First Half Century 1st ed SRI International ISBN 978 0 9745208 0 3 a b c d Olazaran Rodriguez Jose Miguel A historical sociology of neural network research PhD Dissertation University of Edinburgh 1991 See especially Chapter 2 and 3 McCorduck 2004 p 286 Crevier 1993 pp 76 79 Russell amp Norvig 2003 p 19 Crevier 1993 pp 79 83 Crevier 1993 pp 164 172 McCorduck 2004 pp 291 296 Crevier 1993 pp 134 139 McCorduck 2004 pp 299 305 Crevier 1993 pp 83 102 Russell amp Norvig 2003 p 19 and Copeland 2000 McCorduck 2004 pp 300 305 Crevier 1993 pp 84 102 Russell amp Norvig 2003 p 19 Humanoid History WABOT Zeghloul Said Laribi Med Amine Gazeau Jean Pierre 21 September 2015 Robotics and Mechatronics 
Proceedings of the 4th IFToMM International Symposium on Robotics and Mechatronics. Springer. ISBN 9783319223681 – via Google Books.
"Historical Android Projects". androidworld.com.
Robots: From Science Fiction to Technological Revolution, page 130.
Duffy, Vincent G. (19 April 2016). Handbook of Digital Human Modeling: Research for Applied Ergonomics and Human Factors Engineering. CRC Press. ISBN 9781420063523 – via Google Books.
Simon & Newell 1958, pp. 7–8, quoted in Crevier 1993, p. 108. See also Russell & Norvig 2003, p. 21.
Simon 1965, p. 96, quoted in Crevier 1993, p. 109.
Minsky 1967, p. 2, quoted in Crevier 1993, p. 109.
Minsky strongly believes he was misquoted. See McCorduck 2004, pp. 272–274; Crevier 1993, p. 96; and Darrach 1970.
Crevier 1993, pp. 64–65.
Crevier 1993, p. 94.
Howe 1994.
McCorduck 2004, p. 131; Crevier 1993, p. 51. McCorduck also notes that funding was mostly under the direction of alumni of the Dartmouth workshop of 1956.
Crevier 1993, p. 65.
Crevier 1993, pp. 68–71, and Turkle 1984.
Crevier 1993, pp. 100–144, and Russell & Norvig 2003, pp. 21–22.
McCorduck 2004, pp. 104–107; Crevier 1993, pp. 102–105; Russell & Norvig 2003, p. 22.
Crevier 1993, pp. 163–196.
Crevier 1993, p. 146.
Russell & Norvig 2003, pp. 20–21; Newquist 1994, p. 336.
Crevier 1993, pp. 146–148; see also Buchanan 2005, p. 56: "Early programs were necessarily limited in scope by the size and speed of memory".
Moravec 1976. McCarthy has always disagreed with Moravec, back to their early days together at SAIL. He states, "I would say that 50 years ago, the machine capability was much too small, but by 30 years ago, machine capability wasn't the real problem," in a CNET interview (Skillings 2006).
Hans Moravec, ROBOT: Mere Machine to Transcendent Mind.
Russell & Norvig 2003, pp. 9, 21–22, and Lighthill 1973.
McCorduck 2004, pp. 300 & 421; Crevier 1993, pp. 113–114; Moravec 1988, p. 13; Lenat & Guha 1989 (Introduction); Russell & Norvig 2003, p. 21.
McCorduck 2004, p. 456; Moravec 1988, pp. 15–16.
McCarthy & Hayes 1969; Crevier 1993, pp. 117–119.
McCorduck 2004, pp. 280–281; Crevier 1993, p. 110; Russell & Norvig 2003, p. 21; and NRC 1999 under "Success in Speech Recognition".
Crevier 1993, p. 117; Russell & Norvig 2003, p. 22; Howe 1994; and see also Lighthill 1973.
Russell & Norvig 2003, p. 22; Lighthill 1973. John McCarthy wrote in response that "the combinatorial explosion problem has been recognized in AI from the beginning" in Review of Lighthill report.
Crevier 1993, pp. 115–116 (on whom this account is based). Other views include McCorduck 2004, pp. 306–313, and NRC 1999 under "Success in Speech Recognition".
Crevier 1993, p. 115. Moravec explains: "Their initial promises to DARPA had been much too optimistic. Of course, what they delivered stopped considerably short of that. But they felt they couldn't in their next proposal promise less than in the first one, so they promised more."
NRC 1999 under "Shift to Applied Research Increases Investment". While the autonomous tank was a failure, the battle management system (called "DART") proved to be enormously successful, saving billions in the first Gulf War, repaying the investment and justifying DARPA's pragmatic policy, at least as far as DARPA was concerned.
Lucas and Penrose's critique of AI: Crevier 1993, p. 22; Russell & Norvig 2003, pp. 949–950; Hofstadter 1999, pp. 471–477; and see Lucas 1961.
"Know-how" is Dreyfus' term. (Dreyfus makes a distinction between "knowing how" and "knowing that", a modern version of Heidegger's distinction of ready-to-hand and present-at-hand.) (Dreyfus & Dreyfus 1986)
Dreyfus' critique of artificial intelligence: McCorduck 2004, pp. 211–239; Crevier 1993, pp. 120–132; Russell & Norvig 2003, pp. 950–952; and see Dreyfus 1965, Dreyfus 1972, and Dreyfus & Dreyfus 1986.
Searle's critique of AI: McCorduck 2004, pp. 443–445; Crevier 1993, pp. 269–271; Russell & Norvig 2003, pp. 958–960; and see Searle 1980.
Quoted in Crevier 1993, p. 143.
Quoted in Crevier 1993, p. 122.
Newquist 1994, p. 276.
"I became the only member of the AI community to be seen eating lunch with Dreyfus. And I deliberately made it plain that theirs was not the way to treat a human being." Joseph Weizenbaum, quoted in Crevier 1993, p. 123.
Colby, Watt & Gilbert 1966, p. 148. Weizenbaum referred to this text in Weizenbaum 1976, pp. 5–6. Colby and his colleagues later also developed chatterbot-like "computer simulations of paranoid processes (PARRY)" to "make intelligible paranoid processes in explicit symbol processing terms" (Colby 1974, p. 6).
Weizenbaum's critique of AI: McCorduck 2004, pp. 356–373; Crevier 1993, pp. 132–144; Russell & Norvig 2003, p. 961; and see Weizenbaum 1976.
McCorduck 2004, p. 51; Russell & Norvig 2003, pp. 19, 23.
McCorduck 2004, p. 51; Crevier 1993, pp. 190–192.
Crevier 1993, pp. 193–196.
Crevier 1993, pp. 145–149, 258–263.
Wason & Shapiro 1966 showed that people do poorly on completely abstract problems, but if the problem is restated to allow the use of intuitive social intelligence, performance dramatically improves (see Wason selection task). Kahneman, Slovic & Tversky 1982 have shown that people are terrible at elementary problems that involve uncertain reasoning (see list of cognitive biases for several examples). Eleanor Rosch's work is described in Lakoff 1987.
An early example of McCarthy's position was in the journal Science, where he said "This is AI, so we don't care if it's psychologically real" (Kolata 1982), and he recently reiterated his position at the AI@50 conference where he said "Artificial intelligence is not, by definition, simulation of human intelligence" (Maker 2006).
Crevier 1993, p. 175.
Neat vs. scruffy: McCorduck 2004, pp. 421–424 (who picks up the state of the debate in 1984); Crevier 1993, p. 168 (who documents Schank's original use of the term). Another aspect of the conflict was called "the procedural/declarative distinction" but did not prove to be influential in later AI research.
McCorduck 2004, pp. 305–306; Crevier 1993, pp. 170–173, 246; and Russell & Norvig 2003, p. 24. Minsky's frame paper: Minsky 1974.
Newquist 1994, pp. 189–192.
McCorduck 2004, pp. 327–335 (Dendral); Crevier 1993, pp. 148–159; Newquist 1994, p. 271; Russell & Norvig 2003, pp. 22–23.
Crevier 1993, pp. 158–159, and Russell & Norvig 2003, pp. 23–24.
Crevier 1993, p. 198.
Newquist 1994, p. 259.
McCorduck 2004, pp. 434–435; Crevier 1993, pp. 161–162, 197–203; Newquist 1994, p. 275; and Russell & Norvig 2003, p. 24.
McCorduck 2004, p. 299.
McCorduck 2004, p. 421.
Knowledge revolution: McCorduck 2004, pp. 266–276, 298–300, 314, 421; Newquist 1994, pp. 255–267; Russell & Norvig 2003, pp. 22–23.
Cyc: McCorduck 2004, p. 489; Crevier 1993, pp. 239–243; Newquist 1994, pp. 431–455; Russell & Norvig 2003, pp. 363–365; and Lenat & Guha 1989.
"Chess: Checkmate" (PDF). Archived from the original (PDF) on 8 October 2007. Retrieved 1 September 2007.
McCorduck 2004, pp. 436–441; Newquist 1994, pp. 231–240; Crevier 1993, p. 211; Russell & Norvig 2003, p. 24; and see also Feigenbaum & McCorduck 1983.
Crevier 1993, p. 195.
Crevier 1993, p. 240.
Russell & Norvig 2003, p. 25.
McCorduck 2004, pp. 426–432; NRC 1999 under "Shift to Applied Research Increases Investment".
Sejnowski, Terrence J. (23 October 2018). The Deep Learning Revolution (1st ed.). Cambridge, Massachusetts; London, England: The MIT Press. pp. 93–94. ISBN 978-0-262-03803-4.
Crevier 1993, pp. 214–215.
Crevier 1993, pp. 215–216.
Mead, Carver A.; Ismail, Mohammed (8 May 1989). Analog VLSI Implementation of Neural Systems (PDF). The Kluwer International Series in Engineering and Computer Science. Vol. 80. Norwell, MA: Kluwer Academic Publishers. doi:10.1007/978-1-4613-1639-8. ISBN 978-1-4613-1639-8.
Newquist 1994, p. 501.
Crevier 1993, p. 203. "AI winter" was first used as the title of a seminar on the subject for the Association for the Advancement of Artificial Intelligence.
Newquist 1994, pp. 359–379.
McCorduck 2004, p. 435; Crevier 1993, pp. 209–210.
McCorduck 2004, p. 435 (who cites institutional reasons for their ultimate failure); Newquist 1994, pp. 258–283 (who cites limited deployment within corporations); Crevier 1993, pp. 204–208 (who cites the difficulty of truth maintenance, i.e. learning and updating); Lenat & Guha 1989 (Introduction) (who emphasizes the brittleness and the inability to handle excessive qualification).
McCorduck 2004, pp. 430–431.
McCorduck 2004, p. 441; Crevier 1993, p. 212. McCorduck writes, "Two and a half decades later, we can see that the Japanese didn't quite meet all of those ambitious goals."
Newquist 1994, p. 476.
Newquist 1994, p. 440.
McCorduck 2004, pp. 454–462.
Moravec 1988, p. 20, writes: "I am confident that this bottom-up route to artificial intelligence will one day meet the traditional top-down route more than half way, ready to provide the real world competence and the commonsense knowledge that has been so frustratingly elusive in reasoning programs. Fully intelligent machines will result when the metaphorical golden spike is driven uniting the two efforts."
Crevier 1993, pp. 183–190.
Brooks, Rodney A. (1990). "Elephants Don't Play Chess" (PDF). Robotics and Autonomous Systems. 6 (1–2): 3–15. doi:10.1016/S0921-8890(05)80025-9.
Brooks 1990, p. 3.
See, for example, Lakoff & Johnson 1999.
Newquist 1994, p. 511.
McCorduck 2004, p. 424, discusses the fragmentation and the abandonment of AI's original goals.
McCorduck 2004, pp. 480–483.
"Deep Blue". IBM Research. Retrieved 10 September 2010.
"DARPA Grand Challenge – home page". Archived from the original on 31 October 2007.
"Welcome". Archived from the original on 5 March 2014. Retrieved 25 October 2011.
Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times.
Kurzweil 2005, p. 274, writes that the improvement in computer chess, "according to common wisdom, is governed only by the brute force expansion of computer hardware."
Cycle time of Ferranti Mark 1 was 1.2 milliseconds, which is arguably equivalent to about 833 flops. Deep Blue ran at 11.38 gigaflops (and this does not even take into account Deep Blue's special-purpose hardware for chess). Very approximately, these differ by a factor of 10^7.
McCorduck 2004, pp. 471–478; Russell & Norvig 2003, p. 55, where they write: "The whole agent view is now widely accepted in the field." The intelligent agent paradigm is discussed in major AI textbooks, such as Russell & Norvig 2003, pp. 32–58, 968–972; Poole, Mackworth & Goebel 1998, pp. 7–21; Luger & Stubblefield 2004, pp. 235–240.
Carl Hewitt's Actor model anticipated the modern definition of intelligent agents (Hewitt, Bishop & Steiger 1973). Both John Doyle (Doyle 1983) and Marvin Minsky's popular classic The Society of Mind (Minsky 1986) used the word "agent". Other modular proposals included Rodney Brooks' subsumption architecture, object-oriented programming and others.
Russell & Norvig 2003, pp. 27, 55.
This is how the most widely accepted textbooks of the 21st century define artificial intelligence. See Russell & Norvig 2003, p. 32, and Poole, Mackworth & Goebel 1998, p. 1.
McCorduck 2004, p. 478.
McCorduck 2004, pp. 486–487; Russell & Norvig 2003, pp. 25–26.
Pearl 1988.
Russell & Norvig 2003, pp. 25–26.
See Applications of artificial intelligence § Computer science.
NRC 1999 under "Artificial Intelligence in the 90s", and Kurzweil 2005, p. 264.
Russell & Norvig 2003, p. 28. For the new state of the art in AI-based speech recognition, see The Economist 2007.
"AI-inspired systems were already integral to many everyday technologies such as internet search engines, bank software for processing transactions and in medical diagnosis." Nick Bostrom, quoted in CNN 2006.
Olsen 2004; Olsen 2006.
McCorduck 2004, p. 423; Kurzweil 2005, p. 265; Hofstadter 1999, p. 601; Newquist 1994, p. 445.
CNN 2006.
Markoff 2005.
The Economist 2007.
Tascarella 2006.
Newquist 1994, p. 532.
Lohr, Steve (17 October 2016). "IBM Is Counting on Its Bet on Watson, and Paying Big Money for It". New York Times.
Hampton, Stephanie E.; Strasser, Carly A.; Tewksbury, Joshua J.; Gram, Wendy K.; Budden, Amber E.; Batcheller, Archer L.; Duke, Clifford S.; Porter, John H. (1 April 2013). "Big data and the future of ecology". Frontiers in Ecology and the Environment. 11 (3): 156–162. doi:10.1890/120103. ISSN 1540-9309.
"How Big Data is Changing Economies". Becker Friedman Institute. bfi.uchicago.edu. Archived from the original on 18 June 2018. Retrieved 9 June 2017.
LeCun, Yann; Bengio, Yoshua; Hinton, Geoffrey (2015). "Deep learning". Nature. 521 (7553): 436–444. Bibcode:2015Natur.521..436L. doi:10.1038/nature14539. PMID 26017442. S2CID 3074096.
Milmo, Dan (3 November 2023). "Hope or Horror? The great AI debate dividing its pioneers". The Guardian Weekly. pp. 10–12.
"The Bletchley Declaration by Countries Attending the AI Safety Summit, 1–2 November 2023". GOV.UK. 1 November 2023. Archived from the original on 1 November 2023. Retrieved 2 November 2023.
"Countries agree to safe and responsible development of frontier AI in landmark Bletchley Declaration". GOV.UK (Press release). Archived from the original on 1 November 2023. Retrieved 1 November 2023.
Baral, Chitta; Fuentes, Olac; Kreinovich, Vladik (June 2015). "Why Deep Neural Networks: A Possible Theoretical Explanation". Departmental Technical Reports (CS). Retrieved 9 June 2017.
Ciregan, D.; Meier, U.; Schmidhuber, J. (June 2012). "Multi-column deep neural networks for image classification". 2012 IEEE Conference on Computer Vision and Pattern Recognition. pp. 3642–3649. arXiv:1202.2745. Bibcode:2012arXiv1202.2745C. CiteSeerX 10.1.1.300.3283. doi:10.1109/cvpr.2012.6248110. ISBN 978-1-4673-1228-8. S2CID 2161592.
Markoff, John (16 February 2011). "On 'Jeopardy!' Watson Win Is All but Trivial". The New York Times. ISSN 0362-4331. Retrieved 10 June 2017.
"AlphaGo: Mastering the ancient game of Go with Machine Learning". Research Blog. Retrieved 10 June 2017.
"Innovations of AlphaGo". DeepMind. Retrieved 10 June 2017.
University, Carnegie Mellon. "Computer Out-Plays Humans in 'Doom'". CMU News. Carnegie Mellon University. www.cmu.edu. Retrieved 10 June 2017.
Laney, Doug (2001). "3D data management: Controlling data volume, velocity and variety". META Group Research Note. 6 (70).
Marr, Bernard (6 March 2014). "Big Data: The 5 Vs Everyone Must Know".
Goes, Paulo B. (2014). "Design science research in top information systems journals". MIS Quarterly: Management Information Systems. 38 (1).
Murgia, Madhumita (23 July 2023). "Transformers: the Google scientists who pioneered an AI revolution". www.ft.com. Retrieved 10 December 2023.
Bubeck, Sébastien; Chandrasekaran, Varun; Eldan, Ronen; Gehrke, Johannes; Horvitz, Eric; Kamar, Ece; Lee, Peter; Lee, Yin Tat; Li, Yuanzhi; Lundberg, Scott; Nori, Harsha; Palangi, Hamid; Ribeiro, Marco Tulio; Zhang, Yi (22 March 2023). "Sparks of Artificial General Intelligence: Early experiments with GPT-4". arXiv:2303.12712 [cs.CL].

References
Russell, Stuart J.; Norvig, Peter (2021). Artificial Intelligence: A Modern Approach (4th ed.). Hoboken: Pearson. ISBN 978-0134610993. LCCN 20190474.
Couturat, Louis (1901). La Logique de Leibniz.
Byrne, J. G. (8 December 2012). "The John Gabriel Byrne Computer Science Collection" (PDF). Archived from the original on 16 April 2019. Retrieved 8 August 2019.
Mulvihill, Mary (17 October 2012). Ingenious Ireland.
Quevedo, L. Torres (1914). "Ensayos sobre Automática: Su definición. Extensión teórica de sus aplicaciones". Revista de la Academia de Ciencias Exacta. Vol. 12, pp. 391–418.
Quevedo, L. Torres (1915). "Essais sur l'Automatique: Sa définition. Étendue théorique de ses applications". Revue Générale des Sciences Pures et Appliquées. Vol. 2, pp. 601–611.
Randell, Brian (1982). "From Analytical Engine to Electronic Digital Computer: The Contributions of Ludgate, Torres, and Bush". fano.co.uk. Retrieved 29 October 2018.
Berlinski, David (2000). The Advent of the Algorithm. Harcourt Books. ISBN 978-0-15-601391-8. OCLC 46890682.
Buchanan, Bruce G. (Winter 2005). "A Very Brief History of Artificial Intelligence" (PDF). AI Magazine, pp. 53–60. Archived from the original (PDF) on 26 September 2007. Retrieved 30 August 2007.
Butler, Samuel (13 June 1863). "Darwin Among the Machines". The Press, Christchurch, New Zealand. Retrieved 10 October 2008.
Colby, Kenneth M.; Watt, James B.; Gilbert, John P. (1966). "A Computer Method of Psychotherapy: Preliminary Communication". The Journal of Nervous and Mental Disease. Vol. 142, no. 2, pp. 148–152. doi:10.1097/00005053-196602000-00005. PMID 5936301. S2CID 36947398.
Colby, Kenneth M. (September 1974). "Ten Criticisms of Parry" (PDF). Stanford Artificial Intelligence Laboratory, Report No. STAN-CS-74-457. Retrieved 17 June 2018.
"AI set to exceed human brain power". CNN.com. 26 July 2006. Retrieved 16 October 2007.
Copeland, Jack (2000). Micro-World AI. Retrieved 8 October 2008.
Cordeschi, Roberto (2002). The Discovery of the Artificial. Dordrecht: Kluwer.
Crevier, Daniel (1993). AI: The Tumultuous Search for Artificial Intelligence. New York, NY: BasicBooks. ISBN 0-465-02997-3.
Darrach, Brad (20 November 1970). "Meet Shaky, the First Electronic Person". Life Magazine, pp. 58–68.
Doyle, J. (1983). "What is rational psychology? Toward a modern mental philosophy". AI Magazine. Vol. 4, no. 3, pp. 50–53.
Dreyfus, Hubert (1965). Alchemy and AI. RAND Corporation Memo.
Dreyfus, Hubert (1972). What Computers Can't Do. New York: MIT Press. ISBN 978-0-06-090613-9. OCLC 5056816.
Dreyfus, Hubert; Dreyfus, Stuart (1986). Mind over Machine: The Power of Human Intuition and Expertise in the Era of the Computer. Oxford, UK: Blackwell. ISBN 978-0-02-908060-3. Retrieved 22 August 2020.
"Are You Talking to Me?". The Economist. 7 June 2007. Retrieved 16 October 2008.
Feigenbaum, Edward A.; McCorduck, Pamela (1983). The Fifth Generation: Artificial Intelligence and Japan's Computer Challenge to the World. Michael Joseph. ISBN 978-0-7181-2401-4.
Haugeland, John (1985). Artificial Intelligence: The Very Idea. Cambridge, Mass.: MIT Press. ISBN 978-0-262-08153-5.
Hawkins, Jeff; Blakeslee, Sandra (2004). On Intelligence. New York, NY: Owl Books. ISBN 978-0-8050-7853-4. OCLC 61273290.
Hebb, D. O. (1949). The Organization of Behavior. New York: Wiley. ISBN 978-0-8058-4300-2. OCLC 48871099.
Hewitt, Carl; Bishop, Peter; Steiger, Richard (1973). A Universal Modular Actor Formalism for Artificial Intelligence (PDF). IJCAI. Archived from the original (PDF) on 29 December 2009.
Hobbes, Thomas (1651). Leviathan.
Hofstadter, Douglas (1999) [1979]. Gödel, Escher, Bach: an Eternal Golden Braid. Basic Books. ISBN 978-0-465-02656-2. OCLC 225590743.
Howe, J. (November 1994). Artificial Intelligence at Edinburgh University: a Perspective. Retrieved 30 August 2007.
Kahneman, Daniel; Slovic, D.; Tversky, Amos (1982). Judgment under uncertainty: Heuristics and biases. Science. New York: Cambridge University Press. 185 (4157): 1124–1131. Bibcode:1974Sci...185.1124T. doi:10.1126/science.185.4157.1124. ISBN 978-0-521-28414-1. PMID 17835457. S2CID 143452957.
Kaplan, Andreas; Haenlein, Michael (2018). "Siri, Siri in my Hand, who's the Fairest in the Land? On the Interpretations, Illustrations and Implications of Artificial Intelligence". Business Horizons. 62: 15–25. doi:10.1016/j.bushor.2018.08.004. S2CID 158433736.
Kolata, G. (1982). "How can computers get common sense?". Science. 217 (4566): 1237–1238. Bibcode:1982Sci...217.1237K. doi:10.1126/science.217.4566.1237. PMID 17837639.
Kurzweil, Ray (2005). The Singularity is Near. Viking Press. ISBN 978-0-14-303788-0. OCLC 71826177.
Lakoff, George (1987). Women, Fire, and Dangerous Things: What Categories Reveal About the Mind. University of Chicago Press. ISBN 978-0-226-46804-4.
Lakoff, G.; Johnson, M. (1999). Philosophy in the flesh: The embodied mind and its challenge to western thought. Basic Books. ISBN 978-0-465-05674-3.
Lenat, Douglas; Guha, R. V. (1989). Building Large Knowledge-Based Systems. Addison-Wesley. ISBN 978-0-201-51752-1. OCLC 19981533.
Levitt, Gerald M. (2000). The Turk, Chess Automaton. Jefferson, N.C.: McFarland. ISBN 978-0-7864-0778-1.
Lighthill, Professor Sir James (1973). "Artificial Intelligence: A General Survey". Artificial Intelligence: a paper symposium. Science Research Council.
Lucas, John (1961). "Minds, Machines and Gödel". Philosophy. 36 (XXXVI): 112–127. doi:10.1017/S0031819100057983. S2CID 55408480. Archived from the original on 19 August 2007. Retrieved 15 October 2008.
Luger, George; Stubblefield, William (2004). Artificial Intelligence: Structures and Strategies for Complex Problem Solving (5th ed.). Benjamin/Cummings. ISBN 978-0-8053-4780-7. Retrieved 17 December 2019.
Maker, Meg Houston (2006). AI@50: AI Past, Present, Future. Dartmouth College. Archived from the original on 8 October 2008. Retrieved 16 October 2008.
Markoff, John (14 October 2005). "Behind Artificial Intelligence, a Squadron of Bright Real People". The New York Times. Retrieved 16 October 2008.
McCarthy, John; Minsky, Marvin; Rochester, Nathan; Shannon, Claude (31 August 1955). A Proposal for the Dartmouth Summer Research Project on Artificial Intelligence. Archived from the original on 30 September 2008. Retrieved 16 October 2008.
McCarthy, John; Hayes, P. J. (1969). "Some philosophical problems from the standpoint of artificial intelligence". In Meltzer, B. J.; Mitchie, Donald (eds.). Machine Intelligence 4. Edinburgh University Press. pp. 463–502. Retrieved 16 October 2008.
McCorduck, Pamela (2004). Machines Who Think (2nd ed.). Natick, MA: A. K. Peters, Ltd. ISBN 978-1-56881-205-2. OCLC 52197627.
McCullough, W. S.; Pitts, W. (1943). "A logical calculus of the ideas immanent in nervous activity". Bulletin of Mathematical Biophysics. 5 (4): 115–127. doi:10.1007/BF02478259.
Menabrea, Luigi Federico; Lovelace, Ada (1843). "Sketch of the Analytical Engine Invented by Charles Babbage". Scientific Memoirs. 3. Retrieved 29 August 2008. With notes upon the Memoir by the Translator.
Minsky, Marvin (1967). Computation: Finite and Infinite Machines. Englewood Cliffs, N.J.: Prentice-Hall.
Minsky, Marvin; Papert, Seymour (1969). Perceptrons: An Introduction to Computational Geometry. The MIT Press. ISBN 978-0-262-63111-2. OCLC 16924756.
Minsky, Marvin (1974). A Framework for Representing Knowledge. Retrieved 16 October 2008.
Minsky, Marvin (1986). The Society of Mind. Simon and Schuster. ISBN 978-0-671-65713-0. OCLC 223353010.
Minsky, Marvin (2001). It's 2001. Where Is HAL? Dr. Dobb's Technetcast. Retrieved 8 August 2009.
Moravec, Hans (1976). The Role of Raw Power in Intelligence. Archived from the original on 3 March 2016. Retrieved 16 October 2008.
Moravec, Hans (1988). Mind Children. Harvard University Press. ISBN 978-0-674-57618-6. OCLC 245755104.
Needham, Joseph (1986). Science and Civilization in China: Volume 2. Taipei: Caves Books Ltd.
NRC (1999). "Developments in Artificial Intelligence". Funding a Revolution: Government Support for Computing Research. National Academy Press. ISBN 978-0-309-06278-7. OCLC 246584055.
Newell, Allen; Simon, H. A. (1963). "GPS: A Program that Simulates Human Thought". In Feigenbaum, E. A.; Feldman, J. (eds.). Computers and Thought. New York: McGraw-Hill. ISBN 978-0-262-56092-4. OCLC 246968117.
Newquist, HP (1994). The Brain Makers: Genius, Ego, And Greed in the Quest For Machines That Think. New York: Macmillan/SAMS. ISBN 978-0-9885937-1-8. OCLC 313139906.
Nick, Martin (2005). "Al-Jazari – The Ingenious 13th Century Muslim Mechanic". Al Shindagah. Retrieved 16 October 2008.
O'Connor, Kathleen Malone (1994). "The alchemical creation of life (takwin) and other concepts of Genesis in medieval Islam". University of Pennsylvania. pp. 1–435. Retrieved 10 January 2007.
Olsen, Stefanie (10 May 2004). "Newsmaker: Google's man behind the curtain". CNET. Retrieved 17 October 2008.
Olsen, Stefanie (18 August 2006). "Spying an intelligent search engine". CNET. Retrieved 17 October 2008.
Pearl, J. (1988). Probabilistic Reasoning in Intelligent Systems: Networks of Plausible Inference. San Mateo, California: Morgan Kaufmann. ISBN 978-1-55860-479-7. OCLC 249625842.
Russell, Stuart J.; Norvig, Peter (2003). Artificial Intelligence: A Modern Approach (2nd ed.). Upper Saddle River, New Jersey: Prentice Hall. ISBN 0-13-790395-2.
Poole, David; Mackworth, Alan; Goebel, Randy (1998). Computational Intelligence: A Logical Approach. Oxford University Press. ISBN 978-0-19-510270-3.
Samuel, Arthur L. (July 1959). "Some studies in machine learning using the game of checkers". IBM Journal of Research and Development. 3 (3): 210–219. CiteSeerX 10.1.1.368.2254. doi:10.1147/rd.33.0210. S2CID 2126705. Retrieved 20 August 2007.
Searle, John (1980). "Minds, Brains and Programs". Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756. Archived from the original on 10 December 2007. Retrieved 13 May 2009.
Simon, H. A.; Newell, Allen (1958). "Heuristic Problem Solving: The Next Advance in Operations Research". Operations Research. 6 (1): 1–10. doi:10.1287/opre.6.1.1.
Simon, H. A. (1965). The Shape of Automation for Men and Management. New York: Harper & Row.
Skillings, Jonathan (2006). "Newsmaker: Getting machines to think like us". CNET. Retrieved 8 October 2008.
Tascarella, Patty (14 August 2006). "Robotics firms find fundraising struggle, with venture capital shy". Pittsburgh Business Times. Retrieved 15 March 2016.
Turing, Alan (1936–37). "On Computable Numbers, with an Application to the Entscheidungsproblem". Proceedings of the London Mathematical Society. 2. s2-42 (42): 230–265. doi:10.1112/plms/s2-42.1.230. S2CID 73712. Retrieved 8 October 2008.
Turing, Alan (October 1950). "Computing Machinery and Intelligence". Mind. LIX (236): 433–460. doi:10.1093/mind/LIX.236.433. ISSN 0026-4423.
Turkle, Sherry (1984). The second self: computers and the human spirit. Simon and Schuster. ISBN 978-0-671-46848-4. OCLC 895659909.
Wason, P. C.; Shapiro, D. (1966). "Reasoning". In Foss, B. M. (ed.). New horizons in psychology. Harmondsworth: Penguin. Retrieved 18 November 2019.
Weizenbaum, Joseph (1976). Computer Power and Human Reason. W. H. Freeman & Company. ISBN 978-0-14-022535-8. OCLC 10952283.