Symbol grounding problem

The symbol grounding problem is a concept in the fields of artificial intelligence, cognitive science, philosophy of mind, and semantics. It addresses the challenge of connecting symbols, such as words or abstract representations, to the real-world objects or concepts they refer to. In essence, it is about how symbols acquire meaning in a way that is tied to the physical world. It is concerned with how it is that words (symbols in general) get their meanings,[1] and hence is closely related to the problem of what meaning itself really is. The problem of meaning is in turn related to the problem of how it is that mental states are meaningful, and hence to the problem of consciousness: what is the connection between certain physical systems and the contents of subjective experiences.

Definitions

The symbol grounding problem

In his 1990 paper, Stevan Harnad implicitly gives several definitions of the symbol grounding problem:[2]

  1. The symbol grounding problem is the problem of how to make the "...semantic interpretation of a formal symbol system..." "... intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols..."
  2. The symbol grounding problem is the problem of how "...the meanings of the meaningless symbol tokens, manipulated solely on the basis of their (arbitrary) shapes..." can be grounded "...in anything but other meaningless symbols."
  3. "...the symbol grounding problem is referred to as the problem of intrinsic meaning (or 'intentionality') in Searle's (1980) celebrated 'Chinese Room Argument'"
  4. The symbol grounding problem is the problem of how you can "...ever get off the symbol/symbol merry-go-round..."

To answer the question of whether groundedness is a necessary condition for meaning, a formulation of the symbol grounding problem is required: The symbol grounding problem is the problem of how to make the "...semantic interpretation of a formal symbol system..." "... intrinsic to the system, rather than just parasitic on the meanings in our heads..." "...in anything but other meaningless symbols".[2]

Symbol system

In the same 1990 paper, Harnad defines a "symbol system" relative to the symbol grounding problem: "...a set of arbitrary 'physical tokens' scratches on paper, holes on a tape, events in a digital computer, etc. that are ... manipulated on the basis of 'explicit rules' that are ... likewise physical tokens and strings of tokens."[2]
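
As a rough, invented illustration of such a system, the sketch below (in Python) manipulates arbitrary tokens purely by shape-based rewrite rules; the tokens and rules are made up, and nothing in the program itself fixes what they are about:

```python
# A minimal sketch of a "symbol system" in Harnad's sense: arbitrary tokens
# rewritten solely on the basis of their shapes. Tokens and rules are invented.

# Rewrite rules: each maps a pair of adjacent tokens to replacement tokens.
# The rules inspect only the shapes of the tokens, never any meaning.
RULES = {
    ("A", "B"): ("C",),        # "A B" may be rewritten as "C"
    ("C", "A"): ("B", "B"),    # "C A" may be rewritten as "B B"
}

def rewrite_once(tokens):
    """Apply the first matching rule, scanning left to right."""
    for i in range(len(tokens) - 1):
        pair = (tokens[i], tokens[i + 1])
        if pair in RULES:
            return tokens[:i] + list(RULES[pair]) + tokens[i + 2:]
    return tokens  # no rule applies; the string is in normal form

tokens = ["A", "B", "A"]
while (nxt := rewrite_once(tokens)) != tokens:
    tokens = nxt
    print(tokens)  # ['C', 'A'] then ['B', 'B']

# The system "computes", yet any semantic interpretation of A, B, C is
# supplied from outside, by us: this is exactly what the symbol grounding
# problem says is missing.
```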

Formality of symbols

Because Harnad holds that the symbol grounding problem is exemplified in John R. Searle's Chinese Room argument,[3] the meaning of "formal" as applied to the symbols of a formal symbol system can be taken from Searle's 1980 article "Minds, brains, and programs", in which the Chinese Room argument is set out:

[...] all that 'formal' means here is that I can identify the symbols entirely by their shapes.[4]

Background

Referents

A referent is the thing that a word or phrase refers to as distinguished from the word's meaning.[5] This is most clearly illustrated using the proper names of concrete individuals, but it is also true of names of kinds of things and of abstract properties: (1) "Tony Blair", (2) "the prime minister of the UK during the year 2004", and (3) "Cherie Blair's husband" all have the same referent, but not the same meaning.
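
The distinction can be put schematically; in the toy mapping below (purely illustrative, with an invented identifier), three expressions with three different meanings pick out one and the same referent:

```python
# Referent vs. meaning: three different expressions (three "senses")
# all pick out the same individual. The identifier is invented.
referent_of = {
    "Tony Blair": "tony_blair",
    "the prime minister of the UK during the year 2004": "tony_blair",
    "Cherie Blair's husband": "tony_blair",
}
assert len(set(referent_of.values())) == 1  # one referent, three meanings
```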

Referential process

In the 19th century, philosopher Charles Sanders Peirce suggested what some[who?] take to be a similar model: according to his triadic sign model, meaning requires (1) an interpreter, (2) a sign or representamen, and (3) an object, and is (4) the virtual product of an endless regress and progress called semiosis.[6] Some[who?] have interpreted Peirce as addressing the problem of grounding, feelings, and intentionality for the understanding of semiotic processes.[7] In recent years, Peirce's theory of signs has been rediscovered by an increasing number of artificial intelligence researchers in the context of the symbol grounding problem.[8]

Grounding process

There would be no connection at all between written symbols and any intended referents if there were no minds mediating those intentions, via their own internal means of picking out those intended referents. So the meaning of a word on a page is "ungrounded." Nor would looking it up in a dictionary help: if one tried to look up the meaning of a word one did not understand in a dictionary of a language one did not already understand, one would just cycle endlessly from one meaningless definition to another. One's search for meaning would be ungrounded. In contrast, the meanings of the words in one's head—those words one does understand—are "grounded".[citation needed] That mental grounding of the meanings of words mediates between the words on any external page one reads (and understands) and the external objects to which those words refer.[9][10]
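
The dictionary-go-round can be made vivid with a toy program. The words and definitions below are invented; the point is only that, for someone with no grounded vocabulary, every lookup terminates in another unknown word:

```python
# A toy dictionary-go-round: every word is defined only in terms of other
# words of the same unknown language, so lookup can only cycle or dead-end.

dictionary = {
    "glark": "to frumble a zib",
    "frumble": "to glark gently",
    "zib": "a thing one glarks",
}

def lookup_chain(word, seen=None):
    """Follow definitions from word to word until a word repeats."""
    seen = (seen or []) + [word]
    for token in dictionary.get(word, "").split():
        if token in dictionary:
            if token in seen:
                return seen + [token]  # closed loop: the search is ungrounded
            return lookup_chain(token, seen)
    return seen  # dead end: an undefined word

print(lookup_chain("glark"))  # ['glark', 'frumble', 'glark']
```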

Requirements for symbol grounding

Another symbol system is natural language.[11] On paper or in a computer, language, too, is just a formal symbol system, manipulable by rules based on the arbitrary shapes of words. But in the brain, meaningless strings of squiggles become meaningful thoughts. Harnad has suggested two properties that might be required to make this difference:[citation needed]

  1. Capacity to pick out referents
  2. Consciousness

Capacity to pick out referents

One property that static paper, and usually even a dynamic computer, lacks but the brain possesses is the capacity to pick out symbols' referents. This is what was discussed earlier, and it is what the hitherto undefined term "grounding" refers to. A symbol system alone, whether static or dynamic, cannot have this capacity (any more than a book can), because picking out referents is not just a computational (implementation-independent) property; it is a dynamical (implementation-dependent) property.

To be grounded, the symbol system would have to be augmented with nonsymbolic, sensorimotor capacities—the capacity to interact autonomously with that world of objects, events, actions, properties and states that their symbols are systematically interpretable (by us) as referring to. It would have to be able to pick out the referents of its symbols, and its sensorimotor interactions with the world would have to fit coherently with the symbols' interpretations.

The symbols, in other words, need to be connected directly to (i.e., grounded in) their referents; the connection must not be dependent only on the connections made by the brains of external interpreters like us. Just the symbol system alone, without this capacity for direct grounding, is not a viable candidate for being whatever it is that is really going on in our brains when we think meaningful thoughts.[12]
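
What such sensorimotor augmentation might look like can be sketched very schematically: symbols are tied to detectors over raw sensor input rather than to other symbols. The detectors and features below are stand-ins, not a proposal for how real grounding works:

```python
# A hedged sketch of symbols grounded in sensorimotor detectors. In a real
# system each detector would be, e.g., a trained network fed by a camera;
# here they are hand-written stubs over an invented [size, furriness] reading.

from typing import Callable, Dict, List

# Each grounded symbol carries a detector: a procedure from raw sensor
# input to a yes/no judgement, not a chain of further symbols.
detectors: Dict[str, Callable[[List[float]], bool]] = {}

def ground(symbol: str, detector: Callable[[List[float]], bool]) -> None:
    """Tie a symbol directly to a sensorimotor detector."""
    detectors[symbol] = detector

def refers_to(symbol: str, sensor_reading: List[float]) -> bool:
    """Does the thing currently sensed fall under this symbol?"""
    return detectors[symbol](sensor_reading)

ground("cat",  lambda x: x[0] < 0.5 and x[1] > 0.7)   # small and furry
ground("wall", lambda x: x[0] > 0.9 and x[1] < 0.1)   # large and smooth

print(refers_to("cat", [0.3, 0.9]))   # True
print(refers_to("wall", [0.3, 0.9]))  # False
```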

Meaning as the ability to recognize instances (of objects) or perform actions is specifically treated in the paradigm called "Procedural Semantics", described in a number of papers including "Procedural Semantics" by Philip N. Johnson-Laird[13] and expanded by William A. Woods in "Meaning and Links".[14] A brief summary in Woods' paper reads: "The idea of procedural semantics is that the semantics of natural language sentences can be characterized in a formalism whose meanings are defined by abstract procedures that a computer (or a person) can either execute or reason about. In this theory the meaning of a noun is a procedure for recognizing or generating instances, the meaning of a proposition is a procedure for determining if it is true or false, and the meaning of an action is the ability to do the action or to tell if it has been done."
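
A minimal sketch of this procedural-semantics idea, over an invented blocks-world domain, might look as follows; the procedures stand in for the abstract procedures of Woods' summary:

```python
# Procedural semantics in miniature: meanings are executable procedures.
# The blocks-world domain is invented for illustration.

world = {"blocks": [{"color": "red", "on_table": True},
                    {"color": "blue", "on_table": False}]}

# Meaning of the noun phrase "red block": a procedure for recognizing instances.
def red_block(x):
    return x["color"] == "red"

# Meaning of the proposition "some red block is on the table":
# a procedure for determining whether it is true or false.
def some_red_block_on_table(world):
    return any(red_block(b) and b["on_table"] for b in world["blocks"])

# Meaning of the action "put block b on the table": the ability to do it,
# and to tell whether it has been done.
def put_on_table(b):
    b["on_table"] = True

def is_on_table(b):
    return b["on_table"]

print(some_red_block_on_table(world))   # True
put_on_table(world["blocks"][1])
print(is_on_table(world["blocks"][1]))  # True
```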

Consciousness

The necessity of groundedness, in other words, takes us from the level of the pen-pal Turing test, which is purely symbolic (computational), to the robotic Turing test, which is hybrid symbolic/sensorimotor.[15][16] Meaning is grounded in the robotic capacity to detect, categorize, identify, and act upon the things that words and sentences refer to (see entries for Affordance and for Categorical perception). On the other hand, if the symbols (words and sentences) refer to the very bits of '0' and '1', directly connected to their electronic implementations, which a (any?) computer system can readily manipulate (thus detect, categorize, identify and act upon), then even non-robotic computer systems could be said to be "sensorimotor" and hence able to "ground" symbols in this narrow domain.

To categorize is to do the right thing with the right kind of thing. The categorizer must be able to detect the sensorimotor features of the members of the category that reliably distinguish them from the nonmembers. These feature-detectors must either be inborn or learned. The learning can be based on trial-and-error induction, guided by feedback from the consequences of correct and incorrect categorization; or, in our own linguistic species, the learning can also be based on verbal descriptions or definitions. The description or definition of a new category, however, can only convey the category and ground its name if the words in the definition are themselves already grounded category names.[17] According to Harnad, grounding ultimately has to be sensorimotor, to avoid infinite regress.[18]
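
The two learning routes described above, sensorimotor trial and error and verbal definition out of already-grounded names, can be caricatured in a few lines of code; the data, features, and thresholds below are all invented:

```python
# Two routes to a grounded category name: (1) learning a feature detector
# from corrective feedback, and (2) composing a definition from names whose
# detectors already exist. Everything here is a made-up toy.

import random

random.seed(0)

# Route 1: trial-and-error induction of a one-feature detector. The hidden
# rule (membership means feature > 0.6) must be found from feedback alone.
threshold = 0.0
for _ in range(200):
    x = random.random()
    is_member = x > 0.6                        # feedback from the world
    if (x > threshold) != is_member:           # we miscategorized
        threshold += 0.05 if x > threshold else -0.05
print(f"learned threshold ~ {threshold:.2f}")  # settles near 0.6

# Route 2: verbal definition over already-grounded detectors.
grounded = {
    "striped":   lambda f: f["stripes"] > 10,
    "horselike": lambda f: f["legs"] == 4 and f["mane"],
}
# "zebra" gets grounded without new sensorimotor learning, because every
# word in its definition is itself a grounded category name.
grounded["zebra"] = lambda f: grounded["striped"](f) and grounded["horselike"](f)

print(grounded["zebra"]({"stripes": 26, "legs": 4, "mane": True}))  # True
```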

Harnad thus points at consciousness as a second property. The problem of discovering the causal mechanism for successfully picking out the referent of a category name can in principle be solved by cognitive science. But the problem of explaining how consciousness could play an "independent" role in doing so is probably insoluble, except on pain of telekinetic dualism. Perhaps symbol grounding (i.e., robotic TT capacity) is enough to ensure that conscious meaning is present, but then again, perhaps not. In either case, there is no way we can hope to be any the wiser—and that is Turing's methodological point.[19][20]

See also

  • Binding problem
  • Categorical perception
  • Communicative action
  • Consciousness
  • Formal language
  • Formal system
  • Frame problem
  • Hermeneutics
  • Natural language understanding
  • Interpretation
  • Physical symbol system
  • Pragmatics
  • Semantics
  • Semiosis
  • Semiotics
  • Sign
  • Sign relation
  • Situated cognition
  • Syntax
  • Turing machine

References

  1. ^ Vogt, Paul. "Language evolution and robotics: issues on symbol grounding and language acquisition." Artificial cognition systems. IGI Global, 2007. 176–209.
  2. ^ a b c Harnad 1990.
  3. ^ Harnad 2001a.
  4. ^ Searle 1980.
  5. ^ Frege 1952.
  6. ^ Peirce, Charles S. The philosophy of Peirce: selected writings. New York: AMS Press, 1978.
  7. ^ Short, T. L. "Semeiosis and Intentionality". Transactions of the Charles S. Peirce Society, Vol. 17, No. 3 (Summer 1981), pp. 197–223.
  8. ^ Steiner, Pierre. "C.S. Peirce and artificial intelligence: historical heritage and (new) theoretical stakes". SAPERE – Special Issue on Philosophy and Theory of AI, 5: 265–276 (2013).
  9. ^ This is the causal, contextual theory of reference that Ogden & Richards packed in The Meaning of Meaning (1923).
  10. ^ Cf. semantic externalism as claimed in "The Meaning of 'Meaning'" in Mind, Language and Reality (1975) by Putnam, who argues: "Meanings just ain't in the head." He and Dummett have since seemed to favor anti-realism, along with intuitionism, psychologism, constructivism and contextualism.
  11. ^ Fodor 1975.
  12. ^ Cangelosi & Harnad 2001.
  13. ^ Philip N. Johnson-Laird "Procedural Semantics" (Cognition, 5 (1977) 189; see http://www.nyu.edu/gsas/dept/philo/courses/mindsandmachines/Papers/procedural.pdf)
  14. ^ William A. Woods. "Meaning and Links" (AI Magazine Volume 28 Number 4 (2007); see http://www.aaai.org/ojs/index.php/aimagazine/article/view/2069/2056)
  15. ^ Harnad 2000.
  16. ^ Harnad 2007.
  17. ^ Blondin-Massé 2008.
  18. ^ Harnad 2005.
  19. ^ Harnad 2001b.
  20. ^ Harnad 2003.

Works cited

  • Belpaeme, Tony; Cowley, Stephen John; MacDorman, Karl F., eds. (2009). Symbol Grounding. Netherlands: John Benjamins Publishing Company. ISBN 978-9027222510.
  • Blondin-Massé, A.; et al. (18–22 August 2008). How Is Meaning Grounded in Dictionary Definitions?. TextGraphs-3 Workshop, 22nd International Conference on Computational Linguistics, Coling 2008. Manchester. arXiv:0806.3710.
  • Cangelosi, A.; Harnad, S. (2001). "The Adaptive Advantage of Symbolic Theft Over Sensorimotor Toil: Grounding Language in Perceptual Categories". Evolution of Communication. 4 (1): 117–142. doi:10.1075/eoc.4.1.07can. hdl:10026.1/3619. S2CID 15837328.
  • Fodor, J. A. (1975). The Language of Thought. New York: Thomas Y. Crowell.
  • Frege, G. (1952) [1892]. "On sense and reference". In Geach, P.; Black, M. (eds.). Translations of the Philosophical Writings of Gottlob Frege. Oxford: Blackwell.
  • Harnad, S. (1990). "The Symbol Grounding Problem". Physica D. 42 (1–3): 335–346. arXiv:cs/9906002. Bibcode:1990PhyD...42..335H. doi:10.1016/0167-2789(90)90087-6. S2CID 3204300.
  • Harnad, S. (2000). "Minds, Machines and Turing: The Indistinguishability of Indistinguishables". Journal of Logic, Language, and Information. 9 (4): 425–445. doi:10.1023/A:1008315308862. S2CID 1911720. Special Issue on "Alan Turing and Artificial Intelligence"
  • Harnad, S (2001a). "Minds, Machines and Searle II: What's Wrong and Right About Searle's Chinese Room Argument?". In Bishop, M.; Preston, J. (eds.). Essays on Searle's Chinese Room Argument. Oxford University Press.
  • Harnad, S. (2001b). "No Easy Way Out". The Sciences. 41 (2): 36–42. doi:10.1002/j.2326-1951.2001.tb03561.x.
  • Harnad, S. (2003). "Can a Machine Be Conscious? How?". Journal of Consciousness Studies. 10 (4–5): 69–75.
  • Harnad, S. (2005). "To Cognize is to Categorize: Cognition is categorization". In Lefebvre, C.; Cohen, H. (eds.). Handbook of Categorization. Elsevier.
  • Harnad, S. (2007). "The Annotation Game: On Turing (1950) on Computing, Machinery and Intelligence". In Epstein, Robert; Peters, Grace (eds.). The Turing Test Sourcebook: Philosophical and Methodological Issues in the Quest for the Thinking Computer. Kluwer.
  • Searle, John R. (1980). "Minds, brains, and programs" (PDF). Behavioral and Brain Sciences. 3 (3): 417–457. doi:10.1017/S0140525X00005756. S2CID 55303721. Archived from the original (PDF) on 23 September 2015.

Further reading

  • Cangelosi, A.; Greco, A.; Harnad, S. "From robotic toil to symbolic theft: grounding transfer from entry-level to higher-level categories". Connection Science, 12(2): 143–162.
  • MacDorman, Karl F. (1999). Grounding symbols through sensorimotor integration. Journal of the Robotics Society of Japan, 17(1), 20–24. Online version
  • MacDorman, Karl F. (2007). Life after the symbol system metaphor. Interaction Studies, 8(1), 143–158. Online version
  • Pylyshyn, Z. W. (1984). Computation and Cognition. Cambridge MA: MIT/Bradford.
  • Taddeo, Mariarosaria & Floridi, Luciano (2005). The symbol grounding problem: A critical review of fifteen years of research. Journal of Experimental and Theoretical Artificial Intelligence, 17(4), 419–445. Online version
  • Turing, A. M. (1950). "Computing Machinery and Intelligence". Mind, 59: 433–460. [Reprinted in Minds and Machines, A. Anderson (ed.), Englewood Cliffs, NJ: Prentice Hall, 1964.]
