Intentionality and Computationalism:

Minds, Machines, Searle and Harnad

Michael G. Dyer

Computer Science Department

UCLA, Los Angeles, CA 90024

(Dyer@cs.ucla.edu)

(213) 825-1322 & 206-6674

 

Abstract

Searle (1980, 1989) has produced a number of arguments purporting to show that computer programs, no matter how intelligently they may act, lack "intentionality". Recently, Harnad (1989a) has accepted Searle's arguments as having "shaken the foundations of Artificial Intelligence" (p. 5). To deal with Searle's arguments, Harnad has introduced the need for "noncomputational devices" (e.g. transducers) to realize "symbol grounding". This paper critically examines both Searle's and Harnad's arguments and concludes that the "foundations of AI" remain unchanged by these arguments, that the Turing Test remains adequate as a test of intentionality, and that the philosophical position of computationalism remains perfectly reasonable as a working hypothesis for the task of describing and embodying intentionality in brains and machines.

 

1. Introduction

One major, long-term goal of Artificial Intelligence (AI) researchers is to pass "Turing's Test" (Turing 1964), in which a group of testers (T) engage both a computer (C) and a person (P) in an extended conversation over a dual teletype-style communication medium. The conditions for the test are:

1. One teletype links C to T and the other links P to T.

2. Both C and P will attempt to convince T that each of them is the person.

3. Each member of T knows that one teletype link is to a computer and the other to a person, and that both C and P are trying to convince T that each is a person.

4. There is no limitation (other than time) placed on the nature of the conversations allowed.

5. C passes the test if members of T are about evenly divided on which teletype is linked to C and which to P.

Notice that condition 3 is essential, since without it, members of T may uncritically accept output from each candidate as being that of a person, or they may fail to explicitly test the limits of each candidate's cognitive capabilities.

A major, working assumption of AI is that Mind is realizable as an executing computer program. This assumption has been called both the "Strong AI" position (Searle 1980a) and the "physical symbol system hypothesis (PSSH)" (Newell 1980).

Not all cognitive scientists accept Turing's Test as a definitive task for proof of the existence of Mind, or the PSSH as a reasonable hypothesis. Over the last decade, Searle (1980a, 1982a, 1985a, 1989) has produced a number of arguments attacking the PSSH. These arguments purport to show that computer programs, while they may act as if they are intelligent, actually lack intentionality (i.e. they do not know "what they are talking about") and thus constitute simply an elaborate deception at the input/output level. Recently, Harnad (1989a) has accepted a subset of Searle's arguments as having "shaken the foundations of Artificial Intelligence" (p. 5). In response, Harnad has argued both that Turing's Test must be modified and that noncomputational devices (i.e. sensory and motor transducers) are a prerequisite for intentionality, through their role in achieving what Harnad considers a more fundamental task, that of symbol grounding (i.e. establishing a correspondence between internal representations and the real world).

This paper critically examines both Searle's and Harnad's arguments and concludes that the foundations of "Strong AI" remain unchanged by their arguments, that the Turing Test is still adequate as a test of intentionality, and that computationalism (i.e. the position that Mind, Life, and even Matter are entirely describable in terms of computations) continues to appear adequate to the task of realizing intentionality in both brains and machines.

 

2. Searle's Arguments and Rebuttals

The arguments summarized below have been put forth by Searle (1980a, 1980b, 1982a, 1982b, 1985a, 1985b, 1989). Here I have summarized these arguments in my own words. For organizational and reference purposes, I have assigned each argument both a descriptive title and a label.

2.1. The "Doing Versus Knowing" Argument (SA-1)

Humans have "intentionality". Computers do not. What is intentionality? It is whatever humans have that makes them "know what they are doing". Since computers do what they do without knowing what they are doing, computers lack intentionality. For example, computers multiply 7 x 9 to get 63 without knowing that they are multiplying numbers. Humans, on the other hand, know that they are multiplying numbers; know what numbers are; know what 63 signifies, etc. In contrast, computers manipulate numbers but do not know about numbers.

Rebuttal to SA-1 (The "Sufficient Domain Knowledge" Reply): True, a normal computer does not know what it is doing with respect to numerical manipulations. However, it is possible, using current "expert systems" technology, to build an AI system, let's call it COUNT, composed of a large number of schemas or frames (Minsky 1985), connected into semantic networks via relations, constraints, and rules. Each schema would contain knowledge about the domain, i.e. about numbers; about numeric operations (such as counting, pairing, comparison, addition, etc.); and about number-related concepts, such as the difference between a numeral and its cardinality. In addition, COUNT would contain: (a) an episodic memory of events, for instance, of the last time that it counted up to a hundred; (b) facts concerning numbers, e.g. that 4 is even; and (c) enablement conditions for numeric operations, e.g. that to count, one must place the objects counted in one-to-one correspondence with the numerals in ascending order; and so on. Schema instances would be organized in an associative memory, for content-addressable recall. COUNT would also have an English lexicon, mapping English words about the numerical domain to number-related concepts (i.e. to the network of schemas). COUNT would also have natural language understanding and generation subsystems, so that it could answer questions in English about number-related concepts, numerical operations and its past experiences with numbers.
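The organization just described can be sketched as follows. This is a minimal, hypothetical illustration only -- the schema names, slot names, and indexing scheme are invented for exposition and are far simpler than anything an actual expert system of this kind would contain.

# A minimal, hypothetical sketch of COUNT-style schemas: frames linked into a
# semantic network, plus an associative memory indexed by the concepts involved.

SCHEMAS = {
    "NUMBER":   {"isa": "CONCEPT", "properties": ["cardinality", "parity"]},
    "SEVEN":    {"isa": "NUMBER", "parity": "odd", "successor": "EIGHT"},
    "NINE":     {"isa": "NUMBER", "parity": "odd", "successor": "TEN"},
    "MULTIPLY": {"isa": "NUMERIC-OPERATION",
                 "enablement": "repeated addition of one operand",
                 "result-type": "NUMBER"},
}

# Associative (content-addressable) memory: facts and episodes indexed by the
# set of concepts they involve, so they can be recalled by content.
ASSOCIATIVE_MEMORY = {
    frozenset({"MULTIPLY", "SEVEN", "NINE"}): {
        "fact": "SIXTY-THREE",
        "episode": "last multiplied 7 x 9 while answering a tester's question",
    },
}

def recall(*concepts):
    """Retrieve whatever memory is indexed by exactly these concepts."""
    return ASSOCIATIVE_MEMORY.get(frozenset(concepts))

# English lexicon: words map to concepts (schemas), not directly to answers.
LEXICON = {"multiply": "MULTIPLY", "seven": "SEVEN", "nine": "NINE"}

if __name__ == "__main__":
    indices = [LEXICON[w] for w in ["multiply", "seven", "nine"]]
    print(recall(*indices))   # the stored fact plus a related episodic memory

The point of the sketch is only that answers are recalled by way of concepts and their interrelations, not computed blindly over uninterpreted digits.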

Now, suppose we ask COUNT to multiply 7 x 9. First, COUNT must recall schemas concerning the concepts MULTIPLY and NUMBER, along with schemas for the specific concepts of 7 and 9. Once COUNT realized that the question concerned an operation upon two particular numbers, it could decide how to answer the query. In this case, COUNT would simply recall the answer from its associative memory, using the concepts of MULTIPLY, 7 and 9 as indices to memory. After retrieving the answer, COUNT might recall one or more related memories, perhaps of the last time it multiplied two numbers together, or the time it last had a conversation concerning those numbers or that operation, etc. COUNT would also be able to answer other questions about its "number world", such as:

What is an example of a small number?

Are there any special properties of the number 7?

What do you think of the number 63?

How can you tell even numbers from odd numbers?

etc.

Prototypes of programs like the one hypothesized here have been constructed for other domains, such as the domain of script- and plan-based story comprehension and question answering (Dyer 1983; Lehnert 1978; Schank and Abelson 1977; Wilensky 1983) and editorial comprehension (Alvarado 1989).

As the sophistication of COUNT's concepts and cognitive processes in the numerical domain approaches that of humans, people would be less and less inclined to conclude that COUNT "doesn't know what it's doing".

Clearly, when COUNT is executing, in some sense it cannot help but do what it has been set up to do. COUNT has access only to its schemas. The internal programs that are executing are inaccessible to COUNT while at the same time essential for COUNT's performance. But, of course, the same is true for humans. Humans are just as automatic, in the sense that they cannot stop or control the firing patterns of their own neurons. Humans also have limited access to their own concepts about numbers. Although we can retrieve and manipulate concepts concerning numbers, we do not know how these concepts are represented or manipulated in the brain. But these unknown processes are every bit as automatic as an executing program, just many orders of magnitude more complex. Just as we would not be interested in talking to a hand calculator about the world of numbers (though we might enjoy talking to COUNT, depending on how sophisticated its concepts are), so we would not be interested in talking to some isolated cluster of someone's neurons, since this cluster also "does not know what it is doing".

Thus, COUNT need not understand its base-level schemas, nor its own schema-construction or other schema-processing mechanisms. For COUNT to perform in an "intentional" manner, it need only have the ability to automatically construct useful schemas (e.g. during learning) and to automatically apply (access, generate deductions from, adapt, etc.) relevant schemas at appropriate times. In cases where COUNT's schemas refer to other schemas, COUNT will "know about what it knows." The same situation appears to be true of humans. Humans are not aware of how they represent, construct, modify or access their own forms of knowledge. Humans exhibit "knowing that they know" only to the limited extent that they happen to have acquired knowledge about their knowledge of a given domain. Likewise, expert systems exhibit "knowing that they know" to the extent that they have schemas concerning other schemas in memory. If "knowing that one knows X" is anything more than schemas referring to other schemas (along with lots of automatic, inaccessible processing mechanisms), then it is up to those in the Searle camp to specify of what this extra magical "stuff" of "intentionality" consists.

2.2. The "Chinese Box" Argument (SA-2)

This argument relies on a thought experiment in which a person P sits inside a box and reads one or more books containing program instructions. These instructions can ask P to flip to other pages of instructions, look up symbols or numbers in various tables, and perform various kinds of numerical and/or logical operations. In addition, there can be blank pages for storing and referring to the results of prior calculations. P receives Chinese characters as input from people who insert pieces of paper into a slot in the box. P then reads and carries out the program instructions, and these instructions tell P how to interpret the Chinese characters and respond with other Chinese characters. As a result, one can carry on a conversation in written Chinese with this box. However, if we ask P, in English, whether or not P knows Chinese, P will reply: "I know no Chinese; I am just mindlessly following the instructions specified in these books".

P clearly does not understand Chinese. The instructions clearly do not understand Chinese (they just reside in a static book), so where is the comprehension? According to Searle it just appears that there is comprehension. P is just acting as an interpreter of a set of instructions. Modern stored-program computers also consist of both a program and an interpreter which executes the instructions of the program. Thus, a computer built to read and answer questions in Chinese understands Chinese no more than P does. Hence, to the extent that computers are composed of a (mindless) interpreter and a static set of instructions, they must also be only appearing to understand, but are not "really" understanding.

Rebuttal to SA-2 (The "Systems" Reply): Although P does not understand, the entire box, as a system consisting of both P and the instructions, does understand. We cannot expect P, who is acting as only one component in a two-component system, to understand Chinese, since the understanding is a consequence of the interaction of multiple components. For example, a radio is composed of various circuits, whose overall organization produces "radio-ness". We could fancifully imagine asking each part: "Are you a radio?" and each would reply "No. I'm just a capacitor (or transistor, or resistor, etc.)." Yet the radio arises as a result of the organization of these parts. Likewise, we can imagine asking each neuron in a person's brain if it knows what it's like to think a thought. Individual neurons won't even be able to understand the question, while very large numbers of interacting neurons can realize numerous cognitive tasks.

Let us use a variant of Searle's Chinese Box argument to 'convince' ourselves that a computer cannot really add two numbers, but is only appearing to do so. To show this, we take the addition circuitry of our hand calculator and replace each component with a human. Depending on the size of the numbers we want to add, we may need thousands of humans. Assume we place all of these humans on a giant football field. Each human will be told how to act, based on the actions of those humans near him. We will use a binary representation scheme to encode numbers. When a human has his hand raised, that is a "1"; when lowered, a "0". By setting the hands of a row of humans, we can encode a binary number. If we need a bus of wires between, say, the registers and the arithmetic/logic circuitry, we can set up such a bus by telling certain humans, e.g. "When the person in front of you raises his hand, then so many seconds later, you raise your hand also, and then drop it." In this way we could simulate the propagation of bit patterns down a bus of wires, to other circuits (layouts of humans) on the football field. We could actually then calculate the sum of two binary numbers, as long as each human faithfully acts as each piece of a wire or other circuit component needed for our adder.

After successfully adding two binary numbers, we could now ask each human if he knows what he is doing. He will say "No. I don't know what this football field of humans is accomplishing. I am just here raising and lowering my arm according to the rules I was given." Does his response mean that addition did not occur? Of course not! The processes of addition have occurred, and in a manner isomorphic to that occurring in the hand calculator. Likewise, if we knew how neurons interact in that portion of our brains that realizes human-style numerical addition, then we could theoretically construct another "human network", composed of perhaps billions of humans (requiring a gigantic football field), where each human would play the part of one or more neurons (synapses, neurotransmitters, etc.). In this case, also, each individual human would not know what he/she is doing, but the overall operation of addition would be occurring nonetheless.
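The point that addition is realized by many components, none of which "knows" it is adding, can be made concrete with a small sketch like the one below. It is a hypothetical illustration of a ripple-carry adder in which each one-bit "agent" follows only a local rule, just as each human on the football field does.

# A sketch of the "football field" adder: each agent applies only a local rule
# (one-bit full addition) to its neighbors' signals; none of them "knows" that
# the ensemble as a whole is adding two numbers.

def one_bit_agent(a, b, carry_in):
    """A 'dumb' component: raises or lowers its hand according to a fixed local rule."""
    total = a + b + carry_in
    return total % 2, total // 2        # (sum bit, carry out)

def ripple_add(x_bits, y_bits):
    """Chain the agents so each passes its carry to the next (least significant bit first)."""
    carry, out = 0, []
    for a, b in zip(x_bits, y_bits):
        s, carry = one_bit_agent(a, b, carry)
        out.append(s)
    out.append(carry)
    return out

# 6 -> bits [0, 1, 1] (LSB first), 3 -> [1, 1, 0]; 6 + 3 = 9 -> [1, 0, 0, 1]
print(ripple_add([0, 1, 1], [1, 1, 0]))     # prints [1, 0, 0, 1]

Asking any one agent whether it is adding gets a "No", yet the sum reliably appears at the output.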

In computer science, it is not at all unusual to build a smart system out of dumb components. We should just not be confused when a great many such dumb components are used to build a smart system.

2.3. The "Total Interpreter" Argument (SA-3)

Argument SA-3 is an attempt by Searle to refute the "systems" reply. In this thought experiment, Searle imagines that P has memorized the instructions specified in the Chinese Box books. Upon receiving bits of paper with Chinese symbols, P is able to mentally flip through the pages of these books, performing the calculations specified, and thus carry on a conversation in Chinese. In fact, P has memorized these instructions so well that P no longer even has a sense of consciously carrying out the Chinese Box instructions, but does them subconsciously and automatically. Again, P writes down Chinese characters without having any sense of what P is doing and thus has no subjective sense of understanding Chinese.

Notice, in this thought experiment, P is no longer just acting as one component in a two-component system. P is performing all tasks of the system and acting as an interpreter of the entire [interpreter + instructions] Chinese Box system, still without experiencing understanding of Chinese.

Rebuttal to SA-3 (The "Split Levels" Reply): In SA-3, P's brain (with its memory and processing abilities) is being used to create a system capable of interpreting Chinese. To make this system's capabilities more clearly distinct from those of P, let us assume that the Chinese Box instructions create not only Chinese comprehension but also an alternate personality who holds completely different political beliefs, general world knowledge, attitudes, etc. from those of P. Thus, if P believes in democracy, then let us assume the alternate personality, CP, believes in communism. If P is ignorant about life in China, then let us assume that, when P writes down Chinese characters in response to Chinese input characters, the system CP demonstrates great knowledge of Chinese culture.

When we talk in English to P, we have one sort of conversation, but when we feed written Chinese to P, we have completely different conversations (with CP). P, not understanding Chinese, of course does not know what these conversations are about. Likewise, CP may not understand English. That is, if we try to talk to CP by writing down English on scraps of paper, we get responses only from P, who can read the English.

Now, according to Searle, the fact that P does not understand Chinese (or the content of these Chinese conversations), while at the same time doing all computations involved in bringing CP into existence within P's brain, is an argument against there actually being a system CP that "really" understands, i.e. that has "intentionality". Is this a reasonable conclusion?

Let us ask CP (in Chinese) what it is like to be CP. Suppose that the instructions P is carrying out are so sophisticated that CP is able to pass the Turing Test; that is, CP can give long discourses on what it's like to be human, to be Chinese, and to be a communist. Why should we deny that CP exists, simply because, when we talk to P, P claims no knowledge of such an entity?

Imagine, for a moment, that we find out that all of our own neurons are controlled by a tiny creature, TC, who is highly intelligent. This creature actually samples the inputs to each of our neurons and, at very high speed, decides what outputs our neurons should send to other neurons. That is, our neurons are not under the control of electro-biochemical laws, but under the control of TC. Furthermore, TC claims not to understand what it is doing. TC is actually following a big book of instructions, which explains what kinds of outputs to produce for each of our neurons, based on each neuron's inputs. Assume scientists have recently discovered this fact about our own brains and we have just read about it in Science. Now, do these facts -- i.e. (1) that our mental existence is maintained by TC, (2) that TC is highly intelligent and can hold a conversation with us, and (3) that TC does not know what it is doing -- lead us to conclude that we do not have intentionality; that we do not understand English but are only acting as if we do? Does the existence of an intelligent subsystem that is doing all of the necessary computations to bring us about invalidate our acceptance of ourselves as having intentionality?

The existence of multiple and interdependent systems (referred to in computer science as virtual systems), simultaneously active within a single underlying system, is a standard notion in both the cognitive and computational sciences, e.g. (Minsky 1986). Virtual systems are usually created via time sharing (interleaving of calculations) or memory swapping (moving memory contents back and forth between main memory and disk).
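The time-sharing idea can be sketched in a few lines. This is a deliberately toy, hypothetical illustration: two "personalities" interleaved on one substrate, with the substrate merely routing each input to whichever virtual system the current time slice selects.

# A hypothetical sketch of two virtual systems time-shared on one substrate:
# the substrate (brain B) just runs whichever handler the current input
# channel selects; neither virtual system "knows" about the other.

def personality_P(message):
    """English-speaking personality."""
    return "P: I only understand English; I have no idea what the Chinese input means."

def personality_CP(message):
    """Chinese-speaking personality (responses shown here in translation)."""
    return "CP: (in Chinese) I understand Chinese perfectly well."

def substrate_B(inputs):
    """The single underlying machine: executes one virtual system per time slice."""
    for channel, message in inputs:
        handler = personality_CP if channel == "chinese" else personality_P
        yield handler(message)

for reply in substrate_B([("english", "Do you understand Chinese?"),
                          ("chinese", "你懂中文吗?")]):
    print(reply)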

Consider a hypothetical AI natural language comprehension system, called NLP-L, whose domain of knowledge is the AI programming language Lisp. That is, NLP-L has knowledge about concepts involving Lisp. Assume that NLP-L itself is built out of Lisp programs that are being executed by a Lisp interpreter. Now, let us ask NLP-L something in English about Lisp:

Q: What is CAR?

NLP-L: It's a function that returns the first element in a list.

However, if we try to talk directly to the underlying Lisp interpreter (which is only designed to execute Lisp expressions), then we get garbage:

Q: What is CAR?

Lisp: "What" -- unbound variable

The kind of capability we see depends on what level or portion of a system we are communicating with. The Lisp interpreter executes Lisp but does not know that it is executing Lisp, and so lacks intentional states with respect to Lisp. NLP-L, however, does have some intentional states (at least about Lisp). Yet NLP-L cannot exhibit its intelligent behavior without being brought into existence by the underlying execution of the Lisp interpreter.
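A minimal sketch of the level distinction follows. The names and constructs are hypothetical (a generic "first" operation stands in for CAR): the lower level blindly evaluates expressions, while the upper level has knowledge about those same constructs and can answer questions, but only by running on the lower level.

# A hypothetical two-level sketch: the lower level executes expressions without
# any knowledge *about* them; the upper level has schemas about those same
# constructs and answers questions, but only by being executed at the lower level.

def interpreter(expr):
    """Lower level: blindly evaluates (op, args) expressions."""
    op, *args = expr
    if op == "first":
        return args[0][0]
    raise ValueError(f"unbound: {op}")      # garbage in, error out

KNOWLEDGE = {   # upper level: schemas *about* the lower level's constructs
    "first": "a function that returns the first element of a list",
}

def nlp_level(question):
    """Upper level: answers English questions about the lower level's constructs."""
    for term, meaning in KNOWLEDGE.items():
        if term in question.lower():
            return f"It's {meaning}."
    return "I don't know that construct."

print(interpreter(("first", [7, 9, 63])))   # 7 -- executes, but knows nothing about it
print(nlp_level("What is FIRST?"))          # describes the construct
try:
    interpreter(("What is FIRST?",))        # the lower level cannot even parse this
except ValueError as e:
    print(e)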

It is easy to be confused by the existence of two separate systems being maintained by P's single brain, B. But that is because we naively imagine that B has time to execute the computations that give rise to CP and still execute the computations that give rise to P. We imagine P being able to carry on a conversation with us, in English, of the sort where P every now and then performs CP-creating computations and then stops and says:

"I sure don't know what is going on when I see Chinese symbols on scraps of paper, but my brain does something and then suddenly I write down Chinese symbols. I remember when I used to more consciously carry out calculations specified in those Chinese Box books and that sure was boring work!"

But let us look at this scenario from a different perspective. Imagine that when P is resident on B, it is because P-creating computations are going on within B (i.e. the brain). But who is to say that it is "P's brain" and not actually "CP's brain"? When Chinese characters are presented to B, this brain now executes CP-creating computations. During this period (no matter how short), where is P? While CP-creating computations are being carried out by this brain, perhaps we are being told by CP (in Chinese) that it is P who lacks intentionality! That is, whose brain is it?

Let us imagine that we are performing every computation being performed by every neuron of this brain, B. We are now simulating B in its entirety. Sometimes the computations bring about P and sometimes CP. If we imagine we are just controlling the firing of neurons, how will we know what is going on, even though we are performing every operation in the system? Thus, our own introspection, of what it is like to completely support all operations that give rise to intentional behavior, gives us no insight into which system is active, or whether or not the resulting systems are intentional.

We are led thus to a paradoxical situation: If I simulate you and experience being you, then, by definition of what it means to be you, I am no longer me and therefore cannot answer questions about what it's like to be me, since I am you. However, if I am still me and am simulating you, then I am not really succeeding at simulating you, since it's really me who is responding to questions about what it's like to be you. I cannot be me (and only me) at the same time that I am executing computations that bring about you. So there are three possibilities:

1. Temporally split personalities -- P and CP are being time sliced on B.

2. Spatially split personalities -- P and CP reside within different regions of B.

3. Merged personalities -- A new form of consciousness is resident.

In cases 1 and 2, the intentionality you get depends on which portion (or level) of the total system you talk to. That is, there are two distinct, intentional entities resident on the same brain. In case 3, however, we have neither P nor CP, since a merger of both is a form of consciousness (with its corresponding intelligence, knowledge, and "intentionality") that is distinct from either individual consciousness.

So, if B is carrying out CP-creating computations so automatically that we can talk to it (via the Chinese input channel), we cannot deny this intentionality just because later B is carrying out P-creating computations (just as automatically) that create a personality which denies the experience of understanding Chinese. What is confusing is that, in Searle's thought experiment, these alternate personalities switch back and forth rather rapidly, depending on the input channel (written Chinese versus spoken English), and we are naively led by Searle to believe that somehow the P personality is primary. But the brain B that is executing P-creating computations is no more (or less) primary than the same brain B executing CP-creating computations. Furthermore, when imagining carrying out (even all of) the complex computations needed to create intentional behavior, why should we be allowed to conclude that no intentionality exists, just because we cannot understand the nature of the intentionality we are bringing into existence? While "P's brain" B is doing all and only those CP-creating computations, then for that time period P does not exist. Only CP exists.

If Searle is simulating the Chinese personality so automatically, then, during that time period, Searle's intentionality is missing and only that of the Chinese personality exists, so of course Searle will have no insight into what it is like to understand (or be) Chinese.

My own brain right now is taken up (100% of the time, I think) with Dyer-forming computations. I would hate to have Searle deny me my experience of being conscious and "intentional" just because Searle can imagine doing the computations my brain is doing right now and then claim that he would still not feel like me. If I act as if I am conscious and can argue cogently that I am conscious, then you should accept me as being conscious, no matter what Searle thinks his introspective experience of simulating me would be like.

2.4. The "Mere Simulation" Argument (SA-4)

Consider computer simulations of a car hitting a wall, or the destruction of a building caused by a tornado. Clearly, these simulations are not the same as the real things! Likewise, the simulation of intentionality is not really manifesting intentionality. Since a computer cannot create the actual phenomenon of car accidents or tornados, it cannot create the actual phenomenon of intentionality.

Rebuttal to SA-4 (The "Information Processing" Reply): The actions performed by automobiles and tornados involve highly specific substances (e.g. air, concrete), so a simulation of a tornado on a computer, where only information about the destruction caused by the tornado is modelled, would not be equivalent to the actual tornado (with its whirling air and flying debris, etc.).

However, if thought processes consist in the manipulations of patterns of substances, then we only need to build a system capable of creating these identical patterns through its physical organization, without requiring exactly the same set of substances. If thought is information processing, then the 'simulation' of thought on a computer would be the same as thought on a brain, since the same patterns would be created (even if they were electromagnetic patterns in the computer and electrochemical patterns in the brain). Remember, the computer, when executing, is producing patterns of physical things (which could be photons, electrons or some other stuff). Right now, what differentiates an AI model running on a computer, from a human mind running on a brain, is the nature, complexity and sheer volume of brain patterns. But there is no reason to believe that this difference constitutes any fundamental sort of barrier.

Where does intelligence reside? It cannot reside solely in the patterns. It must also reside in the causal architecture that gives rise to physical embodiments of those patterns. Why is this the case? Consider a person who videotapes all of the patterns produced by some computer executing some AI system. When he plays these patterns back on a TV monitor, no one would accept that the display sequence of all of these patterns constitutes artificial intelligence, and yet the video camera, in some sense, is 'producing' the same patterns that the AI system was producing! The reason we do not accept the TV stream of patterns as artificially intelligent is the same reason we do not equate a video of a person with the actual person. Namely, the video display is lacking in the underlying physical architecture that gave rise to those patterns in the first place. Only this physical architecture can generate an open-ended set of novel patterns. So the paradigm of AI demands that the patterns be embodied in the physical world and produced by causal/physical relationships within some physical system.
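The videotape point can be put in miniature as follows. This is a hypothetical toy example: a recording can only replay a fixed stream of patterns, while a causal architecture produces an appropriate pattern for inputs it has never encountered.

# A small sketch of the videotape point: a recording replays a fixed stream of
# patterns, while a causal architecture generates appropriate novel patterns.

RECORDING = ["63", "16", "Hello"]          # patterns captured on the 'videotape'

def playback():
    """The video: the same patterns, in the same order, regardless of input."""
    for frame in RECORDING:
        yield frame

def causal_system(x, y):
    """The generating architecture: responds correctly even to novel inputs."""
    return str(x * y)

print(list(playback()))            # fixed patterns; cannot handle a new query
print(causal_system(12, 31))       # '372' -- a pattern that was never on the tape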

A computer is just such a physical device, capable of creating novel patterns based on the causal architecture set up in its physical memory. Two computers that are the exact same model, built by the same manufacturer, are not identical when they hold different programs. The reason is that those programs are stored in each computer by making different physical changes to the memory circuits of each machine, so each machine ends up with a different physical structure, leading it to perform physically in distinct ways. That is why Newell (1980) refers to AI systems as "physical symbol systems". As long as these patterns have some embodiment and recreate the same causal architecture, the substance used to make up these patterns need not be restricted to a particular kind of substance. Ironically, Searle himself (1980a) has argued for the necessity of such a "causal architecture" while simultaneously rejecting a computer as capable of realizing "intentionality".

Consider buildings. The bricks that make up, say, a Gothic church may be made of cement, clay, plastic, ceramics, graphite, or some other substance. As long as each brick can satisfy weight-bearing constraints, and as long as each type of brick can be combined to construct the same range of buildings, then what makes a building be in the style of what is called, e.g., "a Gothic church" is determined by the way in which the bricks are organized, not by the substance of which each brick is made.

If intentionality (i.e. consciousness, intelligence -- Mind in general) arises from the way in which patterns of "real stuff" interact, then the simulation of intentionality is the same as embodying intentionality, because the 'simulation' of information processing on a computer is information processing. This is the case because a computer is a physical device that manipulates patterns of "real stuff" to realize information processing.

The working assumption of AI is that Mind is the result of information processing. Given this assumption, simulations of intelligence are not "mere simulations" -- they are the real thing. Since tornados, however, are not assumed (here) to consist of information processing, a simulation of a tornado would not be a tornado. At this point, it appears that there is an accumulating body of evidence indicating that complex, intentional behavior can be realized through information processing. It is up to those who reject the "mind-is-information-processing" hypothesis to amass any kind of evidence that Mind requires something more.

3. Harnad's Arguments and Rebuttals

Harnad (1989a,b) accepts SA-3 and thus believes that computers cannot realize "intentionality". Given this state of affairs, Harnad then poses questions concerning what is essential to the notion of "intentionality" and what must be added to computers in order to obtain this "intentionality".

3.1. The "Chinese Dictionary" Argument (HA-1)

Consider a person trying to learn what Chinese symbols (words) mean by looking at a Chinese dictionary, where each Chinese symbol refers only to lists of other Chinese symbols in the same dictionary. A person will never really know what a Chinese symbol means, simply by knowing how it relates to other Chinese symbols, because none of these symbols refer to anything outside of the dictionary. Understanding the true meaning of a symbol requires symbol grounding -- i.e. setting up a physical correspondence between the symbols in the dictionary and actual objects and actions in the real world.

Rebuttal to HA-1 (The "Internal Representations" Reply): In current natural language processing (NLP) systems, words are not related directly to each other, but map instead to internal representations, which have a correspondence to one another in ways that are analogous to how the real objects they stand for relate to one another in the real world. This correspondence does not have to be perfect -- only good enough to allow the organism with these representations to perform in the real world, by manipulating these internal representations. If the representations (and computations over them) are in some sense analogous to real objects and actions in the real world, then the organism will have a better chance of surviving. For example, if a real chair has 4 legs, then the internal representation will include constituents that represent legs. If chairs are made of wood, and pieces of wood can become splinters in someone's skin, then there will be representations for kinds of materials, for pieces of larger objects, for pointed shapes, for penetration by pointed objects that come into contact with them, for coverings (such as skin) of the surface of objects, and so on. For these internal representations to support the kind of visualizations that humans are capable of (e.g. recalling components of an object by examining one's visual memory of an object), they will need to encode spatial dimensions, spatial trajectories and spatial interactions.
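A minimal sketch of this kind of mapping is given below. The representation and lexicon entries are hypothetical and vastly simplified: the point is only that words in any language map to the same structured representation, whose constituents mirror relations among the real objects.

# A hypothetical sketch of grounding in internal representations: words in any
# language map to the same structured frame, whose constituents (legs, material,
# typical interactions) mirror relations in the world.

CHAIR = {
    "isa": "FURNITURE",
    "parts": {"leg": 4, "seat": 1, "back": 1},
    "material": "WOOD",
    "typical-use": "a person SITS-ON it",
    "hazards": "WOOD can SPLINTER and PENETRATE SKIN",
}

LEXICONS = {
    "english": {"chair": CHAIR},
    "chinese": {"椅子": CHAIR},     # same representation, different surface word
}

def describe(word, language):
    """Generate an English description from whatever representation the word maps to."""
    rep = LEXICONS[language][word]
    return (f"It is a kind of {rep['isa'].lower()} with {rep['parts']['leg']} legs, "
            f"usually made of {rep['material'].lower()}; {rep['typical-use']}.")

# The system can describe, in English, what the Chinese word signifies.
print(describe("椅子", "chinese"))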

When I recall a chair, I manipulate internal representations of chairs, which contain visual, kinesthetic, and other sensory memories, along with episodic events and generalized facts or abstractions concerning chairs (e.g. when a given chair broke; that people sit on them, etc.). I use these representations to perform tasks related to indicating my understanding of chairs. If the Chinese word for "chair" maps to such internal representations, and if the system manipulates these representations in ways identical to how my brain manipulates its representations, then there would be no difference between what the system knows and understands about chairs and what I know and understand about chairs.

So while it may be true that relating Chinese symbols to one another will not allow a system to know what a chair (in Chinese) really is, there is no reason to believe that "chair" must be grounded in the real world. "Chair" need only be grounded in internal representations that allow the system to behave as if it has a memory of having seen a real chair. Such a system could then describe in detail (say, in English) what the Chinese word for "chair" signifies.

3.2. The "Transducer" Argument (HA-2)

If one accepts SA-3 as correct, then a way to deal with this argument is to find an additional, necessary component for "intentionality" and show that Searle is incapable of simulating it. This additional component consists of the sensory and motor transducers that allow a robot to interact with the real world. Now, although Searle can imagine simulating the interpreter and the Chinese Box instructions, he cannot simulate the sensory and motor transducers, since what these transducers are doing involves energy transformations that are not computational in nature. That is, an arm moving through space is not normally considered a computation, nor is the operation of a photosensitive cell which is transforming light into electrical impulses. If transducers are essential to "intentionality", then Searle can no longer argue that he is simulating the entire system, thus defeating SA-3. Conveniently, these transducers are exactly the components needed to ground the robot in the real world.

A major consequence of this argument is that the Turing Test (TT) is no longer an adequate test of intentionality, since it does not require that a robot perform sensory and motor tasks in the real world. Searle is thus correct (so Harnad's argument goes) in stating that a machine which passes Turing's Test is simply behaving as if it is intentional, but is really lacking intentionality. What is needed instead is the "Total Turing Test" (TTT), in which the computer must control a robot and the tasks demanded of it include both interactive dialog and performance of scene recognition and coordinated physical actions.

Rebuttal to HA-2 (The "Simulated Environment" Reply): First of all, if one accepts the "Split Levels" reply to SA-3, then one need not postulate an additional component, such as transducers. Leaving this response aside, however, we can still ask just how crucial transducers are to modeling intentionality and how adequate Turing's Test is.

Transducers are not critical if the robot's environment and sensory/motor behaviors are simulated in addition to the robot's brain. Suppose the robot is standing in a room and looking at a chair. The simulation now includes information (to whatever level of detail one requires) concerning the spatial and other physical properties of the chair, along with a simulation of the patterns of information that the robot's real (visual and kinesthetic) transducers would have sent as input to the rest of the robot's brain, if the robot had real transducers and were standing in a real room. As the robot's brain performs computations to move its body, the simulation will calculate how the robot's position in space has changed and then simulate the changing visual and kinesthetic patterns that the robot's brain will receive as input (identical to those patterns that the real transducers would have sent to the robot, had it really moved about in a real room). In this manner, the robot's brain can be grounded in an environment, but without requiring a real environment or the (supposedly) noncomputational features of real transducers. The simulated environment may or may not have a physics identical to the physics of the real world. But no matter what physics are simulated, the simulated robot will be grounded (and performing a TTT) in that environment, and without transducers.
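The structure of such a simulation can be sketched in a short loop. This is a hypothetical, one-dimensional toy: the world model computes the sensory patterns the robot's transducers would have produced, and applies the effects of the robot's motor commands back to the world.

# A hypothetical sketch of grounding without real transducers: a world model
# computes what the robot's sensors *would* have reported, and applies the
# robot's motor commands back to the simulated world.

world = {"robot_pos": 0.0, "chair_pos": 3.0}       # a simulated room, 1-D for brevity

def simulated_vision(world):
    """What the visual transducer would have sent: distance to the chair."""
    return {"chair_distance": world["chair_pos"] - world["robot_pos"]}

def robot_brain(percept):
    """The (simulated) brain: walk toward the chair until close to it."""
    return {"step": 0.5} if percept["chair_distance"] > 0.5 else {"step": 0.0}

def apply_motor_command(world, command):
    """What the motor transducer would have done: update the robot's position."""
    world["robot_pos"] += command["step"]

for t in range(8):                                  # the simulation loop
    percept = simulated_vision(world)
    command = robot_brain(percept)
    apply_motor_command(world, command)
    print(t, round(world["robot_pos"], 1), round(percept["chair_distance"], 1))

The robot's "brain" receives exactly the input patterns real transducers would have supplied, so it is grounded in its (simulated) environment without any noncomputational hardware.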

How crucial is the TTT to testing "intentionality"? A more careful examination of the standard Turing Test reveals that it is adequate to the task of testing symbol grounding. Remember, the testers can ask any questions via the teletypes, including those intended to test symbol grounding. For example:

"Imagine you are holding an apple in your hand. How does it feel if you rub it against your cheek?"

"Recall when you last splashed water in the tub. You cup your hand more and hit the water faster. How does the sound change? What happens to the water?"

"I assume you are wearing clothing. Right now, pull your shirt up and hold it with your chin. Describe how much of your stomach is uncovered."

"Try to touch the left side of your face with your right hand, by first placing your right hand behind your neck and then bringing your hand around, while moving your face toward your hand. How much of your face can you touch by doing this?"

These questions require a computer to have internal representations that allow it either (a) to control an actual robot body to perform these sensory/motor tasks, (b) to simulate them in a simulated world, or (c) to imagine them via mental simulation in the robot's memory. Such tasks are a more adequate test of "intentionality" than merely observing a robot perform coordinated activities in the real world. A dog or cat can perform exquisite feats of sensory/motor behavior, and yet neither has "intentionality" (i.e. neither knows that it knows). By having the robot respond in natural language, it must not only mentally relate representations of visual/kinesthetic memories and experiences, but also understand language descriptions of such experiences. Thus, although the TTT is a useful, final proof of robot coordination and mobility, the TT is perfectly adequate to the task of testing symbol grounding.

3.3. The "Noncomputational Brain" Argument (HA-3)

If one concludes that symbol grounding via transducers is a crucial element of "intentional" systems, then it becomes a crucial observation that the entire brain consists simply of layers and layers of transducers. That is, each neuron is a transducer that is essentially noncomputational in nature. The brain consists of billions of such analog and analog-to-digital devices, each of which performs energy transformations that are fundamentally noncomputational. If we look for computations in any layer of the brain, we will not find any, only projections to other transducers. Thus, it may not be that just the input (sensory) and output (motor) layers are noncomputational transducers; the entire brain is noncomputational.

Rebuttal to HA-3 (The "Noncomputational Computer" Reply): A computer also consists of a great many transducers. Each circuit element is a transducer that transforms energy in an analog and analog-to-digital manner. By using HA-3, one can argue that the computer itself is noncomputational, since the input (keyboard) is a transducer and it simply projects to other transducers (e.g. transistors), and so on, until projecting to the output display device (also a transducer). But this observation simply shows that the computer is a physical entity that, like the brain, obeys the laws of physics. It is an amazing fact that AI systems can exhibit Mind-like behavior without violating any physical laws. That is, the I/O level acts intelligently while the transducers that make up the machine inexorably behave according to physical laws. From this point of view, there is no magic (no dualism; no "ghost" in the machine), other than the (often counterintuitive) emergent properties of the entire system, arising through the interactions of its many components.

The reason we speak of a computer as "computing" is that it is manipulating patterns of physical stuff (energy/matter) in ways such that one pattern, through causal relationships, brings about other physical patterns in systematic ways. But the brain can be described as consisting of noncomputational transducers that are doing exactly the same kinds of things. Thus, the brain is as computational (or noncomputational) as is the computer, depending on how one wants to look at it.

4. Mind, Life, Matter, and Computationalism

Throughout this paper, we have assumed that a computation is the creation and manipulation of physical patterns in a physical device with a causal architecture, such that the novel patterns produced are determined by the physical patterns already resident in the device. The working assumption of "Strong AI" is that (a) Mind emerges from the brain's computations and (b) that Mind emerges in computers to the extent that they perform similar computations.

The recent disputes, between Connectionism and traditional AI (Pinker and Mehler 1988; Dyer 1988, in press; Graubard 1988; Smolensky 1988), are not disputes over the fundamental computational power of these approaches, since any neural network (whether chemically based, optically based, etc.) can be simulated by a modern computer (and therefore by a Turing Machine). Rather, they are disputes over which approaches offer the most appropriate languages and metaphors for describing cognition.

The position of computationalism is that the notion of computation is powerful enough to describe, not only Mind, but Life and Matter also. Below are two final arguments against computationalism, with rebuttals.

4.1. The "Intentionality Presumes Life" Argument (SA-5)

Digestion consists of complicated biochemicals interacting in a "wet" and "goopy" environment, totally unlike the insides of a computer. This "goopiness" is a major reason why a computer cannot realize digestion. Likewise, the brain consists of neurotransmitters and complex biochemical materials. It is normally assumed that Mind is only loosely coupled to the living processes ongoing in the neural cells, dendritic arborizations, blood, and other chemical life-support processes of the cellular structures of the brain. The assumption is that the life processes, while they keep the cells alive, are not involved in the computations that bring about Mind. What if, however, the "life-state" of each cell is intimately involved in the embodiment of Mind in the brain? What if Mind is more like digestion than like computation? In such a case, Mind would be impossible to embody in a computer.

Rebuttal to SA-5 (The "Artificial Life Correspondence" Reply):

This "cognition is digestion" argument hinges on the biochemistry that is essential in digestion, and that is assumed to be essential in the cognitive aspects of brain function. If the biochemistry of life is somehow essential to cognition, then SA-5 can be rephrased as: "A prerequisite for a system to have intentionality is that it must first be alive." Two major issues are raised by this kind of argument: "What is Life?" and "Is an intelligent-acting robot alive?"

At first glance, there appear to be two levels at which Life arises in natural systems: (a) the population level -- involving recombination, cloning, mutation, natural selection, etc., and (b) the metabolic level -- involving cellular activities, including digestion, growth, healing, embryonic development, etc. A computer-controlled metallic robot R then would be "alive", from a population point of view, only to the extent that there existed a population of related robots, capable of reproduction, mutation, and so on. Likewise, R would be "alive", from a metabolic perspective, only to the extent that there were cellular processes going on.

But defining "life" turns out to be as difficult as defining "intentionality". The last member of the Dodo species would still be considered "alive", even though a population of Dodos no longer exists. Even a metallic, computer-controlled robot can be viewed as having some kind of a metabolism, since it consumes energy and expends heat in order to sense its environment and move.

Historically, vitalists in biology argued that a special "life force" distinguished living creatures from dead ones. Today, biologists view life as a systems phenomenon, arising from the complex interaction of millions of chemical and molecular elements. Theoretically, therefore, life could arise in other than carbon-based materials. Recently, a new field, called "Artificial Life (AL)" has been gaining momentum, in which artificial physics -- including chemical, cellular, genetic and evolutionary environments -- are being modelled with computers (Langton 1988). The following question then arises: "Could AL systems ever 'really' be alive?" This question is analogous to "Could AI systems ever 'really' be intelligent or intentional?"

Suppose the firing of a neuron is tightly coupled to a neuron's metabolism. Suppose that this metabolism is the result of how chains of molecules twist and fold in upon one another, mediated by numerous enzymes. Now let us imagine that we replace each molecule with a binary representation of all aspects of its 3-dimensional topology and chemistry. These binary representations will be enormously large, in comparison to the size of the actual molecular chain, but they will still be finite in size. That is, there will be a systematic correspondence between real molecular chains and these binary representations. Suppose we now also build a program that simulates all of the laws of molecular-chemical interaction. This program may be computationally extremely expensive, but again, there will be a systematic correspondence: between manipulations of binary representations by the program and the real twists and turns of interacting chains of real molecules. Such a simulation would allow one to discover how molecules interact without having to use actual molecules. If one defines Life as a systems phenomenon, then AL systems are "really" alive, in that they involve the same level of causal relatedness among correspondingly complex representations, even though the materials of which these representations consist are completely different in substance. As Langton (1989b) states:

Life is a property of form, not matter, a result of the organization of matter rather than something that inheres in the matter itself. Neither nucleotides nor amino acids nor any other carbon-chain molecule is alive -- yet put them together in the right way, and the dynamic behavior that emerges out of their interactions is what we call life. (p. 41)

At this point, terminology and assumptions become all important. If we define the term "alive" to mean only what systems of actual molecules do, then AL systems are clearly not "alive" in this sense, since we have already precluded the ability to apply the term in this way. If, however, we define "alive" to mean physical patterns of interaction that are systematic in certain ways (e.g. can be placed in causal correspondence and exhibit the same levels of complexity), then the term "alive" will apply to such AL systems.

It is interesting to note that the arguments beginning to emerge within the AL community, concerning whether or not such systems will ever 'really' realize Life, are following a line of development parallel to the arguments concerning whether or not AI systems will ever 'really' realize intelligence, consciousness, or "intentionality". As one might expect, some of the initial arguments against AL systems ever realizing 'true' Life rely heavily on Searle's and Harnad's arguments, e.g. see (Pattee 1988).

4.2. The "Unsimulatable World" Argument (HA-4)

What if each neuron's behavior is intimately affected by what is happening down to the quark level, i.e. down to the bottom-most level of the physics of reality? There must be some point at which computationalists give up. Assume for example that a computer can simulate physics down to the quantum level. A simulation of gold in the computer will still not have the value of real gold. Since the real brain exists in the real world, and since the real world is unsimulatable, the real brain cannot be replaced by a computer simulation.

The "Nonclassical Measurement" Reply: Recently, Fields (1989) has argued that, if one gives up computationalism, then one must abandon nonclassical (i.e. quantum) physics. Continuous measurement to an infinite level of precision is not allowed in nonclassical physics, due to the effect that measurement has on the phenomena being observed. Once continuous functions of infinite precision are no longer allowed, then any arbitrary level of finite precision can be simulated by a Turing Machine. Thus, reality itself can be simulated. Now, it is true that one will still value real gold much more highly than simulated gold (even if one can simulate accurately the melting, gravitational pull, decay and other (sub)atomic properties of the simulated gold), but that is because social convention determines that only the real gold is an acceptable object for currency. If one imagines that simulated gold were chosen as a form of currency, then perhaps those with access to the most complex hardware would be the richest individuals in this hypothetical society. That is, social convention can only determine the social status of an object, not its other properties; otherwise, we could determine whether or not machines ‘really’ think by taking a vote.

While it may be difficult to imagine reality itself being simulated by a Turing Machine (TM), with its state transitions and paper tape, it is not quite as difficult to imagine reality being simulated by some massively parallel computer (MPC). For example, suppose future physicists complete the development of a "grand unified theory" of reality, where reality is composed of minuscule space/time units and where different forms of matter/energy consist of different configurations of state values in minuscule space/time units and their interactions with neighboring space/time units. Then such a reality (even with chaos effects, non-locality properties, etc.) could be simulated on an MPC in an isomorphic relation to reality, as long as a systematic correspondence were maintained between MPC processors and one or more space/time units and their physical laws of interaction. It might take an MPC with, say, 10^16 very fast processors an entire year of computation to simulate (for just a few time units) a tiny segment of this "artificial reality", but the simulation would maintain complete fidelity, in the sense that the results of any "artificial reality" experiments would conform completely with their corresponding direct, physical experiments. This hypothetical MPC is ultimately simulatable on a Turing Machine because each element/process of the MPC itself can be placed in correspondence with a state (or set of states) of the TM's tape symbols and state transitions.
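A toy version of such an "artificial reality" can be sketched as a cellular-automaton-style update: space is a grid of units, each holding a state value, and one tick of 'physics' updates every unit from its neighbors by the same local law. The law used here (simple diffusion) is of course an invented stand-in, not a claim about actual physics.

# A toy sketch of the hypothetical MPC "artificial reality": a 1-D grid of
# space/time units, each updated every tick by the same local law (here, a
# simple diffusion rule chosen purely for illustration).

def tick(grid):
    """One time unit: each cell moves toward the average of its two neighbors."""
    n = len(grid)
    return [grid[i] + 0.25 * ((grid[(i - 1) % n] + grid[(i + 1) % n]) / 2 - grid[i])
            for i in range(n)]

# A tiny universe with a single lump of 'matter/energy' in the middle.
universe = [0.0] * 8
universe[4] = 1.0

for step in range(3):
    universe = tick(universe)
print([round(v, 3) for v in universe])   # the lump spreads by purely local laws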

This approach is reasonable only if reality is not continuous (i.e. infinite in precision) at every point of space/time. As Fields argues, the nonclassical (i.e. noncontinuous) paradigm within physics has become more and more accepted within this century, and recent results within physics indicate that this paradigm will remain robust for the foreseeable future. For a more detailed discussion of the "nonclassical measurement" reply, see Fields (1989).

5. Conclusions

The working hypothesis of AI, Connectionism, and Cognitive Science in general is that Mind (i.e. intelligence, intentionality, consciousness, perception of pain, etc.) arises from the way in which simple processing elements, each of which lacks Mind, can be organized in such a way as to exhibit Mind.

The extent to which cognitive scientists can build a convincing system with intentional behavior is ultimately an empirical and political issue, rather than a philosophical one. That is, once systems exhibit intentional behavior, either they will convince humans to accept them as being intentional, or they will have to fight humans for their civil rights.

Postulating a special "intentionality" in people -- one that in principle cannot exist in computing machinery -- is similar to past reactions by others unhappy with theories that reduce mankind's sense of self-importance. For example: (a) The Copernican revolution removed man from the center of the universe. (b) The Darwinian revolution removed man from holding a special biological position in the animal kingdom. (c) Genetic theory and molecular biochemistry eliminated the notion of "vitalism" as a special substance separating living systems from dead matter. The notion of "intentionality" as residing in the substances of the brain now serves the same purpose as that served by "vitalism" for 19th-century scientists alarmed by the reduction of life to the 'mere' organization of matter.

Before we abandon computationalism and the systems assumption for what appears to be a form of "cognitive vitalism", we need truly convincing arguments (i.e. that non-biological machines lack intentionality). Until then, if a computational entity acts like it knows what it is talking about, then we should treat it as such. This kind of pragmatic approach will keep us from mistreating alien forms of intelligence, whether they are the result of evolution (or of artificial intelligence and neural network research) on our own planet or within other distant solar systems.

References

Alvarado, S. J. (1989) Understanding Editorial Text: A Computer Model of Argument Comprehension. Ph.D. Dissertation, Computer Science Dept. UCLA.

Dyer, M. G. (1983) In-Depth Understanding. MIT Press, Cambridge MA.

Dyer, M. G. (1988) The promise and problems of connectionism. Behavioral and Brain Sciences, 11(1), 32-33.

Dyer, M. G. (in press) Symbolic NeuroEngineering for Natural Language Processing: a Multilevel Research Approach. In J. Barnden and J. Pollack (Eds.), Advances in Connectionist and Neural Computation Theory. Ablex Publ.

Elman, J. L. (1988) Finding structure in time. Technical report 8801. Center for research in language, UCSD, San Diego.

Fields, C. (1989) Consequences of nonclassical measurement for the algorithmic description of continuous dynamical systems. Journal of Experimental and Theoretical Artificial Intelligence. 1(1), 171-178.

Graubard, S. R. (ed.) (1988) The Artificial Intelligence Debate. MIT Press, Cambridge MA.

Harnad, S. (1989a) Minds, machines and Searle. Journal of Experimental and Theoretical Artificial Intelligence. 1(1), 5-25.

Harnad, S. (1989b) Personal communication.

Langton, C. (Ed.), (1988). Artificial Life. Addison-Wesley Publ. Co. Reading, MA.

Lehnert, W. (1978) The Process of Question Answering. Lawrence Erlbaum Assoc. Hillsdale, NJ.

Minsky, M. (1985) A framework for representing knowledge. In R. J. Brachman and H. J. Levesque (Eds.), Readings in Knowledge Representation. Los Altos, CA: Morgan Kaufmann.

Minsky, M. (1986) The Society of Mind. Simon and Schuster, NY.

Newell, A. (1980). Physical symbol systems. Cognitive Science. 4 (2), 135-183.

Pinker, S. and J. Mehler (1988) Connections and Symbols. MIT Press, Cambridge MA.

Pattee, H. H. (1988). Simulations, Realizations, and Theories of Life. In Langton, C. (Ed.), Artificial Life. Addison-Wesley Publ. Co. Reading, MA. pp. 63-77.

Searle, J. R. (1980a) Minds, brains and programs. Behavioral and Brain Sciences, 3(3), 417-424.

Searle, J. R. (1980b) Intrinsic intentionality. Behavioral and Brain Sciences, 3(3), 450-457.

Searle, J. R. (1982a) The Chinese room revisited. Behavioral and Brain Sciences, 5(2), 345-348.

Searle, J. R. (1982b) The myth of the computer. New York Review of Books, 29(7), 3-7.

Searle, J. R. (1982c) The myth of the computer: An exchange. New York Review of Books, 29(11), 56-57.

Searle, J. R. (1985a) Patterns, symbols and understanding. Behavioral and Brain Sciences, 8, 742-743.

Searle, J. R. (1985b) Minds, Brains and Science (Cambridge MA: Harvard University Press).

Searle, J. R. (1989). "Does Cognitive Science Rest on a Mistake?". Presentation to the Cognitive Science Research Program, UCLA, Nov. 17, 1988.

Smolensky, P. (1988) On the proper treatment of connectionism. Behavioral and Brain Sciences, 11(1), 1-23.

Schank, R. C. and R. P. Abelson (1977) Scripts, Plans, Goals, and Understanding. Lawrence Erlbaum Assoc., Hillsdale, NJ.

Turing, A. M. (1964) Computing machinery and intelligence. In: A. R. Anderson (ed.), Minds and Machines, (Englewood Cliffs, NJ: Prentice Hall).

Wilensky, R. (1983) Planning and Understanding. Addison-Wesley Publ. Co., Reading, MA.