Finding Lost Minds

Author’s Response to Stevan Harnad’s

"Lost in the Hermeneutic Hall of Mirrors"

Michael G. Dyer

Computer Science Department

UCLA, Los Angeles, CA 90024

 

 

Harnad claims that I am lost in a "Hermeneutic Hall of Mirrors". I argue that Harnad is himself lost, in various ways detailed below.

Lost in the "Other Minds" Problem

Harnad (1990a,b) accepts Searle’s Chinese Room argument as definitive. I do not -- for arguments I have already given. But another way of explaining Searle’s inability to experience the understanding of Chinese is in terms of the "Other Minds" problem. Harnad believes that Searle has a "periscope" that "provides a privileged peek on the other side of the other-minds barrier in this one special case". I hold the opposite belief; namely, that what Searle’s (1980) thought experiments really show us is that X cannot experience being Y’s mind (with its Chinese personality, knowledge and Chinese comprehension skills) as a result of imagining doing everything that Y’s brain/mind does. If X could successfully achieve this experience (by any means at all!), then X would lose X’s own experience of being X because X would now be (a replica of) the mind of Y. Thus, X’s mind would be replaced by Y, so there would be no X at this point to query and ask "What’s it like to be Y?".

In contrast, if X fails to achieve the experience of being Y’s mind, then X retains the experience of being X’s own mind -- a mind that is experiencing being kept busy performing the operations involved in creating Y’s mind. In this case, we cannot ask the X-I/O-channel what it’s like to be X any more than we can ask any of Y’s individual neurons what it’s like to be Y. If we ask the Y-I/O-channel then we will get answers from the replica of Y’s mind, which, if it is a completely faithful replica, will give us the same answers (in Chinese) that the original Y would give us. It does not matter that the interpreter system performing the computations is itself intelligent (that X can understand English). That fact is just a red herring (i.e. the computations could be performed just as well by computer execution circuitry).

What Searle has presented us with is just a nice variant of the "Other Minds" problem. Y is a mind other than X. X cannot experience being anyone else other than X, no matter how much X performs the operations involved in being Y. If X succeeds in becoming Y, then X is gone and Y is now there. We cannot ask Y what it’s like to be X either (even though X is sustaining Y’s mind by performing all of the necessary computations). We will never know if this X-created Y-mind replica is really having the same subjective experience -- i.e. of being Y -- as the original Y (whose own mind, incidentally, is being maintained by the "slaving away" of Y’s own neurons, neurotransmitters, etc.). Again, we have come up against the impenetrability of "Other Minds".

The "Other Minds" problem is even worse that indicated above. In fact, I cannot even experience being myself at an earlier point in time! Many of us have had the experience of reading something we wrote in the past and having that text seem as if it were written by a stranger. Try to remember what it was like to be ourselves 10 years ago is hopeless. But even experiencing what it’s like to be oneself even a few minutes ago is impossible, because one only retains memory traces of each past time Tp and not the entire brain-state at Tp. So one is only recalling one’s past selves, not becoming them. And only by becoming that past self can one experience being that past self. If we were to succeed, then our brain-state of Tnow, with its more recent memories/skills etc., could not be allowed, because such a current brain-state would get in the way of allowing us to experience being that past self.

If we accept that experiencing being another mind is impossible, then what are we to conclude when Searle states that he tried his "gosh-darn almightiest" to experience being another mind (i.e. really understanding Chinese as Y does) and failed? Are we to conclude, because Searle failed, that the other mind does not exist? Such a conclusion only makes sense if we believe that one mind can experience what it’s like to be another mind.

Lost on the Ground: Sensing but not Thinking

Harnad claims that the scenario created by the Strong AI position -- i.e. that one personality could perform computations to create another (i.e. "split personalities") -- is "very unlikely" -- even though this kind of "distinct levels" situation occurs commonly in modern computational systems. Harnad’s own grounding ‘solution’ involves making physical sensors (i.e. energy transducers) more central to intentionality than the symbol-structure manipulations of traditional AI. Unfortunately, Harnad’s attempt at ‘fixing’ Searle’s ‘devastating blow’ against Strong AI leads, in my opinion, to an even more unlikely situation, which has been detailed by Joslin (1990), who led Harnad into the following reductio ad absurdum:

1. Harnad agreed that Searle’s Chinese Room arguments prove that a symbol manipulation system cannot "really" understand Chinese.

2. Harnad claimed that a robot passing the Total Turing Test (TTT) would "really" understand Chinese.

3. Harnad agreed that, if a symbol manipulation system could be hooked to some simple sensory device (such as a video camera with digital output) and passed the TTT, then such a composite system would "really" understand Chinese.

At this point, Joslin asked Harnad if removal of the video camera would suddenly cause the remaining system to no longer really understand Chinese! Harnad’s response was:

... It seems very unlikely that just cutting off the transducers should turn off the mental lights, but the hypothesis was counterfactual in any case, so let the conclusion stand. ... (Harnad 1990c)

As we can see, Harnad either has to (a) accept sudden loss of intentionality due to sensory loss or (b) back out of premise 3 above and deny that a symbol manipulation system -- with simple sensory devices on the front end -- could ever pass the TTT. Position (a) seems prima facie absurd. What about position (b)?

Harnad (1990a) has proposed using connectionist networks to realize "symbol grounding". Such networks are capable of extracting statistical information from sensory devices and forming what Harnad calls "iconic representations" that can serve as "a grounded set of elementary symbols" for use in building up the rest of the symbol system by "symbol composition alone". At my own lab we have constructed connectionist natural language processing systems that form word representations we term "distributed semantic representations" (DSRs). These DSRs are "grounded" in a task environment; that is, words with similar uses in the task environment end up developing similar-looking patterns in their representations. At the same time, these DSRs encode compositionality and can be accessed by distinct connectionist modules (Dyer 1990a,b,c; Dyer et al. 1990; Lee et al. 1990; Miikkulainen and Dyer 1989). Whether or not these models are along the lines Harnad has in mind is not as relevant as the fact that all current connectionist models, including the proposal sketched by Harnad (1990a), appear to be simulatable on standard "symbol crunching" computers. Consequently, we should be able to construct a complex connectionist/symbol-processing model that is hooked to relatively simple sensors (such as a video camera). If Harnad rejects position (b) above, then he must, at the very least, abandon his own proposal for grounding symbols.
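
The essential property at stake here -- that words with similar uses come to have similar-looking representations -- can be conveyed with a toy sketch. The Python fragment below is only an illustration, not the DSR models cited above: it substitutes a crude co-occurrence profile for a learned distributed pattern, and its micro-corpus is hypothetical.

import numpy as np

# Hypothetical micro-corpus: "dog"/"cat" and "eats"/"drinks" play parallel roles.
sentences = [
    ["dog", "eats", "food"],
    ["cat", "eats", "food"],
    ["dog", "drinks", "water"],
    ["cat", "drinks", "water"],
]
vocab = sorted({w for s in sentences for w in s})
index = {w: i for i, w in enumerate(vocab)}

# Build each word's "representation" from the company it keeps in the task corpus.
rep = np.zeros((len(vocab), len(vocab)))
for s in sentences:
    for w in s:
        for c in s:
            if c != w:
                rep[index[w], index[c]] += 1.0

def similarity(a, b):
    va, vb = rep[index[a]], rep[index[b]]
    return float(va @ vb / (np.linalg.norm(va) * np.linalg.norm(vb)))

print(similarity("dog", "cat"))    # 1.0   -- same usage, same-looking pattern
print(similarity("dog", "eats"))   # ~0.41 -- different role, different pattern

The point carried by the sketch is simply that the "grounding" resides in the statistics of use available to the system, however those statistics happen to be encoded.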

Harnad (1990a) insists on building intentional systems "from the ground up". But the history of AI has shown that both bottom-up and top-down approaches are needed. Early connectionist research was stymied because it lacked top-down theories of the knowledge schemas and dynamic inferencing needed for high-level cognition. Recently, areas of AI research have been stymied because of too much concentration on high-level planning and not enough on low-level sensory integration. But without top-down research, e.g. natural language processing (NLP) and computational linguistics (CL) theory, neuroscientists wouldn’t know what kinds of high-level phenomena to look for in brain tissue. On the other hand, without a solid basis in neuroscience, NLP/CL systems will remain forever brittle.

So I accept the need for symbol grounding (i.e. representations that relate sensory information to more abstract conceptualizations). But, just as Searle is overly restrictive on the allowed forms of physicality needed to produce thought, Harnad is overly restrictive on the physicality of the sensors needed to produce grounding.

Lost in the Simulation: Artificial Realities

Harnad (1990b) states that simulated environments can only produce "simulated grounding". Recently, there has been much popular press concerning the construction of "artificial realities". A subject wears a special helmet and a glove (in some cases, entire body suits are being developed). Small screens in front of the eyes display simulated images to create a simulated environment. As the subject moves with the helmet, a computer updates the displayed images, thus giving the subject the illusion of looking around at the simulated environment. With the glove and bodysuit the subject can walk through the simulated environment and "touch" simulated objects (some gloves vibrate to give a simplified sense of touch).

Let us now suppose that we build a computer system that, without being an actual robot, is given these images and vibratory sequences as input. Output from the computer causes the input sequences to be altered appropriately. Just like a human subject, the computer could thus learn to "wander" in the simulated "world". According to Harnad, this computer would not be passing the TTT because there are no physical sensors/effectors (only simulated ones).

Once the computer learns how to navigate and interpret this artificial reality, we now have it control a physical robot, one wearing an actual helmet, glove and body suit. Does the computer-controlled robot now suddenly gain "intentionality"? Finally, we reveal that the "artificial reality" consisted of images from a real environment. We can now replace the special helmet with just a normal video camera (without changing the computer’s programming). Does the robot only now gain intentionality? At what point has it passed a ‘real’ TTT and not just a ‘simulated’ TTT?

The Strong AI position here is that the original computer was already grounded with respect to its environment. If that simulated environment turns out to be as complex as our own, then it doesn’t matter that the original computer’s algorithm was receiving simulated sensory data. That is, the information produced by the sensors is what counts, not the sensors themselves. Harnad’s position on grounding is too restrictive. It is all well and good that the representations built by the robot are formed as the result of interactions with sensory information, but Harnad requires that this sensory information come from physical sensors. This restriction leads Harnad into having to abandon intentionality the moment the sensors are removed.
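
To make the claim concrete, consider a minimal sketch of an agent’s control loop written against an abstract percept stream. The class and method names below are hypothetical stand-ins (not any existing robotics interface); the point is only that the algorithm is identical whether the stream is produced by a simulator or by a physical transducer.

from abc import ABC, abstractmethod

class PerceptSource(ABC):
    @abstractmethod
    def read(self):
        """Return the next percept as a vector of intensities."""

class SimulatedCamera(PerceptSource):
    """Percepts rendered by an artificial-reality simulator (hypothetical)."""
    def __init__(self, world_model):
        self.world_model = world_model
    def read(self):
        return self.world_model.render()

class PhysicalCamera(PerceptSource):
    """Percepts delivered by an actual transducer (hypothetical device stub)."""
    def __init__(self, device):
        self.device = device
    def read(self):
        return self.device.capture_frame()

def control_step(source, policy):
    # One step of the agent: the same code runs unchanged against either source.
    percept = source.read()
    return policy(percept)

Swapping SimulatedCamera for PhysicalCamera changes nothing in control_step; on the Strong AI reading, it therefore should not change whether the system is grounded.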

Lost at the Top: Symbol Processing and Intrinsic Meaning

Harnad (1990a), like many, uses the terms "symbol" and "symbol processing" in several ways. One moment he admits that symbolic systems can simulate connectionist systems; the next moment he refers to connectionist systems as "nonsymbolic". I myself have been guilty of this terminological sloppiness, but the fact is that the "non-symbolic versus symbolic" debate between AI-ers and connectionists is not at the level of Turing equivalence but at the level of which representations of symbols are best and how best to manipulate them. For example, we want symbol representations that can be automatically formed from sensory experience, rather than being arbitrarily prespecified as ASCII strings by computer engineers (Dyer 1990a,b,c).

Harnad sees no capacity for meaning in a "symbol cruncher (SC)" because it just manipulates "symbols (... scratches of papers, holes on a tape, or states of flip-flops in a machine) purely on the basis of their shapes (not on the basis of their interpretations)". I would like to know what these "interpretations" are that the SC must manipulate before it can be said to be "intentional". I assume here that by "interpretations" Harnad means "meanings" or "intentions". It seems to me that Harnad is requiring that these "intentions" or "meanings" already be resident within the system before the system can be said to be intentional. Applied to non-intentional capacities, this position becomes more clearly absurd -- it is like requiring that "sorted-ness" already be manipulated within a sort program before the system can be described as performing "real" sorting. Harnad (like Searle) requires "intrinsic meaning" before he will ascribe intentionality. On the contrary, a great advance in cognitive science has been the realization that one can build a system out of non-intentional parts whose total interactions will produce an overall behavior that is intentional. If the parts had to have intentionality to begin with, we could never get the enterprise off the ground.

How does a computer determine the "shape" of a symbol? The computer makes use of physical processes to discriminate shapes by examining the physical state configurations that make up each shape. But what does a neuron do? It also makes use of physical processes to discriminate physical state configurations. I think that Harnad forgets that computers are as physical as neurons (but in different ways -- more about this in the last section). Once the algorithm is loaded into memory and the machine is turned on, there are no symbols. We can talk about the machine’s performance in terms of symbols, but the machine is actually opening/closing gates, shunting electrons around, etc. Searle and Harnad hope that neurons do something "magical" which cannot ever be placed into correspondence with physical manipulations of physical configurations within distinct materials (i.e. "holes on a tape").

There are two questions involved here. First, can intentionality arise in material substances and organizations as different from biology as computers are? Second, if computers can be intentional, will their intentionality differ because the underlying material is so different?

The Embodied Algorithms Correspondence Problem

Harnad, Searle, and others, e.g. (Penrose 1989), are concerned with the role that physicality plays in creating Mind. The Dualists claim that Mind is not Matter; the Materialists -- that Mind is only Matter. The Functionalists (Fodor 1981) claim that, while Matter is needed to realize Mind, all that counts is the organization of Matter. The Physicalists feel that, although Mind arises from the organization of Matter, different forms of Matter must have some influence on Mind -- i.e. that "Matter matters" in making Minds.

The problem with the Physicalists is that, while they are right that "Matter matters" at some level, they go too far -- concluding that certain forms of matter (e.g. computers) can never have Mind or that certain physical sensors are always needed before Mind can come into being. So, in what way does "Matter matter"?

Until an algorithm is embodied on a physical computing machine it remains merely an idealized abstraction. That is, every algorithm, until it is embodied in some machine, is underspecified. This underspecification results from the fact that the algorithm never includes a specification of the machine on which it will execute. This description is never included because the description alone of a machine will not suddenly bring that machine into being, nor will it allow the algorithm to be embodied -- only the actual, physical machine can realize the embodiment. Each type of machine contains a distinct causal/physical architecture -- one which leads it to pass through state changes in real time, under the laws of physics. The algorithmic level is selected because it can be developed and described independently of any particular machine and, with suitable mappings (produced by compilers), be run on different machine architectures.

Contrary to Searle’s accusation -- that AI is Dualist -- I do not think that AI researchers would grant Mind to an unembodied algorithm (I certainly do not!). So it is not the abstract, idealized algorithm that is intelligent, comprehending, intentional, etc.; it is the physical, executing embodied algorithm that is intelligent, intentional and so on. Now, to embody an algorithm means to create a physical machine (by direct wiring, firmware, software, etc.) that performs "in accordance" with the abstract algorithm. At this point, a major problem enters, which I call the "Embodied Algorithms Correspondence (EAC) Problem". But first, let us briefly digress and consider how the term "algorithm" is actually used in computer science.

At the I/O level, an algorithm implements a mapping, a function. But at a more detailed level of organization, an algorithm specifies the computational processes used to realize that function. To AI researchers who design process models of cognitive tasks, differences in the way in which systems process information, to perform the task, are important, even if the overall I/O behavior is the same. There are usually an infinite number of ways to realize a given function -- each with different complexity measures, memory usage, and so on. For example, the bubble-sort and quick-sort algorithms both achieve the same functional mapping, but the processes are very different. We can trivially generate an infinite number of bubble-sort variants, simply by adding ‘dummy’ instructions -- ones that perform additional, useless computations, but do not alter the overall function. To make matters more complicated, the same abstract algorithm (e.g. bubble-sort) can be specified in different higher-level languages (e.g. FORTRAN versus Pascal), which leads to differences in the compiled algorithms. A given algorithm, specified in a higher-level language, can be compiled in many different ways, at intermediate (assembly code) and machine-language levels. For example, one assembler/loader may load the object code into a different segment of memory than another. Finally, the hardware of two different machines, even with compilers/assemblers producing identical binary object code, will embody the same higher-level algorithm in a completely different circuit diagram, resulting in completely different flip-flop/gate state changes. Since it is the executing, embodied algorithm that is going to be "intelligent" etc., two identical high-level algorithms, A1 and A2, when executing on two different machines, are actually realizing two embodied algorithms, e-A1 and e-A2, that are not identical at the levels below the abstract level.
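
For readers who want the function/process distinction pinned down, here is a minimal sketch (in Python rather than the FORTRAN or Pascal of the example) of three algorithms that realize one and the same sorting function through different processes; the padded variant is the kind of trivial ‘dummy’ padding described above.

def bubble_sort(xs):
    # Repeatedly swap adjacent out-of-order elements.
    xs = list(xs)
    for i in range(len(xs)):
        for j in range(len(xs) - 1 - i):
            if xs[j] > xs[j + 1]:
                xs[j], xs[j + 1] = xs[j + 1], xs[j]
    return xs

def quick_sort(xs):
    # Partition around a pivot and recurse -- a very different process.
    if len(xs) <= 1:
        return list(xs)
    pivot, rest = xs[0], xs[1:]
    return (quick_sort([x for x in rest if x < pivot]) + [pivot] +
            quick_sort([x for x in rest if x >= pivot]))

def padded_bubble_sort(xs):
    # Extra, useless computation: yet another algorithm, same function.
    _dummy = sum(range(1000))
    return bubble_sort(xs)

data = [3, 1, 4, 1, 5, 9, 2, 6]
assert bubble_sort(data) == quick_sort(data) == padded_bubble_sort(data)

All three agree at the I/O level, yet their step-by-step processes (and hence their embodiments on any machine) differ.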

So an idealized algorithm is always specified at some higher level of abstraction than the actual machine on which it is to execute, while its embodiment is always realized in the physically changing machine, which operates at the lowest levels of specific physical state changes (ultimately, the quantum level). To make matters worse, no two machines can ever be absolutely physically identical. Even two model M1 workstations (with similar memory, CPU configurations, programming, etc.) are physically different: one may have been manufactured a week later; each sits in a different physical location and thus gets bombarded by different cosmic rays at different angles; the heat in the two locations varies; each is plugged into a different section of a power grid that receives slightly different power fluctuations, and so on. As a result of these physical differences between identically organized machines, if we continuously run the same algorithm A on both machines (with the same object code, even in the same locations in memory, starting at the same moment) over a long enough period of time, one of the machines will experience a hardware failure that the other machine will not. As a result, the I/O of the two embodiments of A will be different -- leading to the result that identical algorithms for identical functions on ‘identical’ machines are not (ultimately) even functionally equivalent!

If every embodied algorithm is functionally nonequivalent, then how can anyone ever take the Strong AI position -- i.e. that a suitable algorithm, running on a wide variety of machines, will embody a similar Mind? Again, the answer has to do with how the terms "suitable algorithm" and "similar Mind" are interpreted.

Let us assume that a marvelously complex algorithm (MCA), when executed, is able to pass the TTT. This algorithm may be connectionist or symbolic or a hybrid or something else -- none of these details matter. Now, we embody MCA on three machines: SM/333 (serial number #333), SM/334 (serial number #334) and MPP. SM/333 and SM/334 are identical models and have incredibly fast single-processor architectures. The MPP is a massively parallel machine with many slower processors. While executing MCA, each computer controls a distinct robot body, R/333, R/334 and R-MPP. Now, we want to compare the postulated Minds of each machine/robot.

If, like Searle, you want to deny these robots their Minds, then imagine that each model consists of biological components, grown in a vat, where the "software" is "loaded" by carefully altering the synapses directly. (Remember, loading software in a standard computer also requires directly modifying something physical in its memory.) The point here is that we are trying to compare the Minds of entities with algorithm/hardware configurations that are similar/different in material/organization.

Well, there are some major difficulties to overcome before comparisons can be made. One problem is that, the moment the robots are turned on, by virtue of each being in a distinct location, each will receive different sensory inputs and their Minds will immediately start diverging. A second problem is that each computer will experience different sorts of hardware failures and, due to both differences in the location of failures and the organization of the hardware, the robots will exhibit different forms of I/O behavior.

But if we are going to attempt to compare Mind-like capacities (which is essential, in the long run, for a cognitive science) then which pair of robots would be more similar? At first, if all are behaving in similar ways, one could argue that they contain similar Minds (modulo their continuing divergence). One could also try to argue that, since R/333 and R/334 are identical robot models, they would be more similar in Mind than R-MPP, whose hardware organization is distinct. What would be the basis for such an argument? The argument would be that, since an MPP is physically different in organization from an SM, its lowest-level embodied algorithm is going to be physically very distinct (i.e. each little MPP processor is doing different things -- executing a different, lower-level algorithm than the super-fast single processor of the SM).

But then, if the embodied algorithms are distinct, there is no conflict with the Strong AI position, since this position is:

(1) similar AI algorithms produce similar Minds, and

(2) these AI algorithms must be embodied in some machine to realize Mind,

and the embodied algorithms are agreed here to be radically different at the lower levels. In fact, the differences in architectures of the SM and MPP models will lead to predictable classes of differences in the kinds of fault tolerance and/or buggy behavior the robots will ultimately exhibit.

But, as long as faults do not occur, one could say that the SM-model and MPP-model robots are experiencing a "similar Mind" at the level of abstraction specified by the algorithm MCA.

Since Minds can never be absolutely identical, the issue must become one of being able to characterize how similar Minds will be, along different dimensions and at different levels of abstraction. The real issue, then, is:

Which pairs of Minds are more similar?

(1) Two completely different algorithms CA1 and CA2, running on identical machines, or

(2) An identical abstract algorithm CA (i.e. identical at some level of abstraction) running as distinct embodied algorithms, on two distinct machine architectures.

A Functionalist concerned with process modeling works from the top down, accepting two Minds as more and more similar as their algorithms continue to match, as one drops from higher levels of abstraction to the lower levels of embodiment. As a result, two Minds will be said to be more or less "similar" based on how low a level of abstraction one can drop to until their processes for realizing cognitive phenomena diverge. The justification for this approach is that it allows one to capture a wide range of similar behaviors across different types of machines.
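
As a toy rendering of this comparison scheme -- purely illustrative, with hypothetical level names and traces -- one can score two systems by how far down an ordered hierarchy of descriptions their processes continue to agree.

# Levels ordered from most abstract to most concrete (hypothetical labels).
LEVELS = ["I/O function", "abstract algorithm", "source language",
          "object code", "gate-level state changes"]

def depth_of_agreement(trace_a, trace_b):
    # Count how many successive levels, from the top, the two descriptions share.
    depth = 0
    for level in LEVELS:
        if trace_a.get(level) != trace_b.get(level):
            break
        depth += 1
    return depth

mind_1 = {"I/O function": "TTT-passing", "abstract algorithm": "MCA",
          "source language": "Pascal", "object code": "SM binary"}
mind_2 = {"I/O function": "TTT-passing", "abstract algorithm": "MCA",
          "source language": "Pascal", "object code": "MPP binary"}

print(depth_of_agreement(mind_1, mind_2))  # 3: they diverge only below the source level

On this scheme, the SM- and MPP-embodied versions of MCA count as highly similar Minds, even though their embodied algorithms differ completely at the bottom.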

A Physicalist, however, appears to work from the bottom up, accepting two Minds as similar only as long as their lowest levels match -- but immediately we have a problem. If the lowest levels of material and organization are the same, then all the higher levels will be the same automatically. But this is trivially true and accepted by everyone, and does not allow us to compare any two machines that are not already identical. When a Physicalist starts from the bottom up, he postulates that the material of the machine so overrides the organization that machines without similar materials cannot even be compared, no matter how much their organizations match at higher levels of abstraction. How much organization the Physicalist allows in -- before rejecting comparison on the basis of material substance -- determines how strong a Physicalist one is. Searle cannot grant Mind to any kind of general purpose computer, since its material (e.g. silicon versus neurons) counts so much more than organization. In contrast, Harnad accepts organization at higher levels of abstraction for Mind, but insists on special materials (i.e. transducers) for a portion of the organization (i.e. the sensory grounding part). Thus Harnad abandons intentionality if the sensory devices are not of a certain physical (i.e. energy transduction) sort.

Computer scientists have always accepted that different architectures are different (in speed, fault tolerance, etc.) but continue to compare performance in terms of algorithms, since algorithms allow one to specify how similar or different organizations, and their corresponding dynamical processes, behave across distinct architectures. Since every mind will be unique and since other minds are impenetrable, what counts is a language for comparing the levels at which minds will be similar in information processing.

Recall the Gothic church example (Dyer 1990c, section 2.4). Two Gothic churches (one of clay bricks; one of ceramic bricks) are more similar than a skyscraper and a Gothic church that happen to be made out of bricks of the same material. In spite of the fact that the "Gothic-ness" of a church depends on the organization of the bricks (not their material), a Gothic church made out of ceramic bricks will react differently to, say, an earthquake, than a Gothic church made out of clay bricks. If we define "Gothic-ness" at the style level, the material does not matter. If, however, we include all possible responses in all environments, then material differences will exhibit effects. My position is that Mind arises due to style (organization) and so can be constructed from different building blocks. But differences in Matter will have effects, resulting in each Mind being unique in some way.

Teleportation Revisited

Harnad (1990b) states that the Strong AI position allows one’s (unique) intentionality to be teleported -- well, yes and no -- it depends on how similar the embodied algorithms remain after teleportation. Suppose you are going to be teleported to one of two planets (P1 or P2). Planet P1 has an earth-like environment; thus, the teleporter analyzes your atomic structure and disintegrates you. It sends a beam of light to P1, where a reconstructor reads the information in the beam and rebuilds you, atom for atom. If you believe in the Strong AI position, then your interpretation is that your original Mind has been destroyed and an equivalent replica made on P1. Due to quantum effects, the replica won’t be absolutely identical, but one’s atoms are being recycled continuously anyway, even when standing still. Most holders of the Strong AI position should agree to use such a [teleporter + reconstructor] system.

Planet P2 is a very inhospitable planet. Consequently, the reconstructor on P2 cannot reconstruct you atom for atom. Instead, an incredibly complex robot (ICR) has been built that can survive in the environment. The information transmitted by the beam of light causes the reconstructor to "download" a software program whose organization is isomorphic to the organization of whatever causal/physical processes are known (by future cognitive scientists) to be involved in creating one’s Mind. When you are teleported back, the reconstructor on Earth performs an ICR-->human-form transformation, modifying each neuron, synapse, dendrite, etc. during reconstruction so that the experience of having visited P2 is retained. Would a Strong AI believer agree to be teleported to planet P2?

If the robot’s brain and sensors/effectors are complex/isomorphic enough, I would not be afraid that I’m being teleported into a "meaningless symbol cruncher", but rather that the lowest (embodiment) level of my Mind might be sufficiently different in the robot embodiment that the unique part of my subjective experience would not arise in the robot. That is, I accept that the ICR would be intentional (since I accept the premise of computationalism -- that thinking is computing), but, due to its different material/organization, the ICR-embodied mind-algorithm might be sufficiently different from my normal brain-embodied mind-algorithm to fail to sufficiently correspond to my unique intentionality (again, the EAC problem). In addition, the effect of the environment on this different ICR material/organization might cause my transmitted intentionality to rapidly diverge into some sort of alien form of intentionality.

Notice that, before one can create an ICR-->human-form reconstructor, one must have solved the EAC problem, at least with respect to these two distinct forms of hardware/wetware. If solved in this case, one consequence would be that a human H need not even travel to planet P2. Instead, a replica of H’s mind could be beamed to P2 and the H-embodied ICR could experience P2 for several weeks. Then the resulting mind could be beamed back to Earth. At this point, H would agree to be disintegrated and reconstructed with the ICR experiences integrated into H. Thus, H could acquire new experiences without losing any time on Earth.

As a Computationalist, I believe that intentionality arises due to the computations realized by the organization of different forms of Matter. But since we do not know why or how Organization (algorithm) and Matter (hardware) combine to produce the special subjective nature of Mind, the nature of the Mind’s subjectivity may be subtly altered as the matter is altered (even though the organization remains the same at higher levels of abstraction). An example of how this alteration can happen is through distinct hardware faults arising in distinct kinds of materials. So Matter does matter to me, but not to the extent championed by Searle and Harnad. For me, the extent to which different forms of matter and organization lead to similar minds (the EAC problem) is an extremely difficult problem. Complexity and fault-tolerance analysis of algorithms and hardware architectures represents only the barest, incipient steps toward addressing this problem.

Conclusion

Generalizing from introspection, I have arrived at the belief that the physical execution of systems with certain (as yet unknown) organizations of Matter can give rise to subjective experiences within those systems. What the subjective experiences (if any) are for numerous cases -- e.g. insect, thermostat, bat, parallel computer running AI/connectionist software, dog, ape, visiting alien, or another person -- I do not know; neither do I know why the particular subjective nature of a given experience arises. While I do not want to grant civil rights to any elaborate deception, neither do I want to deny civil rights to any intentionality similar to my own.

Physicalists cannot prove that machines cannot ultimately have "real" intentionality any more than Functionalists can prove that they can. If, however, we accept that the organization of some kinds of matter (e.g. ourselves) can bring about Mind, then, in the face of the "Other Minds" problem, how can we restrict Mind ahead of time to only certain types of Matter? The various physicalist positions seem chauvinistic, to say the least. I have argued here that the algorithmic approach of computationalism, tempered by an awareness of the "Embodied Algorithms Correspondence" problem, is better suited for recognizing and comparing the possible Minds of physical systems (whether electronic, biological, photo-optical, etc.) than the overly restrictive approach of strong physicalism.

References

Dyer, M. G. (1990a) Connectionism versus Symbolism in High-Level Cognition. In T. Horgan and J. Tienson (eds.). Connectionism and the Philosophy of Mind. Kluwer Academic Publishers, Boston MA. (in press).

Dyer, M. G. (1990b) Symbolic NeuroEngineering for Natural Language Processing: A Multilevel Research Approach. To appear in J. Barnden and J. Pollack (Eds.), Advances in Connectionist and Neural Computation Theory, Vol. 1. Ablex Publ., 1990.

Dyer, M. G. (1990c). Distributed Symbol Formation and Processing in Connectionist Networks. Journal of Experimental and Theoretical Artificial Intelligence (in press).

Dyer, M. G., Flowers, M. and Y. A. Wang. (1990). Distributed Symbol Discovery through Symbol Recirculation: Toward Natural Language Processing in Distributed Connectionist Networks. In R. Reilly and N. Sharkey (Eds.). Connectionist Approaches to Natural Language Understanding. Lawrence Erlbaum Assoc. Publ. (in press).

Fodor, Jerry A. (1981). The Mind-Body Problem. Scientific American. 244 (1), 114-123.

Harnad, S. (1990a). The Symbol Grounding Problem. Physica D, in press.

Harnad, S. (1990b). Lost in the Hermeneutic Hall of Mirrors. Journal of Experimental and Theoretical Artificial Intelligence, in press.

Harnad, S. (1990c). Communication between Harnad and Joslin, electronic mail group posting, February.

Joslin, David (1990). Communication between Joslin and Harnad, electronic mail group posting, February.

Lee, G., Flowers M. and M. G. Dyer. (1990). Learning Distributed Representations for Conceptual Knowledge and their Application to Script-Based Story Processing. Connection Science (in press).

Miikkulainen, R. and M. G. Dyer. (1989). A Modular Neural Network Architecture for Sequential Paraphrasing of Script-Based Stories. Proceedings of the International Joint Conference on Neural Networks (IJCNN-89). pp. II-49 - II-56, IEEE.

Penrose, R. (1989). The Emperor’s New Mind: Concerning Computers, Minds, and The Laws of Physics. Oxford University Press, Oxford.

Searle, J. R. (1980). Minds, brains and programs. Behavioral and Brain Sciences, 3 (3), 417-424.