
Models of understanding – minds and machines

 

Abstract: This article is about minds and machines, or, to be more precise, about cognitivism as a model for understanding the mind. The subtitle is a quotation from Hilary Putnam's famous 1960 paper Minds and Machines, a sort of manifesto for functionalism in the philosophy of mind. Although Putnam himself came to reject functionalism, to a large extent because he now thinks that reason cannot be naturalized, functionalism is central to what I will call cognitivism. I will be considering cognitivism in a historical, rather than argumentative, way. Basically I want to consider two things: (i) some ideas of authors (such as H. Putnam, J. Searle, A. Turing, H. Simon and D. Dennett) which I think may help us take a stand in an ongoing discussion concerning the relevance of cognitivism for thinking about the nature of mind; (ii) a noteworthy consequence of cognitivism: although cognitivism uses computational machines as a model, or metaphor, for understanding the mind, the result is a weakening, and eventually an abandonment, of the natural / artificial dichotomy which we might assume that model presupposed.

 

1. Putnam’s functionalism plus Fodor’s Language of Thought

 

Hilary Putnam's formulation of functionalism in the 1960s, in papers such as Minds and Machines and The Nature of Mental States, is a historical landmark in the philosophy of mind. Putnam's basic idea is that mental states (contrary to what Identity Theory materialists defended) are not brain states, but rather functional states implemented in the brain (although they could be implemented in other hardware – the idea of the multiple realizability of mental states goes along with functionalism). In fact, it is exactly because mental states can have diverse physical realizations that they should not be identified with brain states. What Putnam suggests, then, is that the mental states of beings such as us stand to neurophysiological states in a relation similar to that in which the logical (functional) states of computational machines stand to the physical states of those machines ('the mind is to the brain as software is to hardware'). With this position Putnam intends to dissolve the mind-body problem. In other words, he wants to show that the mind-body problem is not a genuine theoretical problem, but merely linguistic and logical in nature. According to Putnam, the same problem would arise for any cognitive system capable of self-monitoring and of producing self-descriptions, if there were in such a system, as there is in us, an asymmetry between access to the logical level, the program level, and access to the physical level. The program level is the level with respect to which the system is incorrigible – as we can see from the status of statements such as 'I know I feel pain', 'I know I think that p'. There is no such incorrigibility in the system's access to its physical level (each of us must learn about his or her own brain – the fact that our brain causes our mind does not by itself turn a person into a neuroscientist). Putnam's functionalism makes us look at the (supposed) mind-body problem as related to this asymmetry, and not to some 'unique nature of human subjective experience'.
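To make the 'software to hardware' analogy concrete, here is a minimal sketch (a toy example of my own, not Putnam's) in which one and the same functional description – a small transition table – is realized in two physically different ways; the states, inputs and outputs are all invented for illustration:

```python
# Multiple realizability, in miniature: one abstract "functional"
# state machine, realized in two different "hardwares". The machine
# and its vocabulary are hypothetical; the point is only that the
# functional description never mentions how states are stored.

# Functional level: (state, input) -> (next_state, output)
TRANSITIONS = {
    ("idle", "ping"): ("alert", "ack"),
    ("alert", "ping"): ("alert", "ack"),
    ("alert", "reset"): ("idle", "ok"),
}

class StringRealization:
    """Realization 1: the state is held as a Python string."""
    def __init__(self):
        self.state = "idle"
    def step(self, symbol):
        self.state, out = TRANSITIONS[(self.state, symbol)]
        return out

class BitRealization:
    """Realization 2: the state is held as an integer flag --
    a different 'physics', the same functional organization."""
    _encode = {"idle": 0, "alert": 1}
    _decode = {0: "idle", 1: "alert"}
    def __init__(self):
        self.flag = 0
    def step(self, symbol):
        state, out = TRANSITIONS[(self._decode[self.flag], symbol)]
        self.flag = self._encode[state]
        return out

# Both realizations pass through the same functional states given
# the same inputs, although their physical descriptions differ.
for machine in (StringRealization(), BitRealization()):
    print([machine.step(s) for s in ("ping", "ping", "reset")])
    # -> ['ack', 'ack', 'ok'] in both cases
```

The functional description at the top never mentions how a state is physically stored, which is the sense in which, on this view, mental states should not be identified with brain states.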

Functionalism, together with the idea that the functional level is an autonomous, symbolic, representational-computational level (elaborated for instance by Jerry Fodor in his 1975 book in terms of the Language of Thought Hypothesis, which may be summarized as 'no representations, no computations; no computations, no mind'), constitutes what I am calling cognitivism.

 

2. Strengths and weaknesses of the model

 

What is wrong and what is right with this model for understanding the mind? How important is it today in the philosophy of cognitive science? How committed is the model – which makes us look at the nature of minds through the lens of computational machines – to a natural / artificial dichotomy? The first obvious comment is that this picture of the nature of mind does not immediately call our attention to something that may seem essential to our specific kind of mind: consciousness. On the contrary, it can very easily allow for the mental character of that which has never been, and will never be, conscious. From a neo-Cartesian point of view such as John Searle's, for instance, this is almost heretical, and in any case an utterly unjustified position. Since the 60s much has been going on in the philosophy of mind and in cognitive science, and I will try to describe some of that history here in order to address the questions above. John Searle's work from the 80s and 90s is very helpful here: not only does he point directly at what is lacking in the cognitivist model of the mind (namely, consciousness), but he also clearly sees the close relation between cognitivism and a specific discipline within cognitive science, Artificial Intelligence, and the significance of this relation. In fact, with his celebrated Chinese Room thought experiment, Searle intends not only to contest cognitivism but also to argue for the impossibility of what he calls Strong AI (Strong AI, in Searle's own definition, is the idea that AI is concerned not only with the simulation of cognition but eventually with the real thing – any physical system that runs the right program will have a mind; a system's being intelligent, or even conscious, depends only on the right kind of functional organization, and so on programming, not on hardware). Insofar as that 'condition of mentality' can be formulated independently of a system's physical build-up and biological origins, it is conceivable that physical systems other than humans, namely artificial systems, will be, for the exact same reasons as humans, intelligent and conscious.

But what does AI, as a discipline, have to do with cognitivism? Since the beginnings of AI, philosophers have taken an interest in it – a very natural interest, given that philosophers have always been interested in the nature of thought and in its relation to the physical world. This interest produced extreme positions about AI, ranging from the proclamation of the impossibility in principle of non-natural intelligence and consciousness, to the conviction that through AI a more general and abstract conception of the nature of intelligence would be reached, one which would allow us to see human beings and all intelligent beings – natural or non-natural – as examples of one and the same general phenomenon.

In contemporary philosophy of mind John Searle is a well-known critic of both cognitivism and Strong AI (whereas philosophers such as David Chalmers and Daniel Dennett, for instance, defend both). The reason Searle's Chinese Room thought experiment is always brought in as a reference is that it can be seen as cooling down the excitement about cognitivism and AI. So it is especially interesting to notice that in this thought experiment the critique of cognitivism takes the form of an intuitive test of the difference between genuine mentality and simulated, merely attributed, mentality.

Before coming back to the Chinese Room, which is, as I said, an inevitable reference for thinking about cognitivism as a model of the mind, and because I pointed out a close connection between cognitivism and Strong AI, I want to make a few remarks about the contributions of a philosopher who in a way preceded Searle as the official philosophical critic of AI. That philosopher is Hubert Dreyfus, and some of his ideas were passed on more or less explicitly to Searle (although Dreyfus is generally closer to European phenomenology than Searle). Dreyfus noticed early the overlap of interests between AI and philosophy, and the result was a series of polemical works such as Alchemy and Artificial Intelligence (1965) and What Computers Can't Do (1972). There he criticized the claims of AI and insisted on the relation, which seemed obvious to him, between the work then being developed in AI – based on a conception of mind as a symbolic system and an idea of intelligence as problem-solving – and the rationalist and intellectualist tradition in philosophy. Dreyfus's point was that AI was repeating the intellectualist errors in the conception of mind and intelligence already pointed out by philosophers such as Heidegger, Wittgenstein and Merleau-Ponty. His rejection of a conception of mind as symbolic representation of the world, and of a conception of intelligence as rule-governed problem-solving, was based on the conviction that these conceptions excluded important and basic parts of the mental. Those 'excluded parts' were, for instance, body movements and pattern recognition, which, according to Dreyfus, underlie the possibility of the explicit skills involved in representing and problem-solving.

Dreyfus's writings presented a sober perspective on some of the exaggerated forecasts that had accompanied the beginnings of AI. Although some of Dreyfus's own forecasts have simply been proved wrong (he was convinced, for instance, that a computer could not beat a human at chess), what Dreyfus wanted to criticize was the aprioristic assumption that mind consists in symbolic representations and the rule-governed manipulation of such representations. According to this intellectualist view, even a cognitive skill such as perception would ultimately consist in problem-solving by applying rules. This conception was, for instance, totally oblivious to something which, in Dreyfus's eyes, plays a fundamental role in cognition: background knowledge, or common sense. Background knowledge is not knowledge of facts but, in the case of people, what they know without knowing that they know it – something they never learned but know how to act upon (for example, that people move more easily forwards than backwards, or that if one spills water on a towel on a table it will eventually get to the legs underneath the table). Other characteristics of human cognition which were absent from these early attempts at simulation, and which Dreyfus pointed out, were fringe consciousness, the tolerance of ambiguity, a proper body in the world that organizes and unifies the experience of objects and subjective impressions, the capacity for boredom, fatigue and loss of motivation, and the intentions and interests that guide the way human subjects confront situations in the world, making it the case that not all facts in the world are equally relevant at a given instant, and so differentiating 'the world'. Some of Dreyfus's criticisms have since been simply incorporated into the development of AI, and Dreyfus himself was quick to admit the proximity between the principles of connectionism and the anti-intellectualist tradition in philosophy.

Let's then move on to Searle, who is perhaps the best-known critic of the limits of the cognitivist model of the mind. Searle likes to say that cognitive science is an exciting field of research based on a conceptual error concerning the nature of mind – cognitivism is the general name for that error, and the Chinese Room thought experiment was the first attack Searle launched against it. Minds, Brains and Programs, the paper where the argument was first presented, appeared in the journal Behavioral and Brain Sciences in 1980, and the Chinese Room thought experiment hasn't left the philosopher of mind's tool-kit since. The Chinese Room consists of the following: somebody who doesn't speak any Chinese is locked inside a room where there are Chinese symbols in boxes. This person also has a book of instructions in English, which tells her how to combine and transform symbol sequences in order to send them out of the room when other Chinese symbols are introduced through a small window. The person inside the room knows nothing about this, but the people outside call the symbols that go in 'questions' and the symbols that come out 'answers'. Therefore, from the perspective of those people outside the room, verbally interacting with it, the system speaks Chinese. Thus, the system behaves intelligently and passes the 'Turing Test' – although the person inside knows very well that she does not understand a word of Chinese. Searle claims that the Chinese Room makes obvious the possibility of a system that has 'attributed intentionality' but no 'intrinsic intentionality' or 'genuine semantics'.
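As a deliberately trivial sketch of what the person in the room is doing – pure symbol-to-symbol lookup, with the rule-book entries invented here for illustration – one might picture something like this:

```python
# The person in the Chinese Room, reduced to its syntactic core:
# a lookup that maps incoming symbol strings to outgoing symbol
# strings. The 'rule book' below is a hypothetical toy; nothing in
# the procedure involves knowing what any symbol means.

RULE_BOOK = {
    "你好吗": "我很好",        # if this shape comes in, send that shape out
    "你会说中文吗": "会",      # another purely formal rule
}

def room(symbols: str) -> str:
    # Match the incoming shapes against the book and copy out the
    # prescribed answer -- syntax without semantics.
    return RULE_BOOK.get(symbols, "请再说一遍")  # default: 'please repeat'

print(room("你好吗"))  # from outside, this looks like understanding Chinese
```

The outside observers would call the inputs 'questions' and the outputs 'answers'; the lookup itself, Searle insists, understands nothing.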

It is not easy to say what exactly the argument is supposed to prove. In fact, as Searle admits, the Chinese Room is more of a parable than an argument. If it were presented as an argument it would go from the premises 'programs are syntactic', 'syntax is not sufficient for semantics' and 'minds have semantic content' to the conclusions 'implementing a program is not sufficient for mind' and 'Strong AI is unjustified'. In any case, the Chinese Room is supposed to show that mind is not a program, and that appropriate programming could therefore never give a system a mind, since formal properties do not constitute genuine intentionality. Searle always stresses that his argument has nothing to do with any specific state of technological development, but concerns only conceptual principles: cognitivism is wrong in considering that formal properties per se would be sufficient for mind (this position would go with the defense of Strong AI). For Searle, the essence of mind is consciousness, and the existence of consciousness is a biological fact. Although in the initial formulation of the Chinese Room Searle invokes semantics and not consciousness ('syntax is not sufficient for semantics', he says), the reason Searle thinks one cannot speak of syntactic mental processes without speaking of semantics is that he thinks semantics and consciousness are intimately connected. Basically, Searle thinks that one cannot legitimately consider anything as mental except for its relation (current or potential) to consciousness. This is what Searle calls the Connection Principle, which he in fact uses in another argument against cognitivism, formulated in the 90s (in a 1992 book called The Rediscovery of the Mind). The line of argument goes, roughly, like this: syntax is not a physical property; cognitivism assumes that physical events are syntactic; therefore cognitivism is based on a fallacy. Searle calls this the homunculus fallacy – what he means is that cognitivist explanations of mental phenomena treat the brain as if there were some agent inside it doing symbol manipulation and computations. According to Searle, symbols and computations are not intrinsic features of the world. Yet cognitivism, invoking syntactic properties to explain mental phenomena, totally overlooks this, and treats properties which are there only for an observer as if they were natural properties. But if syntax is not a feature of the natural world but an interpretation of physical events, dependent on an observer, then syntactic descriptions of cognitive systems, assigned relative to observers, simply cannot do any explanatory work. And so cognitivism is flawed.

Several aspects of Searle's 'Critique of Cognitive Reason', as he calls it, may be unified by noticing that for Searle the Connection Principle is a basic principle which should be used in thinking about the mind. It states that one can conceive of something which is currently unconscious as mental only insofar as one can think of it as a possible content of consciousness. Only this – being potentially conscious – distinguishes intrinsic intentionality from ersatz intentionality (otherwise, how could we even begin to distinguish a neuron from a non-evoked memory, calling one mental and the other not?). So, like Descartes, although from a materialist point of view, Searle believes that consciousness is the essence of the mind. Now, for Searle, consciousness is a physical property of the brain, characterized by ontological subjectivity. Something is ontologically subjective if we cannot describe it from a third-person point of view (which is what we try to do when we investigate, for instance, the neurophysiology of consciousness). The problem that the existence of mind in the natural world poses is connected with its ontological subjectivity. It is because of ontological subjectivity that consciousness, although a physical property of the brain, is irreducible to any other physical feature. This ontological sense of subjectivity (the idea that this world is such that there are irreducibly subjective elements in it) must not be confused with the epistemological sense of subjectivity (which concerns the preconceptions that are supposed to be eliminated in the pursuit of objectivity that is part of the spirit of science). If we accept this distinction, we next have to ask how we can have an objective conception of the ontologically subjective facts of consciousness. Searle's answer is biological naturalism: the idea that consciousness is a biological feature of the brain, human and animal, and an emergent property (like liquidity). Searle formulates the question in these terms because he thinks many materialists are wrong to suppose that without admitting reduction (something Searle himself does not admit) one necessarily accepts dualism – for Searle that need not be the case.

Let's go back to the Chinese Room, which, after all, opens the door to Searle's discovery of the irreducibly subjective element of this world. Searle uses it as an instrument to criticize the artificiality of simulations of cognition, and the whole critique turns on the distinction between original, genuine intentionality and attributed intentionality. But exactly how are we supposed to distinguish between genuine intentionality and merely attributed intentionality? Searle invokes the causal powers which some physical systems (namely human brains) have and others don't, as well as the Connection Principle. But from a rhetorical point of view, what Searle wants from us is that we identify with the human being inside the room, manipulating the symbols and lacking any understanding. As is well known, the basic reply to Searle is the so-called systems reply: it is wrong to attribute understanding to the executor of the program (incidentally a human). Understanding is supposed to characterize the system as a whole, and that includes the pieces of paper with the rules and symbols. It is worth considering here, by the way, that our neurons don't have any understanding of the language – Portuguese or English or Swedish… – that we speak, either. Yet we wouldn't doubt that we, the global system, do in fact understand Portuguese or English or Swedish – the lack of genuine intentionality in our neurons is no proof of a lack of genuine understanding in us. It isn't fair, though, to take all this to mean that Searle thinks the distinction between genuine and attributed intentionality corresponds to the distinction between 'natural' and 'artificial' – Searle is no simplistic critic of AI. He does have a problem here, though: his appeal to intuition, on which the whole critique of cognitivism in fact rests. I will now look for another approach to these same issues (cognitivism, Strong AI, intentionality, the meaning of 'natural' and 'artificial') which does not share that problem.

 

3. Is there only one way intelligence can be artificial?

 

Until now I have been considering only philosophers; now I want to make a detour that will take us outside the discipline. Notice that the model of mind I have been considering makes it very clear how intelligence could be artificial: through the lens of the materialist and dualist conception of cognitivism, we see mind as implemented software, a software which can be implemented in hardware other than the biological. That's how intelligence could be artificial. Now I want to bring in another idea concerning the nature of the natural / artificial relation, an idea coming from Herbert Simon, one of the founders of AI as a discipline, and put forward for instance in his book The Sciences of the Artificial (1969). This is an idea shared by D. Dennett, who, as I said before, unlike Searle, defends cognitivism and Strong AI. Simon thinks that the sciences of the artificial by no means involve a move away from the natural sciences. The artificial and the natural are not, according to Simon, two kingdoms but two points of view, which do not stand opposed to one another. Everything that is artificial (and for Simon that is everything that is a functional/adaptive device, to be assessed through norms of rational functioning) is also natural (that is, it is ultimately an object for physical explanation). According to Simon, what distinguishes the artificial, then, is something other than the existence of a distinct realm of entities. What characterizes the point of view of the artificial is that it aims at systems in their status as interfaces between an interior and an exterior, thus creating the question (new within the scope of the natural sciences) of the rationality or adaptation of these interfaces to the environment. In other words, for Simon a science of the artificial is a science of the artificial because it deals with teleology, with the global behavior of systems and the purposes of that behavior. Purposes relate the interior and the exterior, independently of the material build-up of the systems. Both the interior of the system and its exterior continue to belong to the natural sciences; it is the interface that is specifically artificial. In fact, Simon also holds that certain natural organizations are in this sense artificial, at least insofar as a natural (biological) system can be analyzed according to these parameters. And in The Sciences of the Artificial Simon does include psychology (defined as the science of behaving systems) among the sciences of the artificial: the psychology of a system is a science of the artificial in contrast, namely, with the neurophysiology of the same system, which aims exclusively at the physical interior of the system and not at the interior/exterior interface and the purposes of the global behavior. Simon's 'artificial' could be called functional in the sense of teleological – what interests us here is that what makes the functional functional isn't the fact that it characterizes artifacts but the fact that it characterizes adaptive devices. Now, an adaptive device may or may not be artificial in the usual sense of 'artificial' (made or constructed by human beings). We too, after all, are adaptive devices, put together by evolution by natural selection – and this is exactly what Dennett uses against Searle, to fight Searle's appeal to 'intuitions' and to what's 'original' and genuine about our type of intentionality.
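Simon's point about interfaces can be illustrated with the most banal of adaptive devices (the thermostat example is mine, not Simon's):

```python
# A hedged illustration of Simon's sense of 'artificial': what makes
# the description below artificial is not what the device is made of,
# but that it treats the device as an interface between interior and
# exterior, assessed by how well its behavior serves a purpose.

class Thermostat:
    """Described functionally: the interior mechanism is irrelevant;
    only the goal-directed interface behavior matters."""
    def __init__(self, target: float):
        self.target = target              # the 'purpose' of the device
    def act(self, room_temp: float) -> str:
        # The interface: sense the exterior, act so as to adapt to it.
        return "heat on" if room_temp < self.target else "heat off"

# The same teleological description fits a bimetallic strip, a
# digital controller, or a person by a stove: the 'natural' physical
# story differs in each case, the 'artificial' story does not.
print(Thermostat(20.0).act(18.5))  # -> 'heat on'
```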

 

4. Functionalism, cognitivism and the natural / artificial dichotomy. Conclusion: D. Dennett on natural intelligence.

 

I will conclude by taking Dennett as a reference. It is a fact that his defense of cognitivist functionalism and Strong AI involves – and this is essential – a representational theory of the nature of consciousness very different from Searle's theory of consciousness, as well as a reformulation of functionalism as a hypothesis concerning not only the brain but the whole body. What interests me here is how Dennett's brand of cognitivism makes us look at our own mind and intelligence. To start with, for Dennett it is exactly the fact that functionalist cognitivism does away with the distinction between natural and artificial that unifies cognitive science as a field, in that it allows us (i) to consider humans, animals and machines together as cognitive systems, and (ii) to assume that the cognitive performances of actual, biologically based minds are situated, with other possible types of minds, in the same design space (in fact, not only the cognitive performances but also the systems to which they are due, as well as the artifacts these systems produce, for example human instruments). From here we must think in terms of thresholds, conditions of possibility for human minds as a specific kind of minds (as opposed to something 'original'). These thresholds are, according to Dennett, connected with architectures for communication and language, which make possible the speech-act nature of human thought. This, allowing notably for mental acts such as endorsing and affirming one's own beliefs through language and making voluntary decisions, is what makes human minds so much more powerful and sophisticated than those of other animals – so much so that, frequently, the work of a philosopher approaching, for instance, the problem of mind in cognitive ethology will consist in deflating interpretations of animal behavior, namely over-attributions of consciousness and communication to other animals. Dennett's cognitivist functionalism, with its deconstructionist stance on 'genuine intentionality', thus coexists both with a deflationary conception of animal mentality and with the idea that the distance between human minds and the minds of other species, even the most intelligent, is enormous – big enough, namely, to make all the difference in moral terms. Despite this, one should not claim that there is any difference in kind, since it is still a matter of cognitive architecture only, and so a difference in degree. All this goes against Searle's positions – but why? I want to answer this question by evoking, as Dennett himself does, Alan Turing, the English logician and mathematician. I am interested here in Turing as a philosopher – the author of the paper Computing Machinery and Intelligence, published in the philosophical journal Mind in 1950 – rather than in the work for which he is considered one of the fathers of the computer, the creator of the concepts of Turing Machine and Universal Turing Machine. Computing Machinery and Intelligence was, already then, written against the critics of AI, and it is there that Turing proposes the Turing Test, the test that Searle's Chinese Room is supposed to have 'refuted'. The initial question is: can machines think? And the first thing Turing says is that a conceptual quarrel around the questions 'What is a machine?' and 'What is thinking?' would lead to complications without end. The Turing Test is then suggested as a practical substitute for approaching the nature of intelligence.
The Turing Test is an imitation game, one in which, in the original situation, there is an interrogator and two people whose gender the interrogator does not know. The goal is to deceive the interrogator (through verbal interaction, the only kind allowed). In the setting which interests us there is no man and woman, but rather a human and a machine. Same rules, same type of interaction. The goal is again to deceive the interrogator, this time concerning the machine's status as a machine. As we know, there is only verbal interaction, so all that can be done to find out which is which is to formulate questions (questions about mathematical calculations, interpretations of poems, ironic comments or the deciphering of metaphors, for instance). Turing's point, the intention behind the Test, is that we should think about thought in a neutral, unbiased way. The Test assumes that what behaves intelligently is intelligent; the implicit suggestion is, I would say, holistic and pragmatic: intelligence is intelligent behavior, and not a special substance (for example, neuronal) or an extra ingredient (for example, a soul – or genuine intentionality).
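Here is a minimal, interactive sketch of the structure of the game just described (my own toy rendering, not anything from Turing's paper; the player functions are hypothetical stand-ins):

```python
# The imitation game, schematically: the interrogator sees only typed
# exchanges with two unlabeled players, A and B, and must guess which
# one is the machine. Only verbal interaction is possible.

import random

def human_player(question: str) -> str:
    return input(f"(hidden human) {question} > ")  # a real person answers

def machine_player(question: str) -> str:
    return "I'd rather not say."                   # stand-in for a program

def imitation_game(n_questions: int = 3) -> bool:
    """Returns True if the interrogator correctly identifies the machine."""
    contestants = [human_player, machine_player]
    random.shuffle(contestants)                    # hide who is who
    players = dict(zip("AB", contestants))
    for _ in range(n_questions):
        q = input("Interrogator, ask a question > ")
        for label in "AB":
            print(label, ":", players[label](q))   # verbal interaction only
    guess = input("Which player is the machine, A or B? > ")
    return players[guess] is machine_player
```

Nothing in the protocol inspects what the players are made of; the only evidence available is behavior, which is exactly Turing's point.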

In the remainder of the paper Turing answers objections to the possibility of AI, all of which are still around today. He calls them the theological objection (according to which intelligence is due to a soul, possessed only by human beings), the 'heads in the sand' objection (one hopes that AI never comes into being, since it would be terrible), the mathematical objection, invoking Gödel's theorem (according to which human beings have mental capacities that exceed what is computable), the feeling and consciousness objection (according to which a machine could not have states such as depression, love, emotion, etc.), the incapacities objection (according to which a machine would never be capable of mood, learning, morality, passion), and so on. He also considers the argument according to which a machine does only what it is programmed to do and never originates anything new, and the argument from the continuity of the nervous system. Turing analyzes these lines of objection one by one and continues to recommend the test against any aprioristic verdict or argument.

The conceptual points I want to make, as a minimal conclusion of this brief tour of cognitivism, grow out of Dennett's endorsement of Turing's strategy. If we side with Turing and Dennett against Searle's appeal to intuition and to something 'original' and unique about our minds, what we have is the following: (i) we should not be aprioristic in defining intelligence; (ii) the frontiers between machines and non-machines, between thinking and non-thinking, may be fuzzier than we would intuitively think; (iii) even though we are mental beings, and so, in a way, incorrigible in accessing our own minds, our intuition is not necessarily an infallible guide in thinking about the nature of the mental.

 

References

 

ANDLER, Daniel, 1992, Introduction aux sciences cognitives, Paris, Gallimard.

BODEN, Margaret (ed.), 1996, The Philosophy of Artificial Intelligence, Oxford, Oxford University Press.

CHALMERS, David, 1996, The Conscious Mind, Oxford, Oxford University Press.

DESCOMBES, Vincent, 1995, La denrée mentale, Paris, Minuit.

DREYFUS, Hubert, 1965, Alchemy and Artificial Intelligence, Rand Corporation Report, Santa Monica, California.

DREYFUS, Hubert, 1972, What Computers Can’t Do, New York, Harper & Row.

FODOR, Jerry, 1975, The Language of Thought, Cambridge MA, Harvard University Press.

HAUGELAND, John (ed), 1981, Mind Design, Cambridge MA, MIT Press.

MINSKY, Marvin, 1985, The Society of Mind, New York, Simon & Schuster.

NEWELL, Allen, 1990, Unified Theories of Cognition, Cambridge MA, Harvard University Press.

PUTNAM, Hilary, 1960, Minds and Machines, in Philosophical Papers, vol. 2, Cambridge, Cambridge University Press, 1975.

PUTNAM, Hilary, 1967, The Nature of Mental States, in Philosophical Papers, vol. 2, Cambridge, Cambridge University Press, 1975 (first published as Psychological Predicates).

SEARLE, John, 1980, Minds, Brains and Programs, Behavioral and Brain Sciences, 3 (3).

SEARLE, John, 1992, The Rediscovery of the Mind, Cambridge MA, MIT Press.

SIMON, Herbert, 1969, The Sciences of the Artificial, Cambridge MA, MIT Press.

TURING, Alan, 1950, Computing Machinery and Intelligence, in DENNETT and HOFSTADTER (eds.), 1981, The Mind's I, New York, Bantam Books.