
19 - The Mind as Computer 2

31m · Kernion Courses · 06 May 07:09

The Basic 'Problem' of Representation/Intentionality/Semantics/etc

We take brain states and utterances and paintings to represent things. What’s that about?

Philosophers throw around a lot of terms/jargon when talking about this basic phenomenon:

  • Representation
  • Intentionality
  • Semantics
  • Meaning
  • Content
  • Aboutness
  • (many more!)

The basic idea: there are thinkers/reasoners who communicate with one another, and who can represent the world as being one way rather than another, both to themselves (e.g. in deliberative thought) and to others (e.g. in speech).

What's all this business about interpreting symbols/signals to mean one thing rather than another? What distinguishes a cognitive system that truly represents the world from systems (if they can even be called cognitive) that just push electrical signals around?

Searle's Setup

What is Searle trying to show? He wants to show that running a computer program can’t be sufficient for having a mind/thinking. (Perhaps it is necessary).

Syntax vs. Semantics

  • “It is essential to our conception of a digital computer that its operations can be specified purely formally; that is, we specify the steps in the operation of the computer in terms of abstract symbols.” (670)

  • “But the symbols have no meaning; they have no semantic content; they are not about anything. They have to be specified purely in terms of their formal or syntactical structure.” (670)

Duplication vs. Simulation

  • “Those features [consciousness, thoughts, feelings, emotions], by definition, the computer is unable to duplicate however powerful may be its ability to simulate. ...No simulation ever constitutes duplication.” (673)

  • “...nobody supposes that the computer simulation is actually the real thing; no one supposes that a computer simulation of a storm will leave us all wet, or a computer simulation of a fire is likely to burn the house down.” (673)

The Chinese Room Thought Experiment

  • “Suppose I’m locked in a room and suppose that I’m given a large batch of Chinese writing.” He is given batches of Chinese writing and batches of ‘rules’ which “instruct me how I am to give back certain Chinese symbols with certain sorts of shapes in response”.
  • “From the external point of view...the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer.”

(Above two quotes come from an earlier version of this argument, presented in "Minds, Brains, and Programs".)

The Point: The room, as a whole, spits out meaningful Chinese sentences. But the man does not understand Chinese.

  • “...You behave exactly as if you understand Chinese, but all the same you don’t understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese.” (671)
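
To make the “uninterpreted formal symbols” point vivid, here is a minimal Python sketch (my own toy illustration, not anything from Searle; the rule table and phrases are made up) of what the man’s procedure amounts to: a purely shape-based lookup in a rule book, with no step anywhere that consults what the symbols mean.

# Toy sketch of the room's procedure: shape-in, shape-out, meaning nowhere.
RULE_BOOK = {
    "你好吗？": "我很好，谢谢。",            # (to us: "How are you?" -> "I'm fine, thanks.")
    "今天天气怎么样？": "今天天气很好。",      # (to us: "How's the weather?" -> "It's lovely.")
}

def room(squiggles: str) -> str:
    # Match the incoming shapes against the rule book and hand back the
    # prescribed shapes; no interpretation happens anywhere in this function.
    return RULE_BOOK.get(squiggles, "对不起，我不明白。")   # fallback shape

print(room("你好吗？"))   # prints a perfectly sensible-looking Chinese answer

Searle's claim, in effect, is that scaling this up to an arbitrarily sophisticated program changes nothing important: it is shape-matching all the way down.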

The Systems Reply: There’s an important disanalogy: the man ought not be compared to the computer, since the computer is analogous to the whole system—the rule books, the data banks of Chinese symbols, etc. The man is more like the CPU, and the rule book is like the program stored in memory. So perhaps the whole system does understand Chinese.

Searle’s Response: My point applies even if the man memorizes the rule books and applies the rules entirely in his head. Neither the system nor the man understands Chinese.

An Additional Reply: There are actually two ‘programs’ running on the same ‘hardware’, the man. There are really two minds instantiated in the man. The Chinese program, instantiated in the man, understands Chinese, but the man does not. And the English program certainly doesn’t understand Chinese.

Why I Don't Like Searle's 'Argument'

1. The unearned syntax/semantics cudgel

Wait--is this even an argument?

Basic problem: Searle beats you over the head with “it’s mere syntax!” without really providing a positive account of what semantic content is, or of why semantic content has to be something over and above pure syntax.

Searle makes it super clear that he believes you can't successfully reduce semantic truths to 'mere' syntactic truths, but I don't see much of an attempt to argue the point, as opposed to repeating the same basic idea over and over again.

If there's something more to the view than "isn't it obvious!", then he didn't include that reasoning here. Besides, this is an actively debated issue!

2. The proposed 'solution' doesn't really make sense?

  • The problem, remember: we need an ingredient over and above purely syntactical operations to imbue a system with real intentionality.
  • Searle's ultimate suggestion: Our biological makeup must play that role?
  • Basic problem: Wait, what? How? That is, what makes moist computers any better off, intentionality-wise, than a functionally similar system made of less moist material? Non sequitur.

3. Characterizes the computer model of the mind uncharitably

Searle leans into the familiar “digital computer” sense of “computer model...”, but that's not really warranted, given what proponents of that model actually believe. The charitable thing he could have said: there’s a sense of computation which isn’t tied to a specific artifact, but to a broader idea of information-processing and symbol manipulation as part of an individual system's strategy for acting upon the world and making decisions.

4. Stylistically: sloppy, blustery, arrogant, dismissive...

Taking philosophy seriously means taking your own temperament when reasoning seriously. And the right temperament is not the one on display in this paper.


More episodes from Kernion Courses

22 - The Singularity (w/ Special Guest David Chalmers)

The core line of reasoning focuses on an argument which I'll distill as...

 

1. Humans can and will create a cognitive system that outstrips human cognition.

2. Such a system will be able to improve its own cognitive abilities better/faster than humans can...

C. And so: once humans create a superhuman cognitive system, we enter into a self-reinforcing loop, whereby the system automatically gets smarter and smarter without human intervention.

21 - True Believers and A Recipe For Thought

Dennett, "True Believers", and Dretske, "A Recipe For Thought"

20 - Mental Representation and Eliminative Materialism

In Philosophy, it’s useful to sometimes step back and ask yourself: ‘Wait, what exactly are we talking about? Can I get some clear examples of the thing I’m trying to understand?’

This lecture:

  1. Scene setting, filling in some historical information, and honing in on the target of inquiry...
  2. Brief characterization of so-called “eliminative materialism”

Characterizing the Target Phenomena...

...in phenomenal terms (Unit 2): ‘feels’, ‘sensation’, ‘experience’, ‘hurts’, etc.

...in cognitive terms (Unit 3): ‘think’, ‘notice’, ‘wonder’, ‘consider’, ‘decide’, ‘goals’, ‘want’, ‘believes’, etc.

  • Note: In everyday thought/talk about the mental, we often mix and match ideas/terms from these two lists. And some terms (e.g. ‘see’, ‘perceive’, ‘sense’, ‘desires’, ‘hopes’) may mean different things, depending on the context in which the term is used.
  • (For the canonical characterization of this division, see Chapter 1 of Chalmers’ “The Conscious Mind”.)

“Folk Psychology”

In everyday thought/talk, we seem to use a basic belief/desire/rationality model for understanding one another and for predicting each other’s behavior.

  • “Each of us understands others, as well as we do, because we share a tacit command of an integrated body of lore concerning the law-like relations holding among external circumstances, internal states, and overt behavior. Given its nature and functions, this body of lore may quite aptly be called ‘folk psychology’.”

“The Propositional Attitudes”

When philosophers of mind and language have tried to unpack and analyze our pre-theoretical target, they often focus on something they call propositions: abstract ‘claims’/’sentences’/‘ideas’ that get expressed in our cognitive attitudes.

  • Importantly, propositions (according to many philosophers) express ways the world may or may not be. You can think of them as ‘statements’ that are potentially true/false.

Cognitive attitudes...

  • Believe that...
  • Desire that...
  • Imagine that...
  • Think that...
  • Hope that....
  • (many more...)
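
To picture how the belief/desire/rationality model is supposed to work, here is a minimal Python sketch (my own toy illustration, not from the readings; the attitudes and the prediction rule are made up): a propositional attitude is just an attitude type paired with a proposition, and the ‘rationality’ step predicts behavior from those pairs.

from dataclasses import dataclass

@dataclass
class Attitude:
    kind: str          # e.g. "believes", "desires"
    proposition: str   # a way the world may or may not be, e.g. "it is raining"

def predict_action(attitudes):
    beliefs = {a.proposition for a in attitudes if a.kind == "believes"}
    desires = {a.proposition for a in attitudes if a.kind == "desires"}
    # Crude rationality assumption: act so as to satisfy your desires,
    # given what you believe.
    if "I stay dry" in desires and "it is raining" in beliefs:
        return "takes an umbrella"
    return "does nothing in particular"

print(predict_action([Attitude("believes", "it is raining"),
                      Attitude("desires", "I stay dry")]))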

The So-Called ‘Cognitive Revolution’

The so-called “computer model of the mind” can be thought of as directly tied to the so-called ‘cognitive revolution’ in psychology and its neighbors (mainly CS/AI and philosophy).

  • Psychology: The triumph of the 'cognitivists' over ‘methodological behaviorists’ in psychology (cf. Noam Chomsky’s ‘poverty of the stimulus’ argument)
  • CS: The rise of earnest attempts at ‘strong AI’
  • Philosophy: The rise of science-loving functionalism in philosophy of mind.

To some academics, this had the feeling of “the end of history", with ‘the computer model of the mind’ as the inevitable, final stop in a long march towards understanding the mind.

Viva La Revolution?

But it’s never that simple...

  • In psychology: rise of social psychology, ‘cognitive biases’, and the emergence of alternatives to the standard computer model (e.g. parallel distributed processing).
  • In CS/AI: AI Winter!
  • In philosophy: although functionalism became popular, it got attacked from all sides, and most self-ascribed functionalists have thought that they can’t really answer their opponents

(Aside: I think the functionalists gave up too easily and can win the war with another sustained attempt to build out the theory...)

Gallistel on Representation and Computation

One key idea that I hope is brought out by the Gallistel reading: linking the problem of representation/intentionality to the problem of cognition/intelligence. A ton is packed into the abstract:

“A mental representation is a system of symbols isomorphic to some aspect of the environment, used to make behavior-generating decisions that anticipate events and relations in that environment. A representational system has the following components: symbols, which refer to aspects of the environment, symbol processing operations, which generate symbols representing behaviorally required information about the environment by transforming and combining other symbols, representing computationally related information, sensing and measuring processes, which relate the symbolic variables to the aspects of the world to which they refer, and decision processes, which translate decision variables into observable actions. From a behaviorist perspective, mental representations do not exist and cannot be the focus of a scientific psychology. From a cognitivist perspective, psychology is the study of mental representations, how they are computed and how they affect behavior.”

The Upshot: Representations emerge in nature when computation emerges in nature. Representations are then to be understood as embedded in natural systems, which implement various cognitive strategies. In short, the cognitive goals of natural cognitive systems are either: a) explanatorily prior to representations, or b) conceptually linked to the nature of cognition.

“Although mental representations are central to prescientific folk psychology, folk psychology does not provide a rigorous definition of representation, any more than folk physics provides a rigorous definition of mass and energy.”

“A representation, mental or otherwise, is a system of symbols. The system of symbols is isomorphic to another system (the represented system) so that conclusions drawn through the processing of the symbols in the representing system constitute valid inferences about the represented system.”
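
To make Gallistel’s four components concrete, here is a minimal Python sketch (my own toy illustration, assuming a hypothetical one-dimensional forager; this is not code from the reading): a symbol that refers to distance from the nest, a symbol-processing operation that updates it, a noisy sensing process that ties it to the world, and a decision process that turns it into observable action.

import random

class Forager:
    def __init__(self):
        self.position_estimate = 0.0   # a symbol: refers to distance from the nest

    def sense_step(self, actual_step):
        # Sensing/measuring process: relates the symbol to the aspect of the
        # world it refers to (here, a noisy measurement of each step taken).
        measured = actual_step + random.gauss(0, 0.05)
        # Symbol-processing operation: combine symbols (old estimate + measurement).
        self.position_estimate += measured

    def decide(self):
        # Decision process: translate the decision variable into action.
        return "head_home" if abs(self.position_estimate) > 10 else "keep_foraging"

forager = Forager()
while forager.decide() == "keep_foraging":
    forager.sense_step(random.choice([-1.0, 1.0]))
print("Heading home; internal estimate:", round(forager.position_estimate, 2))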

Cognitive states are often/usually (?) representational states, and so any story about cognition ought to be accompanied by a story about representation. Otherwise, you wouldn’t be able to distinguish, say, the belief that it is raining outside from the belief that the Taj Mahal is white. There’s a cognitive difference to be accounted for here in terms of a difference in representational content.

My preferred framing...

The problem of consciousness: What makes human brains (or anything else) ‘give rise’ to phenomenal experience?

The problem of cognition/rationality/intelligence: What is it about a cognitive system that makes it truly think?

The problem of representation/intentionality: What is it about a cognitive system that makes its internal states represent ‘the world out there’?

Perhaps the problem of cognition and the problem of representation, in order to be properly appreciated, have to be handled separately. But they might also be the very same problem ‘deep down’. There is nothing like ‘expert consensus’ on the issue.

As I read Churchland and Dennett, they both think that cognition and representation are connected in some deep and important way...

Churchland will say: Yes, representation-talk is tied to cognition-talk. But these are bad, confused ways of talking! Drop it!

  • The Big Problem: has a kind of self-undermining quality to it, since Churchland seems to be saying both “You can understand these sentences as expressing true thoughts” and “Sentences which rely (either implicitly or explicitly) on [the rational agent model] are, strictly speaking, false, because rational agents don’t exist.”

Dennett will say: Must we choose between realism and anti-realism? Can’t we just say that different ways of thinking about the same underlying thing are useful in different ways, and leave it at that?

  • The Big Problem: Dennett seems to want to have his cake and eat it too!

Churchland’s Eliminative Materialism

[illusionism : consciousness :: eliminative materialism : cognition/representation]

“Eliminative materialism is the thesis that our common-sense conception of psychological phenomena constitutes a radically false theory”

Folk psychology represents a kind of ‘common sense theory’ which advanced neurobiology/psychology will/should completely replace.

And so folk psychology, according to Churchland, should be treated like any other ‘common sense' theory that gets thrown away once we develop a more sophisticated theory.

“...the semantics of the terms in our familiar mentalistic vocabulary is to be understood in the same manner as the semantics of theoretical terms generally: the meaning of any theoretical term is fixed or constituted by the network of laws in which it figures.”

“Recall the view that space has a preferred direction in which all things fall; that weight is an intrinsic feature of a body; that a force-free moving object will promptly return to rest; that the sphere of the heavens turns daily; and so on.”

Upshot: While we may still talk/think in terms of ‘beliefs’ and ‘desires’ and ‘rationality’, such talk is, strictly speaking, false.


18 - The Mind as Computer 1

Block’s “Psychologism and Behaviorism” has two aims: 1) hone in on the best version of behaviorism about intelligence, 2) show that even this very best behaviorist account of intelligence fails (for broadly ‘functionalist’ reasons).

A terminological disclaimer...

Block’s preferred philosophical terminology is idiosyncratic...

  • Block’s “psychologism” (about cognition/intelligence) = our “functionalism” (about cognition/intelligence)
    • “Let psychologism be the doctrine that whether behavior is intelligent behavior depends on the character of the internal information processing that produces it.”
  • Block’s “functionalism” = our “conceptual functionalism” (about all mental states)

Block’s Neo-Turing Test

Block’s description of the original Turing Test:

  • “The Turing Test involves a machine in one room, and a person in another, each responding by teletype to remarks made by a human judge in a third room for some fixed period of time, e.g., an hour. The machine passes the test just in case the judge cannot tell which are the machine's answers and which are those of the person.”

And for what is the Turing Test a test? Intelligence!

  • Caveat: “Note that the sense of ’intelligent’ deployed here--and generally in discussion of the Turing Test--is not the sense in which we speak of one person being more intelligent than another. ‘Intelligence’ in the sense deployed here means something like the possession of thought or reason.”

So to a first approximation, the conception of intelligence we find embedded in the Turing Test...

i. Intelligence just is the ability to pass the Turing Test (if it is given). (‘crude operationalism’)

Basic problem: Measuring instruments are fallible, so we shouldn’t confuse measurements for the thing being measured.

  • No Turing Test judge will be infallible. Or, rather, we shouldn’t want our test to vary, depending on who happens to be the judge.

Initial solution: Put it in terms of behavioral dispositions instead...

  • You can fail the test in some weird corner case while still having the general disposition to pass the test in most situations/scenarios. A single failure should not be conclusive evidence against a system having real intelligence.

ii. Intelligence just is the behavioral disposition to pass the Turing Test (if it is given). (Familiar Rylean Behaviorism)

  • Basic problem: “In sum, human judges may be unfairly chauvinist in rejecting genuinely intelligent machines, and they may be overly liberal in accepting cleverly-engineered, mindless machines.”

  • Initial solution: Replace the imitation game with a simpler game of ‘simply produce sensible verbal responses’

    • Ex: Compare two responses to the question, “So what have you been thinking about?”: 1) “I guess you could say that I’ve been thinking about everything...or maybe nothing? it’s just so gosh darn boring to be stuck inside not talking to anyone.”, 2) “A contagious symphony waltzed past my window. The third of february frowned while I stung a dagger.”
    • Note: this move away from the imitation game to a simpler “are these verbal responses sensible?” test drastically lowers the bar. But (according to Block) we’ll see that behaviorists can’t even clear this lowered bar.

iii. “Intelligence (or more accurately, conversational intelligence) is the disposition to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.”

Basic problem: The standard functionalist objections to behaviorism.

  • In particular, the ‘perfect actor’ objection (think Putnam’s super-super spartans) shows that a determined deceiver could fool a test.

But! And here’s the crucial point:

  • “As mentioned earlier, there are all sorts of reasons why an intelligent system may fail to be disposed to act intelligently: believing that acting intelligently is not in its interest, paralysis, etc. But intelligent systems that do not want to act intelligently or are paralyzed still have the capacity to act intelligently, even if they do not or cannot exercise this capacity.”

Upshot: The behavioral tests are maybe not necessary for intelligence, but they’re perhaps sufficient!

iv. “Intelligence (or, more accurately, conversational intelligence) is the capacity to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.” (Block’s Neo-Turing Test for Intelligence)

Perhaps behaviorism gives the right account of intelligence, even if not the right account for other kinds of mental states/properties...

Block against Block’s Neo-Turing Test

Block’s objection to the Neo-Turing Test’s conception of intelligence comes down to a fairly simple claim: we can conceive of a system that has the capacity to produce perfectly sensible outputs without itself producing those outputs intelligently.

On-the-fly reasoning is more sophisticated than a simple lookup table--and it’s that more sophisticated thing that we’re trying to characterize.

The Conceivability Test

  • “The set of sensible strings so defined is a finite set that could in principle be listed by a very large and clever team working for a long time, with a very large grant and a lot of mechanical help, exercising imagination and judgment about what is to count as a sensible string.”
  • “Imagine the set of sensible strings recorded on tape and deployed by a very simple machine as follows. The interrogator types in sentence A. The machine searches its list of sensible strings, picking out those that begin with A. It then picks one of these A-initial strings at random, and types out its second sentence, call it ‘B’. The interrogator types in sentence C. The machine searches its list, isolating the strings that start with A followed by B followed by C. It picks one of these ABC-initial strings and types out its fourth sentence, and so on.”
  • “...such a machine will have the capacity to emit a sensible sequence of verbal outputs, whatever the verbal inputs, and hence it is intelligent according to the neo-Turing Test conception of intelligence. But actually, the machine has the intelligence of a toaster. All the intelligence it exhibits is that of its programmers.”
  • “I conclude that the capacity to emit sensible responses is not sufficient for intelligence, and so the neo-Turing Test conception of intelligence is refuted (along with the older and cruder Turing Test conceptions).”
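
Here is a minimal Python sketch of the lookup-table machine (my own toy illustration with a made-up two-conversation table, not Block’s code): all the cleverness lives in the hand-authored table; at run time the machine only matches prefixes and picks a continuation at random.

import random

# Hypothetical pre-computed "sensible strings": each tuple is one complete
# sensible conversation, alternating interrogator / machine turns. (Block
# imagines the table covering every sensible continuation up to some length.)
SENSIBLE_STRINGS = [
    ("Hello.", "Hi there.", "How are you?", "Fine, thanks for asking."),
    ("Hello.", "Hello! Lovely weather.", "How are you?", "Can't complain."),
]

def reply(conversation_so_far):
    # Keep only the stored strings that begin with the conversation so far...
    matches = [s for s in SENSIBLE_STRINGS
               if s[:len(conversation_so_far)] == tuple(conversation_so_far)]
    # ...and emit the next sentence of one of them, chosen at random.
    chosen = random.choice(matches)
    return chosen[len(conversation_so_far)]

history = ["Hello."]
history.append(reply(history))     # machine's first reply
history.append("How are you?")     # interrogator's next remark
history.append(reply(history))     # machine's second reply
print(history)

The input/output dispositions here are (by stipulation) impeccable; Block’s point is that the information processing behind them is a brute table search, so whatever intelligence is on display belongs to whoever authored the table.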

The Upshot: The Cognitive/Rational/Intelligent Mind as a Computer

Our concept of intelligence involves something more than just the ‘external’ pattern of sensory inputs and behavioral outputs. The internal pattern of information processing that mediates between those inputs and outputs also matters: the outputs need to be produced in the right way--in a way that looks something like ‘abstract rational thought’ rather than a dumb lookup table.

TL;DR: functional organization matters to intelligence!

Objections to Block

Objection 1: “Your argument is too strong in that it could be said of any intelligent machine that the intelligence it exhibits is that of its programmers.”

Block’s Reply: “The trouble with the neo-Turing Test conception of intelligence (and its predecessors) is precisely that it does not allow us to distinguish between behavior that reflects a machine's own intelligence, and behavior that reflects only the intelligence of the machine's programmers.”

Objection 3(ish): This is a merely verbal dispute. You insist that the concept of intelligence ‘includes something more’ than mere input/output patterns. But unless you can specify what this extra special ingredient is, you’re just helping yourself to a magical (potentially-spooky?) conception of ‘intelligence’!

Block’s Reply: “...my point is based on the sort of information processing difference that exists.” All you need to grant is that there is an interesting and worthwhile difference between lookup-table algorithms and ‘on-the-fly’ reasoning algorithms. Call it whatever you want. That seems to be something we care about when we talk about ‘intelligence’.

Objection 6(ish): Are you sure what you’ve described is actually conceivable? Don’t you run into a problem where the physical size of your lookalike intelligence would have to be larger than the size of the physical universe?

Block’s Reply: “My argument requires only that the machine be logically possible, not that it be feasible or even nomologically possible. Behaviorist analyses were generally presented as conceptual analyses, and it is difficult to see how conceptions such as the neo-Turing Test conception could be seen in a very different light.”

My Objection (?): Doesn’t “actual and potential behavior”--the sort of thing that a Rylean logical behaviorist likes--ultimately stem from the pattern of ‘internal’ information processing? That is, the two always in fact go together. And so for any postulated “lookalike” intelligence, so long as it’s finite (which it has to be), we can dream up scenarios in which it would malfunction in a telling way, or otherwise fail to demonstrate the full range of cognitive flexibility that a ‘real’ intelligence can exhibit.

A reply (?): Remember that Block’s point is conceptual, rather than empirical. Those two things (input/output patterns vs. information-processing patterns) may in fact be tightly connected. But that tight connection would itself be explained via the information-processing conception.
