
Kernion Courses

by Jackson Kernion

Audio from Jackson Kernion's courses

Copyright: 2020 Jackson Kernion

Episodes

22 - The Singularity (w/ Special Guest David Chalmers)

52m · Published 16 May 00:23

The core line of reasoning focuses on an argument which I'll distill as...


1. Humans can and will create a cognitive system that outstrips human cognition.

2. Such a system will be able to improve its own cognitive abilities better/faster than humans can...

C. And so: once humans create a superhuman cognitive system, we enter into a self-reinforcing loop, whereby the system automatically gets smarter and smarter without human intervention.
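
(A toy numerical sketch--my own illustration, not Chalmers'--of the self-reinforcing loop in premise 2. The growth rule is an arbitrary stand-in; the point is just that once capability clears the human baseline, each generation's improvement outpaces the last.)

```python
# Toy model of recursive self-improvement (invented numbers, not from
# the episode): each system designs a successor, and the size of the
# improvement step scales with how far it already exceeds human level.

HUMAN_BASELINE = 1.0

def design_successor(capability: float) -> float:
    """Premise 2 as a stand-in growth rule: better designers make
    proportionally bigger improvements to the next generation."""
    return capability * (1.0 + 0.1 * (capability / HUMAN_BASELINE))

capability = 1.05  # premise 1: humans build a slightly superhuman system
for generation in range(1, 11):
    capability = design_successor(capability)
    print(f"generation {generation}: capability = {capability:.2f}")
# The per-generation gains grow without further human intervention (C).
```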

21 - True Believers and A Recipe For Thought

33s · Published 16 May 00:18
Dennett, "True Believers", and Dretske, "A Recipe For Thought"

20 - Mental Representation and Eliminative Materialism

49m · Published 11 May 22:22

In Philosophy, it’s useful to sometimes step back and ask yourself: ‘Wait, what exactly are we talking about? Can I get some clear examples of the thing I’m trying to understand?’

This lecture:

  1. Scene setting, filling in some historical information, and homing in on the target of inquiry...
  2. Brief characterization of so-called “eliminative materialism”

Characterizing the Target Phenomena...

...in phenomenal terms (Unit 2): ‘feels’, ‘sensation’, ‘experience’, ‘hurts’, etc.

...in cognitive terms (Unit 3): ‘think’, ‘notice’, ‘wonder’, ‘consider’, ‘decide’, ‘goals’, ‘want’, ‘believes’, etc.

  • Note: In everyday thought/talk about the mental, we often mix and match ideas/terms from these two lists. And some terms (e.g. ‘see’, ‘perceive’, ‘sense’, ‘desires’, ‘hopes’) maybe mean different things, depending on the context in which the term is used.
  • (For the canonical characterization of this division, see Chapter 1 of Chalmers’ “The Conscious Mind”.)

“Folk Psychology”

In everyday thought/talk, we seem to use a basic belief/desire/rationality model for understanding one another and for predicting each other’s behavior.

  • “Each of us understands others, as well as we do, because we share a tacit command of an integrated body of lore concerning the law-like relations holding among external circumstances, internal states, and overt behavior. Given its nature and functions, this body of lore may quite aptly be called ‘folk psychology’.”

“The Propositional Attitudes”

When philosophers of mind and language have tried to unpack and analyze our pre-theoretical target, they often focus on something they call propositions: abstract ‘claims’/’sentences’/‘ideas’ that get expressed in our cognitive attitudes.

  • Importantly, propositions (according to many philosophers) express ways the world may or may not be. You can think of them as ‘statements’ that are potentially true/false.

Cognitive attitudes...

  • Believe that...
  • Desire that...
  • Imagine that...
  • Think that...
  • Hope that....
  • (many more...)

The So-Called ‘Cognitive Revolution’

The so-called “computer model of the mind” can be thought of as directly tied to the so-called ‘cognitive revolution’ in psychology and its neighbors (mainly CS/AI and philosophy).

  • Psychology: The triumph of the 'cognitivists' over ‘methodological behaviorists’ in psychology (cf. Noam Chomsky’s ‘poverty of the stimulus’ argument)
  • CS: The rise of earnest attempts at ‘strong AI’
  • Philosophy: The rise of science-loving functionalism in philosophy of mind.

To some academics, this had the feeling of “the end of history”, with ‘the computer model of the mind’ as the inevitable, final stop in a long march towards understanding the mind.

Viva La Revolution?

But it’s never that simple...

  • In psychology: rise of social psychology, ‘cognitive biases’, and the emergence of alternatives to the standard computer model (e.g. parallel distributed processing).
  • In CS/AI: AI Winter!
  • In philosophy: although functionalism became popular, it got attacked from all sides, and most self-ascribed functionalists have thought that they can’t really answer their opponents

(Aside: I think the functionalists gave up too easily and can win the war with another sustained attempt to build out the theory...)

Gallistel on Representation and Computation

One key idea that I hope is brought out by the Gallistel reading: linking the problem of representation/intentionality to the problem of cognition/intelligence. A ton is packed into the abstract:

“A mental representation is a system of symbols isomorphic to some aspect of the environment, used to make behavior-generating decisions that anticipate events and relations in that environment. A representational system has the following components: symbols, which refer to aspects of the environment, symbol processing operations, which generate symbols representing behaviorally required information about the environment by transforming and combining other symbols, representing computationally related information, sensing and measuring processes, which relate the symbolic variables to the aspects of the world to which they refer, and decision processes, which translate decision variables into observable actions. From a behaviorist perspective, mental representations do not exist and cannot be the focus of a scientific psychology. From a cognitivist perspective, psychology is the study of mental representations, how they are computed and how they affect behavior.”

The Upshot: Representations emerge in nature when computation emerges in nature. Representations are then to be understood as embedded in natural systems, which implement various cognitive strategies. In short, the cognitive goals of natural cognitive systems are either: a) explanatorily prior to representations, or b) conceptually linked to the nature of cognition.

“Although mental representations are central to prescientific folk psychology, folk psychology does not provide a rigorous definition of representation, any more than folk physics provides a rigorous definition of mass and energy.”

“A representation, mental or otherwise, is a system of symbols. The system of symbols is isomorphic to another system (the represented system) so that conclusions drawn through the processing of the symbols in the representing system constitute valid inferences about the represented system.”
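
To make the four components in that abstract concrete, here is a minimal sketch. The one-dimensional foraging scenario and every name in it are my own invented stand-ins, not Gallistel's; the point is just that operations over the symbol stay isomorphic to the forager's actual situation, so conclusions drawn from the symbols are valid inferences about the world.

```python
# Invented toy example of a representational system with Gallistel's
# four components. The symbol `position` is isomorphic to the forager's
# actual distance from its nest along one dimension.

def sense_step(actual_step: float) -> float:
    """Sensing/measuring process: relates symbolic variables to the
    aspect of the world they refer to (here, each step taken)."""
    return actual_step

def integrate(position: float, step: float) -> float:
    """Symbol-processing operation: combines symbols (path integration)
    to produce behaviorally required information (current location)."""
    return position + step

def decide(position: float) -> str:
    """Decision process: translates the decision variable into action."""
    direction = "back" if position > 0 else "forward"
    return f"return to nest: travel {abs(position):.1f} units {direction}"

position = 0.0                        # the symbol, initialized at the nest
for actual_step in [2.0, 3.5, -1.0]:  # outbound wandering in the world
    position = integrate(position, sense_step(actual_step))

print(decide(position))  # an inference about the world drawn purely from symbols
```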

Cognitive states are often/usually (?) representational states, and so any story about cognition ought to be accompanied by a story about representation. Otherwise, you wouldn’t be able to distinguish, say, the belief that it is raining outside from the belief that the Taj Mahal is white. There’s a cognitive difference to be accounted for here in terms of a difference in representational content.

My preferred framing...

The problem of consciousness: What makes human brains (or anything else) ‘give rise’ to phenomenal experience?

The problem of cognition/rationality/intelligence: What is it about a cognitive system that makes it truly think?

The problem of representation/intentionality: What is it about a cognitive system that makes its internal states represent ‘the world out there’?

Perhaps the problem of cognition and the problem of representation, in order to be properly appreciated, have to be handled separately. But they might also be the very same problem ‘deep down’. There is nothing like ‘expert consensus’ on the issue.

As I read Churchland and Dennett, they both think that cognition and representation are connected in some deep and important way...

Churchland will say: Yes, representation-talk is tied to cognition-talk. But these are bad, confused ways of talking! Drop it!

  • The Big Problem: has a kind of self-undermining quality to it, since Churchland seems to be saying both “You can understand these sentences as expressing true thoughts” and “Sentences which rely (either implicitly or explicitly) on [the rational agent model] are, strictly speaking, false, because rational agents don’t exist.”

Dennett will say: Must we choose between realism and anti-realism? Can’t we just say that different ways of thinking about the same underlying thing are useful in different ways, and leave it at that?

  • The Big Problem: Dennett seems to want to have his cake and eat it too!

Churchland’s Eliminative Materialism

[illusionism : consciousness :: eliminative materialism : cognition/representation]

“Eliminative materialism is the thesis that our common-sense conception of psychological phenomena constitutes a radically false theory”

Folk psychology represents a kind of ‘common sense theory’ which advanced neurobiology/psychology will/should completely replace.

And so folk psychology, according to Churchland, should be treated like any other ‘common sense' theory that gets thrown away once we develop a more sophisticated theory.

“...the semantics of the terms in our familiar mentalistic vocabulary is to be understood in the same manner as the semantics of theoretical terms generally: the meaning of any theoretical term is fixed or constituted by the network of laws in which it figures.”

“Recall the view that space has a preferred direction in which all things fall; that weight is an intrinsic feature of a body; that a force-free moving object will promptly return to rest; that the sphere of the heavens turns daily; and so on.”

Upshot: While we may still talk/think in terms of ‘beliefs’ and ‘desires’ and ‘rationality’, such talk is, strictly speaking, false.

19 - The Mind as Computer 2

31m · Published 06 May 07:09

The Basic 'Problem' of Representation/Intentionality/Semantics/etc

We take brain states, utterances, and paintings to represent things. What’s that about?

Philosophers throw around a lot of terms/jargon when talking about this basic phenomenon:

  • Representation
  • Intentionality
  • Semantics
  • Meaning
  • Content
  • Aboutness
  • (many more!)

The basic idea: there are thinkers/reasoners who communicate with one another, and can represent the world as one way rather than another, to both themselves (e.g. in deliberative thought) and others (e.g. in speech).

What's all this business about interpreting symbols/signals to mean one thing rather than another? What distinguishes a cognitive system that truly represents the world from cognitive systems (if they can be called that) that just push electrical signals around?

Searle's Setup

What is Searle trying to show? He wants to show that running a computer program can’t be sufficient for having a mind/thinking. (Perhaps it is necessary).

Syntax vs. Semantics

  • “It is essential to our conception of a digital computer that its operations can be specified purely formally; that is, we specify the steps in the operation of the computer in terms of abstract symbols.” (670)

  • “But the symbols have no meaning; they have no semantic content; they are not about anything. They have to be specified purely in terms of their formal or syntactical structure.” (670)

Duplication vs. Simulation

  • “Those features [consciousness, thoughts, feelings, emotions], by definition, the computer is unable to duplicate however powerful may be its ability to simulate. ...No simulation ever constitutes duplication.” (673)

  • “...nobody supposes that the computer simulation is actually the real thing; no one supposes that a computer simulation of a storm will leave us all wet, or a computer simulation of a fire is likely to burn the house down.” (673)

The Chinese Room Thought Experiment

  • “Suppose I’m locked in a room and suppose that I’m given a large batch of Chinese writing.” He is given batches of Chinese writing and batches of ‘rules’ which “instruct me how I am to give back certain Chinese symbols with certain sorts of shapes in response”.
  • “From the external point of view...the answers to the Chinese questions and the English questions are equally good. But in the Chinese case, unlike the English case, I produce the answers by manipulating uninterpreted formal symbols. As far as the Chinese is concerned, I simply behave like a computer.”

(Above two quotes come from an earlier version of this argument, presented in "Minds, Brains and Computers")

The Point: The room, as a whole, spits out meaningful Chinese sentences. But the man does not understand Chinese.

  • “...You behave exactly as if you understand Chinese, but all the same you don’t understand a word of Chinese. But if going through the appropriate computer program for understanding Chinese is not enough to give you an understanding of Chinese, then it is not enough to give any other digital computer an understanding of Chinese.” (671)
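
To underline how little the man's procedure involves, here is a toy sketch of the purely formal rule-following Searle describes. The rule-book entries are invented placeholders; what matters is that the procedure pairs symbol shapes with symbol shapes and never consults what any symbol means.

```python
# Toy 'rule book' pairing uninterpreted symbol shapes with symbol
# shapes to hand back (entries invented for illustration). The
# operator--like Searle's man--needs no idea what any of them say.

RULE_BOOK = {
    "你好吗": "我很好",
    "今天天气好": "是的，很好",
}

def operate(squiggle: str) -> str:
    """Match the incoming shape against the rules and return the paired
    shape. Syntax all the way down: shape-matching, no interpretation."""
    return RULE_BOOK.get(squiggle, "请再说一遍")  # default shape for unmatched input

print(operate("你好吗"))  # sensible-looking output; zero understanding inside
```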

The Systems Reply: There’s an important disanalogy: the man ought not be compared to the computer since the computer is analogous to the whole system—the rules books, the data banks of Chinese symbols, etc. The man is more like the CPU, where the rule book is like the program stored in memory. So perhaps the whole system does understand Chinese.

Searle’s Response: My point applies even if the man memorizes the rule books and applies the rules entirely in his head. Neither the system nor the man understands Chinese.

An Additional Reply: There are actually two ‘programs’ running on the same ‘hardware’, the man. There are really two minds instantiated in the man. The Chinese program, instantiated in the man, understands Chinese, but the man does not. And the English program certainly doesn’t understand Chinese.

Why I Don't Like Searle's 'Argument'

1. The unearned syntax/semantics cudgel

Wait--is this even an argument?

Basic problem: beats you over the head with “it’s mere syntax!” without really providing a positive account of what semantic content is, and why that semantic content has to be something over and above pure syntax.

Searle makes super clear that he believes you can't successfully reduce semantic truths to 'mere' syntactical truths, but I don't see much of an attempt to argue this point beyond repeating the same basic idea over and over again.

If there's something more to the view than "isn't it obvious!", then he didn't include that reasoning here. Besides, this is an actively debated issue!

2. The proposed 'solution' doesn't really make sense?

  • The problem, remember: we need an ingredient over and above purely syntactical operations to imbue a system with real intentionality.
  • Searle's ultimate suggestion: Our biological makeup must play that role?
  • Basic problem: Wait, what? How? That is, what makes moist computers any better off intentionality-wise than a functionally similar system made of less moist material? Non sequitur.

3. Characterizes the computer model of the mind uncharitably

Searle leans into the familiar “digital computer” sense of “computer model...”, but that's not really warranted, given what proponents of that model actually believe. The charitable thing he could have said: there’s a sense of computation which isn’t tied to a specific artifact, but to a broader idea of information-processing and symbol manipulation as part of an individual system's strategy for acting upon the world and making decisions.

4. Stylistically: sloppy, blustery, arrogant, dismissive...

Taking philosophy seriously means taking your own temperament when reasoning seriously. And the right temperament is not the one on display in this paper.

18 - The Mind as Computer 1

49m · Published 05 May 20:17

Block’s “Psychologism and Behaviorism” has two aims: 1) home in on the best version of behaviorism about intelligence, 2) show that even this very best behaviorist account of intelligence fails (for broadly ‘functionalist’ reasons).

A terminological disclaimer...

Block’s preferred philosophical terminology is idiosyncratic...

  • Block’s “psychologism” (about cognition/intelligence) = our “functionalism” (about cognition/intelligence)
    • “Let psychologism be the doctrine that whether behavior is intelligent behavior depends on the character of the internal information processing that produces it.”
  • Block’s “functionalism” = our “conceptual functionalism” (about all mental states)

Block’s Neo-Turing Test

Block’s description of the original Turing Test:

  • “The Turing Test involves a machine in one room, and a person in another, each responding by teletype to remarks made by a human judge in a third room for some fixed period of time, e.g., an hour. The machine passes the test just in case the judge cannot tell which are the machine's answers and which are those of the person.”

And for what is the Turing Test a test? Intelligence!

  • Caveat: “Note that the sense of ’intelligent’ deployed here--and generally in discussion of the Turing Test--is not the sense in which we speak of one person being more intelligent than another. ‘Intelligence’ in the sense deployed here means something like the possession of thought or reason.”

So to a first approximation, the conception of intelligence we find embedded in the Turing Test...

i. Intelligence just is the ability to pass the Turing Test (if it is given). (‘crude operationalism’)

Basic problem: Measuring instruments are fallible, so we shouldn’t confuse measurements for the thing being measured.

  • No Turing Test judge will be infallible. Or, rather, we shouldn’t want our test to vary, depending on who happens to be the judge.

Initial solution: Put it in terms of behavioral dispositions instead...

  • You can fail the test in some weird corner case while still having the general disposition to pass the test in most situations/scenarios. A single failure should not be conclusive evidence against a system having real intelligence.

ii. Intelligence just is the behavioral disposition to pass the Turing Test (if it is given). (Familiar Rylean Behaviorism)

  • Basic problem: “In sum, human judges may be unfairly chauvinist in rejecting genuinely intelligent machines, and they may be overly liberal in accepting cleverly-engineered, mindless machines.”

  • Initial solution: Replace the imitation game with a simpler game of ‘simply produce sensible verbal responses’

    • Ex: Compare two responses to the question, “So what have you been thinking about?”: 1) “I guess you could say that I’ve been thinking about everything...or maybe nothing? it’s just so gosh darn boring to be stuck inside not talking to anyone.”, 2) “A contagious symphony waltzed past my window. The third of february frowned while I stung a dagger.”
    • Note: this move away from the imitation game to a simpler “are these verbal responses sensible?” test drastically lowers the bar. But (according to Block) we’ll see that behaviorists can’t even clear this lowered bar.

iii. “Intelligence (or more accurately, conversational intelligence) is the disposition to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.”

Basic problem: The standard functionalist objections to behaviorism.

  • In particular, the ‘perfect actor’ objection (think Putnam’s super-super spartans) shows that a determined deceiver could fool a test.

But! And here’s the crucial point:

  • “As mentioned earlier, there are all sorts of reasons why an intelligent system may fail to be disposed to act intelligently: believing that acting intelligently is not in its interest, paralysis, etc. But intelligent systems that do not want to act intelligently or are paralyzed still have the capacity to act intelligently, even if they do not or cannot exercise this capacity.”

Upshot: The behavioral tests are maybe not necessary for intelligence, but they’re perhaps sufficient!

iv. “Intelligence (or, more accurately, conversational intelligence) is the capacity to produce a sensible sequence of verbal responses to a sequence of verbal stimuli, whatever they may be.” (Block’s Neo-Turing Test for Intelligence)

Perhaps behaviorism gives the right account of intelligence, even if not the right account for other kinds of mental states/properties...

Block against Block’s Neo-Turing Test

Block’s objection to the Neo-Turing Test’s conception of intelligence comes down to a fairly simple claim: we can conceive of a system that has the capacity to produce perfectly sensible outputs, without itself producing those outputs intelligently

On-the-fly reasoning is more sophisticated than a simple lookup table--and it’s that more sophisticated thing that we’re trying to characterize.

The Conceivability Test

  • “The set of sensible strings so defined is a finite set that could in principle be listed by a very large and clever team working for a long time, with a very large grant and a lot of mechanical help, exercising imagination and judgment about what is to count as a sensible string.”
  • “Imagine the set of sensible strings recorded on tape and deployed by a very simple machine as follows. The interrogator types in sentence A. The machine searches its list of sensible strings, picking out those that begin with A. It then picks one of these A-initial strings at random, and types out its second sentence, call it ‘B’. The interrogator types in sentence C. The machine searches its list, isolating the strings that start with A followed by B followed by C. It picks one of these ABC-initial strings and types out its fourth sentence, and so on.”
  • “...such a machine will have the capacity to emit a sensible sequence of verbal outputs, whatever the verbal inputs, and hence it is intelligent according to the neo-Turing Test conception of intelligence. But actually, the machine has the intelligence of a toaster. All the intelligence it exhibits is that of its programmers.”
  • “I conclude that the capacity to emit sensible responses is not sufficient for intelligence, and so the neo-Turing Test conception of intelligence is refuted (along with the older and cruder Turing Test conceptions).”
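
Block's machine is simple enough to sketch directly. Assuming a hypothetical pre-computed SENSIBLE_STRINGS list (the part that takes the team, the grant, and the mechanical help), the runtime logic is nothing but prefix matching and a random pick:

```python
import random

# SENSIBLE_STRINGS stands in for the astronomically large, hypothetical
# list of all sensible hour-long conversations; a three-entry stub here.
# Each tuple alternates interrogator and machine sentences.
SENSIBLE_STRINGS = [
    ("Hello.", "Hi there.", "How are you?", "Fine, thanks."),
    ("Hello.", "Hi there.", "Nice weather.", "It really is."),
    ("What's new?", "Not much.", "Same here.", "Ha!"),
]

def reply(conversation_so_far: list) -> str:
    """Isolate the sensible strings that begin with the conversation so
    far, pick one at random, and emit its next sentence. No reasoning
    occurs at runtime: pure table lookup, as in Block's description."""
    n = len(conversation_so_far)
    matches = [s for s in SENSIBLE_STRINGS if list(s[:n]) == conversation_so_far]
    return random.choice(matches)[n]  # by construction the full list always matches

convo = ["Hello."]
convo.append(reply(convo))    # machine's turn: "Hi there."
convo.append("How are you?")  # interrogator's turn
convo.append(reply(convo))    # machine's turn: "Fine, thanks."
print(convo)
# All the exhibited 'intelligence' lives in whoever compiled the list.
```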

The Upshot: The Cognitive/Rational/Intelligent Mind as a Computer

Our concept of intelligence involves something more than just the ‘external’ pattern of sensory inputs and behavioral outputs. The internal pattern of information processing that produces those inputs/outputs also matters. Those patterns need to be produced in the right way--in a way that looks something like ‘abstract rational thought’ rather than some dumb lookup table.

TL;DR: functional organization matters to intelligence!

Objections to Block

Objection 1: “Your argument is too strong in that it could be said of any intelligent machine that the intelligence it exhibits is that of its programmers.”

Block’s Reply: “The trouble with the neo-Turing Test conception of intelligence (and its predecessors) is precisely that it does not allow us to distinguish between behavior that reflects a machine's own intelligence, and behavior that reflects only the intelligence of the machine's programmers.”

Objection 3(ish): This is a merely verbal dispute. You insist that the concept of intelligence ‘includes something more’ than mere input/output patterns. But unless you can specify what this extra special ingredient is, you’re just helping yourself to a magical (potentially-spooky?) conception of ‘intelligence’!

Block’s Reply: “...my point is based on the sort of information processing difference that exists.” All you need to grant is that there is an interesting and worthwhile difference between lookup-table algorithms and ‘on-the-fly’ reasoning algorithms. Call it whatever you want. That seems to be something we care about when we talk about ‘intelligence’.

Objection 6(ish): Are you sure what you’ve described is actually conceivable? Don’t you run into a problem where the physical size of your lookalike intelligence would have to be larger than the size of the physical universe?

Block’s Reply: “My argument requires only that the machine be logically possible, not that it be feasible or even nomologically possible. Behaviorist analyses were generally presented as conceptual analyses, and it is difficult to see how conceptions such as the neo-Turing Test conception could be seen in a very different light.”

My Objection (?): Doesn’t “actual and potential behavior”--the sort of thing that a Rylean logical behaviorist likes--ultimately stem from the pattern of ‘internal’ information processing? That is, the two always in fact go together. And so for any postulated “lookalike” intelligence, so long as it’s finite (which it has to be), we can dream up scenarios in which it would malfunction in a telling way, or otherwise fail to demonstrate the full range of cognitive flexibility that a ‘real’ intelligence can exhibit.

A reply (?): Remember that Block’s point is conceptual, rather than empirical. Those two things (input/output patterns vs. information-processing patterns) may in fact be tightly connected. But that tight connection would be explained via the information-processing conception.

17 - Thinking and Computation

4m · Published 16 Apr 23:18
*NOT* covering Turing's "Computing Machinery and Intelligence”

16 - Scientific Theories of Consciousness (BONUS)

22m · Published 16 Apr 01:38

1. The Functions of Consciousness

Everyday considerations

Through consciousness, and only through consciousness, are we able to do many of the things we do. Think about when you’re not consciously aware of something vs. when you are.

  • Ex. Unconsciously tapping your foot. Somebody tells you to stop. This brings your tapping to attention. You can now control it.

Conscious attention allows for top-down control, for planning and initiating intentional action.

Empirical findings

(i) Durable and explicit information maintenance

  • “We suggest that, in many cases, the ability to maintain representations in an active state for a durable period of time in the absence of stimulation seems to require consciousness.”
  • Ex. “The classical experiment by Sperling (1960) on iconic memory demonstrates that, in the absence of conscious amplification, the visual representation of an array of letters quickly decays to an undetectable level. After a few seconds or less, only the letters that have been consciously attended remain accessible.”

(ii) Novel combinations of operations

  • “The strategic operations which are associated with planning a novel strategy, evaluating it, controlling its execution, and correcting possible errors cannot be accomplished unconsciously...Such processes are always associated with a subjective feeling of ‘mental effort’, which is absent during automatized or unconscious processing”

(iii) Intentional behavior

  • “A third type of mental activity that may be specifically associated with consciousness is the spontaneous generation of intentional behavior.”
  • Ex. Blindsight patients (subjects who are partially blind, but still can identify, above chance, objects in their blindspot) “never spontaneously initiate any visually-guided behavior in their impaired field. Good performance can be elicited only by forcing them to respond to stimulation.”

We’re looking for the capacity for top-down control, for a central ‘organizer’...

The Global Workspace Theory of Consciousness

“Within this fresh perspective, firmly grounded in empirical research, the problem of consciousness no longer seems intractable.”

The Modular Mind
The mind is composed of a huge collection of special-purpose devices which operate smoothly in their own domain. A module is a relatively independent information-processing machine. The brain is made up of a hierarchy of modules, organized into major subsystems, like vision, audition, motor control, somatosensory perception, emotion, etc.

Dual Process Theory
The dual-process theory posits that when an organism encounters a cognitive task, it can deploy two kinds of systems to perform that task. These two systems are distinct, both functionally and evolutionarily. What has been called system 1 is fast, parallel, automatic, relatively rigid, and unconscious. System 2 is slow, serial, deliberate, flexible, and conscious. It’s slow, serial, and deliberate since it involves conscious analysis by means of abstract categories and concepts.
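
A crude way to see the functional contrast (my own caricature, not the theory's actual mechanisms): System 1 as fast retrieval of a cached answer, System 2 as slow, serial computation that kicks in when the automatic route fails.

```python
# Invented caricature of dual-process task handling.

CACHED_ANSWERS = {("7", "*", "8"): 56}  # overlearned facts: fast, automatic

def system1(task):
    """System 1: fast/parallel/automatic retrieval; rigid, and silent
    (returns None) outside its practiced domain."""
    return CACHED_ANSWERS.get(task)

def system2(task):
    """System 2: slow, serial, deliberate computation, step by step."""
    left, op, right = task
    assert op == "*"              # the caricature only handles multiplication
    result = 0
    for _ in range(int(right)):   # laborious repeated addition stands in
        result += int(left)       # for effortful conscious processing
    return result

task = ("13", "*", "17")
answer = system1(task)   # the automatic attempt comes first...
if answer is None:       # ...deliberation engages only when it fails
    answer = system2(task)
print(answer)  # 221
```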

The Global Workspace

(i) “Besides specialized processors, the architecture of the human brain also comprises a distributed neural system or ‘workspace’ with long-distance connectivity that can potentially interconnect multiple specialized brain areas in a coordinated, though variable manner. Through the workspace, modular systems that do not directly exchange information in an automatic mode can nevertheless gain access to each other’s content.”

(ii) Top-down attentional amplification is “the mechanism by which modular processes can be temporarily mobilized and made available to the global workspace, and therefore to consciousness.”

(iii) “According to workspace theory, conscious access requires the temporary dynamical mobilization of an active processor into a self-sustained loop of activation: active workspace neurons send top-down amplification signals that boost the currently active processor neurons, whose bottom-up signals in turn help maintain workspace activity. Establishment of this closed loop requires a minimal duration, thus imposing a temporal ‘granularity’ to the successive neural states that form the stream of consciousness.”

(iv) “At least five main categories of neural systems must participate in the workspace: perceptual circuits that inform about the state of the environment; motor circuits that allow the preparation and controlled execution of actions; long-term memory circuits that can reinstate past workspace states; evaluation circuits that attribute valence in relation to previous experience; and attentional or top-down circuits that selectively gate the focus of interest. The global interconnectedness of those five systems can explain the subjective unitary nature of consciousness and the feeling that conscious information can be manipulated mentally in a largely unconstrained fashion.”

The Global Workspace hypothesis: consciousness is constituted by the broadcasting of information in a global workspace in a sufficiently-configured system.
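
Here is a schematic toy sketch of the broadcast architecture in (i)-(iv). The module names, activation numbers, and winner-take-all attention rule are invented stand-ins for exposition, not the theory's actual neural dynamics.

```python
# Invented stand-in for the workspace architecture: specialized modules
# compete; top-down amplification mobilizes the most active one; its
# content is then broadcast globally to every other module.

modules = {
    "vision":   {"content": "red light ahead", "activation": 0.9},
    "audition": {"content": "faint music",     "activation": 0.3},
    "memory":   {"content": "route home",      "activation": 0.5},
}

def broadcast(modules: dict) -> str:
    """Mobilize the most active processor into the workspace and boost
    it (a crude stand-in for the self-sustained amplification loop)."""
    winner = max(modules, key=lambda name: modules[name]["activation"])
    modules[winner]["activation"] += 0.2
    return modules[winner]["content"]

conscious_content = broadcast(modules)
for module in modules.values():
    module["workspace_input"] = conscious_content  # global availability
print(f"globally broadcast: {conscious_content}")  # 'red light ahead'
```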

15 - Illusionism

26m · Published 13 Apr 09:03

Frankish's starting point: Maybe the hard problem seems unsolvable because it is. That weird-looking thing? Just an illusion! (No wonder you were confused!)

We trade the hard problem for the illusion problem: the problem of explaining why we say things like "consciousness seems weird and out-of-place!"

  • Note: an instance of an easy problem (from Chalmers)

Illusionism: Basic Characterization

When we confront a seemingly-anomalous feature of the world, there are three basic philosophical strategies:

  • Radical realism: Accept the anomalous feature at full strength. Accept any/all consequences that may follow...
  • Conservative realism: Accept a watered-down version that more easily fits into our existing conception of the world.
  • Illusionism: Accept there's an appearance that needs to be explained, but only ever appeal to already-accepted features of the world.

Some Analogies: Dennett's 'user illusion' (consciousness as the ultimate 'immersive experience'), impossible objects (Escher drawings), the projection of 'secondary qualities' onto a colorless world.

Two potential sources for the illusion: 1) sensory awareness of the external world, and 2) self-monitoring of our own internal states.

Motivating Illusionism

Frankish argues that illusionism about consciousness is more attractive than the main alternatives...

Argument via elimination...

Against radical realism: We should inherit from science a preference for conservative (rather than radical) theoretical moves.

  • "The principle of conservatism should apply with special force, I suggest, when the pressure for radical innovation comes from a parochial, anthropocentric source, such as introspection." (p. 10)
  • Plus, a modest/conservative approach to consciousness can better explain how conscious states have a causal impact.

Against conservative realism: It's unstable. Either it makes room for something weird and extra-physical (collapsing into radical realism) or it makes do with the same austere metaphysics that full-blown illusionists make do with.

  • "...if phenomenal concepts refer to feels, then the challenge to conservative realists remains. They must either explain how these feels can be physical or accept that phenomenal concepts misrepresent experience, as illusionists claim." (p. 11)

...assisted by two 'advantages' for illusionism

  1. Our claims and beliefs about consciousness must, themselves, be the result of a physical process (if they're to affect the words I type out as I write...)
  2. "In general, apparent anomalousness is evidence for illusion." (p. 13)

Objections to Illusionism

Objection 1 (Denying the Data): We each have direct, immediate access to our own conscious experiences. Experiences are our epistemic 'core'--they form the foundation upon which the rest of our beliefs can be built.

  • Frankish reply: direct, infallible 'acquaintance' can't be physically realized in a cognitive system like ours, which has to send signals back and forth across a 'noisy' world.

Objection 2/3 (No appearance-reality gap/Who is the audience?): Talk of an "illusory appearance" presupposes a) the existence of the appearance, and b) an audience who is under the spell of the illusion.

  • Frankish reply: We can still make sense of misleading sensory information, and of such information being 'overridden', without invoking anything spooky and non-physical.

Objection 4 (representing phenomenality): If there ain't no phenomenal consciousness, where did the concept "phenomenal consciousness" come from?

  • Frankish reply: Not all theories of meaning require that the referent of a concept exist. Illusionists should treat "phenomenal consciousness" the same way we'd treat bad theoretical posits (e.g. "phlogiston") or inherited confusions (e.g. "ghosts" or "spirits").

14 - The Hard Problem

35m · Published 10 Apr 22:59

"Consciousness fits uneasily into our conception of the natural world. On the most common conception of nature, the natural world is the physical world. But on the most common conception of consciousness, it is not easy to see how it could be part of the physical world. So it seems that to find a place for consciousness within the natural order, we must either revise our conception of consciousness, or revise our conception of nature." (p.247)

Easy Problems vs. The Hard Problem

The easy problems of consciousness involve explaining all the complex computational and cognitive processes that are associated with consciousness--things like: rational control of behavior, monitoring internal states, verbal reports of experience, etc.

The hard problem of consciousness comes to this: explaining why all that cognitive processing feels like something. It sure doesn't seem like conscious experiences can be directly read off from any physical/functional description. But so then what is it about our world that makes all that physical activity come along with conscious experience? What is it about our world that makes us non-zombies?

An Epistemic Gap

There are three closely-related arguments which, in their own way, reveal an epistemic gap between the physical and the phenomenal:

  • The Explanatory Argument: Physicalist accounts can at most explain the structural/functional features of the world, but that's a poor match for the thing we're trying to explain, phenomenal consciousness...
  • The Conceivability Argument: There's no internal contradiction in the idea of a zombie world, and so our world and the zombie world must be distinct metaphysical possibilities...
  • The Knowledge Argument: The truths about consciousness are not deducible from mere physical truths...

The general form of these arguments:

  1. There is an epistemic gap between physical and phenomenal truths.
  2. If there is an epistemic gap between physical and phenomenal truths, then there is an ontological gap, and materialism is false.
  3. Conclusion: Materialism is false.

The basic upshot: there's a mismatch between the concepts we use to talk/think about consciousness and the concepts we use in physical/functional/structural descriptions. It can then seem impossible to explain the phenomenal facts of the world in terms of the dynamical and structural aspects of the 'underlying' physical world. While we can assert some brute connection between the two kinds of facts/descriptions, that won't tell us why that brute connection exists.

Materialist Accounts

Type-A Materialism denies the existence of an epistemic gap between the phenomenal and the physical. This can take two forms: conceptual functionalism ("our phenomenal concepts are ultimately just functional concepts") or eliminativism/illusionism ("our phenomenal concepts point to something over and above the basic cognitive functions of the brain, but nothing like that exists"). Both versions simply deny the existence of the hard problem, thinking that once we've solved the easy problems, there's nothing left over to explain.

  • Basic Problem: Seems to deny the coherence of scenarios that are clearly coherent (zombies, inverts). They need to provide some strong positive reason to deny the coherence of these seemingly-coherent scenarios, but that seems hard to come by.

Type-B Materialism (empirical functionalism) accepts the existence of an epistemic gap, but denies that it entails an ontological gap. That is, while there's a difference in the way we conceive of cognitive functioning vs. phenomenal experience, the two can nevertheless be revealed by science to be one and the same thing.

  • Basic Problem 1: Other successful empirical reductions worked differently. They worked by explaining some high-level functional features in terms of other lower-level functional features. And so the analogy to other empirically-supported identities fails.

  • Basic Problem 2: The move to a brute phenomenal-physical identities is ad hoc and unmotivated. It's the move you'd make to save materialism--but why save materialism? There should be some positive motivation.

Type-C Materialism accepts the appearance of an epistemic gap, but holds out hope for that gap to be closed by some future theoretical progress.

  • Basic problem: "epistemic implication from A to B requires some sort of conceptual hook by virtue of which the condition described in A can satisfy the conceptual requirements for the truth of B. ... Once we accept that the concept of consciousness is not itself a functional concept, and that physical descriptions of the world are structural-dynamic descriptions, there is simply no conceptual room for it to be implied by a physical description" (260)

Non-Materialist Accounts

Type-D Dualism (interactionism) holds that phenomenal aspects of the world causally interact with the physical aspects of the world, despite having their own separate kind of existence.

  • Basic Problem: The causal closure of the physical. That is, physics tells us that physical effects have physical causes. There's no room for some external influence on physical systems. All physical activity is explicable in purely physical terms.

  • Chalmers Reply: Some interpretations of quantum mechanics make room for observation playing a key role in triggering the collapse of the wave function. More work needs to be done, but it's not incoherent for physical process to be tangled up with the experiences of observers.

Type-E Dualism (epiphenomenalism) "holds that phenomenal properties are ontologically distinct from physical properties, and that the phenomenal has no effect on the physical" (263)

  • Basic Problem: Epiphenomenalism makes consciousness into this dangling bit of reality. Why is it there? Note that you and your zombie twin, on this view, behave the exact same way, for all the same physical reasons, it's just that you happen to enjoy this extra phenomenal thing for some unexplained reasons. Why not just think you're a zombie? That we're not zombies is this unexplained 'miracle'.

  • Chalmers Reply: You and your zombie twin actually have different beliefs, because your beliefs about consciousness are tied directly to your first-person experience, whereas your zombie twin's are not.

Type-F Monism (Russellian Monism) holds that the basic building blocks of the universe have some mental aspect to them, which is responsible for consciousness in our universe. While physics captures the structure and dynamics of these basic building blocks, it says nothing directly about their intrinsic nature.

  • Basic Problem: Our only conception of the phenomenal comes from our own first-person experience, and we have no clue how unified points of view emerge from all these little bits of mentality sprinkled throughout the universe. (The combination problem)

13 - Inverted Spectrum

37m · Published 09 Apr 22:37

Phenomenal Character

To know that some organism or system is p-conscious does not yet tell us anything about the specific, fine-grained character of their phenomenology.

There are different phenomenal modalities (types, kinds).

  • Ex: visual experience, bodily awareness, pain experience, emotional experience
  • Note: there may be phenomenal modalities with which humans are not familiar (Ex: Bat Sonar)

We can describe a subject's phenomenology at differing levels of specificity. (i.e. "color experience" vs "red experience" vs "crimson experience")

Two Kinds of Functionalism

Conceptual functionalism (Analytic functionalism, a priori functionalism): That mental states are individuated by their functional roles can be read off from the way we talk/think about the mind.

Empirical Functionalism (Psychofunctionalism, a posteriori functionalism): That mental states are individuated by their functional roles is a finding of empirical science. It could have turned out differently, but our best science says that phenomenal differences need to be grounded/explained in terms of functional differences.

Two skeptical challenges to functionalism...

Absent qualia: "I can conceive of a functional duplicate that is not p-conscious..."

Inverted qualia: "I can conceive of a functional duplicate that has phenomenal-green experience where I have phenomenal red experience..."

Under conceptual functionalism, such cases are supposed to be incoherent.

If there were real cases of inverted qualia, empirical functionalists would look to vision science for a functional explanation of this difference...

Nida-Rümelin's "Pseudonormal Vision"

Pseudonormal vision: "red" cones and "green" cones swap pigments, so what would have been a "red" signal becomes a "green" signal and what had been a "green" signal becomes a "red" signal...
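
A two-function sketch of the swap (signal names are placeholders, and the physiology is flattened to one step): downstream, the channel that normally carries the "red" signal now carries the "green" one, and vice versa.

```python
# Placeholder sketch of pigment-swapped cones. Normal wiring maps light
# to a signal; the pseudonormal case exchanges which signal gets sent.

def normal_cones(light: str) -> str:
    return {"long-wavelength": "red-signal",
            "medium-wavelength": "green-signal"}[light]

def pseudonormal_cones(light: str) -> str:
    swapped = {"red-signal": "green-signal",  # what would have been a
               "green-signal": "red-signal"}  # "red" signal becomes "green"
    return swapped[normal_cones(light)]

print(pseudonormal_cones("long-wavelength"))  # 'green-signal' where normal vision sends 'red-signal'
```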

These cases aren't just conceivable, they really exist! Or, at the very least, our best science seems to take the hypothesis of red/green phenomenal inversion seriously.

Either empirical functionalists would need to rule out this hypothesis a priori, which would turn them into conceptual functionalists. Or they would have to distrust empirical science, which would violate the "empirical" part of "empirical functionalism".

This is all supported by the "plausible prima facie constraint": "No hypotheses accepted or seriously considered in colour vision science should be regarded according to a philosophical theory to be either incoherent or unstatable or false."
