
pathways (letters)

8 November 1997

Dear George,

Thank you for your letter of 30 October. I am pleased to see that you are prepared to have a go at the other minds problem. It's a tough nut to crack! But let us see.

'I have a mind but nobody else does. The problem here is that you too (and he, and he, and he) are equally convinced that you are the one in that position.' — If I have a mind and you do not, then the sounds that come out of your mouth when you express your perplexity about the other minds problem are just sounds, not the expression of conscious thoughts. So it seems the inference you want to make is not possible here!

On second thoughts, let us look more closely at the idea of a 'mind-less robot'. Let's say you and I meet. There are two distinct possibilities:

a. Aliens on the planet Zog have created a simulacrum of a human being, which they control by radio. This splits into two further alternatives:

a'. An alien called Zpatkl has been assigned to control the George robot. In that case, in conversing with 'George' I am in communication with another mind. The George simulacrum is in reality the 'mask' used by Zpatkl. (Perhaps these aliens are frighteningly ugly, so this is the only way they could make contact with us.)

a''. The aliens have placed a tape recorder in the George simulacrum with pre-recorded stock phrases, like the doll that says, 'Mummy', 'Milk!', 'I love you' when you press different buttons. Only in this case, the vocabulary of stock phrases (and the statements or questions that trigger them) is sufficiently large to fool me into thinking that the things I say are receiving a genuine response.

— In both a' and a'', there are circumstances under which I might discover 'the truth'. Any scepticism I might entertain regarding the true significance of our 'conversations' is merely of the inductive variety.

b. You are not a simulacrum from the planet Zog. You are a flesh and blood human being. There is just one thing you lack: the stuff of consciousness. All is darkness inside. On this hypothesis (which is a corollary of the version of mind-body dualism known as 'epi-phenomenalism') our possession of a brain and nervous system is all that suffices to account for the things we do, the noises that come out of our mouths etc. Only — in my case at least — I know that there is something 'extra', subjective mental stuff as well as objective physical stuff. For all I know, you and the other human beings I encounter are mere zombies.

This is the killer. The scepticism here is not inductive: nothing I could ever discover or experience would suffice to support or refute this form of scepticism, for, by hypothesis, everything that George with an 'inside' would say and do, George without an 'inside' would say and do also!

— Now I can see a possible application for something like the reverse of the argument you give. I, as a believer in epi-phenomenalism, claim that I have something extra 'inside' that a being materially indistinguishable from me (molecule for molecule) might conceivably lack. Whether a living human being satisfying such and such a material description has something extra 'inside' is a brute, inexplicable contingency. The strange thing about this zombie 'double' of GK is that, having a brain indistinguishable from mine, it professes to be an epi-phenomenalist, and is deeply puzzled about the other minds problem. You get the picture! (There are some missing steps in the argument to fill in.)

This argument, assuming of course that it is valid — something you might want to question — is tantamount to a reductio ad absurdum of epi-phenomenalism, and consequently a reductio ad absurdum of scepticism concerning other minds based upon espousal of the epi-phenomenalist theory. The only thing to add is that there is no alternative philosophical basis for a non-inductive scepticism concerning other minds. (After writing this, I remembered that there is a version of this argument in Unit 3, First Dialogue!)

In a way, the response to b. says in effect that 'none of us has a mind' in the epi-phenomenalist's sense. This leaves Cartesian interactionism still in the running; but that theory is capable, in principle, of being refuted by empirical investigation (we haven't got there yet!).

Now, I understand your last remark about the motor car as a way of expressing a preference for epi-phenomenalism over interactionism. So if my argument is valid — or could be rendered valid with the addition of suitable extra steps — you have a problem.

Regarding the first of your two remarks, I am not so impressed by the fact that human beings are grown whereas robots are made, as I am by the fact that human beings acquire their character and experience from living a life: being born, being dependent on a parent, growing up. Perhaps the best way to manufacture robots (or 'androids') would be to grow them in some way; while, as you concede, cloning human beings is a form of manufacture.

In the film Blade Runner (very loosely based on the science fiction novel Do Androids Dream of Electric Sheep? by Philip K. Dick — a favourite sci-fi writer with philosophers, incidentally), the androids are physically indistinguishable from flesh and blood human beings. The only significant difference is the way their minds are 'made'. To detect an android, you have to subject it to psychological tests. The minds of androids are effectively cobbled together from copies of human memories. The more advanced versions don't even know they are androids, yet in reality (as a skilled tester is able to discover) their characters lack a crucial dimension. They have a form of 'consciousness' but they are not whole selves. — Arguably, the most skilled programmer of android 'minds' can only ever mimic a human character. There is no 'programmable' substitute for real experience.

Yours sincerely,

Geoffrey Klempner