
If we have free will, does Mr Data?

Geoff Arnold posts this:

There’s a nice review of books related to the “free will” debate over at the Financial Times. If you’re unfamiliar with the radical findings of Libet et al, you should check it out.

I used to be a philosophy student. Lots of things moved me away from it (like “what are you gonna do, open a philosophy store?”) but among them was the realization that while the arguments were often kind of fun, a lot of them were essentially silly — they depended on definitions carefully constructed to allow for infinite disputation, but which, on examination, don’t actually offer any insight.

Any discussion of “free will” is an unendingly productive source of silly disputation.

First off, let’s start with the notion of “free will,” quoting from the FT article.

“If I had free will, I would choose to be funnier. I would choose always to have the right witty riposte ready to disarm adversaries and delight friends. But sadly, it is not so. My lot is for the same lame old gags to hobble out whether I will them to or not, like embarrassing aunts at a wedding.”

This argument (along with Scott Adams’ continual maundering about the topic) is based on a notion of “free will” that requires noncausal omnipotence. Look at the example in the Financial Times article: “If I had free will, I’d choose to be funnier,” etc.

Fine. If I had free will, I’d choose to speak fluent French, have ten million dollars, be able to float in mid-air, and heal Cathy Seipp’s lung cancer. If our definition of “free will” means “able to choose to do anything, in violation of physical law, conservation of mass-energy, and temporality” then it’s no surprise we don’t seem to have free will. (It does seem to be a lovely example of a straw man, though.)

On the other hand, one might “choose to be funnier” and, with some effort, succeed. Does the fact that one might, by practice, become funnier, or become fluent in French for that matter, then prove Cave, the author of the FT piece, wrong?

Whatever he’s talking about, I don’t think it’s what most people mean by “free will.”

The second point (the one Geoff calls out explicitly) is the neurophysical one (again quoting from the FT article):

But of course we have free will, you might be thinking. You could prove it by, for example, choosing to raise your arm at some point in the next five seconds. Go on then. Done it? There, that was easy. Of your own volition, at the time of your choice, you moved your arm: QED.

But the American neuroscientist Benjamin Libet has shown that before every such movement, there is a distinctive build-up of electrical activity in the brain. And this build-up happens about half a second before your conscious “decision” to move your arm. So by the time you think, “OK, I’ll move my arm,” your body is halfway there. Which means your conscious experience of making a decision – the experience associated with free will – is just a kind of add-on, an after-thought that only happens once the brain has already set about its business. In other words, your brain is doing the real work, making your hands turn the pages of this magazine or reach over for your cup of tea, and all the time your conscious mind is tagging along behind.

Oddly, it seems that the physical action starts before you are “conscious” of having decided to take the action. Fascinating observation, but what does it have to do with “free will”? Let’s say I reach over to pet the cat next to me on the bed. First I wasn’t petting him, then I wrote this, then I … reached over to pet him. There’s no question I “chose” to pet him before I actually performed the act, because I made the decision, then wrote about it, then did it. If you were scanning my brain, I’m perfectly comfortable with the idea that there would be brain potentials matching the motion, followed by the motion. But then, I hadn’t been petting the cat before I decided; then I petted the cat … and threw in some neck scratching as well.

So why are the brain potentials before moving not part of “my” “choice”? Is the process that verbalizes or conceptualizes “I want to pet the cat” “me,” but the rest isn’t? Sounds to me like someone thinks there’s a special ghost part in there that is separate from the physical.

Could someone, reading this, predict that I’d scratch the cat’s neck beforehand? I doubt it, and there are a dozen possible actions I might have taken as an example of an unpredictable action. I might have blown my nose, for example, and might need to momentarily.

Is it free will?

If not, how can you distinguish it from free will? Let’s assume, for argument’s sake, that humans do have whatever it is we call “free will.”

Let’s consider the interesting notion of a “philosophical zombie”:

A philosophical zombie or p-zombie is a hypothetical being that is indistinguishable from a normal human being except that it lacks conscious experience, qualia, sentience, or sapience. When a zombie is poked with a sharp object, for example, it does not feel any pain. It behaves exactly as if it does feel pain (it may say “Ouch!” and so forth), but it does not actually have the experience of pain as a person normally does.

The notion of a philosophical zombie is mainly used in arguments (often called zombie arguments) in the philosophy of mind, particularly arguments against forms of physicalism.

P-zombies show up (either explicitly or in some kind of drag, like Searle’s “Chinese room”) in a lot of these discussions. Here, let’s consider a p-zombie, an android like LCDR Data in Star Trek. He has many human characteristics: he’s modeled on a human in external features (and he is “fully functional”, we’re told). But he’s a machine, a mechanism. Everything he does or says is the result of some collection of interactions in his “positronic brain.” Since he’s a kind of p-zombie, let’s abbreviate this to his “p-brain.”

With all due respect.

It would seem impossible for Data to have “free will.” Everything he does is determined by the preceding state of his p-brain, and could, in some abstract sense, be predicted by knowing his current state and whatever stimuli he’s receiving at the current instant.
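To make that concrete, here’s a toy sketch in Python. It isn’t positronics, and the transition rule is entirely invented for illustration; the point is just what “determined by the preceding state” means: the next action is a pure function of the current state plus the incoming stimulus.

```python
# Toy model of a deterministic "p-brain" (the rule here is invented,
# purely for illustration): the next action is a pure function of
# the current state and the incoming stimulus.

State = tuple  # stand-in for the full configuration of a p-brain

def p_brain_step(state: State, stimulus: int):
    """Same state + same stimulus always yields the same action."""
    new_state = tuple((s + stimulus) % 7 for s in state)
    action = "raise arm" if sum(new_state) % 2 == 0 else "pet the cat"
    return new_state, action

state = (3, 1, 4)
state, action = p_brain_step(state, stimulus=5)
print(action)  # anyone who knows the state and the stimulus can predict this
```

Given the same state and the same stimulus, the “choice” comes out the same, every time.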

But then … absent some supernatural entity, a “soul”, isn’t this just as perfectly a description of a human?

It doesn’t look good for free will at that point, but then let’s look more closely at our LCDR Data and peer into his “p-brain”. We know it’s a very complicated system, much more complicated than any computer we can build (how many computers could respond positively to a pass from Denise Crosby?)

One of the really major advances in knowledge of the last fifty years is the realization that perfectly deterministic systems can be unpredictable. These systems are called chaotic, and have the peculiar property that very, very small differences in initial conditions lead to wildly different outcomes. In sufficiently complex systems, like the weather, this “sensitive dependence on initial conditions” is such that a complete prediction of the weather’s exact behavior would literally require computing the exact state of all the molecules making up the whole system. In other words, if you want to predict the weather, you have to completely simulate the whole system. Inaccuracies in the initial conditions, or approximations in the computation, will inevitably cause your prediction to diverge from reality.
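Here’s a minimal demonstration of that sensitivity, using the logistic map rather than the weather (same phenomenon, far fewer molecules). Two trajectories start one part in a billion apart and are stepped by the identical deterministic rule:

```python
# Sensitive dependence on initial conditions in a perfectly
# deterministic system: the logistic map x -> r*x*(1-x), with
# r = 4.0 putting the map in its chaotic regime.

def step(x, r=4.0):
    """One deterministic iteration of the logistic map."""
    return r * x * (1.0 - x)

x_a = 0.300000000   # one trajectory
x_b = 0.300000001   # "the same" start, off by one part in a billion

for _ in range(50):
    x_a, x_b = step(x_a), step(x_b)

# By now the two trajectories bear no resemblance to each other,
# even though every single step of each was exactly determined.
print(f"x_a = {x_a:.6f}  x_b = {x_b:.6f}  gap = {abs(x_a - x_b):.6f}")
```

The initial error roughly doubles every step, so within a few dozen iterations the trajectories are completely decorrelated: to know where one of them ends up, you effectively have to run it.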

But then, that means the weather, or a similarly complex system, is unpredictable, at least in the special sense that it’s infeasible or impossible to compute a prediction of the behavior of the system.

It’s been a long digression, but with a point: I want to ask the question “what is the difference between being ‘merely’ unpredictable, and ‘free will’?” If, as we must suppose, LCDR Data, our p-zombie android, is a sufficiently complicated system to be sensitively dependent on initial conditions, then his behavior is necessarily going to be unpredictable, not just in the sense that we don’t know what he’ll do next but in the very much stronger sense that it’s infeasible, even impossible, to simulate him and predict what he’ll do next.

I propose that this unpredictability is indistinguishable from what we would call “free will.” Nothing we can do will tell us ahead of time what Data will do. If he were to reach over to pet his cat Spot, we can’t tell whether that is any more, or any less, an exercise of free will than when I petted Radar a few minutes ago. And that is the root of why it’s an essentially silly question: I don’t think the concept of “free will” can be defined well enough to answer the question. What we can say is that both people and p-zombie androids can entirely plausibly be unpredictable by any computation or mechanism; that, it would seem, is as good a definition of “free will” as we could hope for.

{ 2 } Comments

  1. Geoff Arnold | 2007-Mar-24 at 17:46 (@782) | Permalink

    All very fine and good, Charles. So why were the individuals studied by Libet and others so surprised (indeed angry) about these findings? The most plausible reason (which is confirmed by many other studies and surveys) is that most people are in fact good, old-fashioned Cartesian dualists. They believe that they have souls (spirits, purushas, etc.) that inhabit their bodies but are essentially independent of them. Furthermore, this soul is seen as the source of individuality, including character, memory, emotion, volition, personality, etc. This is seen clearly in the way that life after death is described, but it also arises in such common beliefs as spiritual possession, transmigration, out-of-body experiences, and so forth.

    For the believer in souls, your “whole system” stance doesn’t work. For them, the spirit proposes, the body disposes: the idea of preconscious body-based volition is incoherent.

    Yet another instance of the essentially counter-intuitive nature of science, eh?

  2. Charlie | 2007-Mar-24 at 18:03 (@794) | Permalink

    Well, yeah, but so what? Einstein wasn’t thrilled with probabilistic models of quantum mechanics, but they didn’t stop working because they were disturbing.

    Is the question whether there is or is not “a soul”, or whether or not there is something like “free will”? I kinda thought this one article was about free will.

    Cave’s (and Adams’) definition of “free will” seems to require such a degree of “freedom” as to form a straw man: so absurdly strong as to be vacuous, and one that implicitly presupposes that free will requires a non-physical source for the freedom.

    Nonetheless, people sure seem to be exhibiting free will, and the notion that the nervous activity required to move isn’t part of that will, while the thoughts about the motion are, is just Cartesian dualism in mechanistic drag.

    So what if there is a physical explanation for apparent non-determinism, choice, and “free will” in an apparently deterministic system?
