Science and its Delusions

I’ve decided it’s time to talk about a few things: such activities as riding a bike, driving a car, and typing a blog.  To me they are helpful examples for examining scientific explanations, the limits of science, and the delusions of some scientists.  This is an area I think about from time to time, usually after reading an article that makes rather bold statements.  Occasionally, though, my reaction is rather more pointed.  Like a red rag to a bull, just a few lines of an article on neuroscience were enough to provoke me.  (By the way, and before you read on, the only really technical stuff is in the next two paragraphs.  The rest of this essay is in easier and possibly more enjoyable language.)

Here we are, early into a discussion about the latest developments in neuroscience: “the theory of mind we all carry around with us and use every day has no basis in what neuroscience—Nobel Prize winning neuroscience–tell us about how the brain works. Neuroscience has revealed that the theory is quite as much of a dead end as Ptolemaic astronomy. It’s been around for such a longtime only because it was the predictive device natural selection came up with, in spite of being fundamentally mistaken about how things were really arranged.” [i]  Did you like the bit about ‘Nobel Prize winning neuroscience’?  That was guaranteed to get me going, even before I read the rest!!  I guess that was exactly what the author intended.

The core of the article described how researchers investigated the behaviour of the brain in a study of rats (apparently humans aren’t so keen to have continuous brain scanning taking place to monitor what’s happening as they wander around!).  The researchers:

“correlated specific locations of the rat and landmarks in its cage with specific neuronal circuits distributed around the entorhinal cortex. Then they could interpret the firings as a correct representation, a map for them, of where the rat is, where it’s going and what’s in the cage. They could read off the rat’s location without watching the rat at all! But note, neither the rat nor any part of its brain constructs a map from the neural firings. It’s not giving the neural circuits content, treating them as containing statements about where the rat is. Experimenters decode firing patterns. Rats don’t. They ‘re just driven by them. Firings are all the same, all over the brain—rat and human. What makes some neural firings into location-recorders and other firings into odor-recorders is just their place in the causal chain, the pathway to further behavior. Rats choose among alternative pathways as a result of neural firings produced by previous experience. But it’s not because these neuron circuits contain statements about anything. The neurons don’t represent to the rat the way it’s world is arranged. So they don’t work any thing like the way beliefs have to work, pairing up with desires via shred content about means and “ends. That goes for our neuronal circuits, assemblies, modules, region, too.”[ii]

Perhaps we can make sense of all this by using my suggested examples.  I’d like to start with an easy one: riding a bike.  Do you remember learning to ride a bike?  For most of us (before the invention of training wheels; how embarrassing was that idea!), learning to ride a bike meant pedalling along while an adult (usually an unfortunate and rapidly tiring father) ran alongside, holding the bike vertical.  At some point, and often after a few failures, the holding hand was withdrawn, and you were cycling.  How did that happen?  Can you explain riding a bike?

Some people like to say riding a bike is an example of ‘tacit knowledge’.  Most knowledge is ‘codified’, written down, but tacit knowledge is difficult to transfer to another person, either by writing it down or by talking about it.  You just ‘know’ from experience.  I think that explanation is inadequate.  Riding a bike is rather like the way the rats described above go around a cage.  Your brain, your neurons, have encoded a complex set of processes, and, on the bike, without you doing any thinking, a set of neural ‘firings’ takes over, and you are riding based on what those neurons have ‘learnt’ from experience.  We ride without thinking, knowledge free.  Of course, and before you get excited about this, that doesn’t mean we aren’t constantly examining our environment, making choices.  But the physical act of riding is embedded, thought free.

Let’s take a slightly more complex example.  In the bad old days, a person learning to drive would be using a manual vehicle.  One of the most important challenges was learning when to change gear, and how to ensure the engine remained connected to the drive train.  To begin with, there would be embarrassing ‘kangaroo hops’ when in the wrong gear, and jerks and equally upsetting stalls when failing to disengage the clutch on stopping the car.  After a while, most people stop having to think about that stuff.  Just as well, as there is plenty else to keep you busy.  Eventually, the sound of the engine tends to nudge you into changing gear without any thought involved.  If you learnt on a manual car, the change to an automatic often still leaves that engine hum triggering those neural firings, and, without conscious reflection, you find yourself pressing down harder to accelerate, or even trying to change down.  Another example of embedded learning, probably through embedded neuronal pathways.

This car-driving example is more complex, however.  Let’s go for a drive.  I’m off to buy some milk, which I forgot to do yesterday.  I am fully awake, after my breakfast coffee, and I want to get over to the supermarket before it gets too busy.  It’s an automatic car, and so my attention is on the road, which meanders a little, and on any other traffic that appears.  I reach the junction where Balsom Road meets Transou.  I was going to turn right, but that means two traffic lights fairly early on; or I could go straight ahead (crossing two lanes of traffic) and then turn right after a short while to confront one set of lights.  I’ll go straight.  No!  A car is coming from the right.  I wait, and at the last minute I turn right instead.  Did I “choose among alternative pathways as a result of neural firings produced by previous experience”?  Yes and no.  The way I drove the car was partly based on ‘learnt’ neural pathways (or at least I am willing to believe so).  At the same time, I made a conscious, rather serendipitous choice, based on my beliefs (I am an autonomous agent in the world, capable of acting as I choose), my desires (specifically, in this case, to get milk) and an assessment of alternative means (turn right or go straight), weighted by my impatience.

What am I trying to say?  We learn.  Learning is essential to living.  It starts with simple things, making sense of sensory stimuli so we can navigate our bodies in the world.  We learn a language, and in so doing develop a powerful tool to communicate and refine our models of the world around us.  We learn to ride a bike, and, since it is almost impossible for most of us to explain what we are doing, we let that practice be developed by experience, quickly ceasing to think about what’s going on (if you think too hard about how you are managing to ride a bike, I suspect you are in danger of falling off!).

The crux sits in that funny word ‘models’.  Artificial intelligence experts believe that if we enable a computer to develop models, it will be able to think like us, only faster and more precisely.  I agree, but only insofar as this extends to rational thought.  A computer could have driven my car to the supermarket (provided I asked it to do so), but it wouldn’t have impulsively changed its mind at the junction of Balsom and Transou.  What I did wasn’t logical: it was an “oh, f*** it, whatever” moment, an emotional, serendipitous choice, a choice that might be quite different in exactly the same set of circumstances the next time, even a few seconds later.

This sits behind one of the major challenges in trying to make cars more ‘intelligent’: the vexed issue of how to program a car to deal with unexpected circumstances, especially those that involve human beings.  I am sure you have read about the difficulties of providing rules for what to do when the car’s computer is faced with the choice of hitting one pedestrian or another (AI fanatics love to ramble on about determining the relative value of life between an older person and a child, all to no entirely satisfactory conclusion).  Humans in those situations tend to react less logically, spinning the wheel, stamping on the brakes, screaming, and all the while allowing one major desire to influence everything else: “save myself”.  Car passengers beware: at the end of the day, this tends to be the most important emotional desire influencing the driver’s actions, and it is unusual for a driver to have enough time and resilience to think about everyone else in the car when a fatal accident is about to happen.  It’s me first!

I am quite certain that learning lays down preferred neural pathways, and that events can trigger neural firings that shape our responses.  Where some scientists lose me, however, is in the confusion between causes and consequences, between theories and observations.  I would interpret all that funny stuff in the opening quote as being about learning: models created through experience, models that are there to enable us, but not to direct us.  That neural firings indicate what is happening is not because the firings dictate what we will do; rather, they are the supporting processes that allow us to respond.  Do these scientists really believe we are driven by electrical impulses alone, with no thought or choices involved, other than as subsequent justifications?

OK, getting grumpy, so it must be time to talk about typing.  I am typing on my computer right now.  I never learnt to touch type (I started typing on a small manual typewriter when I began my undergraduate studies, one slow finger at a time).  Today, I know that I can take my eyes away from the keyboard and type reasonably accurately with four fingers.  I did it just now, error free, for a few words.  However, I still prefer to look between the keyboard and the screen, often at the cost of a strained neck after a long day at my desk.  Why?

I behave this way because I hold on to that theory of mind that ‘Nobel Prize winning neuroscience’ wants to eliminate.  In the article, the author follows the usual trick of overstating the model to be set aside, to make it seem obviously wrong.  Let me quote the theory of mind they want to set aside: it is one in which “people’s actions are caused by choices made rational in the light of their beliefs and their desires”. [iii]  Whoa there!  People’s actions are caused by choices?  Well, we have already explored that when I ride a bike, some of my actions are not caused by choices, but are influenced and shaped by embedded heuristic neural pathways: no conscious choice involved.  (I hope you liked my throwing in ‘heuristic’ there; it just sounded right!) [iv]  But others, yes, for sure, other actions are the result of choices: I like that model.

“Choices made rational in the light of their beliefs and their desires”.  That’s a clever, if unintentional, twist.  It implies the choices come first (the very argument the neuroscientists are trying to make) and that we then make up an explanation, attributing what has been decided to some set of (irrelevant) desires and beliefs.  Time to test that proposition.  Confused?

I am still typing at my computer.  When I write a blog, I tend to type for a bit, and then I switch over to the latest detective novel I’m writing (typing).  Did you see what just happened?  I typed ‘writing’, and then thought I should add ‘typing’ in case anyone thought I had abandoned the computer for the traditional lined notebook and pencil.  Incidentally, typing ‘typing’ is a fascinating exercise for me, as I type the word with a careful look at my fingers, the ‘t’ and the ‘y’ being next door to each other on the top letter row of a QWERTY keyboard.  Why fascinating?  Because I often hover, momentarily, over which finger to use (with two of the four to choose from!).  Am I making a choice, or are my neurons making me do this??

Got a bit lost there, which is actually the point.  When I am typing fiction, I often stop, rethink, go back, change, hesitate, and show all the signs of an increasingly grumpy, forgetful, and confused writer.  Some of the time I don’t know how the details of the plot will unfold.  The other day, I had my cast of investigators wandering around part of a wood looking for clues.  When I started, I didn’t know what they would find.  I was rather interested in the interaction between my leading investigator and a new character in the story.  However, I decided they should find something, and they did.  It took two days before I could see how I could weave that find into the evolving story.  It was all about emotions and creative thinking, hardly rational at all.

 

Human beings are like scientists.  We observe, try to make sense of what we see, and develop models to help us predict what will happen the next time.  We develop ideas and try them out, looking for empirical support.  But we also know that much of what we infer or develop comprises frameworks and hypotheses that are ways of making sense, not reality itself.  Scientists are like human beings.  They do the same, and their findings are no more than their frameworks and hypotheses, ways of making sense of a reality that is always held away from us, ‘perceived’ through intermediary senses and organised by models and assumptions.  Time for some humility.  Neural firings don’t ‘cause’ behaviour, and our brains are not merely squishy computers.  In case it’s not clear and you want to know what I believe, I don’t believe we can ever create a rational computer that can experience emotions and act impulsively.

Sometimes writers make sure they are one step away from what they describe.  Alex Rosenberg manages to maintain that little bit of scepticism all the way through (starting with that lovely phrase ‘Nobel Prize winning neuroscience’).  For once, I’ll let someone else have the last word:

“What does all this mean? Watson may beat us at Jeopardy, but we are convinced we have something AI will always lack:  We are agents in the world, whose decisions, choices, actions are made meaningful by the content of the belief/desire pairings that bring them about. But what if the theory of mind that underwrites our distinctiveness is build on sand, is just another useful illusion foisted upon us by the Darwinian processes that got us here? Then it will turn out that neuroscience is a far greater threat to human distinctiveness than AI will ever be”.[v]

Yeah, sure.  Maybe one day it will turn out that pigs can fly!  Oops.  Just made sure I had the last word.  Sorry, Alex!!

 

[i] Alex Rosenberg, ‘Is Neuroscience a Bigger Threat than Artificial Intelligence?’, 3:AM Magazine, 2 November 2018.  I liked “shred content”, but?  All those grammatical errors are in the original article, and you’ll spot more, I’m sure!

[ii] Ibid

[iii] Ibid

[iv] According to that mine of information, Wikipedia, a heuristic is an “approach to problem solving, learning, or discovery that employs a practical method, not guaranteed to be optimal, perfect, logical, or rational, but instead sufficient for reaching an immediate goal”.

[v] Ibid.
