I was reading about ‘self-driving cars’ the other day. Today, the clever technologies being installed in cars are certainly very helpful. My wife has a car with all the older but invaluable systems, such as an anti-lock braking system (ABS), air bags, and electronic fuel injection. Being only two years old, it also has a collision mitigation braking system (CMBS), a road departure mitigation system (RDM), a lane keeping assist system (LKAS), driver seat memory, and rear view and blind spot cameras.[i] Her iPhone automatically connects to the car’s hands-free system. Snazzy! There’s more, but it becomes boring to list all these clever little features. More to the point, however, she still drives her car. Obviously, it is neither a self-driving car nor an ‘autonomous vehicle’. Despite all the hype, those are still a few years away.

I suppose you are going to say I am quibbling about language, but there is a world of difference between an autonomous vehicle and a self-driving car. As far as I can tell, the self-driving car is a figment of the imagination of science fiction writers. We don’t want self-driving cars, because we don’t want cars with selves that just decide, on the spur of the moment, to go off somewhere. Or get bored, and sulk. Or, like Henry the Green Engine, decide they won’t go out when it’s raining because that would spoil their nice bright shining paintwork!

We might want a car we can program (instruct) to take us to a specified location, leaving the operation of the vehicle to smart technologies. As I understand the word, an autonomous vehicle, once given directions, would operate using a number of (very sophisticated) feedback and monitoring systems; but, I would assume, and certainly hope, with the passengers still able to take over certain controls, change the destination, and issue similar commands: not wholly autonomous, in other words. It’s possible, and we might travel like that in the foreseeable future.

When I was reading about such vehicles, I discovered several synonyms for an autonomous vehicle: a driverless car, a self-driving car, a robotic car, and, in the military, which naturally has its own way with words, an unmanned ground vehicle! Words are important. For the same reason, I am not interested in self-driving cars, and suspect we never will be; nor am I interested in ‘driverless’ cars. Surely, if they are driverless, they will remain immobile! A closer concept might be a “programmable vehicle capable of sensing the environment and navigating to a chosen destination without human operation”, but the driver would still choose the destination, of course. And monitor what happens, I would hope.

We can get some insight into the future of cars by examining the very interesting example of drones, which have been flying without a pilot inside for some time. The general terminology for such things seems to be ‘Remotely Piloted Aircraft’ (RPAs), which makes it clear that drones are under the direct control of human operators.[ii] However, there is said to be a great deal of interest in ‘autonomous’ military drones: these would be “capable of understanding higher level intent and direction”. Bad use of words: surely these would be ‘self-driving drones’.

Would we ever contemplate military drones flying around, able to decide what should be attacked and where, without a remote human pilot? It is easy to imagine such a drone at work. Observe: several vehicles on the ground, moving together; compare to stored images: military; region: close to capital city; operational procedure: implement ground attack. Shame it was a film set that was blown up! Even the armed forces aren’t that crazy, or are they?

There are autonomous aerial military devices, of course. They’ve been around for a long time. One of the more interesting examples was the German V-2 rocket. Fired from a launching pad on the French coast, this ‘flying bomb’ was a missile, its flight describing a parabolic curve, aimed towards the UK in the general direction of London.[iii] But it was truly dumb. At some point, irrespective of where it was, the engine would cease operating, and the rocket would fall. On to an empty field, an army weapons store, or a church during a morning service.

Now, of course, we are far more sophisticated. An intercontinental missile can be fired with a programmed target identification system. It will use GPS and visual identification systems to find and lock on to its target. With the latest weapons, a remote operator can see what is happening: the path can’t easily be changed, but, if the command comes in time, the missile can be aborted and blown up in the air. Of course, if anything goes wrong with the abort system … but at least it is not a wholly autonomous system.

All this is by way of an introduction to the broader theme of artificial intelligence (AI).

Discussion of the importance, scope and transformative power of artificial intelligence has been going on for years, decades actually: I can remember reading about the topic way back in the 1980s. As a result, any clear meaning the term might have had has been lost in a welter of claims, imaginings and fictions.[iv] Fortunately, the more level-headed leaders in the IT industry can still provide a sober and helpful summary. Here is Safra Catz, the CEO of Oracle: “now, the underlying computing capability is much faster, meaning it can crunch through huge amounts of data. And the software technology is far more advanced than it was. The systems can not only augment decisions, but can make them better and faster — freeing up employees and consumers to do more interesting things. As your car drives you to work, you can be reading your briefing for your first meeting, and your car may very well be a better driver [road user] than you are … The main task of AI is to improve decisions”.[v] I am certain she meant “having programmed your car to take you to work …”. AI augments decisions, faster.

To explore AI today, let’s use an example. The interpretation of MRI scans is a tricky business. The images are extremely complex black and white pictures with subtle grey-scale variations. Modern MRI results may comprise several scans of the same area, at different depths or angles. Used to discern cancers, for example, their interpretation requires a specialist to review the images. While the results are often very helpful for larger malignancies, the use of MRI technology often leads to ‘false positives’: a likely cancer is suggested which, on further and usually invasive investigation, turns out not to be one. The underlying process is simple: the MRI provides the data, and the specialist then looks at and assesses the scans.

Can this be done better? Yes, it can, and this is where AI systems step in. Radiomics is an emerging analytics approach which assesses multiple images at high speed, extracting thousands and even millions of pieces of information to determine whether a tumour is benign or malignant. Its application to detecting breast cancers has proved very promising, with detection rates and differentiation between benign and malignant tumours achieving success levels of over 90%, markedly improving on previous analyses by skilled practitioners.[vi] Let’s be clear what is going on here: a very clever system is scrutinising masses of data from MRI scans, and combining that data to identify abnormal features. To use Safra Catz’s terminology, this is AI augmenting decision-making. The physician then looks at the analysis, deciding whether this is a benign tumour or not, whether to do a biopsy, and so on. It’s fast, it’s a huge help, a step forward in diagnosis. However, I have to ask: does that make the radiomics system ‘intelligent’?
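The idea can be caricatured in a few lines. The sketch below is purely illustrative: the feature names, weights and threshold are invented for this blog, and real radiomics extracts far richer texture and shape descriptors. But it shows the shape of ‘augmenting’ a decision: the system reduces a scan to a score, and the human still decides.

```python
# Toy caricature of the radiomics idea: extract numeric features from an
# image, combine them into a suspicion score, and flag scans for review.
# Features, weights and threshold are invented; this is NOT the published
# pipeline, only an illustration of AI augmenting a human decision.

from statistics import mean, pstdev

def extract_features(image):
    """Reduce a grey-scale image (rows of 0-255 ints) to a few numbers."""
    pixels = [p for row in image for p in row]
    return {
        "mean_intensity": mean(pixels),
        "intensity_spread": pstdev(pixels),  # crude texture proxy
        "bright_fraction": sum(p > 200 for p in pixels) / len(pixels),
    }

def malignancy_score(features):
    """Combine features into one score; higher means more suspicious."""
    return (0.4 * features["intensity_spread"] / 255
            + 0.6 * features["bright_fraction"])

def triage(image, threshold=0.3):
    """Flag a scan for specialist review -- the specialist still decides."""
    score = malignancy_score(extract_features(image))
    return "review" if score >= threshold else "routine"
```

A uniform, featureless scan scores near zero and stays ‘routine’; a scan with a bright, high-contrast region crosses the (invented) threshold and is flagged ‘review’. Nothing here is a diagnosis.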

The power of modern high-speed processing systems to analyse huge and increasingly complex data sets is remarkable. The value of these processing systems, from detecting early cancers and increasing road safety to analysing jet engine performance and better understanding customer shopping patterns, is unquestionable – even if you don’t like the thought that your behaviour can be tracked just as easily as potential threats to your health can be assessed.

Can these systems make decisions? Returning to the car example might help with exploring this a little further. While this isn’t quite the case yet, we can assume that a car will soon have an extensive sensing and analytics system. As it is in motion, it will process the vast amount of data its sensors collect, while maintaining its progress along the road, avoiding parked cars, obeying traffic controls, and so on. Let’s imagine a truck pulls out suddenly a few yards ahead. Faster than you or I could do so, the car stops, or overtakes if the other side of the road is clear. Back on course, it continues to process information and travels safely towards its programmed destination. The ‘decision’ the car implemented was a function of programmed instructions: avoid hitting people, avoid any objects, do not pass another vehicle unless the road is clear and to do so would not exceed the speed limit, and so on. Rule-based decision making.
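That kind of rule-following can be sketched very simply. The situation fields and rule ordering below are my own invention, not any manufacturer’s logic; the point is only that each ‘decision’ is the first applicable rule in a priority list.

```python
# A minimal sketch of rule-based decision making for the car example.
# Rules are checked in priority order; the first applicable rule wins.
# The situation fields and the rules themselves are invented.

def decide(situation):
    """Return an action for the car, given a dict describing the road."""
    if situation.get("person_ahead"):
        return "brake"                  # avoid hitting people, checked first
    if situation.get("obstacle_ahead"):
        # overtake only if the oncoming lane is clear and it stays legal
        if situation.get("oncoming_clear") and not situation.get("over_limit"):
            return "overtake"
        return "brake"                  # otherwise stop behind the obstacle
    if situation.get("red_light"):
        return "stop"                   # obey traffic controls
    return "continue"                   # nothing to react to
```

So `decide({"obstacle_ahead": True, "oncoming_clear": True})` yields `"overtake"`, and the same truck with an occupied oncoming lane yields `"brake"`. No judgement anywhere, just rules consulted in order.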

Let’s increase the complexity. A young boy runs out into a narrow road. Almost instantaneously, the computer calculates it is impossible to brake in time. The sensors can ‘see’ that there isn’t a car coming the other way, but there’s a group of children on the other side of the road. If it swerves, the car will be unable to avoid hitting the group of children. The system is being asked to follow one of two possible paths, both of which require breaking one of the key instructions: in other words, the car is being confronted with ‘a decision involving choice without precedent’, rather than following an instruction. What will the car’s computer do?

This is the ‘trolley problem’, of course.[vii] The essence of the (in)famous trolley problem is simple: you are driving a tram, and come around the corner to confront a very disturbing situation. There are five people working on the track in front of you, and you don’t have time to stop before you hit them; however, there is a side track which you can turn onto, but there is one workman there, too. What do you do? The point of the trolley problem is that there isn’t a ‘correct’ answer, as it is essentially a question about ethics: to act or not to act, to choose to save five at the cost of one or not. In my car story, it’s the same: save the boy at the cost of two or more children on the side of the road, or save them, at the cost of the boy’s life.

What will we program into the car’s computer? Add to ‘avoid hitting people’: ‘but if you have to hit a person, hit the smallest number’? In my example, the person in front of the car was a young child, with his whole life in front of him; but now imagine the people on the other side of the road are two elderly drunks, swaying as they walk! Are we going to input all sorts of additional rules, and yet still expect the computer to choose the right path to take? What would be the ‘right path’?
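We can even sketch why no rule set settles the matter. In the toy model below (the outcome numbers are invented for the narrow-road scenario), the program can enumerate what each action costs, but the rules themselves return no answer.

```python
# Sketch of why the trolley-style case defeats rule-following: every
# available action breaks the same top-priority rule ("avoid hitting
# people"), so the rule set cannot rank the options. The outcome
# numbers are invented to match the narrow-road scenario.

def rule_based_choice(outcomes):
    """outcomes maps each action to the number of people hit.
    Return an action that hits no one, or None if no such action exists."""
    safe = [action for action, hit in outcomes.items() if hit == 0]
    return safe[0] if safe else None    # None: the rules give no answer

# The narrow-road scenario from the text:
scenario = {"brake": 1,     # cannot stop in time: the boy is hit
            "swerve": 2}    # hits the group on the far side
```

Here `rule_based_choice(scenario)` returns `None`: the computer can enumerate the outcomes faster than any of us, but choosing between them is an ethical judgement the rules cannot supply.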

Safra Catz had it right when she said AI is an approach that can augment decision making by providing better data faster. Only poor imperfect human beings should make the decisions that matter, however, whether in terms of the rules they put into automated systems, or the decisions made where there are no clear or logical answers. Only humans can deal with unanticipated choices, such as when the remotely piloted aircraft is about to deliver its weapon and the scenario has changed: the building isn’t being used as a meeting room but has reverted to its normal role as a schoolhouse, and a stream of children suddenly appears, running out at playtime.

I started with self-driving cars for two reasons. One was because I was concerned about language. Cars don’t have ‘selves’. We are slowly slipping into a way of thinking and talking about technology which suggests that the latest developments allow us to create machines that “think for themselves”. Not so, not yet, anyway: I would argue that it won’t be for a very long time.[viii] But more to the point, it is not just the words we use. We don’t want machines that think for themselves. I certainly don’t want to own a car that decides, for itself, that it would like to go for a drive somewhere, just because it felt like it. Brings to mind all those scary SF films about robots gone wild! Nor do I want to imagine the development of drones wandering around in the sky, thinking about blowing something up – of the drone’s choice!!

I’d like to go one step further. Right now, I don’t believe it is possible to develop machines that think. Thinking is about making choices, not about pre-programming rules for choices. Our burden is that we have the responsibility to make choices. AI can provide us with better information, faster, more accurate, augmenting our decisions. But we should never contemplate getting rid of our burden, even though we often do such a bad job.

Perhaps that’s an argument for spending more money on education, and less on autonomous vehicles. If only …

[i] Should you want to know more, any modern motor car company’s website is a source of delight.

[ii] Unmanned Aircraft Systems, UK Ministry of Defence, September 2017, page 12. It was nice to see the British prefer the terminology ‘remotely piloted aircraft’. Unmanned sounds so dreadfully demeaning!!

[iii] The name ‘flying bomb’ was originally applied to the V-1 devices, less sophisticated than the V-2s, and easier to see and force to crash.

[iv] You can count on Thomas Friedman to give another of his jumbled combination of facts, assertions and arrant nonsense on this topic: as it happens, he was in the New York Times on this topic as I was finishing this blog – here he goes: <https://www.nytimes.com/2018/01/16/opinion/while-you-were-sleeping.html>

[v] Art Kleiner, interview with Oracle CEO Safra Catz, At Oracle, Great Technology Is Not Enough, Strategy+Business, 12 January 2018.

[vi] V S Parekh and M A Jacobs, Integrated radiomic framework for breast cancer and tumor biology using advanced machine learning and multiparametric MRI, Nature Partner Journals, Breast Cancer, 14 November 2017.

[vii] <http://www.trolleydilemma.com/>

[viii] I know there is a massive literature about heuristic decision making and open-ended algorithms. However, these are about developing sophisticated ways to analyse and make choices between facts, according to rules. “Thinking for themselves” as a shorthand for “being like us” is also about ethics, feelings, even about being bored and deciding not to bother about something!
