1994 – Consequences

I recently listened to a podcast explaining how hard it is to look back without seeing the past in the light of the consequences that flowed from the very events we’re trying to understand.  At times it can be quite confusing.  Surely, you think to yourself, anyone could have seen what was going to happen, but they didn’t, and now we find it hard to comprehend why not.  When I was looking over some of the events in 1994, consequences were on my mind.  This was the year in which so many momentous events occurred.  In no particular order, the first women priests were ordained in the Church of England; Nelson Mandela became South Africa’s first black President; Amazon was founded; the Provisional IRA declared a ceasefire in Northern Ireland; and work began on the Chinese Three Gorges Dam.  Perhaps not on such an impressive scale, there were some other significant events in 1994:  version 1.0 of the Linux operating system was released; a Pentium computer beat world chess champion Garry Kasparov; and the first PlayStation console was released.  Oops, I almost forgot, this was also the year of my 50th birthday.  One more item: this was when Aishwarya Rai was crowned Miss World!

Leaving on one side myself and the beautiful Ms Rai (she certainly left me to one side), each of those major events heralded huge changes.  Not all have turned out as might have been expected, even if we can’t remember exactly what we were thinking.  Changes in the church, and changes in Northern Ireland?  Yes, those led to some outcomes that might have been anticipated.  However, the path for South Africa has wandered far from Mandela’s focus on reconciliation, with racial and economic differences as bad if not worse today than they were in 1994.  Did no-one realise the Three Gorges Dam would be compromised by a huge build-up of silt, or that the balance of water distribution across China would be fundamentally disturbed?  (I’m tempted to throw in: Was the Chinese leadership unaware that power tends to corrupt, and absolute power corrupts absolutely?)

Why do we still ignore what we should be anticipating?  Why do we fail to spend time thinking through likely potential future problems?

This isn’t a new issue.  There’s a ‘law of unintended consequences’.  In 1936, a sociologist, Robert K Merton, examined the topic in a paper in the American Sociological Review on ‘The Unanticipated Consequences of Purposive Social Action’.  What did he conclude?   His systematic analysis focused on purposive action, action that involves motives and a choice between various alternatives.  He concluded that “no blanket statement categorically affirming or denying the practical feasibility of all social planning is warranted”.  Thank heavens for sociologists!  Sadly, this incisive (or opaque?) commentary has been refined and weakened by others over time, and now the ‘law of unintended consequences’ has come to be used as an adage or idiomatic warning that an intervention in a complex system tends to create unanticipated and often undesirable outcomes.  Isn’t that a gloss on Murphy’s Law, that anything that can go wrong will go wrong?  At least Murphy’s law was humorous.

Let’s be clear.  Some unintended consequences can be positive.  Ships sunk in shallow waters have created artificial coral reefs, some saving species under threat.  Aspirin was developed as a pain reliever, but it turned out to be an anticoagulant that helps prevent heart attacks.  Yes, I’m aware those outcomes are often quoted, but what about the negative side?  When Victoria made safety helmets mandatory for bicyclists in 1990, there was a reduction in the number of head injuries and the number of juvenile cyclists killed in accidents decreased, but the risk of death and serious injury per cyclist seems to have increased.  Moreover, research at Sydney’s Macquarie University found the decrease in exercise caused by reduced cycling was counterproductive in terms of net health.  Another example?  We all know the advent of Prohibition in the US sparked the growth of organised crime, first in alcohol and then in drugs.

There are so many examples I could include.  For Australians, the introduction of rabbits for food and as pets led to their becoming a major problem, alongside those infamous cane toads brought in to control cane field pests!  More?  How about the introduction of passenger-side airbags in cars leading to an increase in child fatalities in the mid-1990s, because small children were being hit by airbags deploying automatically during collisions.  This was solved by moving child seats to the back of cars, leading in turn to an increase in the number of children left in unattended vehicles, some dying under extreme temperature conditions.

I could leave it there.  My familiar point is that many modern technologies have had negative consequences, some of which were avoidable although others were unpredictable.  For example, hospital infections have become steadily harder to control as antibiotic resistance has spread, an unexpected side-effect of the widespread use of antibiotics.  When the Gates Foundation attempted to save lives by providing mosquito nets impregnated with insecticide, the unexpected results were adverse environmental and human effects, as many villagers used the mosquito nets for fishing, leading to overfishing while also releasing noxious chemicals into previously safe waterways.  Looking ahead, let’s hope analysts are trying to determine how to address possible but undesirable consequences from initiatives to reduce global warming.

Enough examples?  Let’s go back to those other three items I noted in 1994: the appearance of Linux, a computer beating a world chess champion, and the appearance of PlayStation.  Of the three, the one that now we can see heralded some particularly pernicious unintended consequences was the chess match.  To set this in context, we need to go back a little.  Perhaps an ideal place and time might be New York some 65 years ago.  13-year-old Bobby Fischer was at the Marshall Chess Club in October 1956, playing in the seventh round of an invitational tournament, the Rosenwald Memorial, having won the U.S. Junior Championship earlier that year.  The other eleven players were all high-rated US chess champions.

After three drawn matches, Fischer’s next opponent was Donald Byrne, an International Master and former U.S. Open Champion.  Bobby was playing black, usually regarded as a disadvantage since the player with the black pieces moves second and so is always responding to the other player.  I’d love to quote the whole of Frank Brady’s outstanding description of that game, but you’ll have to read his book, Endgame, and Chapter 3 for his summary.  After eleven moves, Fischer was doing well, but no more than that.  Byrne was confident, but Fischer “realized that there was an extraordinary possibility that would change the composition of the position and give a whole new meaning to the game. What if he allowed Byrne to capture his queen, the most powerful piece on the board? Normally, playing without a queen is crippling. But what if Byrne, in capturing Bobby’s queen, wound up in a weakened position that left him less able to attack the rest of Bobby’s forces, and less able to protect his own?”.

Brady continues, “The idea for the move grew on Bobby slowly, instinctually at first, without any conscious rationale. It was as though he’d been peering through a narrow lens and the aperture began to widen to take in the entire landscape in a kind of efflorescent illumination. He wasn’t absolutely certain he could see the full consequences of allowing Byrne to take his queen, but he plunged ahead, nevertheless.  If the sacrifice was not accepted, Bobby conjectured, Byrne would be lost; but if he did accept it, he’d also be lost. Whatever Byrne did, he was theoretically defeated, although the game was far from over. A whisper of spectators could be heard: ‘Impossible! Byrne is losing to a 13-year-old nobody.’ …  He won, and Hans Kmoch, the arbiter, a strong player and internationally known theoretician, later appraised the meaning and importance of the game:  ‘A stunning masterpiece of combination play performed by a boy of 13 against a formidable opponent, matches the finest on record in the history of chess prodigies…Bobby Fischer’s [performance] sparkles with stupendous originality.’”  It was to become known as The Game of the Century.

Then along came the technologists, and especially IBM, developing powerful computers designed to defeat any chess player.  Now there are two ways we can consider this outcome.  For many people it is a potent example of the power of artificial intelligence:  you can ‘train’ a computer by feeding it with thousands of chess games, and let it ‘learn’ to recognise moves, typical opening, mid-game and endgame strategies, and it will become cleverer than any human.  This is a function of processing speed, as a modern computer can work through many, many alternatives in seconds, moves that would take a human months to consider.  The computer does what a chessmaster does, only faster, and can assess millions of alternative outcomes at every stage of a match.  The conclusion must be that computers are intelligent.

Alternatively, we can see this is simply dumb processing.  Yes, the computer can assess all those possibilities, but it doesn’t ‘think’ about them, it merely carries out a huge number of rule-based calculations to identify the likely end point of each move and those that could follow it.  Dumb for sure:  the computer doesn’t know what it is doing, it is simply a calculating machine with a set of parameters and processes.  It ‘learns’ by storing away more and more possible paths, so that these can be assessed each time the computer is presented with a specific move.  What did Brady say about Fischer’s insight: “The idea for the move grew on Bobby slowly, instinctually at first, without any conscious rationale. It was as though he’d been peering through a narrow lens and the aperture began to widen to take in the entire landscape in a kind of efflorescent illumination. He wasn’t absolutely certain he could see the full consequences of allowing Byrne to take his queen, but he plunged ahead, nevertheless.”
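That rule-based search can be made concrete.  What follows is a minimal sketch of minimax, the family of procedures chess engines are built on; the toy ‘game’, its moves and its scoring rule are entirely invented for illustration, but the mechanism is the point: the machine scores every reachable position with a fixed rule and picks the branch with the best number, with no understanding of what any position means.

```python
def minimax(position, depth, maximising, moves, evaluate):
    """Score a position by exhaustively exploring moves to a fixed depth.
    No 'thinking' occurs: every step is a rule-based calculation."""
    children = moves(position)
    if depth == 0 or not children:
        return evaluate(position)          # leaf: apply the fixed scoring rule
    scores = (minimax(c, depth - 1, not maximising, moves, evaluate)
              for c in children)
    # One player maximises the score, the other minimises it - that's all.
    return max(scores) if maximising else min(scores)

# Toy 'game' (invented): positions are numbers, a move adds 1 or doubles,
# play stops at 8 or beyond, and a position's score is just its number.
moves = lambda p: [p + 1, p * 2] if p < 8 else []
evaluate = lambda p: p

print(minimax(1, 3, True, moves, evaluate))  # prints 6
```

The engine never asks whether a move is beautiful or surprising; it only compares numbers, which is exactly the contrast with Fischer’s queen sacrifice.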

Key to this is understanding the way computers make decisions.  The central process in computing is executing a ‘conditional statement’.  This takes the form of a calculation using Boolean values, which can only come to one of two results (0 or 1).  The computer program executes a specified process if the answer is 0, and a different process (or path) if the result is 1.  This is sometimes expressed as a true/false choice:  if the answer to a calculation is recorded as ‘true’, then one preprogrammed path is followed;  if the condition is not true, or ‘false’, then a different path is followed.  You can see how tricky language gets in this.  Conditional statements are an essential element in computer processing:  we talk about them as decision points.  However, there is no thinking involved, as the choice is automatic and specific, ‘if this, then that’.  Computers are dumb, and can’t say ‘I don’t like this alternative, so I will creatively come up with something else’!
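The whole of that ‘if this, then that’ machinery fits in a few lines.  A minimal sketch (the path names are mine, purely for illustration):

```python
def route(condition_result: bool) -> str:
    # A conditional statement: test a Boolean value and mechanically
    # follow one of two preprogrammed paths. No judgement is involved.
    if condition_result:        # 'true'  -> one path
        return "path A"
    else:                       # 'false' -> the other path
        return "path B"

# The 'decision' is just the outcome of a calculation:
print(route(2 + 2 == 4))  # prints: path A
print(route(2 + 2 == 5))  # prints: path B
```

There is no third option: the machine cannot decline both paths and invent a new one.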

Does this matter?  For much of the time, it doesn’t.  Carefully programmed, computers can work through data systematically and identify the points – the conditional statement choices – that are critical.  In fact, there are many situations where this data monitoring is being undertaken in the background.  For me, a specific and very relevant example is the continuous assessment of financial transactions.  In reviewing my credit card transactions, along with the billions of transactions on every other card, a computer system has on a few occasions identified one or more data anomalies.  Transactions in two different countries at around the same time?  I’ve done that before, ordering items online.  But what if the transactions were physical, not online?  Perhaps recording that I have bought two first class air tickets from Japan to Germany for the next day, when another face-to-face transaction was in Melbourne.  These don’t fit with what has been seen before, and an automatic alert goes to my bank and to me, while my credit card is put on temporary hold.  My card details may have been hacked, and the computer analysis picked up that possibility.  On three occasions, I have been very thankful for computers examining my financial activity, checking for non-routine behaviour.

The example raises two very interesting points.  First, based on data analysis, the choice was automatic:  since this was reported, this action is required.  Second, the possibility of such an observable event had already been envisaged, and the appropriate action had been built into the system in advance.  What was a new, unexpected and highly undesirable event for me was a transaction pattern that had been anticipated, with an appropriate response ready to be implemented automatically.  There had been thinking, but by an IT employee anticipating types of irregular card use, and as a result the system was ready to act: no further thinking was involved, just a dumb computer following a predetermined process.  The requirement isn’t for intelligent computers, it is for intelligent analysts to work out what data must be monitored and to determine which variations would lead to which actions, the choice points embodied in those computer-processed conditional statements.
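An analyst’s rule of that kind might look something like the sketch below.  To be clear, everything here is invented for illustration – the rule, the six-hour threshold, the field names – standing in for whatever the bank’s analysts actually specified; the point is that the computer only evaluates the conditions it was given.

```python
from datetime import datetime

def flag_card_present_anomaly(tx_a, tx_b):
    """Flag two card-present transactions whose locations could not
    plausibly both be visited in the time between them.
    Illustrative only: rule and threshold are invented."""
    hours_apart = abs((tx_b["time"] - tx_a["time"]).total_seconds()) / 3600
    different_countries = tx_a["country"] != tx_b["country"]
    both_physical = tx_a["card_present"] and tx_b["card_present"]
    # The analyst's predetermined choice point: different countries,
    # under 6 hours apart, both face-to-face -> hold the card and alert.
    return different_countries and hours_apart < 6 and both_physical

melbourne = {"time": datetime(2024, 5, 1, 10, 0), "country": "AU", "card_present": True}
tokyo     = {"time": datetime(2024, 5, 1, 12, 30), "country": "JP", "card_present": True}
print(flag_card_present_anomaly(melbourne, tokyo))  # prints: True
```

A pattern the analyst never imagined sails straight through: the system has no rule to trip over.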

Can everything be anticipated?  No, we’ve already seen cases of unanticipated consequences.  Given our reliance on data monitoring, it is important to remember that events can occur for which the computer system has no adequate or relevant response.  This was shown dramatically in 2003, when there was a huge power blackout across the North-eastern and Midwestern US and Canada’s Ontario Province.

On Thursday August 14, a hot summer afternoon, heavily loaded transmission lines near Akron, Ohio, began to sag into overgrown foliage.  The computerised electricity grid system recognised a problem (a line was overheating and had contacted the trees) and the power to that transmission line was switched off.  The control room software that should have alerted the operators to what had happened failed, so they didn’t immediately start to reduce (or shed) load in that part of the system, nor redistribute electricity through other lines.  Instead, the system did as it had been programmed to do, and automatically disconnected the line, immediately transferring the power supply to a few other lines.  But those other lines didn’t have enough spare capacity to accommodate the extra current, their overload protection kicked in, and a cascading failure began that ended in a huge blackout.  An operator aware of the issue would have shut off the power to the damaged line and shed load, rather than simply redistributing it.
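The cascading mechanism itself is mechanical enough to sketch in a toy model.  All the numbers below are invented; the point is the predetermined rule: when a line trips, its load is passed to the survivors, and any survivor pushed past its capacity trips in turn, with no point at which the system pauses to reconsider.

```python
def cascade(loads, capacities, tripped_line):
    """Toy cascading-failure model (all figures invented for illustration).
    Returns the indices of the lines still in service at the end."""
    lines = {i: loads[i] for i in range(len(loads)) if i != tripped_line}
    shed = loads[tripped_line]          # load orphaned by the first trip
    while shed > 0 and lines:
        share = shed / len(lines)       # redistribute equally - the dumb rule
        shed = 0
        for i in list(lines):
            lines[i] += share
            if lines[i] > capacities[i]:      # overload protection trips it
                shed += lines.pop(i)          # ...orphaning its load in turn
    return sorted(lines)

# Three lines already near capacity; losing one overloads the rest in turn.
print(cascade(loads=[80, 90, 90], capacities=[100, 100, 100], tripped_line=0))
# prints: []  - every line has tripped, a total blackout
```

With plenty of headroom the same rule is harmless; near capacity, it is the rule itself that spreads the failure.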

Eight years later, another electricity supply disaster occurred.  On this occasion, a technician mistakenly shut down a transmission line in Arizona, part of the US Southwest Power Link.  Immediately, automatic switching took over, and eventually the whole of the electricity required in Southern California and part of Arizona was being drawn through one corner of the grid around San Diego.  The whole system collapsed, resulting in the ‘Great Blackout’ of 2011, with some seven million people losing power within 11 minutes of that initial action.  The problem is clear: computers can’t say “hang on, this is something new, I need to think about it”.

In a world of unanticipated consequences, we often focus on positive and negative outcomes, from the unexpected benefits of Aspirin through to the disastrous use of mosquito nets for fishing.  However, there are other, further-reaching consequences.  The success of a chess-playing computer in 1994 was a small but important example among the many developments and events heralding an emerging mind-set that has had, and will continue to have, huge consequences.  We’ve become accustomed to leaders, managers and others with important responsibilities handing over much of their decision-making to computer systems, in the deeply embedded belief that computers are intelligent.  They are not.  They only do what they’ve been programmed to do.  I’ll say it again: beware, computers are dumb!
