Artificial intelligence: Poisoned gift?

We cannot stop scientists from venturing into the future along paths unknown. We have to be aware that there will be opportunities, but there will also be risks, all of which cannot be foreseen

The mention of Artificial Intelligence (AI) in Budget 2018 has kindled the expectation that there will be some major developments in this sector which will benefit the country’s economic prospects, since that is what measures in the Budget are aimed at. Of course, we are already using AI-enhanced devices, such as the smartphone (touch/voice recognition), the computer (search engines, spam filtering), surveillance systems and so on. As with many things we do, we will essentially be end-users of AI applications and devices, given our country’s size and capacity: the advances and innovations in the field will emanate from specialized centres researching all aspects of AI.

AI entered its present phase of ‘heightened optimism and investment’ in the mid-1990s, after going through two spells of what has been termed ‘AI winter’, beginning in 1973 and 1987 respectively, when public funding was withdrawn as progress stalled. This is according to an article, ‘Intelligence Reinvented’, published in New Scientist’s ‘The Collection’ (Vol Four/Issue Three), an issue devoted to ‘Essential Knowledge’ – ‘everything you need to know to make sense of the 21st century’ – which is the source of information for what follows.

Progress since then has been so rapid that it has overwhelmed even the most optimistic, raising concern about the erosion of privacy and autonomy – such as electronic ID cards which can tempt States to play Orwellian Big Brother – and even giving rise to apprehensions of an existential nature among high-profile scientists. In fact, no less a luminary than the late world-renowned theoretical physicist Stephen Hawking worried that somebody would create AI that would keep improving itself until it eventually became superior to people – and replaced humans altogether! He thought that this capacity for self-improvement would make AI ‘a new form of life’.

He had expressed fears that AI could grow so powerful it might end up killing humans unintentionally, and he ‘called for technology to be controlled in order to prevent it from destroying the human race, and said humans need to find a way to identify potential threats quickly, before they have a chance to escalate and endanger civilisation’. This scenario falls into the category of what another luminary, the late French geneticist Albert Jacquard, referred to as ‘les effets pervers de la science’ – the unintended negative fallouts of science.

But what are we to do? We cannot stop scientists from venturing into the future along paths unknown. We have to be aware that there will be opportunities, but there will also be risks, all of which cannot be foreseen. As the article in The Collection underlines, ‘Modern artificial intelligence is a brilliant and powerful technology, but also a fundamentally disruptive one’.

Perhaps even the group of futurists who gathered at Dartmouth College, in Hanover, New Hampshire (USA) in the summer of 1956 had never imagined that Artificial Intelligence, a term that they coined, could pose a threat to humankind. Their view, as stated in their funding application, was that ‘every aspect of learning or any other feature of intelligence can in principle be so precisely described that a machine can be made to simulate it’. And their ambition was ‘to make machines use language, form abstractions and concepts, solve kinds of problems now reserved for humans, and improve themselves’.

More than sixty years on, and after those two AI winters, has this ambition been fulfilled? If we go by some epochal events since the mid-1990s, it would seem that the answer is in the affirmative. Starting in 1994, when the first web search engines were launched, there have been several developments: among others, IBM’s Deep Blue beat world champion Garry Kasparov at chess; Amazon replaced human product editors with an automated system; Google launched Translate; Apple released Siri, the voice-operated personal assistant that can ‘call home’ for you; IBM’s supercomputer Watson beat two human champions at the TV quiz game ‘Jeopardy!’; Google’s driverless car navigated traffic; and its AlphaGo defeated Lee Sedol, one of the world’s leading Go players.

Any wonder that all this had Stephen Hawking worried?

Of the ‘wish list’, the one thing that may elude us is whether an AI machine can ever ‘form abstractions and concepts’. So far, although AI machines have shown evidence of ‘learning’ – that is, adaptive behaviour akin to human intelligence – the ‘agent has no internal representation of why it does what it does’, a phenomenon that has been referred to as ‘the unreasonable effectiveness of data’. In other words, though we may feel as if the AI machine is intelligent in the human sense, it is in fact an automaton – even when it is self-improving.

After all, it is fed with data which we humans choose to enter into it, and it cannot generate data of its own. Its ‘self-learning’ takes place through statistical manipulation of that data in ways that were not explicitly programmed, and therein lies the risk: the output may be something unexpected and potentially dangerous. That is the ‘unknown unknown’ that poses the existential threat feared by Stephen Hawking, and all we can say at this stage is that we have been forewarned and have to be on the lookout – although we do not know how, or what to expect!

As the article points out, the challenges of AI include: surveillance (knowing our location, browsing history and social networks); unintended discrimination (in insurance, policing); persuasion (nudging people to follow specific links for products, etc.); unemployment (as the machine replaces humans by doing certain jobs more effectively); and addiction – don’t we know!

No doubt there are enormous benefits – in communications, healthcare, transportation and schooling, to name a few that are already within our experience – but given the potential risks, the advice given is that perhaps we ought to ‘tread carefully’.

In the last century, Jacob Bronowski wrote a book called ‘Is man a machine?’ Yes and no. Yes, because his bodily mechanisms function according to the laws of physics and chemistry. No, because no machine – not even an AI machine – can make a man, whereas a man can make a machine. I hope I am wrong, though…

* Published in print edition on 29 June 2018
