The Gryphon discusses the emerging threat posed by AI and how it will affect everyone’s life…
On 15 March 2016, AlphaGo beat Lee Sedol, the renowned world champion, at the ancient game of Go. AlphaGo did not simply memorise the best moves: there are too many possible configurations of a 19×19 Go board to be learnt or stored in modern software. After all, there are more possible configurations of the board than atoms in the known universe.
The artificial intelligence (AI) program ‘taught itself’, over a period of 40 days, fundamental principles of the game that took humans almost 3,000 years of play and tuition to build up. This achieved what experts in AI thought impossible only a few years ago: an AI sophisticated enough not only to mimic human abilities but to teach itself techniques at an exponentially faster rate than humans.
If these systems keep improving at the same rate in the decades to come, our future societies may look drastically different. If AI is capable of autodidactic learning, job sectors previously thought safe from automation may be consumed by newer and more powerful AI. Even professions such as law, art, scriptwriting, management and graphic design may come under threat as AI becomes more and more sophisticated.
Some believe this could lead to an Armageddon scenario in which AI takes over vast swathes of state apparatus. This is not simply confined to science fiction writing: high-flying technological luminaries such as Bill Gates, Elon Musk and Steve Wozniak have all warned of the potential pitfalls of AI systems operating in the online domain without safety features built into them.
But how realistic are these apocalyptic premonitions? Could AI instead usher in a new era of prosperity in which computational power and productivity are boosted?
Erik Brynjolfsson and Daniel Rock, from MIT, point out that AI is still many decades away from being able to adapt to work environments and self-adjust. Ideally, the rise of AI could free human workers from boring, mundane tasks and allow employees to spend more time on higher-level work.
There is also a vision in which this leads to a much shorter working week, perhaps as little as 10-12 hours, with increased leisure time for citizens and a universal basic income (UBI), since most economic work would be done by AI and robotics. For sceptics, however, the point at which artificial intelligence outstrips our cognitive abilities and machines go on to improve themselves at an exponential rate is still a fantasy.
It is also envisioned that AI systems will help to better diagnose illness in the future, but this raises questions such as whether AI can really judge better than humans. An AI program in New York called ‘Deep Patient’ used a database of over 700,000 patients to learn how to accurately diagnose a new patient’s diseases, including schizophrenia and other psychiatric disorders that are notoriously difficult for humans to identify. The tool, unfortunately, offers no clues as to how it identifies these disorders: its analysis is so complex, passing through multiple hidden layers of processing, that analysts struggle to ascertain how the program reaches its conclusions.
There are already calls for a legal framework making it mandatory for AI decisions to be accountable and, essentially, explainable, with debates held by the EU across the summer of 2018. It is expected that by the end of next year, over half of the world’s leading healthcare systems will have adopted some form of AI.
It is hoped that AI will aid countries in tackling challenges such as an ageing population, and reduce traffic collisions and deaths with the onset of automated cars. It may also help with large computational problems, such as finding the best methods to tackle climate change and running the complex analytics needed to avoid economic disasters like that of 2008.
Currently, AI is far from the level of sophistication needed to play a pivotal role in society, but with the ever-accelerating technological breakthroughs of the 21st century, its onset may be nearer than previously assumed.