The Ascendance of Artificial Intelligence
The 2004 movie I, Robot invited us to imagine our future relationship with intelligent robots. In one powerful scene, Sonny, the next-generation NS-5 robot, stands on a hill surveying the prior-generation robots imprisoned below. A generation eclipsed by technology. Relegated to low-cost storage in shipping containers.
This dystopian view may indeed be our future, but not as foreseen in the movie. As artificially intelligent robots and driverless cars increasingly displace us from employment, Universal Basic Income schemes are being devised to deal with large-scale human obsolescence. Politicians envision low-cost 3D-printed ‘shipping containers’ as a viable housing solution to stretch the diminishing taxpayer dollar. A generation eclipsed by technology. Relegated to low-cost storage in shipping containers.
A Path Well Worn
Artificial Intelligence is an amplification of technology we have grown comfortable with. According to Klaus Schwab in his book “The Fourth Industrial Revolution”, the top three companies in Silicon Valley now produce in revenue what the top three companies in Detroit did in the 1990s, but with approximately 90% fewer employees. We gratefully accept the convenience and benefits the technology age has delivered. We revere people like Steve Jobs, who augmented our lives with usable technology. Artificial Intelligence holds the promise of a similar quantum improvement in productivity. While today we unknowingly use Artificial Intelligence every day, we fear its implications for all the right reasons.
How does it work?
Artificial Intelligence has arisen from the intersection of three exponential technology laws: Moore’s Law, Metcalfe’s Law and Kryder’s Law. An abundance of processing, connectivity and data has given rise to algorithms that use enormous amounts of data to make decisions. These self-morphing, self-programming algorithms continuously simulate, test and learn from scenarios. Like a child, deep learning embeds knowledge so it can focus on learning new things. It does this at speeds limited only by processing power. In simple terms, following Moore’s Law, Artificial Intelligence doubles its ‘intellectual capacity’ every eighteen months. Better, faster, cheaper.
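The compounding effect of that eighteen-month doubling is easy to underestimate. A minimal sketch of the arithmetic, assuming a clean doubling every eighteen months (the stylised version of Moore’s Law cited above, not a measured hardware figure):

```python
# Illustrative arithmetic only: if capacity doubles every 18 months,
# how much does it multiply over a few years?

def capacity_multiplier(months: float, doubling_period_months: float = 18.0) -> float:
    """Growth factor after `months`, assuming one doubling per `doubling_period_months`."""
    return 2 ** (months / doubling_period_months)

for years in (3, 5, 10):
    print(f"{years:>2} years -> ~{capacity_multiplier(years * 12):,.0f}x")
```

Three years yields two doublings (a 4x multiplier); a decade yields more than six doublings, i.e. roughly a hundredfold increase. That steepening curve is why "better, faster, cheaper" compounds so quickly.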
Great promise. Great risk.
Like all breakthrough technologies, there is great promise. And great risk. Major scientific discoveries arrive with increasing frequency and velocity. Breakthroughs in cancer detection and treatment. Breakthroughs to help the paralyzed walk. The blind to see. The science fiction of the past, evolving in our present. At a speed and scale unprecedented in history.
Driverless cars hold the promise of decimating road fatalities. According to the World Health Organization, we lose over a million lives per annum on our roads globally. Yet this human benefit belies a human cost: millions of people will lose their employment in exchange for the lives we preserve. Morally we may argue it is a small price to pay. Ethically and societally we may think it a price too high.
The Obama administration’s October 2016 paper, Preparing for the Future of Artificial Intelligence, clearly identifies the potential for widening economic and social inequality. According to Frey, C. B. and Osborne, M. A. (2013), The Future of Employment: “OECD data shows on average 57% of jobs are susceptible to automation, this number rises to 69% in India and 77% in China”. While there is a significant gap between potential and actual, there is no doubt that both income and employment gaps will widen significantly.
Are we ready?
Our ability to analyse data at scale raises legal, regulatory and ethical questions. Search query data alone can be 99.999% accurate in diagnosing pancreatic cancer at a point where medical intervention could triple the life expectancy of the unsuspecting user. Yet we choose to remain quiet and watch people die rather than breach regulated privacy. The precision of technology challenges the ambiguity of our humanity.
The impact of machines taking over decision making fundamentally challenges our regulatory and legal systems. If a human driver kills a pedestrian in an accident while saving the passengers, we empathise. No time to react. It was an accident. It was unavoidable. We imagine ourselves in that position. If a machine were to calculate the same outcome, we see it as calculating. In law, calculation is the difference between manslaughter and murder.
Our expectation of technology is precision and record. We expect the black box to unravel the mystery. But Artificial Intelligence algorithms are as dynamic as our thoughts. They evolve continuously as they learn, challenging regulators and the legal community to determine what they were ‘thinking’ at the time of an incident. In the case of the Tesla Autopilot crash, investigators initially struggled to determine which technology was in charge at the moment of impact. We persist with the technology because the accident rate is already well below that of human drivers. We must not let perfect prevent progress.
Whilst it is easy to visualize the impact of driverless cars on jobs, the impact of Artificial Intelligence, unlike prior technology revolutions, goes beyond entry-level jobs. Accountants, doctors, lawyers, actuaries and auditors will all have elements of their jobs augmented or potentially replaced. If a task is technical and repeatable, it is an obvious candidate for replacement. The irony is that doctors may be more at risk of displacement by Artificial Intelligence than nurses. The simple truth, according to Drew Perez, CEO of Adatos, is “Don’t send in a man to do a machine’s job.”
The big question is: are we prepared? While technology advances at ever-accelerating speed, our mechanisms to cope continue to evolve at more modest human rates of change. Politically, economically and socially. If the movie ‘The Big Short’ focused attention on the fragility of our financial system, what happens when millions cannot pay their mortgages because their roles have become obsolete? While the promise of Artificial Intelligence for good is powerful, the dangers are worthy of public investment, attention and scrutiny. As Joi Ito so eloquently articulated: “This year (2016), artificial intelligence will become more than just a computer science problem. Everybody needs to understand how AI behaves.”