Stephen Hawking Lays Down the Law on AI

At a recent conference organized by the Boston-based Future of Life Institute, rock star theoretical physicist Stephen Hawking, along with several hundred others, endorsed 23 new principles concerning the ethics of artificial intelligence.

Other notable figures from the world of AI who signed included billionaire Elon Musk, DeepMind’s Demis Hassabis, OpenAI’s Ilya Sutskever, Siri co-founder Tom Gruber, author Ray Kurzweil, and, for absolutely no reason at all, actor Joseph Gordon-Levitt.

Don’t get me wrong, I like Joseph Gordon-Levitt as much as the next guy, but he had as much business getting involved with FLI as Humphrey Bogart would have had signing the Russell-Einstein Manifesto.

The Asilomar AI Principles were the result of a discussion on how artificial intelligence can remain beneficial to humanity rather than becoming something that would “spell the end of the human race,” as Stephen Hawking warned in a 2014 BBC interview.

One of the significant principles agreed upon was that any future investment in AI should be accompanied by funding for research into keeping the technology beneficial. Questions researchers will have to ask include: how to keep AI from malfunctioning or being hacked, how to maintain people’s resources and purpose as the technology grows, how to update legal systems to manage the risks associated with AI, and, most importantly, what set of ethics and values AI should be aligned with.

There are many risks associated with artificial intelligence, and all of these principles aim to keep us away from Terminator territory. Researchers will be expected to engage in cooperative efforts to avoid cutting corners, an issue normally associated with the race to be the first company to develop a new technology.

Transparency, proper education of policymakers, and safety protocols will also be key ingredients in avoiding our imminent doom.

While the extreme dangers of artificial intelligence may bring to mind a Terminator-like scenario, there are much more immediate and equally damaging problems that could arise as the technology advances: court systems where AI impartially judges cases and doles out sentences, AI-controlled transportation systems (airlines, taxis, etc.), and, perhaps most importantly, control over military apparatus.

Another significant problem that needs to be considered is the economic impact AI would have on an already fragile job market as advanced systems replace human employees.

These important questions, and many more raised during the conference, are meant to challenge future researchers and investors to consider the weight of their actions before continuing down a potentially dangerous road.

The problem is that, unfortunately, these principles are not a set of laws, nor could they be treated as such, since they are open to the interpretation of the reader. This means that while current leaders in the AI community agree on the direction and risks of the technology, developers won’t necessarily be legally bound by any strict code of ethics.

After all, any endeavor of technological advancement is intrinsically susceptible to human error, particularly deviation from its founding principles. Just as the 1955 manifesto by Bertrand Russell and Albert Einstein didn’t stop nuclear proliferation, the Asilomar AI Principles alone cannot hope to prevent catastrophe, but they’re a pretty good start.

One thing is for certain: it will be interesting to see where we go from here.