Niklas Rosenberg

Asimov's Laws of Robotics will resurface when Foundation airs

A few days ago I wrote about my excitement over the news that Apple TV+ will launch Foundation as a TV series in 2021. The creators have taken on a monumental task, but I still think the show has the potential to become a game-changer for Apple TV+.

As Foundation airs on Apple TV+, Asimov’s Laws of Robotics will again be talked about. Screen capture from WWDC 2020 live stream.

Foundation will also make us think about our relationship with robots and artificial intelligence (AI), not in a Westworld kind of way, but through the Laws of Robotics that author Isaac Asimov introduced in his books. The three main laws are as follows:

  1. A robot may not injure a human being or, through inaction, allow a human being to come to harm.

  2. A robot must obey the orders given it by human beings except where such orders would conflict with the First Law.

  3. A robot must protect its own existence as long as such protection does not conflict with the First or Second Laws.

In the fifth novel of the Foundation series, a zeroth law was introduced, with the original three suitably rewritten as subordinate to it:

  • 0. A robot may not injure humanity, or, by inaction, allow humanity to come to harm.
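The hierarchy is strict: each law binds only insofar as it does not conflict with the laws above it. As a purely illustrative sketch of that precedence (and nothing more), here is how one might encode it in Python. The Action fields and the permitted function are hypothetical simplifications of my own, and the sketch only models which actions are permissible, not which ones the laws would make obligatory:

```python
from dataclasses import dataclass

@dataclass
class Action:
    # Hypothetical, vastly simplified description of a candidate action.
    harms_humanity: bool = False    # would violate the Zeroth Law
    harms_human: bool = False       # harm to an individual, including through inaction
    ordered_by_human: bool = False  # a human has ordered this action
    endangers_self: bool = False    # risks the robot's own existence

def permitted(action: Action) -> bool:
    """Check an action against the laws in strict priority order:
    Law 0 > Law 1 > Law 2 > Law 3."""
    if action.harms_humanity:
        return False  # Zeroth Law overrides everything else
    if action.harms_human:
        return False  # First Law yields only to the Zeroth
    if action.endangers_self and not action.ordered_by_human:
        return False  # Third Law yields to human orders (Second Law)
    return True

# A robot may sacrifice itself when a human orders it to:
print(permitted(Action(ordered_by_human=True, endangers_self=True)))  # True
```

The hard part, of course, is deciding what counts as harm in the first place – exactly what a boolean flag glosses over, and one reason the framework is too coarse in practice, as I discuss below.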

The original three laws were written a long time ago (first introduced in a short story from 1942), and much has happened since then in terms of our own technological development. It seems that Asimov was mainly envisioning androids, or human-like robots, when he wrote his laws – but we already know that robots and AI can take many different forms. While elegant and easy to grasp, Asimov’s framework is probably too coarse for the robots and AI we’re about to develop.

Still, despite some obvious limitations, Asimov’s laws are often mentioned as a template or starting point for guiding our AI development. They have certainly inspired many to think about robot and AI ethics, and thanks to the upcoming Foundation TV series, more people will again engage with these questions.

And these are indeed important issues. As I’ve written before, I strongly believe that we should put more effort and resources into developing a robust moral framework for AI that all players in the field can agree upon. It’s something we will need even if it takes a long time before we develop artificial general intelligence (AGI, also known as “strong AI”).