Google needs to be transparent with their AI Ethics Board

Google is investing heavily in machine learning and artificial intelligence (AI), probably more than any other company. The good news is that Google has set up an AI ethics board. The bad news is that the company doesn't disclose who sits on that board or what the board actually does.

The co-founders of the DeepMind AI lab said they would agree to the acquisition only if Google promised to set up an AI ethics board.

My firm belief is that we should start thinking seriously about a moral framework for AI as soon as possible; I've even written a blog post explaining why this is important. Almost everybody who has taken a serious look at AI agrees that ethics and morality will play a crucial role in its future. Some say it will be a matter of life and death.

Just think about where AI already is today and how it's creeping into our everyday lives. You'll find self-learning systems all over the internet, Google Search being the prime example. Facebook is building more and more AI into its products. The News Feed learns continuously from your behaviour and from the data you (and everybody else) provide. Soon it will even recognize and interpret photos (describing them for the blind, among other things).

You'll soon find AI in many physical products as well. It's not just Siri on the iPhone or Cortana on Microsoft's devices. There will be a lot of machine learning in transportation, for example, and many car manufacturers, such as Tesla, Volvo, and BMW, are betting seriously on self-driving cars. Drones are another area where we'll arguably see a lot of AI in the future.

Then fast-forward 50 years into the future. It’s quite reasonable to expect that we’ll have sophisticated AI almost everywhere: in our homes, in businesses, in healthcare, and unfortunately also in warfare.

It’s hard to say exactly how sophisticated AI will be at that time. We might not yet have human-level machine intelligence, but our devices and programs will certainly be a lot smarter than they are today. And they’ll be making a lot of important decisions on our behalf. 

So it's easy to see why ethics is crucial in the development of AI. The difficult question is: who gets to decide what's good and bad, what's right and wrong? After all, every society and religion has its own take on morality. Philosophers have been debating these issues for ages. Unlike in, say, mathematics or physics, there simply isn't one set of rules that everybody can agree upon.

Fortunately, Google has realized that it needs bright minds thinking about the various ethical aspects of AI. In fact, setting up an ethics board was part of the deal when Google acquired the AI company DeepMind in 2014. DeepMind is the company behind AlphaGo, the AI that beat Go champion Lee Sedol a few weeks ago. Until recently, Go had been notoriously difficult for computers to master because of the sheer number of potential moves.
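To get a feel for how large that number really is, here's a quick back-of-the-envelope sketch in Python. It uses the commonly cited rough estimates for each game's average branching factor and length (these are approximate, widely quoted figures, not exact values):

```python
import math

def tree_size_log10(branching: float, depth: int) -> float:
    """Return log10(branching ** depth): the order of magnitude
    of the game tree, i.e. the rough number of possible games."""
    return depth * math.log10(branching)

# Commonly cited rough estimates: chess has ~35 legal moves per
# position over ~80-move games; Go has ~250 moves over ~150 moves.
chess = tree_size_log10(35, 80)
go = tree_size_log10(250, 150)

print(f"chess game tree: about 10^{int(chess)} games")
print(f"go game tree:    about 10^{int(go)} games")
```

The result is roughly 10^123 possible chess games versus roughly 10^359 for Go, which is why brute-force search, the approach behind earlier chess programs, simply doesn't scale to Go.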

But according to Business Insider and other sources, Google still refuses to reveal who sits on the ethics board and what they're working on. I have a really hard time understanding why. Are there reasons related to business strategy and trade secrets? Or does Google fear that ethics is such a sensitive topic that it doesn't want to expose board members to public scrutiny (and, probably, some degree of flaming and trolling as well)?

In my view, Google is in such a unique position that it must be transparent about its AI ethics board. It's very likely that some of the most significant AI breakthroughs in the near future will come from Google. Considering what's at stake, we need a public, ongoing, and open-ended discussion about AI ethics. And that discussion needs to start as soon as possible, with participants from all over the world.

Besides, I'm convinced that transparency would benefit everybody, including Google. After all, the principle of submitting research results to open critical observation and examination has served science well for a long time: it ensures that results gradually improve, and it allows others to build upon previous work. There could also be plenty of good PR value in it for Google.

I realize Google is a for-profit company, not a charity, and isn't obliged to do anything beyond what the law mandates. In this case, however, I would argue that transparency is not only the right thing to do but also the smart thing to do.