As more and more of the solutions around us leverage artificial intelligence (AI), typically some form of machine learning (ML), many have begun to wonder how those solutions actually work. Especially when it comes to processes like medical diagnostics and credit scoring, people have started to ask: how did system X reach conclusion Y?
ML can be described as “computer algorithms that improve automatically through experience”. The algorithms build a mathematical model based on sample data, or “training data”, in order to make predictions or decisions without being explicitly programmed to do so.
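To make that definition concrete, here is a minimal sketch of the idea in Python. The use of scikit-learn and its bundled breast-cancer dataset is my own choice for illustration, not something the definition above prescribes: the point is simply that the program contains no hand-written rules and instead infers them from labelled training data.

```python
# A minimal sketch of "learning from training data" rather than explicit rules.
# scikit-learn and its bundled breast-cancer dataset are assumptions made for
# this illustration, not part of the definition quoted above.
from sklearn.datasets import load_breast_cancer
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

# Sample data ("training data") with known labels: malignant vs. benign.
X, y = load_breast_cancer(return_X_y=True)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# No hand-written decision rules: the model fits its parameters to the data...
model = LogisticRegression(max_iter=5000)
model.fit(X_train, y_train)

# ...and then makes predictions on cases it was never explicitly programmed for.
print("Accuracy on unseen data:", model.score(X_test, y_test))
```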
Some ML methods and techniques produce results that humans can understand and trace; that’s called Explainable AI. In contrast, there’s Black Box AI, where even the system’s designers cannot explain why it arrived at a specific decision.
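To make the contrast tangible, here is a hedged sketch, again assuming scikit-learn and its iris dataset purely for illustration: a shallow decision tree can be printed as human-readable if/else rules, while a multi-layer neural network reaches its answers through matrices of learned weights that offer no such account.

```python
# A rough contrast between an "explainable" model and a "black box" model.
# scikit-learn and the iris dataset are my own assumptions for this example.
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = ["sepal_len", "sepal_wid", "petal_len", "petal_wid"]

# Explainable: the fitted tree can be printed as human-readable decision rules.
tree = DecisionTreeClassifier(max_depth=3, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Black box: the fitted network is just matrices of learned weights, which give
# no human-readable account of why a particular flower was classified as it was.
net = MLPClassifier(hidden_layer_sizes=(50, 50), max_iter=2000, random_state=0).fit(X, y)
print([w.shape for w in net.coefs_])
```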
It’s not a big surprise that many have voiced concerns about the so-called AI Black Box problem. The fact that no one really knows how the most advanced algorithms do what they do could be a serious issue. Among others, a report by the AI Now Institute at NYU has recommended that US government agencies responsible for criminal justice, health care, welfare, and education avoid such technology.
But how big of a problem is this?
Some take it very seriously, while others say we shouldn’t worry too much. According to Vijay Pande at Andreessen Horowitz, we shouldn’t fear the black box of AI, at least not in healthcare, because it isn’t a new problem. After all, human intelligence itself is, and has always been, a black box. He writes:
There’s particular concern about this in health care, where AI is used to classify which skin lesions are cancerous, to identify very early stage cancer from blood, to predict heart disease, to determine what compounds in people and animals could extend healthy life spans, and more. But these fears about the implications of black box are misplaced. AI is no less transparent than the way in which doctors have always worked — and in many cases it represents an improvement, augmenting what hospitals can then do for patients and the entire health care system.
But then again, what if a Black Box AI systematically makes errors that go undetected for a long time? That could have severe consequences in areas like healthcare or the criminal justice system, to name just two.
The reasons behind the errors could be numerous, ranging from engineers using biased training data to something we simply can’t comprehend. After all, it’s a black box system, and by definition we don’t know what’s happening under the hood.
With luck, the human workers (e.g. radiologists and astronauts) who are “managing and overseeing” the AI workers (that is, ML algorithms) would detect some of the errors, but it could easily be that, as AI gets more advanced, the human mind becomes the limiting factor. In practice we would need AI to understand AI, and humans would become increasingly redundant.
It’s a bit similar to what’s happening in chess, where centaur chess (human players assisted by AI) was supposed to dominate the game for a long time. That no longer seems to be the case, as fully computerized AI chess systems now appear to win most of the tournaments.
It’s unlikely that we would be able to ban Black Box AI completely, and quite frankly I don’t think a ban would be the best solution for anyone. If we want to reap the benefits of deep learning (i.e. the application of deep neural networks) in areas like cancer detection and drug discovery, and I think we should, then some other solution to the problem is needed.
What that solution will be remains to be seen, but in any case we should probably allocate more resources to the study of AI risks, just as we should to research and development in AI ethics. A good moral framework for how to use AI in the future requires that we understand the risks involved and that we develop methods and tools for mitigating them.