
Black Box vs Glass Box

In the airline industry, the black box is an invaluable piece of equipment. After a troubling event (a crash, a malfunction, etc.), experts focus all their efforts on finding and “getting hold of” this famous “black box”. Some time later (which can mean weeks, months, or even years), they produce a report identifying the causes of the anomaly or anomalies, and even the conditions prevailing when the incident occurred.

The expression “black box” also appears in software development, where software is subjected to so-called black-box tests to verify that it works properly. The objective is to answer the question “does system X manage to do A?” regardless of how the designers structured their program. Whereas in aviation the black box eventually reveals its contents, in computer science the black box is supposed to answer a question without revealing the secrets it contains.
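To make this concrete, here is a minimal sketch of a black-box test in Python. The `shipping_cost` function and its pricing rules are hypothetical, invented for this example; the point is that the tests exercise the system only through its public interface, comparing inputs to expected outputs, without ever looking at the implementation.

```python
import unittest

# Hypothetical system under test: we only know its contract
# (inputs and expected outputs), not its internal structure.
def shipping_cost(weight_kg: float) -> float:
    """Flat rate of 5.0 up to 1 kg, then 2.0 per extra kg."""
    if weight_kg <= 0:
        raise ValueError("weight must be positive")
    return 5.0 + max(0.0, weight_kg - 1.0) * 2.0

class BlackBoxShippingTests(unittest.TestCase):
    """Black-box tests: 'does system X manage to do A?'
    We check observable behavior, never the implementation."""

    def test_base_rate(self):
        self.assertEqual(shipping_cost(1.0), 5.0)

    def test_extra_weight(self):
        # 5.0 base + 2 extra kg * 2.0
        self.assertEqual(shipping_cost(3.0), 9.0)

    def test_rejects_invalid_input(self):
        with self.assertRaises(ValueError):
            shipping_cost(-1.0)

if __name__ == "__main__":
    unittest.main()
```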

In the branch of computer science called artificial intelligence, and in modeling in particular, a distinction is also commonly made between black-box models and so-called glass-box models. The former are considered almost uninterpretable, their inner workings opaque, while the latter are more or less easily understood and behave more transparently.
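A minimal sketch of the contrast, assuming scikit-learn is available: a small decision tree is a glass-box model whose decision rules can be printed and read directly, while a multi-layer perceptron trained on the same data is a black box whose weight matrices resist any direct reading. The dataset and model choices here are illustrative, not prescriptive.

```python
from sklearn.datasets import load_iris
from sklearn.neural_network import MLPClassifier
from sklearn.tree import DecisionTreeClassifier, export_text

X, y = load_iris(return_X_y=True)
feature_names = load_iris().feature_names

# Glass box: the fitted tree can be printed as human-readable rules.
tree = DecisionTreeClassifier(max_depth=2, random_state=0).fit(X, y)
print(export_text(tree, feature_names=feature_names))

# Black box: the fitted network is just matrices of weights;
# inspecting them says little about why a sample is classified as it is.
mlp = MLPClassifier(hidden_layer_sizes=(32, 32), max_iter=2000,
                    random_state=0).fit(X, y)
print([w.shape for w in mlp.coefs_])  # e.g. [(4, 32), (32, 32), (32, 3)]
```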

Each of these two approaches has its supporters and its critics. The main limitation cited against black-box models is their lack of interpretability.

Does this mean we should rush to reject these types of models?

For Elizabeth A. Holm, the answer is no. According to her, the black-box approach is already routine: all humans base their decisions on an internal black box, the combination of their judgment and experience. Holm gives two reasons to use these kinds of models: when the cost of a wrong answer is low relative to the value of a correct answer, and when the black box simply produces better results. Despite these benefits, deploying such models in critical situations remains problematic.

To better understand how these models work, a movement called “Explainable AI” (XAI) has emerged, with the ambition of providing justifications for the results of black-box models. Is this solution acceptable?
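One common family of XAI techniques produces post-hoc explanations such as feature importances. A minimal sketch, again assuming scikit-learn, with an illustrative dataset and model: permutation importance measures how much a black-box model's score drops when each feature is shuffled, offering a justification of sorts without opening the model.

```python
from sklearn.datasets import load_breast_cancer
from sklearn.ensemble import RandomForestClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = load_breast_cancer(return_X_y=True)
names = load_breast_cancer().feature_names
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)

# The "black box" whose results we want to justify after the fact.
model = RandomForestClassifier(random_state=0).fit(X_tr, y_tr)

# Post-hoc explanation: shuffle each feature on held-out data and
# record how much the accuracy drops; a large drop suggests the
# model relies heavily on that feature.
result = permutation_importance(model, X_te, y_te, n_repeats=10,
                                random_state=0)
for i in result.importances_mean.argsort()[::-1][:5]:
    print(f"{names[i]}: {result.importances_mean[i]:.3f}")
```

Note that this explains the model's behavior from the outside; as the next paragraphs discuss, such after-the-fact justifications are precisely what some researchers question.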

For Cynthia Rudin, we should simply stop trying to explain “opaque” models when the stakes of the decisions based on these algorithms are high: medical and judicial decisions, for example.

Instead, the author recommends using interpretable models from the start. Among the arguments put forward: explanations fail to provide the detail needed to understand the results, they are not faithful to the algorithm's actual computations, and they introduce risks of error by making the decision-making process more complicated.

That said, research on understanding how these methods work continues. We can cite the work of Microsoft Research and the University of Montreal on the learning dynamics of deep neural networks, or that of Harvard University and the Santa Fe Institute on Naftali Tishby's information bottleneck principle. Even if this research is far from explaining every behavior of these tools, it helps solidify their theoretical foundations. These models may therefore not be as impenetrable as they seem, although very advanced techniques are certainly needed to analyze them.

Ultimately, aviation and AI are probably not so different on this point: both fields need high-level experts and time to make their black boxes talk. One can only hope that, over time, these algorithms will yield more of their secrets to the research and practitioner communities. In the meantime, the choice of a method should be preceded by careful consideration.

The Author

Maixent Gémeri ASSI holds an MS in Artificial Intelligence from the University of Aberdeen and a Master's in Statistics and Applied Economics from ENSEA in Abidjan. He has more than 10 years of experience.

Since January of this year, he has been Monitoring and Evaluation Coordinator at the research institute Innovations for Poverty Action, Abidjan office (Ivory Coast).
