26 April 2022

Against black box medicine

Explainable machine learning practices: opening another black box for reliable medical AI 

First, in regulating medical AI, we should address not only algorithmic opacity, but also the other black boxes plaguing these tools. In particular, many opaque choices are made in the training process and in the way algorithmic systems are built, and these choices can affect the performance of SaMD-MLs, and hence their reliability. Second, we have argued that opening this alternative black box means explaining the training process. This type of explanation consists partly in documenting the technical choices made from problem selection to model deployment, but also in motivating those choices by being transparent about the values that shape them: in particular, performance-centered values and ethical, social, and political values. Overall, our framework can be considered a starting point for investigating which aspects of the design of AI tools should be made explicit in medicine, in order to inform discussions on the characteristics of reliable AI tools and on how we should regulate them. We have also highlighted some limitations of the framework, and we have argued that future work should investigate the practice of machine learning empirically in its light, identifying further nuances in the values that shape ML training.
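To make this concrete, one might imagine what documenting the training process could look like in code. The sketch below is a minimal illustration, not part of the paper's framework: the TrainingChoice record, the stage names, and the example entries are all hypothetical. The idea is simply to pair each technical choice in the pipeline with the performance-centered and ethical/social values motivating it.

    from dataclasses import dataclass, field

    @dataclass
    class TrainingChoice:
        """One documented decision in an ML pipeline, with its motivation."""
        stage: str          # e.g. "problem selection", "data collection", "model deployment"
        decision: str       # the technical choice that was made
        performance_values: list[str] = field(default_factory=list)  # e.g. accuracy, calibration
        ethical_values: list[str] = field(default_factory=list)      # e.g. fairness, patient safety

    # Hypothetical record for an imagined SaMD-ML tool's training process
    choices = [
        TrainingChoice(
            stage="problem selection",
            decision="predict 30-day readmission rather than mortality",
            performance_values=["label availability", "class balance"],
            ethical_values=["clinical actionability"],
        ),
        TrainingChoice(
            stage="data collection",
            decision="exclude records with more than 20% missing vitals",
            performance_values=["training stability"],
            ethical_values=["risk of under-representing some patient groups"],
        ),
    ]

    for c in choices:
        print(f"[{c.stage}] {c.decision}")
        print(f"  performance values: {', '.join(c.performance_values)}")
        print(f"  ethical/social values: {', '.join(c.ethical_values)}")

A record like this does not dissolve the opacity by itself, but it makes the value-laden choices explicit and available for scrutiny by regulators and clinicians.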

We want to end this article by repeating that the problem of explaining opaque technical choices is not an alternative to explaining the opacity that lies at the algorithmic level. Unlike London, we think that worries about algorithmic opacity in medicine are more than justified. However, we leave any consideration of how the two opacities are connected to each other for future work.

Huge business interests are at stake; who cares about citizens?


