
26 May 2023

The explainability of Artificial Intelligence in the life sciences

Explainable AI in Healthcare. Unboxing Machine Learning for Biomedicine

Solving the explainable AI conundrum by bridging clinicians’ needs and developers’ goals

All this is moving very fast, and perhaps someone should start reviewing its foundations and its consequences. I am referring specifically to what is known as the explainability of artificial intelligence. In this blog I have referred to it as Black Box Medicine.

The algorithms in use should be permeable to human knowledge. Put differently: if an algorithm is not explainable, then we do not know how it reaches its result, and if we replicate it we do not know whether the same result will come out (reliability). In the life sciences this is even more relevant, because the impact can be decisive for health and for life.
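To make this concrete, here is a minimal sketch in Python (my illustration, with synthetic data and scikit-learn; not taken from the book or the article) of one common post-hoc approach, permutation importance, which probes an opaque model's behaviour from the outside:

# Minimal sketch: post-hoc explanation of an opaque classifier via
# permutation importance (synthetic data; assumes scikit-learn).
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.inspection import permutation_importance
from sklearn.model_selection import train_test_split

X, y = make_classification(n_samples=1_000, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)

# An accurate but opaque model: the fitted decision rule is not
# readable by a human, which is the "black box" at issue.
model = GradientBoostingClassifier(random_state=0).fit(X_train, y_train)

# Permutation importance asks: how much does test performance drop when
# one feature is shuffled? It describes the model's behaviour without
# making its internal mechanism explainable.
result = permutation_importance(model, X_test, y_test,
                                n_repeats=20, random_state=0)
for i in result.importances_mean.argsort()[::-1]:
    print(f"feature {i}: {result.importances_mean[i]:.3f} "
          f"+/- {result.importances_std[i]:.3f}")

Note the limits of such a tool: it tells us which inputs mattered, not why the model combines them as it does. And the fixed random_state is the replication point above in miniature: without it, rerunning the pipeline need not return the same model or the same explanation.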

Today I bring a book and an article. I have not read the book, but my impression is that it is the most complete treatment of this question so far from a practical standpoint. The Nature article reflects the perspectives of computer scientists versus clinicians, and one chart shows it all:


And we can already see that there is no agreement and that the differences in perspective need to be narrowed. So far, the former have come out on top.

I reproduce the book's table of contents:

1. Human–AI Relationship in Healthcare

2. Deep Learning in Medical Image Analysis: Recent Models and Explainability

3. An Overview of Functional Near-Infrared Spectroscopy and Explainable Artificial Intelligence in fNIRS

4. An Explainable Method for Image Registration with Applications in Medical Imaging

5. State-of-the-Art Deep Learning Method and Its Explainability for Computerized Tomography Image Segmentation

6. Interpretability of Segmentation and Overall Survival for Brain Tumors

7. Identification of MR Image Biomarkers in Brain Tumor Patients Using Machine Learning and Radiomics Features

8. Explainable Artificial Intelligence in Breast Cancer Identification

9. Interpretability of Self-Supervised Learning for Breast Cancer Image Analysis

10. Predictive Analytics in Hospital Readmission for Diabetes Risk Patients

11. Continuous Blood Glucose Monitoring Using Explainable AI Techniques

12. Decision Support System for Facial Emotion-Based Progression Detection of Parkinson’s Patients

13. Interpretable Machine Learning in Athletics for Injury Risk Prediction

14. Federated Learning and Explainable AI in Healthcare

If, as Floridi says, explicability should be one of the principles of bioethics, we still have a long way to go to make it so. I have the impression that we are facing a strategy of fait accompli: the algorithms are already here, and I cannot explain how this algorithm arrives at this result.

PS. Open source artificial intelligence has arrived; regulation and explainability become even more relevant.





04 May 2022

Against black box medicine (2)

Time to reality check the promises of machine learning-powered precision medicine

Both machine learning and precision medicine are genuine innovations and will undoubtedly lead to some great scientific successes. However, these benefits currently fall short of the hype and expectation that has grown around them. Such a disconnect is not benign and risks overlooking rigour for rhetoric and inflating a bubble of hope that could irretrievably damage public trust when it bursts. Such mistakes and harm are inevitable if machine learning is mistakenly thought to bypass the need for genuine scientific expertise and scrutiny. There is no question that the appearance of big data and machine learning offer an exciting chance for revolution, but revolutions demand greater scrutiny, not less. This scrutiny should involve a reality check on the promises of machine learning-powered precision medicine and an enhanced focus on the core principles of good data science—trained experts in study design, data system design, and causal inference asking clear and important questions using high-quality data.



26 April 2022

Against black box medicine

Explainable machine learning practices: opening another black box for reliable medical AI 

In regulating medical AI, we should address not only algorithmic opacity, but also other black boxes plaguing these tools. In particular, there are many opaque choices that are made in the training process and in the way algorithmic systems are built, which can potentially impact SaMD-MLs' performance, and hence their reliability. Second, we have said that opening this alternative black box means explaining the training process. This type of explanation is in part documenting the technical choices made from problem selection to model deployment, but it is also motivating those choices by being transparent about the values shaping the choices themselves—in particular, performance-centered values and ethical/social/political values. Overall, our framework can be considered as a starting point to investigate which aspects of the design of AI tools should be made explicit in medicine, in order to inform discussions on the characteristics of reliable AI tools, and how we should regulate them. We have also highlighted some limitations, and we have claimed that in the future it will be necessary to empirically investigate the practice of machine learning in light of our framework, and to identify more nuances in the values shaping ML training.

We want to end this article by repeating that the problem of explaining opaque technical choices is not an alternative to explain the opacity lying at the algorithmic level. Unlike London, we think that the worries about algorithmic opacity in medicine are more than justified. However, we leave any consideration on how the two opacities are connected to each other for future works.
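To make the idea of documenting the training process concrete, here is a minimal sketch of what such a record could look like in code; the fields and values are hypothetical illustrations of mine, not the paper's framework:

# Hypothetical sketch: making opaque training choices explicit as a
# structured, reviewable record (my fields, not the paper's framework).
from dataclasses import dataclass

@dataclass
class TrainingRecord:
    problem_selection: str                 # why this clinical question was chosen
    data_source: str                       # provenance of the training data
    exclusions: list[str]                  # who/what was left out, and why
    model_family: str                      # e.g. "gradient boosting"
    performance_values: dict[str, float]   # metrics that drove design choices
    value_judgements: list[str]            # ethical/social trade-offs made

record = TrainingRecord(
    problem_selection="30-day readmission risk, chosen for cost impact",
    data_source="single-centre EHR extract, 2015-2019",
    exclusions=["patients under 18", "records missing discharge codes"],
    model_family="gradient boosting",
    performance_values={"AUC": 0.78},
    value_judgements=["sensitivity favoured over specificity",
                      "no subgroup fairness audit performed"],
)
print(record)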

Huge business interests are at stake; who cares about citizens?



Didier Lourenço at Galeria Barnadas

24 November 2021

The urgent answer to the coming black box medicine

Black box medicine and transparency

 The series of reports Black Box Medicine and Transparency examines the human interpretability of machine learning in healthcare and research:

1. Machine learning landscape considers the broad question of where machine learning is being (and will be) used in healthcare and research for health 

2. Interpretable machine learning outlines how machine learning can be or may be rendered human interpretable

3. Ethics of transparency and explanation asks why machine learning should be made transparent or be explained, drawing upon the many lessons that the philosophical literature provides

4. Regulating transparency considers whether (and to what extent) the General Data Protection Regulation (GDPR) requires machine learning in the context of healthcare and research to be transparent, human interpretable, or explainable

5. Interpretability by design framework distils the findings of the previous reports, providing a framework to think through human interpretability of machine learning in the context of healthcare and health research

6. Roundtables and interviews summarises the three roundtables and eleven interviews that provided the qualitative underpinning of preceding reports 

Each report interlocks, building on the conclusions of preceding reports.

Meanwhile you can start with the executive summary. Does anybody care about it?





14 October 2021

Algorithms: the underpinning of black box medicine?

 Algorithms as medical devices

The rapid growth of digital devices, software and technologies means that the medical device sector is changing. Many small and independent manufacturers are encountering medical device regulation for the first time. At the same time, responsive and effective regulation of digital devices requires sound understanding of the underlying new technologies and concepts.

Algorithms as medical devices describes how digital health is covered by existing medical device regulation and outlines three critical areas: 

The challenges that the digital health sector may pose for regulators and developers

How digital devices can be regulated as medical devices under UK/EU and US law

The specific problems that machine learning could pose to medical device regulation

Meanwhile, the market for black box medicine keeps growing, unless some regulator cuts short its continuous vacation.





01 February 2019

Medicine as a data science (3)

High-performance medicine: the convergence of human and artificial intelligence

If you want to know the current state of artificial intelligence in medicine, then Eric Topol's review in Nature Medicine is the article you have to read. A highlighted statement:
There are differences between the prediction metric for a cohort and an individual prediction metric. If a model's AUC is 0.95, which most would qualify as very accurate, this reflects how good the model is for predicting an outcome, such as death, for the overall cohort. But most models are essentially classifiers and are not capable of precise prediction at the individual level, so there is still an important dimension of uncertainty.
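To see the cohort-versus-individual point in numbers, a minimal sketch with synthetic data (my illustration, assuming NumPy and scikit-learn): a score can discriminate very well across a cohort while a single patient's predicted risk stays close to a coin flip.

# Minimal sketch: high cohort AUC, weak individual certainty
# (synthetic data; assumes NumPy and scikit-learn).
import numpy as np
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)
y = rng.integers(0, 2, size=10_000)  # true outcomes for the cohort

# Scores that separate the classes on average (+/- 0.1 around 0.5)
# but carry individual noise, so no single score is decisive.
scores = np.clip(0.5 + (y - 0.5) * 0.2 + rng.normal(0, 0.09, y.size), 0, 1)

print(f"cohort AUC: {roc_auc_score(y, scores):.2f}")              # roughly 0.94
print(f"mean risk, true positives: {scores[y == 1].mean():.2f}")  # about 0.60
print(f"mean risk, true negatives: {scores[y == 0].mean():.2f}")  # about 0.40

An AUC of about 0.94 looks impressive, yet the typical at-risk patient here is assigned a probability of only about 0.6: good ranking across a cohort, little certainty for the individual in front of the clinician.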
And this is a good summary:
Despite all the promises of AI technology, there are formidable obstacles and pitfalls. The state of AI hype has far exceeded the state of AI science, especially when it pertains to validation and readiness for implementation in patient care. A recent example is IBM Watson Health’s cancer AI algorithm (known as Watson for Oncology). Used by hundreds of hospitals around the world for recommending treatments for patients with cancer, the algorithm was based on a small number of synthetic, nonreal cases with very limited input (real data) of oncologists. Many of the actual output recommendations for treatment were shown to be erroneous, such as suggesting the use of bevacizumab in a patient with severe bleeding, which represents an explicit contraindication and ‘black box’ warning for the drug. This example also highlights the potential for major harm to patients, and thus for medical malpractice, by a flawed algorithm. Instead of a single doctor’s mistake hurting a patient, the potential for a machine algorithm inducing iatrogenic risk is vast. This is all the more reason that systematic debugging, audit, extensive simulation, and validation, along with prospective scrutiny, are required when an AI algorithm is unleashed in clinical practice. It also underscores the need to require more evidence and robust validation to exceed the recent downgrading of FDA regulatory requirements for medical algorithm approval

Therefore, take care when you look at tables like this one:



Prediction | n | AUC | Publication (reference number)
In-hospital mortality, unplanned readmission, prolonged LOS, final discharge diagnosis | 216,221 | 0.93*, 0.75+, 0.85# | Rajkomar et al. (96)
All-cause 3–12 month mortality | 221,284 | 0.93^ | Avati et al. (91)
Readmission | 1,068 | 0.78 | Shameer et al. (106)
Sepsis | 230,936 | 0.67 | Horng et al. (102)
Septic shock | 16,234 | 0.83 | Henry et al. (103)
Severe sepsis | 203,000 | 0.85@ | Culliton et al. (104)
Clostridium difficile infection | 256,732 | 0.82++ | Oh et al. (93)
Developing diseases | 704,587 | range | Miotto et al. (97)
Diagnosis | 18,590 | 0.96 | Yang et al. (90)
Dementia | 76,367 | 0.91 | Cleret de Langavant et al. (92)
Alzheimer's Disease (+ amyloid imaging) | 273 | 0.91 | Mathotaarachchi et al. (98)
Mortality after cancer chemotherapy | 26,946 | 0.94 | Elfiky et al. (95)
Disease onset for 133 conditions | 298,000 | range | Razavian et al. (105)
Suicide | 5,543 | 0.84 | Walsh et al. (86)
Delirium | 18,223 | 0.68 | Wong et al. (100)

20 October 2015

The Theranos contretemps as a serious scandal

Last Thursday the WSJ released a long article on the Theranos clinical lab. In this blog you may check my February and July posts on this firm under the title: A closely guarded secret. As you may imagine, such a title was not coincidental. There were some clues that justified it; something unusual was happening. And the WSJ has contributed to shed light on the issue, with all the details in it. Basically, the summary is that both analytic validity and clinical validity are compromised. This is an example:



If you want to read a first person account, you'll find it here and here. Some additional articles: Wired, New Yorker, Clinical Chemistry and Laboratory Medicine (CCLM), Forbes, NYT, WP,...
This is not only a contretemps; it is a serious scandal and a huge credibility problem for this start-up.
From Wired:
Theranos got a lot of traction by tapping into the frustration—both from consumers and the medical community—that diagnostic testing is too painful, too slow, and too expensive. “Their problem is they tried to do it with existing diagnostic instrumentation, instead of innovating new diagnostic instrumentation,”

Theranos is a black box that has touted results rather than process. “The ability of the lab medicine community to police and correct itself depends on that flow of information,” says Master. Instead, Theranos’ research was internal, and rather than submit their work to peer review the company cited their FDA approvals as evidence that the technology worked.
At least in the USA there is a regulator, the FDA. Lab regulation in Europe was enacted in 1998 and is completely outdated: a third-party scheme, not a direct public regulator. Therefore, there is a pressing motive to speed up new and different rules in Europe. Microfluidics and nanotechnologies are calling for an urgent overhaul.


PS. A statement from the WSJ:
In 2005, Ms. Holmes hired Ian Gibbons, a British biochemist who had researched systems to handle and process tiny quantities of fluids. His collaboration with other Theranos scientists produced 23 patents, according to records filed with the U.S. Patent and Trademark Office. Ms. Holmes is listed as a co-inventor on 19 of the patents.

The patents show how Ms. Holmes’s original idea morphed into the company’s business model. But progress was slow. Dr. Gibbons “told me nothing was working,” says his widow, Rochelle. In May 2013, Dr. Gibbons committed suicide. Theranos’s Ms. King says the scientist “was frequently absent from work in the last years of his life, due to health and other problems.” Theranos disputes the claim that its technology was failing.