Regulation of predictive analytics in medicine
This is what a brief note from the Nuffield Council on Bioethics says about artificial intelligence in healthcare:
The use of AI raises ethical issues, including:
- the potential for AI to make erroneous decisions;
- the question of who is responsible when AI is used to support decision-making;
- difficulties in validating the outputs of AI systems;
- inherent biases in the data used to train AI systems;
- ensuring the protection of potentially sensitive data;
- securing public trust in the development and use of AI;
- effects on people's sense of dignity and social isolation in care situations;
- effects on the roles and skill-requirements of healthcare professionals;
- and the potential for AI to be used for malicious purposes.

A key challenge will be ensuring that AI is developed and used in a way that is transparent and compatible with the public interest, whilst stimulating and driving innovation in the sector.

This last statement is naive (from Merriam-Webster: naive, "marked by unaffected simplicity : ingenuous"). Up to now, have you seen any transparent algorithm available for imaging, triage, or any other medical app? Certainly not. The real key challenge, therefore, is to stop introducing such algorithms (to ban such apps) unless there is a regulatory body that assesses both their effectiveness (sensitivity and specificity) as a matter of quality assurance and the transparency that citizens are owed.
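The sensitivity and specificity mentioned above are the two numbers a regulator would need from any validation study. A minimal sketch of the arithmetic, with invented counts for a hypothetical imaging algorithm:

```python
# Sensitivity and specificity from a binary confusion matrix.
# The counts used below are illustrative only, not from any real study.
def sensitivity_specificity(tp, fn, tn, fp):
    """Return (sensitivity, specificity) for binary test results."""
    sensitivity = tp / (tp + fn)  # true positive rate: diseased cases correctly flagged
    specificity = tn / (tn + fp)  # true negative rate: healthy cases correctly cleared
    return sensitivity, specificity

# Hypothetical counts: 100 diseased and 100 healthy patients.
sens, spec = sensitivity_specificity(tp=90, fn=10, tn=80, fp=20)
print(f"sensitivity={sens:.2f}, specificity={spec:.2f}")
# sensitivity=0.90, specificity=0.80
```

Both figures are needed together: a test can reach perfect sensitivity simply by flagging everyone, at the cost of zero specificity.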
So far, Nuffield has released only this briefing note; let's wait for the full report.
If you want a quick answer, check Science this week:
To unlock the potential of advanced analytics while protecting patient safety, regulatory and professional bodies should ensure that advanced algorithms meet accepted standards of clinical benefit, just as they do for clinical therapeutics and predictive biomarkers. External validation and prospective testing of advanced algorithms are clearly needed.

The authors explain the five standards and give rules and criteria for regulation. It is most welcome.
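The external validation the Science authors call for amounts to freezing a model after development and re-checking its performance on a cohort it never saw. A minimal sketch, in which the "model" is just a fixed threshold on an invented risk score and both cohorts are made up for illustration:

```python
# Sketch of external validation: the decision rule is frozen after
# development, then re-evaluated on an independent external cohort.
DEVELOPMENT_THRESHOLD = 0.5  # fixed before external testing begins

def predict(score):
    """Frozen decision rule: flag patients whose risk score crosses the threshold."""
    return score >= DEVELOPMENT_THRESHOLD

def accuracy(cohort):
    """cohort: list of (risk_score, true_label) pairs; returns fraction correct."""
    correct = sum(predict(score) == label for score, label in cohort)
    return correct / len(cohort)

# Invented cohorts: the development site vs. an external site whose
# patient mix differs from the one the threshold was tuned on.
internal = [(0.9, True), (0.8, True), (0.2, False), (0.1, False)]
external = [(0.6, True), (0.4, True), (0.55, False), (0.1, False)]

print(accuracy(internal))  # 1.0 on the data the rule was tuned against
print(accuracy(external))  # 0.5: performance degrades on an unseen population
```

The drop from internal to external performance is exactly what prospective testing is meant to surface before an algorithm reaches patients.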