March 9, 2021

Regulating Artificial Intelligence as a medical device

The need for a system view to regulate artificial intelligence/machine learning-based software as medical device

The starting point:

AI/ML-based SaMD raise new challenges for regulators. We argue that, compared with typical drugs and medical devices, AI/ML-based SaMD will, because of their systemic aspects, show more variance between performance in the artificial testing environment and performance in actual practice settings, and thus potentially more risks and less certainty about their benefits. Variance can increase due to human factors or to the complexity of these systems and how they interact with their environment. Unlike drugs, the use of software, and of information technologies (IT) generally, is known to be highly affected by organizational factors such as resources, staffing, skills, training, culture, workflow, and processes (e.g., regarding data quality management) [8]. There is no reason to expect that the adoption and impact of AI/ML-based SaMD will be consistent, or even improve performance, across all settings.
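
To make the variance concern concrete, here is a minimal, hypothetical sketch (not from the paper): a model that passes validation at its development site can lose accuracy at a deployment site whose patient population or data-collection practices differ. The site names, numbers, and shift mechanism are invented for illustration.

```python
# Hypothetical illustration: performance variance between the testing
# environment and a practice setting, driven by a shifted population
# and noisier data capture at the deployment site.
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

def make_site(n, mean, label_noise):
    """Synthesize one site's data: two features and a noisy linear label."""
    X = rng.normal(loc=mean, scale=1.0, size=(n, 2))
    logits = 1.5 * X[:, 0] - 1.0 * X[:, 1] + rng.normal(0.0, label_noise, n)
    y = (logits > 0).astype(int)
    return X, y

# Site A: the controlled setting the model was developed and validated in.
X_train, y_train = make_site(2000, mean=0.0, label_noise=0.5)
X_val, y_val = make_site(1000, mean=0.0, label_noise=0.5)

# Site B: a practice setting with a shifted population and noisier data
# capture (standing in for workflow and data-quality differences).
X_field, y_field = make_site(1000, mean=1.5, label_noise=2.0)

model = LogisticRegression().fit(X_train, y_train)
print("AUC, validation site:",
      round(roc_auc_score(y_val, model.predict_proba(X_val)[:, 1]), 3))
print("AUC, deployment site:",
      round(roc_auc_score(y_field, model.predict_proba(X_field)[:, 1]), 3))
```

Running this shows a markedly lower AUC at the deployment site even though the model itself never changed, which is the kind of setting-dependent variance a single pre-market validation cannot capture.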

On unlocked and adaptive algorithms:

All AI/ML-based SaMD that the FDA has thus far reviewed have been cleared or approved as "locked" algorithms, which it defines as "an algorithm that provides the same result each time the same input is applied to it and does not change with use". The agency is currently developing a strategy for how to regulate "unlocked" or "adaptive" AI/ML algorithms, that is, algorithms that may change as they are applied to new data.
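
As a rough illustration (our sketch, not the FDA's definition rendered in code): a locked algorithm's parameters are frozen after clearance, so the same input always yields the same output, while an adaptive algorithm keeps updating from the data it encounters in use, so its output for the same input can drift over time. All class names and numbers below are invented.

```python
# Hypothetical sketch of the locked vs. adaptive distinction.
import numpy as np

class LockedModel:
    """Parameters are frozen after approval: same input -> same output."""
    def __init__(self, weights):
        self.weights = np.asarray(weights, dtype=float)

    def predict(self, x):
        return float(self.weights @ np.asarray(x, dtype=float))

class AdaptiveModel(LockedModel):
    """Continues learning in use: the same input can score differently
    before and after the model has seen new field data."""
    def update(self, x, y, lr=0.01):
        # One step of online gradient descent on squared error.
        error = self.predict(x) - y
        self.weights -= lr * error * np.asarray(x, dtype=float)

x = np.array([1.0, 2.0])
locked = LockedModel([0.5, -0.2])
adaptive = AdaptiveModel([0.5, -0.2])

print(locked.predict(x), adaptive.predict(x))  # identical at clearance
adaptive.update(x, y=1.0)                      # learns from one new case
print(locked.predict(x), adaptive.predict(x))  # adaptive output has drifted
```

The regulatory difficulty follows directly: a pre-market review evaluates one fixed set of weights, but an adaptive system is, in effect, a different model after every update.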

Therefore,

AI/ML-based SaMD pose new safety challenges for regulators, who face a difficult choice: either largely ignore systemic and human-factors issues with each approval and subsequent update, or require the maker to conduct significant organizational and human-factors validation testing with each update, increasing cost and time, which may in turn chill the maker's desire to pursue potentially very beneficial innovations or updates.