Collective action for responsible AI in health
AI IN HEALTH: HUGE POTENTIAL, HUGE RISKS
I have been going through the latest OECD report on Artificial Intelligence and Health. It offers a good summary of barriers, risks, opportunities and potential outcomes. We have been discussing these same issues for a long time. What I do not find, however, is a clear explanation of what we need to do collectively to reduce the risks.
I will highlight one very important risk: that AI contributes to creating "black box" medicine. The report says:
AI solutions may not be explainable, impacting accountable evidence-based decision-making: The “black box” nature of AI algorithms can lead to difficulty in understanding the rationale behind specific AI driven outputs. This difficulty in understanding can grow to a lack of trust in solutions when coupled with the risk of AI solutions being trained on biased data. While it is difficult to fully articulate the underlying mathematics in an easy-to-consume manner, it is important to develop guidance for explainability of AI solutions to ensure that sufficient information is provided to establish trust in outcomes. Where appropriate, sufficient transparency should be provided to both the users of AI algorithms (e.g. health providers) as well as those impacted by its outcomes (e.g. patients). This should be communicated in language that is appropriate and consumable for the target audience while respecting intellectual property and preventing breaches of privacy. Transparency into the demographic data used in AI models will allow AI users to evaluate the appropriateness of the model in a given clinical context.
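The explainability guidance the report calls for can build on existing model-agnostic techniques. As a purely illustrative sketch (not something proposed in the OECD report), permutation importance probes a black-box predictor by shuffling one input at a time and measuring how much performance drops; all the data and the `black_box` function below are hypothetical stand-ins:

```python
import random

random.seed(0)

# Hypothetical toy records: (age, systolic_bp, shoe_size).
# The outcome depends on age and blood pressure, never on shoe size.
data = [(random.uniform(20, 80), random.uniform(90, 180), random.uniform(35, 47))
        for _ in range(200)]
labels = [1 if 0.03 * age + 0.02 * bp > 4.0 else 0 for age, bp, _ in data]

def black_box(record):
    """Stand-in for an opaque model; imagine a trained classifier."""
    age, bp, _ = record
    return 1 if 0.03 * age + 0.02 * bp > 4.0 else 0

def accuracy(records):
    return sum(black_box(r) == y for r, y in zip(records, labels)) / len(records)

baseline = accuracy(data)

def permutation_importance(feature_idx):
    """Drop in accuracy when one feature column is shuffled."""
    column = [r[feature_idx] for r in data]
    random.shuffle(column)
    shuffled = [tuple(column[k] if i == feature_idx else v
                      for i, v in enumerate(r))
                for k, r in enumerate(data)]
    return baseline - accuracy(shuffled)

for idx, name in enumerate(["age", "systolic_bp", "shoe_size"]):
    print(f"{name}: importance {permutation_importance(idx):.3f}")
```

Run on this toy data, shuffling the irrelevant feature (shoe size) leaves accuracy untouched, while shuffling age or blood pressure degrades it, which is exactly the kind of summary a clinician could inspect without access to the model's internals. Real deployments would use audited tooling rather than a sketch like this, but the principle is the same.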
I have made this observation in earlier posts as well, but there is still no clear answer on how to reduce this risk, only a few elementary ideas, while time passes and AI keeps being deployed. Even so, the report is a good summary worth keeping in mind.