CBS News explains some details:

Just because it's new doesn't mean it's better; it may be dangerous and cause lifelong harm. Unfortunately, that is the summary.
And the book to read:
The GDPR will affect AI implementation in healthcare in several ways. First, it requires explicit and informed consent before any collection of personal data. Informed consent has been a long-standing component of medical practice (unlike in social media or online-based marketing), but having to obtain informed consent for any collection of data still represents a higher bar than obtaining consent for specific items, such as procedures or surgical interventions. Second, the new regulation essentially lends power to the person providing the data to track what data is being collected and to request removal of their data. In the healthcare context, this will shift some of the power balance toward the patient, and it highlights the importance of ongoing work needed to protect patient privacy and to determine appropriate governance regarding data ownership. More details inside.
And this is a good summary:

There are differences between a prediction metric for a cohort and a prediction metric for an individual. If a model’s AUC is 0.95, which most would qualify as very accurate, this reflects how good the model is at predicting an outcome, such as death, for the overall cohort. But most models are essentially classifiers and are not capable of precise prediction at the individual level, so there is still an important dimension of uncertainty.
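The distinction above can be sketched in a few lines. This is an illustrative toy example (the labels and scores below are made up, not from any study): AUC is a ranking metric over the whole cohort, so a model can score well on it while individual patients near the decision boundary remain genuinely uncertain.

```python
def auc(labels, scores):
    """Probability that a randomly chosen positive case is scored
    above a randomly chosen negative case (ties count half)."""
    pos = [s for s, y in zip(scores, labels) if y == 1]
    neg = [s for s, y in zip(scores, labels) if y == 0]
    wins = sum((p > n) + 0.5 * (p == n) for p in pos for n in neg)
    return wins / (len(pos) * len(neg))

# Hypothetical cohort: 1 = outcome occurred, 0 = it did not.
labels = [1, 1, 1, 1, 0, 0, 0, 0]
scores = [0.95, 0.90, 0.80, 0.40, 0.45, 0.20, 0.10, 0.05]

print(auc(labels, scores))  # → 0.9375: a "very accurate" cohort-level AUC
```

Yet for the two patients scored 0.40 and 0.45, the model is close to a coin flip; the cohort-level AUC says nothing about how confident the model is for either of those individuals.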
Despite all the promises of AI technology, there are formidable obstacles and pitfalls. The state of AI hype has far exceeded the state of AI science, especially as it pertains to validation and readiness for implementation in patient care. A recent example is IBM Watson Health’s cancer AI algorithm (known as Watson for Oncology). Used by hundreds of hospitals around the world to recommend treatments for patients with cancer, the algorithm was based on a small number of synthetic, non-real cases with very limited input of real data from oncologists. Many of the actual treatment recommendations it produced were shown to be erroneous, such as suggesting the use of bevacizumab in a patient with severe bleeding, which represents an explicit contraindication and ‘black box’ warning for the drug. This example also highlights the potential for major harm to patients, and thus for medical malpractice, from a flawed algorithm. Instead of a single doctor’s mistake hurting a patient, the potential for a machine algorithm to induce iatrogenic risk is vast. This is all the more reason that systematic debugging, audit, extensive simulation, and validation, along with prospective scrutiny, are required when an AI algorithm is unleashed in clinical practice. It also underscores the need to require more evidence and robust validation, going beyond the recently downgraded FDA regulatory requirements for medical algorithm approval.
| Prediction | n | AUC | Publication (reference number) |
|---|---|---|---|
| In-hospital mortality, unplanned readmission, prolonged LOS, final discharge diagnosis | 216,221 | 0.93*, 0.75+, 0.85# | Rajkomar et al.96 |
| All-cause 3–12 month mortality | 221,284 | 0.93^ | Avati et al.91 |
| Readmission | 1,068 | 0.78 | Shameer et al.106 |
| Sepsis | 230,936 | 0.67 | Horng et al.102 |
| Septic shock | 16,234 | 0.83 | Henry et al.103 |
| Severe sepsis | 203,000 | 0.85@ | Culliton et al.104 |
| Clostridium difficile infection | 256,732 | 0.82++ | Oh et al.93 |
| Developing diseases | 704,587 | range | Miotto et al.97 |
| Diagnosis | 18,590 | 0.96 | Yang et al.90 |
| Dementia | 76,367 | 0.91 | Cleret de Langavant et al.92 |
| Alzheimer’s disease (+ amyloid imaging) | 273 | 0.91 | Mathotaarachchi et al.98 |
| Mortality after cancer chemotherapy | 26,946 | 0.94 | Elfiky et al.95 |
| Disease onset for 133 conditions | 298,000 | range | Razavian et al.105 |
| Suicide | 5,543 | 0.84 | Walsh et al.86 |
| Delirium | 18,223 | 0.68 | Wong et al.100 |
Overall, the analysis suggests that the costs of R&D and production may bear little or no relationship to how pharmaceutical companies set prices of cancer medicines. Pharmaceutical companies set prices according to their commercial goals, with a focus on extracting the maximum amount that a buyer is willing to pay for a medicine. This pricing approach often makes cancer medicines unaffordable, preventing the full benefit of the medicines from being realized. You may find former posts on the same topic here.
In the Prisoner’s Dilemma game, defecting rather than cooperating with one’s partner maximizes a player’s payoff, irrespective of what the other player does. Defecting in this game is what game theorists call a dominant strategy, and the game is extremely simple; it does not take a game theorist to figure this out. So, assuming that people care only about their own payoffs, we would predict that defection would be universal.
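The dominance argument can be checked mechanically. A minimal sketch with an illustrative payoff matrix (the specific values are assumptions; any matrix with temptation > reward > punishment > sucker payoff defines a Prisoner’s Dilemma):

```python
# payoff[(my_move, their_move)] -> my payoff; "C" = cooperate, "D" = defect
payoff = {
    ("C", "C"): 3,  # mutual cooperation (reward)
    ("C", "D"): 0,  # I cooperate, partner defects (sucker's payoff)
    ("D", "C"): 5,  # I defect on a cooperator (temptation)
    ("D", "D"): 1,  # mutual defection (punishment)
}

def best_response(their_move):
    """The move that maximizes my payoff, given the partner's move."""
    return max("CD", key=lambda my: payoff[(my, their_move)])

# Defecting is the best response to either move: a dominant strategy.
print(best_response("C"))  # → D
print(best_response("D"))  # → D
```

Because the best response is the same regardless of what the partner does, no strategic reasoning about the other player is needed, which is exactly why the prediction of universal defection seems so straightforward.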
But when the game is played with real people, something like half of players typically cooperate rather than defect. Most subjects say that they prefer the mutual-cooperation outcome over the higher material payoff they would get by defecting on a cooperator, and they are willing to take a chance that the other player feels the same way (and is willing to take the same chance).
When players defect, it is often not because they are tempted by the higher payoff they would get, but because they know that the other player might defect, and they hate the idea that their own cooperation would be exploited. We know this from what happens when the Prisoner’s Dilemma is played not simultaneously, as is standard (each person deciding without knowing what the other will do), but sequentially (one person, chosen randomly, moves first). In the sequential game, the second mover usually reciprocates the first player’s move, cooperating if the first has done so and defecting otherwise. Keep in mind that avoiding being a chump appears to be the motive here, not the prospect of a higher payoff.
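The sequential version described above can be sketched as follows. The payoff values are assumptions (any standard Prisoner’s Dilemma ordering works); the point is that when the second mover reciprocates, the first mover’s choice determines the pair's joint outcome:

```python
# payoff[(first_move, second_move)] -> (first mover's payoff, second mover's payoff)
payoff = {
    ("C", "C"): (3, 3),  # mutual cooperation
    ("C", "D"): (0, 5),  # first mover exploited
    ("D", "C"): (5, 0),  # second mover exploited
    ("D", "D"): (1, 1),  # mutual defection
}

def reciprocate(first_move):
    """A reciprocating second mover copies the first mover's choice."""
    return first_move

def play(first_move):
    second_move = reciprocate(first_move)
    return payoff[(first_move, second_move)]

print(play("C"))  # → (3, 3): mutual cooperation
print(play("D"))  # → (1, 1): mutual defection
```

Against a reciprocator, the exploitation outcomes never arise, which is consistent with the observation that what defectors fear is being the chump, not missing out on the temptation payoff.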