
October 16, 2010

The new regulatory science

Advancing Regulatory Science for Public Health

Building a National Framework for the Establishment of Regulatory Science for Drug Development

The title sounds strange and pretentious. The FDA's publication of the report "Advancing Regulatory Science for Public Health" represents a commitment to an in-depth renewal of the regulatory "production function". It is a document worth taking into account, since the FDA is considered to have an impact on 25% of private consumption by citizens in the United States, with repercussions abroad as well, of course. Therefore, anything done to improve the safety, efficacy, quality and outcomes of the products under its regulation will be welcome. After reviewing the document, I must say it seems to me merely a declaration of intent, a strategic plan under another name. We shall have to see what comes of its implementation.
If you want to get an idea of what regulatory science means, it is better to look at the IOM report "Building a National Framework for the Establishment of Regulatory Science for Drug Development". It is more precise and gives clues about where the issue should go in the future.
In our immediate environment, I cannot see any similar concern from our agencies.

February 12, 2015

A bit worse before it gets better

Toward Precision Medicine: Building a Knowledge Network for Biomedical Research and a New Taxonomy of Disease

A new mental frame was created some weeks ago when President Obama gave a speech announcing the creation of the Precision Medicine Initiative. To be honest, the term already appeared in the title of a 2011 IOM report.
In my opinion, it is a bundle: stratified medicine + big data + regulatory science + ... This is the bundle behind the new buzzword and, unless new details arise, nothing especially new.
Now the New Yorker writes about the problems of precision medicine and focuses on the risks. The final paragraph illustrates the issue:
For Solomon, genetics is simply a new tool with a learning curve, the same as any other. “When the electrocardiogram was first developed, about a hundred years ago, most physicians thought it was voodoo,” Solomon said. “Now, if you don’t understand it, then you shouldn’t be practicing medicine.” But Mary Norton sees that analogy as too simplistic. The pace of genetics research, the variability of test methods and results, and the aura of infallibility with which the tests are marketed, she told me, make this advance a more complicated one than the EKG. Norton believes that, as genetics becomes increasingly integrated into medical care, “over time everyone will come to have a better understanding of genetics.” But, as the demand for DNA testing increases, she says, “it will probably be a bit worse before it gets better.”
Could we avoid the initial "bit worse" of the imprecision of stratified medicine? I'm fully convinced that appropriate regulatory efforts could mitigate such impact. Unfortunately, governments are on vacation.

February 14, 2022

Understanding Ethics of AI

 The Oxford Handbook of ETHICS OF AI

The approach to the ethics of AI that runs through this handbook is contextual in four senses:

• it locates ethical analysis of artificial intelligence in the context of other modes of normative analysis, including legal, regulatory, philosophical, and policy approaches,

• it interrogates artificial intelligence within the context of related modes of technological innovation, including machine learning, Big Data, and robotics,

• it is interdisciplinary from the ground up, broadening the conversation about the ethics of artificial intelligence beyond computer science and related fields to include other fields of scholarly endeavor, including the social sciences, humanities,and the professions (law, medicine, engineering, etc.), and

• it invites critical analysis of all aspects of—and participants in—the wide and continuously expanding artificial intelligence complex, from production to commercialization to consumption, from technical experts to venture capitalists to self-regulating professionals to government officials to the general public.


Part I. Introduction & Overview

1. The Artificial Intelligence of Ethics of AI: An Introductory Overview

2. The Ethics of Ethics of AI: Mapping the Field

3. Ethics of AI in Context: Society & Culture

Part II. Frameworks & Modes

4. Why Industry Self-regulation Will Not Deliver 'Ethical AI': A Call for Legally Mandated Techniques of 'Human Rights by Design'

5. Private Sector AI: Ethics and Incentives

6. Normative Modes: Codes & Standards

7. Normative Modes: Professional Ethics

Part III. Concepts & Issues

8. Fairness and the Concept of 'Bias'

9. Accountability in Computer Systems

10. Transparency

11. Responsibility

12. The Concept of Handoff as a Model for Ethical Analysis and Design

13. Race and Gender

14. The Future of Work in the Age of AI: Displacement, Augmentation, or Control?

15. The Rights of Artificial Intelligences

16. The Singularity: Sobering up About Merging with AI

17. Do Sentient AIs Have Rights? If So, What Kind?

18. Autonomy

19. Troubleshooting AI and Consent

20. Is Human Judgment Necessary?

21. Sexuality

Part IV. Perspectives & Approaches

22. Computer Science

23. Engineering

24. Designing Robots Ethically Without Designing Ethical Robots: A Perspective from Cognitive Science

25. Economics

26. Statistics

27. Automating Origination: Perspectives from the Humanities

28. Philosophy

29. The Complexity of Otherness: Anthropological contributions to robots and AI

30. Calculative Composition: The Ethics of Automating Design

31. Global South

32. East Asia

33. Artificial Intelligence and Inequality in the Middle East: The Political Economy of Inclusion

34. Europe's struggle to set global AI standards

Part V. Cases & Applications

35. The Ethics of Artificial Intelligence in Transportation

36. Military

37. The Ethics of AI in Biomedical Research, Medicine and Public Health

38. Law: Basic Questions

39. Law: Criminal Law

40. Law: Public Law & Policy: Notice, Predictability, and Due Process

41. Law: Immigration & Refugee Law

42. Education

43. Algorithms and the Social Organization of Work

44. Smart City Ethics

February 27, 2015

A closely guarded secret

Stealth Research. Is Biomedical Innovation Happening Outside the Peer-Reviewed Literature?

How can we identify a snake-oil seller? Not so easy. Have a look at JAMA: John Ioannidis's article airs his concerns about Theranos, a company providing lab services with a new proprietary technology that has no peer-reviewed article in any scientific publication. Nobody can check the tests' sensitivity and specificity, there are no external quality controls, and so on.
If this is the path for the future of health care provision, then I am really concerned, because it would be a complete disaster: no consumer protection, no regulation, uncertain science and even more uncertain outcomes. After all these years, is this what citizens deserve?
Such a style of "laissez-faire, laissez-passer" medicine could represent huge profits for some and a big loss for everyone.
Otherwise, some alternative should be proposed to boost publication and transparency. The author's suggestion is the following:
To solve this conundrum, it may be necessary to find ways to realign the reward system for innovation. One possibility is to make the scientific literature more receptive to innovators. This could include models in which reports of disruptive discoveries that are in dissonance with the mainstream can still be communicated as preprints without prior peer review, perhaps in the same way as the successful example of arXiv in the physical sciences, which has now reached 1 million e-print articles. That there has been no peer review of these initial reports should be transparent to researchers and the public.
Thus, some better regulatory process is needed so that innovative ideas for financially successful applications can be scrutinized by the wider scientific community as to their validity. A company should not be forced to disclose its science secrets in detail, especially while its efforts are still exploratory trial-and-error and while creating basic elements for its products and services. However, if a product or service reaches the point at which it generates substantial revenue, the science behind it should then be communicated in detail to ensure adequate review.
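The quantities the post says nobody can check, sensitivity and specificity, are easy to compute once validation data exist, which is exactly why their absence matters. A minimal sketch (all numbers invented for illustration, not Theranos data) also shows why an unverified test is dangerous at low disease prevalence:

```python
# Toy confusion matrix for a hypothetical diagnostic test on a
# validation sample (illustrative numbers only).
tp, fn = 90, 10    # diseased patients: detected / missed
tn, fp = 950, 50   # healthy patients: correctly cleared / false alarms

sensitivity = tp / (tp + fn)   # share of disease the test catches
specificity = tn / (tn + fp)   # share of healthy people it clears

# Positive predictive value at 5% prevalence: even a decent test
# produces many false positives when disease is rare.
prevalence = 0.05
ppv = (sensitivity * prevalence) / (
    sensitivity * prevalence + (1 - specificity) * (1 - prevalence))

print(round(sensitivity, 2), round(specificity, 2), round(ppv, 2))
```

With these invented figures the test looks good (sensitivity 0.90, specificity 0.95), yet barely half of its positive results are true, which is precisely the kind of arithmetic that external quality controls exist to verify.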

August 19, 2018

Regulatory capture (once again)

Nowadays, news of the pharma industry capturing the regulator is nothing new. When everybody thought that new laws would provide an ethical framework to avoid such capture, the result has been exactly the opposite: capture has only grown more sophisticated. Have a look at Science and you'll understand what I'm saying.
An analysis of pharma payments to 107 physicians who advised FDA on 28 drugs approved from 2008 to 2014 found that a majority later got money for travel or consulting, or received research subsidies, from the makers of the drugs on which they voted or from competing firms.
 Of the more than $26 million in personal payments or research support from industry to the 17 top-earning advisers—who received more than $300,000 each—94% came from the makers of drugs those advisers previously reviewed or from competitors
 Through web searches and online services such as LinkedIn, however, Science has discovered that 11 of 16 FDA medical examiners who worked on 28 drug approvals and then left the agency for new jobs are now employed by or consult for the companies they recently regulated.
Definitely, the regulatory system is broken.

February 13, 2020

Germline genome editing under scrutiny

Societal and Ethical Impacts of Germline Genome Editing: How Can We Secure Human Rights?

Geneva Statement on Heritable Human Genome Editing: The Need for Course Correction

A CRISPR Moratorium Isn't Enough: We Need a Boycott

The Human Right to Science and the Regulation of Human Germline Engineering

The last frontier in genome editing (if it exists) is the germline. The special issue of The CRISPR Journal on bioethics contains an article of special interest, which proposes a third process for evaluating individual and societal harms: a Human Rights Impact Assessment (HRIA).

Human germline alteration is possible, due in part to democratization of genetic tools required for genome editing, and international scientific and legislative bodies are developing frameworks to manage the ramifications of this technology. Common among these frameworks are two pillars: public engagement and foundational principles. These components are necessary for respecting the autonomy of individuals and for fair processes and respecting diverse values.
However, they are not sufficient for protecting the most vulnerable members of society who may not even be in a position to participate in democratic processes. We propose implementing a HRIA, which captures concerns of public health and offers an opportunity to evaluate and anticipate the societal impact of GGE iteratively as the technology advances, public sentiments evolve, and cultural contexts shift. We recognize that this will raise new challenges of how such assessments are shared and implemented and how they can be enforced. We urge regulatory bodies and policy makers to consider this assessment approach in helping to establish robust regulatory frameworks necessary for the global protection of human rights.
And the Geneva Statement on Heritable Human Genome Editing says:
No decision about whether to pursue heritable human genome modification can be legitimate without broadly inclusive and substantively meaningful public engagement and empowerment. Such deliberations may be challenging and messy. They will take time and organizing them will necessitate creativity, hard work, and significant human and financial resources. The course correction proposed here is essential to these efforts.
We must in the meantime respect the predominant policy position against pursuing heritable human genome modification, if we are to prevent individual scientists or small committees from making this momentous decision for us all. This will preserve time to cultivate an informed and engaged public that can consider and discuss the societal consequences of altering the genes of future generations and make wise, democratic decisions about the shared future we aspire to build. 
I agree.

PS. CRISPR in 2020: Two major reports on germline editing, from the National Academies/Royal Society and the World Health Organization, will be released in 2020. We hope the reports will be coordinated, with all the voices of CRISPR being heard, so we can build consensual and broadly acceptable frameworks to ensure we use CRISPR responsibly, especially regarding its use in human embryos for germline editing. The public has asked for it, and the community has been working on it. The science-versus-society gap will be bridged.

February 22, 2019

The bioethics of machine clinical decision making

Artificial intelligence (AI) in healthcare and research
Regulation of predictive analytics in medicine

This is what a brief note from the Nuffield Council on Bioethics says about artificial intelligence in healthcare:
The use of AI raises ethical issues, including:
  • the potential for AI to make erroneous decisions; 
  • the question of who is responsible when AI is used to support decision-making; 
  • difficulties in validating the outputs of AI systems; 
  • inherent biases in the data used to train AI systems; 
  • ensuring the protection of potentially sensitive data; 
  • securing public trust in the development and use of AI; 
  • effects on people’s sense of dignity and social isolation in care situations; 
  • effects on the roles and skill-requirements of healthcare professionals; 
  • and the potential for AI to be used for malicious purposes.
A key challenge will be ensuring that AI is developed and used in a way that is transparent and compatible with the public interest, whilst stimulating and driving innovation in the sector.
This statement is naive (from m-w, naive: marked by unaffected simplicity : INGENUOUS). Up to now, have you seen any transparent algorithm available for imaging, triage or any medical app? For sure not. Therefore, the real key challenge is to stop introducing such algorithms (to ban apps) unless there is a regulatory body that takes into account quality assurance and effectiveness (sensitivity and specificity) and the transparency citizens require.
Until now Nuffield has released only a brief. Let's wait for the report.
If you want a quick answer, check Science this week:
To unlock the potential of advanced analytics while protecting patient safety, regulatory and professional bodies should ensure that advanced algorithms meet accepted standards of clinical benefit, just as they do for clinical therapeutics and predictive biomarkers. External validation and prospective testing of advanced algorithms are clearly needed
They explain five standards and give rules and criteria for regulation. It is really welcome.

February 1, 2019

Medicine as a data science (3)

High-performance medicine: the convergence of human and artificial intelligence

If you want to know the current state of artificial intelligence in medicine, then Eric Topol review in Nature is the article you have to read. A highlighted statement:
There are differences between the prediction metric for a cohort and an individual prediction metric. If a model’s AUC is 0.95, which most would qualify as very accurate, this reflects how good the model is for predicting an outcome, such as death, for the overall cohort. But most models are essentially classifiers and are not capable of precise prediction at the individual level, so there is still an important dimension of uncertainty.
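Topol's point about cohort-level AUC versus individual prediction can be made concrete with a toy calculation. The sketch below uses invented risk scores (not data from any cited model) and computes AUC from its probabilistic definition: the chance that a randomly chosen positive case outscores a randomly chosen negative one.

```python
def auc(scores_pos, scores_neg):
    """AUC = probability that a randomly chosen positive case scores
    higher than a randomly chosen negative case (ties count 0.5)."""
    wins = 0.0
    for p in scores_pos:
        for n in scores_neg:
            if p > n:
                wins += 1.0
            elif p == n:
                wins += 0.5
    return wins / (len(scores_pos) * len(scores_neg))

# Hypothetical mortality risk scores: the model separates the two
# groups well at the cohort level...
died     = [0.95, 0.90, 0.85, 0.80, 0.40]
survived = [0.50, 0.45, 0.20, 0.15, 0.10]
print(auc(died, survived))  # 0.92: "very accurate" by cohort standards
```

Yet the patient scored 0.40 died while patients scored 0.45 and 0.50 survived: near the middle of the scale, a model with excellent discrimination still says little about any single individual, which is exactly the uncertainty the quote describes.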
And this is a good summary:
Despite all the promises of AI technology, there are formidable obstacles and pitfalls. The state of AI hype has far exceeded the state of AI science, especially when it pertains to validation and readiness for implementation in patient care. A recent example is IBM Watson Health’s cancer AI algorithm (known as Watson for Oncology). Used by hundreds of hospitals around the world for recommending treatments for patients with cancer, the algorithm was based on a small number of synthetic, nonreal cases with very limited input (real data) of oncologists. Many of the actual output recommendations for treatment were shown to be erroneous, such as suggesting the use of bevacizumab in a patient with severe bleeding, which represents an explicit contraindication and ‘black box’ warning for the drug. This example also highlights the potential for major harm to patients, and thus for medical malpractice, by a flawed algorithm. Instead of a single doctor’s mistake hurting a patient, the potential for a machine algorithm inducing iatrogenic risk is vast. This is all the more reason that systematic debugging, audit, extensive simulation, and validation, along with prospective scrutiny, are required when an AI algorithm is unleashed in clinical practice. It also underscores the need to require more evidence and robust validation to exceed the recent downgrading of FDA regulatory requirements for medical algorithm approval

Therefore, take care when you look at tables like this one:

Prediction | n | AUC | Publication (reference number)
In-hospital mortality, unplanned readmission, prolonged LOS, final discharge diagnosis | 216,221 | 0.93* 0.75+ 0.85# | Rajkomar et al. (96)
All-cause 3–12 month mortality | 221,284 | 0.93^ | Avati et al. (91)
Readmission | 1,068 | 0.78 | Shameer et al. (106)
Sepsis | 230,936 | 0.67 | Horng et al. (102)
Septic shock | 16,234 | 0.83 | Henry et al. (103)
Severe sepsis | 203,000 | 0.85@ | Culliton et al. (104)
Clostridium difficile infection | 256,732 | 0.82++ | Oh et al. (93)
Developing diseases | 704,587 | range | Miotto et al. (97)
Diagnosis | 18,590 | 0.96 | Yang et al. (90)
Dementia | 76,367 | 0.91 | Cleret de Langavant et al. (92)
Alzheimer’s disease (+ amyloid imaging) | 273 | 0.91 | Mathotaarachchi et al. (98)
Mortality after cancer chemotherapy | 26,946 | 0.94 | Elfiky et al. (95)
Disease onset for 133 conditions | 298,000 | range | Razavian et al. (105)
Suicide | 5,543 | 0.84 | Walsh et al. (86)
Delirium | 18,223 | 0.68 | Wong et al. (100)

July 20, 2017

Precision medicine: a deep breakthrough in life sciences paradigm

Bioscience - Lost in Translation? How precision medicine closes the innovation gap

It is not so easy to translate knowledge into practice, and this is the case for translating biosciences into clinical applications. Recently, however, this trend has been accelerating and precision medicine is emerging. A new book gives us the highlights to understand precisely what's going on: Bioscience - Lost in Translation? How precision medicine closes the innovation gap.

Richard Barker (the author of 2030 - The future of medicine) says:
The classic definition of diseases has been in terms of the symptoms they cause and/or where in the body they appear. This was the best that medicine could do when external observation of the patient was the only or primary means of diagnosing disease. The powerful new tools of molecular biology are reinterpreting disease in terms of aberrant, defective, or unbalanced molecular mechanisms at the cellular, organ, or organism level. Molecular-level diagnosis becomes a real possibility. Such an approach brings effective therapy immediately closer. Molecular diagnostics can separate diseases with similar symptoms but different underlying causes, and often suggest a different starting point for intervention.
If this is so, what should we do?
The seven changes of mindset and of practice are:
1. Advance the molecular definition of disease and the application of systems biology. We need a more decisive move from a classic definition of diseases, in terms of the symptoms they cause and/or where in the body they appear, to a definition in terms of aberrant, defective, or unbalanced molecular mechanisms at the cellular level. And we need to marry this with a recognition that singular target-based innovation rarely works: we need a systems biology approach.
2. Partner academia and industry in more collaborative, impact-oriented research. We need to extend the ‘open innovation’ approach in which academia and companies invest together and share IP. We need to define new pre- or non-competitive spaces, especially in work on disease mechanisms and disease models. And we need to provide for new types of links and incentives to break down the barriers between these two worlds. 
3. Move decisively to a more adaptive approach to development, trial and approval design. We need to build on successful experiments in more flexible trial design, development pathways, and regulatory appraisal to reach a globally accepted adaptive approach. This involves collaborative design of the evidence package needed to secure approval and reimbursement, and greater teamwork through the process. 
4. Create new reward and financing vehicles for leading-edge innovation. We need to move from reward systems based purely on unit sales of products, irrespective of outcome, to rewarding innovators for positive outcomes, patient by patient. We also need to design financing mechanisms that bridge between cost-effectiveness and affordability. We must be able to accommodate high-cost precision therapies that offer cures and so generate long-term returns for the system.
5. Engineer tools and systems for faster and better innovation adoption and adherence. We need to move from reliance solely on promotion to doctors and passive patient participation to a disciplined approach to establishing new pathways of care. These will be based on modern behavioural science, clinical decision support, and other digital technologies.
6. Develop an infrastructure for real-world data-driven learning. We now have the opportunity to study in large populations how lifestyle and treatment choices lead to outcomes, learning from every patient as if in a clinical trial. New analytical tools will empower this.
7. Bring patients into the mainstream of decision-making and engage them wholeheartedly throughout the process. It is time to move from a process and mindset in which patients are regarded as passive subjects for clinical trials and recipients of products and procedures. Their input and engagement need to be sought along the whole innovation chain: on treatment benefits, acceptable risks, optimal clinical trial design, adherence support, and outcomes.

Highly recommended.

December 19, 2019

Medicine as a data science (7)

Artificial Intelligence in Health Care: The Hope, the Hype, the Promise, the Peril

The National Academy of Medicine’s Special Publication: Artificial Intelligence in Health Care: The Hope, The Hype, The Promise, The Peril synthesizes current knowledge to offer a reference document for relevant health care stakeholders such as: AI model developers, clinical implementers, clinicians and patients, regulators, and policy makers, to name a few. It outlines the current and near-term AI solutions; highlights the challenges, limitations, and best practices for AI development, adoption, and maintenance; offers an overview of the legal and regulatory landscape for AI tools designed for health care application; prioritizes the need for equity, inclusion, and a human rights lens for this work; and outlines key considerations for moving forward.
A must read

March 2, 2015

Beyond the genome

FORUM Epigenomics. Roadmap for regulation. Diseases mapped

My suggestion for today. Have a look at the papers in Nature on epigenome, and at the following figure:

The Roadmap Epigenomics Project has produced reference epigenomes that provide information on key functional elements controlling gene expression in 127 human tissues and cell types, and encompassing embryonic and adult tissues, from healthy individuals and those with disease. a, Many of the adult tissues investigated were broken down by cell type or region — blood into several types of immune cell, for instance, and the brain into regions including the hippocampus and dorsolateral prefrontal cortex. Tissue samples and cells were subjected to a range of epigenomic analyses, along with genome sequencing and genome-wide association studies (GWAS). b, Embryonic stem (ES) cells, which are taken from the embryo at the 'blastocyst' stage and can give rise to almost every cell type in the body, were used to analyse, for example, the differentiation of stem cells into different neuronal lineages. The ES-cell-derived cell lines underwent the same epigenomic analyses as the tissue samples.

The key article, here. (Figure: tissues and cell types profiled.)

For decades, biomedical science has focused on ways of identifying the genes that contribute to a particular trait, or phenotype. Approaches such as genome-wide association studies (GWAS) identify locations in the human genome at which variations in DNA sequence are linked to specific phenotypes, but if the variant is located in a region of DNA that does not encode a protein, such studies rarely provide insights into the regulatory mechanisms underlying the association. In these cases, comprehensive epigenomic analyses can provide the missing link between genomic variation and cellular phenotype.

If this is so, why are governments reluctant to introduce a ban on genetic tests with spurious associations between genome and diseases?

PS. Manel Esteller in DM.

August 20, 2018

Population-based genomic medicine in an integrated learning health care system

Patient-Centered Precision Health In A Learning Health Care System:Geisinger’s Genomic Medicine Experience
The Path to Routine Genomic Screening in Health Care
Medicine's future

If you want to know the latest developments in the implementation of precision health, then Geisinger Health System is the place you have to go. The Health Affairs article explains the details of the initiative and the MyCode biorepository.
In 2014 the MyCode initiative began to conduct whole exome sequencing and genotyping on collected samples, as part of a collaboration with Regeneron Pharmaceuticals and the Regeneron Genetics Center. Whole exome sequencing analyzes genes that code for proteins and associated gene regulatory areas—about 1–2 percent of the whole genome containing the most clinically relevant information. To date, nearly 93,000 exome sequences have been completed.
The rapidly changing knowledge about gene-disease associations requires a process to reanalyze previously analyzed sequences and incorporate new knowledge about variants’ pathogenicity. Approximately 3.5 percent of participants have a reportable variant. As of January 2018, results had been reported to over 500 MyCode patient participants.
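The gap between sequencing and reporting follows directly from the figures quoted above. A back-of-the-envelope check, using only the numbers from the article:

```python
# Figures quoted in the Health Affairs description of MyCode.
exomes_sequenced = 93_000   # exome sequences completed to date
reportable_rate = 0.035     # ~3.5% of participants carry a reportable variant
reported_so_far = 500       # results returned as of January 2018 ("over 500")

expected_reportable = round(exomes_sequenced * reportable_rate)
print(expected_reportable)                     # ~3,255 participants expected
# Roughly 2,755 results still in the pipeline (an upper bound,
# since "over 500" had already been reported).
print(expected_reportable - reported_so_far)
```

So at the reported rate, fewer than one in six of the expected reportable findings had reached patients, which underlines why the reanalysis and reporting process matters as much as the sequencing itself.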
An interesting article, a private initiative of public interest. More info in Science and Annals.

March 31, 2017

Paying the bill of gene therapy

GENE THERAPY: Understanding the Science, Assessing the Evidence, and Paying for Value

Approximately 12-14 investigational gene therapies for additional ultra-rare conditions and some for more common conditions, such as haemophilia and sickle cell disease, are progressing through the developmental pathway and are expected to reach regulatory approval within the next 2-3 years.
These therapies rely mostly on viral vector techniques, and therefore do not take into account the coming genome editing, the most disruptive technology and the most recent as well. If these new technologies reach the market, how should they be paid for and applied? This is what a recent report explains, giving details for decision makers. It is really welcome; the issue deserves a deeper understanding.
Situation in Europe
Glybera and Strimvelis have been granted marketing authorization in the European Union by the European Medicines Agency (EMA):
- Glybera was approved by the EMA in 2012, but has since become the world’s most expensive short-term treatment (Adams, 2016), and as such has not been widely successful - it has only been used by one patient, with the prescribing clinician overcoming steep bureaucratic hurdles to obtain insurer funding (Abou-El-Enein et al., 2016a).
- Strimvelis received marketing authorization in 2016. Patients can currently only be treated in Milan, due to the treatment’s extremely short shelf life which dictates that cells must be infused back into the patient in less than six hours.
More efforts should be devoted to understand this emerging market and assess its value.

Caro Emerald