55 research outputs found

    Harnessing case isolation and ring vaccination to control Ebola.

    As a devastating Ebola outbreak in West Africa continues, non-pharmaceutical control measures including contact tracing, quarantine, and case isolation are being implemented. In addition, public health agencies are scaling up efforts to test and deploy candidate vaccines. Given the experimental nature and limited initial supplies of vaccines, a mass vaccination campaign might not be feasible. However, ring vaccination of likely case contacts could provide an effective alternative for distributing the vaccine. To evaluate ring vaccination as a strategy for eliminating Ebola, we developed a pair approximation model of Ebola transmission, parameterized by confirmed incidence data from June 2014 to January 2015 in Liberia and Sierra Leone. Our results suggest that if a combined intervention of case isolation and ring vaccination had been initiated in the early fall of 2014, up to an additional 126 cases in Liberia and 560 cases in Sierra Leone could have been averted beyond case isolation alone. The marginal benefit of ring vaccination is predicted to be greatest in settings with more contacts per individual, greater clustering among individuals, low contact-tracing efficacy, or vaccines that confer post-exposure protection. In such settings, ring vaccination can avert up to an additional 8% of Ebola cases. Accordingly, ring vaccination is predicted to offer a moderately beneficial supplement to ongoing non-pharmaceutical Ebola control efforts.
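    The combined intervention described above can be sketched as a toy simulation. This is a minimal illustrative network model, not the pair-approximation model from the paper; the function name and all parameters (`beta`, `detect_prob`, the network size and degree) are hypothetical choices for demonstration only.

```python
# Illustrative sketch: stochastic SIR-style dynamics on a random contact
# network, with case isolation plus optional ring vaccination of a detected
# case's susceptible contacts. All parameters are hypothetical.
import random

def simulate(n=200, k=6, beta=0.08, detect_prob=0.5,
             ring_vaccinate=True, seed=1):
    rng = random.Random(seed)
    # Build a simple random contact network with mean degree >= k.
    neighbors = {i: set() for i in range(n)}
    for i in range(n):
        while len(neighbors[i]) < k:
            j = rng.randrange(n)
            if j != i:
                neighbors[i].add(j)
                neighbors[j].add(i)
    # States: S usceptible, I nfectious, Q (isolated), V (vaccinated).
    state = {i: "S" for i in range(n)}
    state[0] = "I"          # index case
    cases = 1
    for _ in range(100):    # discrete time steps
        infectious = [i for i in range(n) if state[i] == "I"]
        if not infectious:
            break           # outbreak over
        for i in infectious:
            # Transmission to susceptible contacts.
            for j in neighbors[i]:
                if state[j] == "S" and rng.random() < beta:
                    state[j] = "I"
                    cases += 1
            # Detection via contact tracing: isolate the case, and
            # optionally vaccinate its remaining susceptible ring.
            if rng.random() < detect_prob:
                state[i] = "Q"
                if ring_vaccinate:
                    for j in neighbors[i]:
                        if state[j] == "S":
                            state[j] = "V"
    return cases
```

    Comparing `simulate(ring_vaccinate=True)` against `simulate(ring_vaccinate=False)` over many seeds illustrates the marginal benefit of ring vaccination on top of isolation; the abstract's pair-approximation model instead tracks the dynamics of susceptible-infectious contact pairs analytically.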

    The GPCR-gαs-PKA Signaling Axis Promotes T Cell Dysfunction and Cancer Immunotherapy Failure

    Immune checkpoint blockade (ICB) targeting PD-1 and CTLA-4 has revolutionized cancer treatment. However, many cancers do not respond to ICB, prompting the search for additional strategies to achieve durable responses. G-protein-coupled receptors (GPCRs) are among the most intensively studied drug targets but are underexplored in immuno-oncology. Here, we cross-integrated large single-cell RNA-sequencing datasets from CD8+ T cells covering 19 distinct cancer types and identified an enrichment of Gαs-coupled GPCRs on exhausted CD8+ T cells. These include EP2, EP4, A2AR, β1AR and β2AR, all of which promote T cell dysfunction. We also developed transgenic mice expressing a chemogenetic CD8-restricted Gαs–DREADD to activate CD8-restricted Gαs signaling and show that a Gαs–PKA signaling axis promotes CD8+ T cell dysfunction and immunotherapy failure. These data indicate that Gαs–GPCRs are druggable immune checkpoints that might be targeted to enhance the response to ICB immunotherapies.

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.

    Political Language and Trust: A Study in Machiavelli and Hobbes

    It is not possible to discuss politics without assuming a degree, however minimal, of trust. To deny the existence of any trust at all is to assume, as Hobbes does, a war, whether hot or cold, of all against all. While it is possible for people to live in such a condition, they cannot live together in any sense that can be called political. Even in societies divided into hostile groups, people either align themselves with a group, expressing loyalty to its goals, or withdraw from the political sphere, rejecting it completely, or wait until a stable government emerges which they may trust. If government is based entirely on force, it is, as Burke points out, not governing but subduing. For others, who focus not on force as it may supplant trust but on a commonly shared vision of politics, trust may almost seem a negative starting point, because it is something that many political thinkers accept as a given. Yet to speak of trust is to call into question all of the other possibilities that politics might offer, for it is a precondition for them.

    Environmental risk in Indian country

    Master of Science thesis, Natural Resources and Environment, University of Michigan. http://deepblue.lib.umich.edu/bitstream/2027.42/115672/1/39015025170930.pd