9 research outputs found

    HyperProbe consortium: innovate tumour neurosurgery with innovative photonic solutions

    Recent advancements in imaging technologies (MRI, PET and CT, among others) have significantly improved the preoperative clinical localisation of lesions of the central nervous system (CNS), making it possible for neurosurgeons to plan and navigate away from functional brain areas when removing tumours such as gliomas. However, neuronavigation in the surgical management of brain tumours remains a significant challenge, because accurate spatial information about pathological and healthy locations cannot be maintained intraoperatively. To answer this challenge, the HyperProbe consortium has been assembled, consisting of a team of engineers, physicists, data scientists and neurosurgeons, to develop an innovative, all-optical, intraoperative imaging system based on (i) hyperspectral imaging (HSI) for rapid, multiwavelength spectral acquisition, and (ii) artificial intelligence (AI) for image reconstruction, morpho-chemical characterisation and molecular fingerprint recognition. Our HyperProbe system will (1) map, monitor and quantify biomolecules of interest in cerebral physiology; (2) be handheld, cost-effective and user-friendly; and (3) apply AI-based methods for the reconstruction of the hyperspectral images, the analysis of the spatio-spectral data, and the development and quantification of novel biomarkers for identifying glioma and differentiating it from functional brain tissue. HyperProbe will be validated and optimised through studies in optical phantoms and in vivo against gold-standard modalities in neuronavigational imaging, and finally we will provide proof of principle of its performance during routine brain tumour surgery on patients. HyperProbe aims to provide functional and structural information on biomarkers of interest that is currently missing during neuro-oncological interventions.
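
    As an illustration of the quantification step in point (1), the sketch below shows the standard modified Beer-Lambert approach to estimating chromophore concentrations (e.g. oxy- and deoxyhaemoglobin) from multiwavelength attenuation measurements. This is a minimal sketch, not the consortium's actual pipeline: the extinction-coefficient matrix E, the wavelength count and the function name unmix_pixel are hypothetical placeholders, and SciPy's non-negative least squares stands in for the AI-based reconstruction described in the abstract.

        # Minimal sketch: per-pixel spectral unmixing via the modified
        # Beer-Lambert law. All coefficients are hypothetical placeholders,
        # not HyperProbe calibration data.
        import numpy as np
        from scipy.optimize import nnls

        # Hypothetical extinction coefficients at 6 wavelengths
        # (rows = wavelengths, columns = [HbO2, Hb]); real work would use
        # tabulated absorption spectra.
        E = np.array([
            [0.9, 1.2],
            [1.4, 1.1],
            [1.1, 0.8],
            [0.6, 1.0],
            [0.3, 0.9],
            [0.2, 0.4],
        ])

        def unmix_pixel(attenuation: np.ndarray) -> np.ndarray:
            """Solve E @ c ~= attenuation for non-negative concentrations c."""
            c, _residual = nnls(E, attenuation)
            return c

        # Forward-simulate one pixel, then recover the concentrations.
        c_true = np.array([0.7, 0.3])                      # arbitrary units
        rng = np.random.default_rng(0)
        measured = E @ c_true + 0.01 * rng.normal(size=6)  # noisy attenuation
        print(unmix_pixel(measured))                       # approx. [0.7, 0.3]

    In a full system, every pixel of the hyperspectral cube would be unmixed this way (or by a learned model), yielding per-biomolecule concentration maps rather than single values.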

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey and online consensus meetings. The FUTURE-AI framework was established on six guiding principles for trustworthy AI in healthcare, i.e., Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate the future translation of medical AI towards clinical practice.

    MicroRNA-326 acts as a molecular switch in the regulation of midbrain urocortin 1 expression

    BACKGROUND: Altered levels of urocortin 1 (Ucn1) in the centrally projecting Edinger-Westphal nucleus (EWcp) of depressed suicide attempters or completers mediate the brain's response to stress, but the mechanism regulating Ucn1 expression is unknown. We tested the hypothesis that microRNAs (miRNAs), which are vital fine-tuners of gene expression during the brain's response to stress, have the capacity to modulate Ucn1 expression. METHODS: Computational analysis revealed that the Ucn1 3' untranslated region contains a conserved binding site for miR-326. We examined miR-326 and Ucn1 levels in the EWcp of depressed suicide completers. In addition, we evaluated miR-326 and Ucn1 levels in the serum and the EWcp of a chronic variable mild stress (CVMS) rat model of behavioural despair, and after recovery from CVMS. Gain- and loss-of-function experiments examined the regulation of Ucn1 by this miRNA in cultured midbrain neurons. RESULTS: We found reduced miR-326 levels concomitant with elevated Ucn1 levels in the EWcp of depressed suicide completers as well as in the EWcp of CVMS rats. In CVMS rats fully recovered from stress, both serum and EWcp miR-326 levels rebounded to nonstressed levels. Whereas downregulation of miR-326 in primary midbrain neurons enhanced Ucn1 expression, miR-326 overexpression selectively reduced levels of this neuropeptide. LIMITATIONS: This study lacked experiments showing that in vivo alteration of miR-326 levels alleviates depression-like behaviours. We show only correlative data for miR-326 and cocaine- and amphetamine-regulated transcript levels in the EWcp. CONCLUSION: We identified miR-326 dysregulation in depressed suicide completers and characterized this miRNA as an upstream regulator of Ucn1 neuropeptide expression in midbrain neurons.
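
    For context on the computational analysis mentioned under METHODS, the sketch below illustrates canonical miRNA seed-site scanning: take nucleotides 2-8 of the mature miRNA, reverse-complement them, and search the 3' UTR for exact matches. It is a minimal, hypothetical illustration; the sequences are placeholders, not the real hsa-miR-326 or Ucn1 3' UTR, and published target-prediction tools (e.g. TargetScan) additionally weigh site conservation and context.

        # Minimal sketch of canonical 7mer seed-site scanning. Sequences are
        # placeholders, not the real hsa-miR-326 or Ucn1 3' UTR.
        COMPLEMENT = str.maketrans("AUCG", "UAGC")

        def seed_match_sites(mirna: str, utr: str, seed_len: int = 7) -> list:
            """Return 0-based UTR positions matching the reverse complement
            of the miRNA seed (nucleotides 2..8, a 7mer-m8 site)."""
            seed = mirna[1:1 + seed_len]             # nt 2-8 of the miRNA
            site = seed.translate(COMPLEMENT)[::-1]  # reverse complement
            return [i for i in range(len(utr) - len(site) + 1)
                    if utr[i:i + len(site)] == site]

        mirna = "CCUCUGGGCCCUUCCUCCAG"          # placeholder mature miRNA
        utr = "AAGGAUCCCAGAGGAAUUCCCAGAGGCUU"   # placeholder 3' UTR fragment
        print(seed_match_sites(mirna, utr))     # -> [6, 18]

    A conserved hit from such a scan is only a candidate interaction; as in the study above, gain- and loss-of-function experiments are still needed to confirm regulation.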