15 research outputs found

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey and online consensus meetings. The FUTURE-AI framework was established on the basis of six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline that provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate the future translation of medical AI towards clinical practice.
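
    For readers assessing a tool against the framework, the short sketch below shows one way the six FUTURE-AI principles listed above might be tracked as a simple self-assessment checklist in Python. Only the principle names and the count of 28 best practices come from the abstract; the data structures, field names and summary logic are illustrative assumptions and are not part of the guideline.

        # Illustrative only: a minimal self-assessment checklist keyed to the six
        # FUTURE-AI principles and the count of 28 best practices named in the
        # abstract. The structure, field names and summary logic are assumptions.
        from dataclasses import dataclass, field

        PRINCIPLES = [
            "Fairness",
            "Universality",
            "Traceability",
            "Usability",
            "Robustness",
            "Explainability",
        ]
        TOTAL_BEST_PRACTICES = 28  # as reported in the abstract

        @dataclass
        class PrincipleAssessment:
            principle: str
            practices_documented: int = 0  # hypothetical per-principle tally
            notes: str = ""

        @dataclass
        class FutureAIChecklist:
            assessments: dict = field(
                default_factory=lambda: {p: PrincipleAssessment(p) for p in PRINCIPLES}
            )

            def summary(self) -> str:
                covered = sum(a.practices_documented for a in self.assessments.values())
                lines = [
                    f"{a.principle}: {a.practices_documented} practice(s) documented"
                    for a in self.assessments.values()
                ]
                lines.append(f"Total documented: {covered} of {TOTAL_BEST_PRACTICES}")
                return "\n".join(lines)

        if __name__ == "__main__":
            checklist = FutureAIChecklist()
            checklist.assessments["Fairness"].practices_documented = 3
            checklist.assessments["Fairness"].notes = "Subgroup performance reported"
            print(checklist.summary())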

    Phase II study of carfilzomib, thalidomide, and low-dose dexamethasone as induction and consolidation in newly diagnosed, transplant eligible patients with multiple myeloma; the Carthadex trial

    This is a phase II dose-escalation trial of carfilzomib in combination with thalidomide and dexamethasone for induction and consolidation in transplant-eligible patients with newly diagnosed multiple myeloma (NDMM). The results of four dose levels are reported. Induction therapy consisted of four cycles of carfilzomib 20/27 mg/m² (n=50), 20/36 mg/m² (n=20), 20/45 mg/m² (n=21) and 20/56 mg/m² (n=20) on days 1, 2, 8, 9, 15 and 16 of a 28-day cycle; thalidomide 200 mg on days 1 through 28; and dexamethasone 40 mg weekly. Induction therapy was followed by high-dose melphalan and autologous stem cell transplantation, and then by consolidation therapy with four cycles of carfilzomib, thalidomide and dexamethasone on the same schedule, except with a lower dose of thalidomide (50 mg). The rates of very good partial response or better and of complete response or better were 65% and 18%, respectively, after induction therapy, increasing to 86% and 63%, respectively, after consolidation therapy. In all cohorts combined, after a median follow-up of 58.7 months, median progression-free survival was 58 months (95% CI: 45-67 months). Median overall survival was 83 months (95% CI: 83 months to not reached). Grade 3/4 adverse events consisted mainly of infections, respiratory disorders, skin disorders and vascular disorders, in 11%, 8%, 9% and 9% of patients, respectively. Grade 3 polyneuropathy was reported in only one patient. Cardiac events were limited, with grade 3/4 events in 5% of patients. Carfilzomib, thalidomide and dexamethasone as induction and consolidation treatment after high-dose melphalan and autologous stem cell transplantation is highly efficacious and safe in transplant-eligible patients with NDMM. This study was registered as #NTR2422 at http://www.trialregister.n
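
    For orientation, the short sketch below lays out the induction dose cohorts and the carfilzomib dosing days described in this abstract as plain Python data. The dose levels, per-cohort patient numbers, dosing days, cycle length and number of induction cycles are taken from the abstract; the variable names, layout and the small tabulation are illustrative assumptions rather than trial code.

        # Illustrative only: the induction dose cohorts and carfilzomib dosing days
        # reported in the abstract, laid out as plain data with a small tabulation.
        # The names and layout are assumptions for illustration, not trial code.
        CARFILZOMIB_DAYS = [1, 2, 8, 9, 15, 16]  # per 28-day cycle, from the abstract
        CYCLE_LENGTH_DAYS = 28
        INDUCTION_CYCLES = 4

        # (carfilzomib dose level in mg/m2, patients enrolled), as reported
        COHORTS = [
            ("20/27", 50),
            ("20/36", 20),
            ("20/45", 21),
            ("20/56", 20),
        ]

        if __name__ == "__main__":
            total_patients = sum(n for _, n in COHORTS)
            doses_per_patient = INDUCTION_CYCLES * len(CARFILZOMIB_DAYS)
            print(f"Patients across all cohorts: {total_patients}")  # 111
            print(f"Carfilzomib administrations per patient during induction: {doses_per_patient}")  # 24
            for dose, n in COHORTS:
                print(f"  {dose} mg/m2: n={n}")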
