71 research outputs found

    Neural correlates of error detection during complex response selection: Introduction of a novel eight-alternative response task

    Error processing in complex decision tasks should be more difficult than in the simple and commonly used two-choice tasks. We developed an eight-alternative response task (BART), which allowed us to investigate different aspects of error detection. We analysed event-related potentials (ERPs; N = 30). Interestingly, response time moderated several findings. For example, the well-known effect of a larger error negativity (Ne) for signalled and non-signalled errors compared to correct responses was observed only for fast responses, not for slow responses. Based on post-experimental reports and certainty ratings, we identified at least two different error sources: impulsive (fast) errors and (slow) memory errors. Interestingly, participants were able to perform the task and to identify both impulsive and memory errors successfully. Preliminary evidence indicated that early (Ne-related) error processing was sensitive to impulsive errors but not to memory errors, whereas the error positivity seemed to be sensitive to both error types.

    Determinants of participation in a web-based health risk assessment and consequences for health promotion programs

    Background: The health risk assessment (HRA) is a type of health promotion program frequently offered at the workplace. Insight into the underlying determinants of participation is needed to evaluate and implement these interventions. Objective: To analyze whether individual characteristics, including demographics, health behavior, self-rated health, and work-related factors, are associated with participation and nonparticipation in a Web-based HRA. Methods: Determinants of participation and nonparticipation were investigated in a cross-sectional study among individuals employed at five Dutch organizations. Multivariate logistic regression was performed to identify determinants of participation and nonparticipation in the HRA after controlling for organization and all other variables. Results: Of the 8431 employees who were invited, 31.9% (2686/8431) enrolled in the HRA. The online questionnaire was completed by 27.2% (1564/5745) of the nonparticipants. Determinants of participation were some periods of stress at home or work in the preceding year (OR 1.62, 95% CI 1.08-2.42), a decreasing number of weekdays on which at least 30 minutes were spent on moderate to vigorous physical activity (OR per weekday of physical activity 0.84, 95% CI 0.79-0.90), and increasing alcohol consumption. Determinants of nonparticipation were less-than-positive self-rated health (poor/very poor vs very good, OR 0.25, 95% CI 0.08-0.81) and tobacco use (at least weekly vs none, OR 0.65, 95% CI 0.46-0.90). Conclusions: This study showed that with regard to isolated health behaviors (insufficient physical activity, excess alcohol consumption, and stress), those who could benefit most from the HRA were more likely to participate. However, tobacco users and those who rated their health less than positively were less likely to participate.
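    The odds ratios and confidence intervals reported in the abstract above are standard transformations of fitted logistic regression coefficients. A minimal sketch of that calculation follows; the `beta` and `se` values are back-calculated here to match the reported OR for stress (1.62, 95% CI 1.08-2.42) purely for illustration, and are not taken from the study's data.

    ```python
    import math

    def odds_ratio_ci(beta, se, z=1.96):
        """Convert a logistic regression coefficient and its standard error
        into an odds ratio with a 95% confidence interval.

        OR = exp(beta); CI bounds = exp(beta ± z * se)."""
        odds_ratio = math.exp(beta)
        lower = math.exp(beta - z * se)
        upper = math.exp(beta + z * se)
        return odds_ratio, lower, upper

    # Hypothetical coefficient and standard error, chosen so the output
    # reproduces the reported figures for illustration only.
    odds_ratio, lower, upper = odds_ratio_ci(beta=0.4824, se=0.2058)
    print(f"OR {odds_ratio:.2f}, 95% CI {lower:.2f}-{upper:.2f}")
    # → OR 1.62, 95% CI 1.08-2.42
    ```

    Exponentiating the coefficient is what makes the result interpretable as a multiplicative change in the odds of participation per unit of the predictor.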

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at the proof-of-concept stage to facilitate future translation of medical AI towards clinical practice.

    Antarctic ice sheet sensitivity to atmospheric CO2 variations in the early to mid-Miocene

    Geological records from the Antarctic margin offer direct evidence of environmental variability at high southern latitudes and provide insight regarding ice sheet sensitivity to past climate change. The early to mid-Miocene (23-14 Mya) is a compelling interval to study, as global temperatures and atmospheric CO2 concentrations were similar to those projected for coming centuries. Importantly, this time interval includes the Miocene Climatic Optimum, a period of global warmth during which average surface temperatures were 3-4 °C higher than today. Miocene sediments in the ANDRILL-2A drill core from the Western Ross Sea, Antarctica, indicate that the Antarctic ice sheet (AIS) was highly variable through this key time interval. A multiproxy dataset derived from the core identifies four distinct environmental motifs based on changes in sedimentary facies, fossil assemblages, geochemistry, and paleotemperature. Four major disconformities in the drill core coincide with regional seismic discontinuities and reflect transient expansion of grounded ice across the Ross Sea. They correlate with major positive shifts in benthic oxygen isotope records and generally coincide with intervals when atmospheric CO2 concentrations were at or below preindustrial levels (∼280 ppm). Five intervals reflect ice sheet minima and air temperatures warm enough for substantial ice mass loss during episodes of high (∼500 ppm) atmospheric CO2. These new drill core data and associated ice sheet modeling experiments indicate that polar climate and the AIS were highly sensitive to relatively small changes in atmospheric CO2 during the early to mid-Miocene.