21 research outputs found

    A Micro-Level Analysis of Vulnerability to Climate Change by Smallholder Farmers in Semi-Arid Areas of Zimbabwe

    Using household survey data from a random sample of 180 households in Gweru and Lupane districts, we found that the distribution of vulnerability among households was skewed, with a mean of 0.76. On average, 89% of households had a vulnerability probability above 0.5, making them vulnerable to food insecurity, while the remaining 11% were not vulnerable. The gender of the household head, farming experience, household income, and livestock ownership had a strong influence on household cereal production and hence on vulnerability to climate change. In addition, social networks and the use of hired labour positively influence crop productivity. Overall, development policies that increase household income, boost livestock ownership and enhance social capital improve crop production, which is critical to strengthening household adaptive capacity to climate change. There is a need to link climate change policies to broader rural development policies, especially in developing nations.
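The vulnerability classification described above (a household counts as vulnerable when its estimated probability of food insecurity exceeds 0.5) can be sketched as follows. This is an illustrative reconstruction, not the authors' code, and the probabilities in the example are hypothetical stand-ins for the survey estimates.

```python
# Illustrative sketch: classify households as vulnerable to food insecurity
# when their estimated vulnerability probability exceeds a 0.5 threshold,
# and summarise the distribution, as in the study above.

def vulnerability_summary(probabilities, threshold=0.5):
    """Return mean vulnerability and the shares above/below the threshold."""
    n = len(probabilities)
    vulnerable = [p for p in probabilities if p > threshold]
    return {
        "mean_vulnerability": sum(probabilities) / n,
        "share_vulnerable": len(vulnerable) / n,
        "share_not_vulnerable": (n - len(vulnerable)) / n,
    }

# Hypothetical example with five households (not the survey data):
summary = vulnerability_summary([0.9, 0.8, 0.7, 0.85, 0.3])
print(summary["share_vulnerable"])  # 0.8
```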

    Climate variability and change or multiple stressors? Farmer perceptions regarding threats to livelihoods in Zimbabwe and Zambia

    Climate variability is set to increase across Africa, characterised by more extreme conditions. Southern Africa will likely become drier and experience more extreme weather, particularly droughts and floods. However, while climate risks are acknowledged to be a serious threat to smallholder farmers’ livelihoods, these risks do not exist in isolation but rather compound a multiplicity of stressors. It was therefore important for this study to understand farmer perceptions regarding the role of climate risks within a complex and multifarious set of risks to farmers’ livelihoods. This study used both qualitative and quantitative methods to investigate farmers’ perceptions regarding threats to livelihoods in southern Zambia and south-western Zimbabwe. While farmers report changes in local climatic conditions consistent with climate variability, attributing the observed negative impacts on the agricultural and socio-economic system to climate variability as opposed to other factors remains difficult. Furthermore, although a multiplicity of stressors confronts farmers, climate variability remains the most critical and exacerbates livelihood insecurity for those farmers with higher levels of vulnerability to these stressors.

    The impact of changing the diagnostic algorithm for TB in Manicaland, Zimbabwe.

    SETTING: Governmental health facilities performing TB diagnostics in Manicaland, Zimbabwe. OBJECTIVE: To investigate the effect of making Xpert® MTB/RIF the primary TB diagnostic for all patients presenting with presumptive TB on 1) the number of samples investigated for TB, 2) the proportion testing TB-positive, and 3) the proportion of unsuccessful results over time. DESIGN: This retrospective study used data from GeneXpert downloads, laboratory registers and quality assurance reports between 1 January 2017 and 31 December 2018. RESULTS: The total number of Xpert tests performed in Manicaland increased from 3,967 in the first quarter of 2017 to 7,011 in the last quarter of 2018. Mycobacterium tuberculosis DNA was detected in 4.9-8.6% of the samples investigated using Xpert, with a higher yield in 2017 than in 2018. The overall proportion of unsuccessful Xpert assays due to "no results", errors and invalid results was 6.3%, and highly variable across sites. CONCLUSION: Roll-out of more sensitive TB diagnostics does not necessarily result in an increase in microbiologically confirmed TB diagnoses. While the number of samples tested using Xpert increased, the proportion of TB-positive tests decreased. GeneXpert software and hardware infrastructure needs to be strengthened to reduce the rate of unsuccessful assays and, therefore, costs and staff time.
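The two proportions the study tracks (share of tests that are MTB-positive, and share that are unsuccessful due to errors, invalid results, or "no result") can be sketched as below. This is a hedged illustration, not the study's analysis code; the result categories and example counts are hypothetical, not the Manicaland data.

```python
# Sketch: compute the quarterly proportions reported in the abstract from a
# list of test records. Category names are illustrative assumptions.

def xpert_proportions(records):
    """records: list of dicts whose 'result' key is one of
    'mtb_detected', 'mtb_not_detected', 'error', 'invalid', 'no_result'."""
    total = len(records)
    positive = sum(1 for r in records if r["result"] == "mtb_detected")
    unsuccessful = sum(
        1 for r in records if r["result"] in ("error", "invalid", "no_result")
    )
    return {
        "tests": total,
        "pct_positive": 100 * positive / total,
        "pct_unsuccessful": 100 * unsuccessful / total,
    }

# Hypothetical quarter: 200 tests, 12 MTB-positive, 10 unsuccessful.
records = (
    [{"result": "mtb_detected"}] * 12
    + [{"result": "mtb_not_detected"}] * 178
    + [{"result": "error"}] * 6
    + [{"result": "invalid"}] * 2
    + [{"result": "no_result"}] * 2
)
print(xpert_proportions(records))
# {'tests': 200, 'pct_positive': 6.0, 'pct_unsuccessful': 5.0}
```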

    FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare

    Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established based on six guiding principles for trustworthy AI in healthcare: Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account in proof-of-concept stages to facilitate future translation of medical AI towards clinical practice.
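The six guiding principles named in the abstract lend themselves to a simple checklist structure for project self-assessment. The sketch below is purely illustrative: it lists only the principle names given above, and does not reproduce the guideline's 28 best practices or their mapping onto principles.

```python
# Illustrative checklist of the six FUTURE-AI guiding principles named in
# the abstract. The `unreviewed` helper is a hypothetical convenience, not
# part of the guideline itself.

FUTURE_AI_PRINCIPLES = [
    "Fairness",
    "Universality",
    "Traceability",
    "Usability",
    "Robustness",
    "Explainability",
]

def unreviewed(principles, reviewed):
    """Return the guiding principles a project has not yet assessed."""
    done = set(reviewed)
    return [p for p in principles if p not in done]

print(unreviewed(FUTURE_AI_PRINCIPLES, ["Fairness", "Robustness"]))
# ['Universality', 'Traceability', 'Usability', 'Explainability']
```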

    Reply to Talbot et al
