Statistical analysis of facial landmark data for optimisation of Fetal Alcohol Syndrome diagnosis
Includes bibliographical references (leaves 100-104). This project involved the statistical analysis of facial landmark data used in Fetal Alcohol Syndrome (FAS) diagnosis. FAS is a clinical condition caused by excessive maternal consumption of alcohol during pregnancy. Diagnosis of FAS depends on evidence of growth retardation, CNS neurodevelopmental abnormalities, and a characteristic pattern of facial anomalies, specifically a short palpebral fissure length, smooth philtrum, flat upper lip and flat midface. The unique facial appearance associated with FAS is emphasized in diagnosis, which relies, in part, on the comparison of linear measurements of facial features to population norms.
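The comparison to population norms described above is conventionally expressed as a standardised z-score. The sketch below is a minimal illustration of that idea; the measurement name, norm mean, and standard deviation are hypothetical placeholders, not values from the study.

```python
# Illustrative sketch only: standardising a linear facial measurement
# against an assumed population norm. All numbers below are placeholders.

def z_score(measurement_mm: float, norm_mean_mm: float, norm_sd_mm: float) -> float:
    """How many standard deviations a measurement lies from the population mean."""
    return (measurement_mm - norm_mean_mm) / norm_sd_mm

# Example: a measured palpebral fissure length checked against a hypothetical norm.
z = z_score(measurement_mm=23.0, norm_mean_mm=26.0, norm_sd_mm=1.5)
print(z)  # -2.0
```

A strongly negative z-score on a measurement such as palpebral fissure length would flag the feature as unusually short relative to the reference population.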
Characterization of the facial phenotype associated with fetal alcohol syndrome using stereo-photogrammetry and geometric morphometrics
Includes abstract. Includes bibliographical references (leaves 108-118). Fetal Alcohol Syndrome (FAS) is a clinical condition caused by excessive pre-natal alcohol exposure and is regarded as a leading identifiable and preventable cause of mental retardation in the Western world. The highest prevalence of FAS has been reported in the wine-growing regions of South Africa, but data for the rest of the country are not available. Large-scale screening and surveillance programmes therefore need to be conducted in South Africa in order for the epidemiology of the disease to be understood. Efforts to this end have been stymied by the cost and labour-intensive nature of collecting the facial anthropometric data useful in FAS diagnosis. Stereo-photogrammetry provides a low-cost, easy-to-use and non-invasive alternative to traditional facial anthropometry. The design and implementation of a landmark-based stereo-photogrammetry system to obtain 3D facial information for fetal alcohol syndrome (FAS) diagnosis is described. The system consists of three high-resolution digital cameras resting on a purpose-built stand and a control frame which surrounds the subject's head during imaging. Reliability and accuracy assessments of the stereo-photogrammetric tool are presented, using 275 inter-landmark distance comparisons between the system and direct anthropometry on a doll. These showed the system to be highly reliable and precise.
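The inter-landmark distances used in the accuracy assessment above are straight-line (Euclidean) distances between 3D landmark coordinates. The sketch below illustrates that computation under assumed coordinates; it is not the authors' implementation, and the landmark values are hypothetical.

```python
import math

# Illustrative sketch: an inter-landmark distance from 3D landmark
# coordinates (assumed to be in millimetres), as produced by a
# stereo-photogrammetric reconstruction.

def inter_landmark_distance(p: tuple, q: tuple) -> float:
    """Euclidean distance between two 3D landmarks."""
    return math.dist(p, q)

# Hypothetical landmarks: the right and left inner eye corners (endocanthions).
endocanthion_r = (12.0, 0.0, 5.0)
endocanthion_l = (-12.0, 0.0, 5.0)
print(inter_landmark_distance(endocanthion_r, endocanthion_l))  # 24.0
```

Comparing such distances against the same distances measured directly with calipers on a doll is one way the system's precision could be quantified, as the abstract describes.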
Bridging the Gap: Generalising State-of-the-Art U-Net Models to Sub-Saharan African Populations
A critical challenge for tumour segmentation models is the ability to adapt
to diverse clinical settings, particularly when applied to poor-quality
neuroimaging data. The uncertainty surrounding this adaptation stems from the
lack of representative datasets, leaving top-performing models without exposure
to common artifacts found in MRI data throughout Sub-Saharan Africa (SSA). We
replicated a framework that secured the 2nd position in the 2022 BraTS
competition to investigate the impact of dataset composition on model
performance and pursued four distinct approaches through training a model with:
1) BraTS-Africa data only (train_SSA, N=60), 2) BraTS-Adult Glioma data only
(train_GLI, N=1251), 3) both datasets together (train_ALL, N=1311), and 4)
through further training the train_GLI model with BraTS-Africa data
(train_ftSSA). Notably, training on a smaller low-quality dataset alone
(train_SSA) yielded subpar results, and training on a larger high-quality
dataset alone (train_GLI) struggled to delineate oedematous tissue in the
low-quality validation set. The most promising approach (train_ftSSA) involved
pre-training a model on high-quality neuroimages and then fine-tuning it on the
smaller, low-quality dataset. This approach outperformed the others, ranking
second in the MICCAI BraTS Africa global challenge external testing phase.
These findings underscore the significance of larger sample sizes and broad
exposure to data in improving segmentation performance. Furthermore, we
demonstrated that there is potential for improving such models by fine-tuning
them with a wider range of data locally.
Comment: 14 pages, 5 figures, 3 tables
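Segmentation performance in BraTS is typically scored with the Dice overlap coefficient. The sketch below is a minimal illustration of that metric over flattened binary masks; it is not the challenge's official evaluation code.

```python
# Illustrative sketch of the Dice coefficient: 2|A∩B| / (|A| + |B|)
# over flattened binary segmentation masks (1 = tumour voxel).

def dice(pred: list, truth: list) -> float:
    """Dice overlap between two binary masks of equal length."""
    intersection = sum(p and t for p, t in zip(pred, truth))
    total = sum(pred) + sum(truth)
    # Convention: two empty masks count as perfect agreement.
    return 2.0 * intersection / total if total else 1.0

pred = [1, 1, 0, 0]
truth = [1, 0, 1, 0]
print(dice(pred, truth))  # 0.5
```

In practice the masks are 3D volumes scored per tumour sub-region (e.g. oedema, as discussed above), but the overlap computation reduces to the same formula.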
FUTURE-AI: International consensus guideline for trustworthy and deployable artificial intelligence in healthcare
Despite major advances in artificial intelligence (AI) for medicine and healthcare, the deployment and adoption of AI technologies remain limited in real-world clinical practice. In recent years, concerns have been raised about the technical, clinical, ethical and legal risks associated with medical AI. To increase real-world adoption, it is essential that medical AI tools are trusted and accepted by patients, clinicians, health organisations and authorities. This work describes the FUTURE-AI guideline as the first international consensus framework for guiding the development and deployment of trustworthy AI tools in healthcare. The FUTURE-AI consortium was founded in 2021 and currently comprises 118 interdisciplinary experts from 51 countries representing all continents, including AI scientists, clinicians, ethicists, and social scientists. Over a two-year period, the consortium defined guiding principles and best practices for trustworthy AI through an iterative process comprising an in-depth literature review, a modified Delphi survey, and online consensus meetings. The FUTURE-AI framework was established on six guiding principles for trustworthy AI in healthcare, i.e. Fairness, Universality, Traceability, Usability, Robustness and Explainability. Through consensus, a set of 28 best practices was defined, addressing technical, clinical, legal and socio-ethical dimensions. The recommendations cover the entire lifecycle of medical AI, from design, development and validation to regulation, deployment, and monitoring. FUTURE-AI is a risk-informed, assumption-free guideline which provides a structured approach for constructing medical AI tools that will be trusted, deployed and adopted in real-world practice. Researchers are encouraged to take the recommendations into account at proof-of-concept stages to facilitate the future translation of medical AI towards clinical practice.
Editorial: Statistical model-based computational biomechanics: applications in joints and internal organs
Peer reviewed: True