Automatic Detection of B-lines in Lung Ultrasound Videos From Severe Dengue Patients
Lung ultrasound (LUS) imaging is used to assess lung abnormalities, including
the presence of B-line artefacts due to fluid leakage into the lungs caused by
a variety of diseases. However, manual detection of these artefacts is
challenging. In this paper, we propose a novel methodology to automatically
detect and localize B-lines in LUS videos using deep neural networks trained
with weak labels. To this end, we combine a convolutional neural network (CNN)
with a long short-term memory (LSTM) network and a temporal attention
mechanism. Four different models are compared using data from 60 patients.
Results show that our best model can determine whether one-second clips contain
B-lines or not with an F1 score of 0.81, and extracts a representative frame
with B-lines with an accuracy of 87.5%.
Comment: 5 pages, 2 figures, 2 tables
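The clip-level decision described above (per-frame features scored by a temporal attention mechanism, then pooled for classification) can be sketched as follows. This is a minimal numpy illustration: the `temporal_attention_pool` helper, the random weights, and the feature dimensions are made-up placeholders, not the paper's trained CNN–LSTM model.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def temporal_attention_pool(frame_feats, w_att, w_cls, b_cls):
    """Score each frame, softmax over time, pool, classify.

    frame_feats: (T, D) per-frame embeddings (e.g. CNN+LSTM outputs).
    Returns (clip_probability, attention_weights).
    """
    scores = frame_feats @ w_att          # (T,) unnormalised frame relevance
    alpha = softmax(scores)               # (T,) attention over frames
    pooled = alpha @ frame_feats          # (D,) attention-weighted summary
    logit = pooled @ w_cls + b_cls
    prob = 1.0 / (1.0 + np.exp(-logit))   # clip-level B-line probability
    return prob, alpha

rng = np.random.default_rng(0)
T, D = 30, 16                             # e.g. 30 frames in a one-second clip
feats = rng.normal(size=(T, D))
prob, alpha = temporal_attention_pool(feats, rng.normal(size=D),
                                      rng.normal(size=D), 0.0)
rep_frame = int(alpha.argmax())           # most-attended frame as the representative frame
```

Picking `alpha.argmax()` mirrors how a representative B-line frame could be extracted from the learned temporal attention weights, even though training used only weak (clip-level) labels.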
Association of Genetic Variants With Primary Open-Angle Glaucoma Among Individuals With African Ancestry.
Importance: Primary open-angle glaucoma presents with increased prevalence and a higher degree of clinical severity in populations of African ancestry compared with European or Asian ancestry. Despite this, individuals of African ancestry remain understudied in genomic research for blinding disorders.
Objectives: To perform a genome-wide association study (GWAS) of African ancestry populations and evaluate potential mechanisms of pathogenesis for loci associated with primary open-angle glaucoma.
Design, Settings, and Participants: A 2-stage GWAS with a discovery data set of 2320 individuals with primary open-angle glaucoma and 2121 control individuals without primary open-angle glaucoma. The validation stage included an additional 6937 affected individuals and 14 917 unaffected individuals, using multicenter clinic- and population-based participant recruitment approaches. Study participants were recruited from Ghana, Nigeria, South Africa, the United States, Tanzania, Britain, Cameroon, Saudi Arabia, Brazil, the Democratic Republic of the Congo, Morocco, Peru, and Mali from 2003 to 2018. Individuals with primary open-angle glaucoma had open iridocorneal angles and displayed glaucomatous optic neuropathy with visual field defects. Elevated intraocular pressure was not included in the case definition. Control individuals had no elevated intraocular pressure and no signs of glaucoma.
Exposures: Genetic variants associated with primary open-angle glaucoma.
Main Outcomes and Measures: Presence of primary open-angle glaucoma. Genome-wide significance was defined as P < 5 × 10⁻⁸ in the discovery stage and in the meta-analysis of combined discovery and validation data.
Results: A total of 2320 individuals with primary open-angle glaucoma (mean [interquartile range] age, 64.6 [56-74] years; 1055 [45.5%] women) and 2121 individuals without primary open-angle glaucoma (mean [interquartile range] age, 63.4 [55-71] years; 1025 [48.3%] women) were included in the discovery GWAS.
The GWAS discovery meta-analysis demonstrated association of variants at amyloid-β A4 precursor protein-binding family B member 2 (APBB2; chromosome 4, rs59892895T>C) with primary open-angle glaucoma (odds ratio [OR], 1.32 [95% CI, 1.20-1.46]; P = 2 × 10⁻⁸). The association was validated in an analysis of an additional 6937 affected individuals and 14 917 unaffected individuals (OR, 1.15 [95% CI, 1.09-1.21]; P < .001). Each copy of the rs59892895*C risk allele was associated with increased risk of primary open-angle glaucoma when all data were included in a meta-analysis (OR, 1.19 [95% CI, 1.14-1.25]; P = 4 × 10⁻¹³). The rs59892895*C risk allele was present at appreciable frequency only in African ancestry populations; in individuals of European or Asian ancestry, its frequency was less than 0.1%.
Conclusions and Relevance: In this genome-wide association study, variants at the APBB2 locus demonstrated differential association with primary open-angle glaucoma by ancestry. If validated in additional populations, this finding may have implications for risk assessment and therapeutic strategies.
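Per-allele odds ratios with confidence intervals, like those reported above, are computed from 2×2 allele-count tables. A minimal sketch with a standard Wald interval follows; the allele counts are hypothetical, not the study's data.

```python
import math

def odds_ratio_ci(case_risk, case_ref, ctrl_risk, ctrl_ref, z=1.96):
    """Allele-count odds ratio with a Wald 95% confidence interval.

    case_risk/case_ref: risk- and reference-allele counts in cases;
    ctrl_risk/ctrl_ref: the same counts in controls.
    """
    or_ = (case_risk * ctrl_ref) / (case_ref * ctrl_risk)
    # Standard error of log(OR) from the four cell counts
    se = math.sqrt(1 / case_risk + 1 / case_ref + 1 / ctrl_risk + 1 / ctrl_ref)
    lo = math.exp(math.log(or_) - z * se)
    hi = math.exp(math.log(or_) + z * se)
    return or_, lo, hi

# Hypothetical allele counts for illustration only:
or_, lo, hi = odds_ratio_ci(1200, 3440, 950, 3292)
```

In a fixed-effects meta-analysis, per-study log-ORs computed this way are combined with inverse-variance weights, which is how a single pooled estimate such as the one reported above is typically obtained.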
Sepsis mortality prediction using wearable monitoring in low-middle income countries
Sepsis is associated with high mortality, particularly in low-middle income countries (LMICs). Critical care management of sepsis is challenging in LMICs due to the lack of care providers and the high cost of bedside monitors. Recent advances in wearable sensor technology and machine learning (ML) models in healthcare promise to deliver new ways of digital monitoring, integrated with automated decision systems, to reduce the mortality risk in sepsis. In this study, we aim, first, to assess the feasibility of using wearable sensors instead of traditional bedside monitors in the sepsis care management of hospital-admitted patients and, second, to introduce automated models for the mortality prediction of sepsis patients. To this end, we continuously monitored 50 sepsis patients for nearly 24 h after their admission to the Hospital for Tropical Diseases in Vietnam. We then compared the performance and interpretability of state-of-the-art ML models for sepsis mortality prediction, using the heart rate variability (HRV) signal from wearable sensors and vital signs from bedside monitors. Our results show that all ML models trained on wearable data outperformed those trained on data gathered from the bedside monitors, with the highest performance (area under the precision-recall curve = 0.83) achieved using time-varying features of HRV and recurrent neural networks. These results demonstrate that the integration of automated ML prediction models with wearable technology is well suited to helping clinicians who manage sepsis patients in LMICs reduce the mortality risk of sepsis.
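The HRV features mentioned above build on standard time-domain measures derived from the series of RR (inter-beat) intervals. A minimal sketch on synthetic RR data follows; the `hrv_time_features` helper is an illustrative assumption, not the authors' pipeline.

```python
import numpy as np

def hrv_time_features(rr_ms):
    """Basic time-domain HRV features from RR intervals in milliseconds."""
    rr = np.asarray(rr_ms, dtype=float)
    diffs = np.diff(rr)
    return {
        "mean_rr": rr.mean(),
        "sdnn": rr.std(ddof=1),                      # overall variability
        "rmssd": np.sqrt(np.mean(diffs ** 2)),       # beat-to-beat variability
        "pnn50": np.mean(np.abs(diffs) > 50) * 100,  # % successive diffs > 50 ms
    }

# Synthetic RR series (~75 bpm with noise), not patient data:
rng = np.random.default_rng(1)
rr = 800 + rng.normal(0, 40, size=300)
feats = hrv_time_features(rr)
```

Computing such features over sliding windows yields the kind of time-varying HRV representation that, per the abstract, fed the best-performing recurrent models.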
B-line detection and localization in lung ultrasound videos using spatiotemporal attention
The presence of B-line artefacts, the main artefact reflecting lung abnormalities in dengue patients, is often assessed using lung ultrasound (LUS) imaging. Inspired by human visual attention, which enables us to process videos efficiently by attending to where and when attention is required, we propose a spatiotemporal attention mechanism for B-line detection in LUS videos. The spatial attention allows the model to focus on the most task-relevant parts of the image by learning a saliency map. The temporal attention generates an attention score for each attended frame to identify the most relevant frames in an input video. Our model not only identifies videos in which B-lines are present, but also localizes, within those videos, B-line related features both spatially and temporally, despite being trained in a weakly supervised manner. We evaluate our approach on a LUS video dataset collected from severe dengue patients in a resource-limited hospital, assessing the B-line detection rate and the model's ability to localize discriminative B-line regions spatially and B-line frames temporally. Experimental results demonstrate the efficacy of our approach, classifying B-line videos with an F1 score of up to 83.2% and localizing the most salient B-line regions spatially and temporally with a correlation coefficient of 0.67 and an IoU of 69.7%, respectively.
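The two attention stages described above (a spatial saliency map over each frame, then a relevance score per frame) can be sketched as follows. This is a generic numpy illustration with random feature maps and weights, not the paper's trained network; the `spatiotemporal_attention` helper and its weight vectors are assumptions.

```python
import numpy as np

def softmax(x):
    e = np.exp(x - x.max())
    return e / e.sum()

def spatiotemporal_attention(feat_maps, w_sp, w_tmp):
    """feat_maps: (T, H, W, D) per-frame feature maps.

    Spatial attention yields a per-frame saliency map over HxW;
    temporal attention yields a relevance score per frame.
    Returns (clip_descriptor, saliency_maps, temporal_weights).
    """
    T, H, W, D = feat_maps.shape
    flat = feat_maps.reshape(T, H * W, D)
    # Spatial attention: softmax over locations within each frame
    saliency = np.stack([softmax(f @ w_sp) for f in flat])  # (T, H*W)
    frame_vecs = np.einsum("tp,tpd->td", saliency, flat)    # (T, D) pooled per frame
    # Temporal attention: softmax over frames
    beta = softmax(frame_vecs @ w_tmp)                      # (T,)
    clip_vec = beta @ frame_vecs                            # (D,) clip descriptor
    return clip_vec, saliency.reshape(T, H, W), beta

rng = np.random.default_rng(0)
maps = rng.normal(size=(20, 8, 8, 16))                      # 20 frames of 8x8x16 features
clip_vec, sal, beta = spatiotemporal_attention(
    maps, rng.normal(size=16), rng.normal(size=16))
```

Because both saliency maps and frame scores are produced as by-products of classification, localization falls out of the weakly supervised training signal, which matches the evaluation described in the abstract.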
Clinical benefit of AI-assisted lung ultrasound in a resource-limited intensive care unit.
Background: Interpreting point-of-care lung ultrasound (LUS) images from intensive care unit (ICU) patients can be challenging, especially in low- and middle-income countries (LMICs) where limited training is available. Despite recent advances in the use of Artificial Intelligence (AI) to automate many ultrasound imaging analysis tasks, no AI-enabled LUS solutions have been proven clinically useful in ICUs, specifically in LMICs. We therefore developed an AI solution that assists LUS practitioners and assessed its usefulness in a low-resource ICU.
Methods: This was a three-phase prospective study. In the first phase, the performance of four different clinical user groups in interpreting LUS clips was assessed. In the second phase, the performance of 57 non-expert clinicians, with and without the aid of a bespoke AI tool for LUS interpretation, was assessed on retrospective offline clips. In the third phase, we conducted a prospective study in the ICU in which 14 clinicians were asked to carry out LUS examinations on 7 patients with and without our AI tool, and we interviewed the clinicians about the usability of the AI tool.
Results: The average accuracy of beginners' LUS interpretation was 68.7% [95% CI 66.8-70.7%], compared with 72.2% [95% CI 70.0-75.6%] for intermediate and 73.4% [95% CI 62.2-87.8%] for advanced users. Experts had an average accuracy of 95.0% [95% CI 88.2-100.0%], which was significantly better than beginners, intermediate, and advanced users (p …).
Conclusions: AI-assisted LUS can help non-expert clinicians in an LMIC ICU interpret LUS features more accurately, more quickly, and more confidently.
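The accuracy figures above are reported with 95% confidence intervals. For a proportion such as interpretation accuracy, one common choice is the Wilson score interval, sketched below with hypothetical counts; the abstract does not specify which interval method the authors used.

```python
import math

def wilson_ci(correct, total, z=1.96):
    """Wilson score 95% CI for a proportion (e.g. clip-interpretation accuracy)."""
    p = correct / total
    denom = 1 + z ** 2 / total
    centre = (p + z ** 2 / (2 * total)) / denom
    half = z * math.sqrt(p * (1 - p) / total + z ** 2 / (4 * total ** 2)) / denom
    return centre - half, centre + half

# Hypothetical: 19 of 20 clips interpreted correctly (95% accuracy)
lo, hi = wilson_ci(19, 20)
```

Unlike the simple Wald interval, the Wilson interval stays within [0, 1] and behaves sensibly for small samples and proportions near 100%, which matters for expert accuracies like the 95.0% reported above.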