
    Systematic Review and Meta-Analysis of Prehospital Machine Learning Scores as Screening Tools for Early Detection of Large Vessel Occlusion in Patients With Suspected Stroke.

    Background: Enhanced detection of large vessel occlusion (LVO) through machine learning (ML) for acute ischemic stroke appears promising. This systematic review explored the capabilities of ML models compared with prehospital stroke scales for LVO prediction.

    Methods and Results: Six bibliographic databases were searched from inception until October 10, 2023. Meta-analyses pooled model performance using area under the curve (AUC), sensitivity, specificity, and the summary receiver operating characteristic (SROC) curve. Of 1544 studies screened, 8 retrospective studies were eligible, including 32 prehospital stroke scales and 21 ML models. Of the 9 prehospital scales meta-analyzed, the Rapid Arterial Occlusion Evaluation had the highest pooled AUC (0.82 [95% CI, 0.79-0.84]). Of the 9 ML models included, the support vector machine achieved the highest pooled AUC (0.89 [95% CI, 0.88-0.89]). Six prehospital stroke scales and 10 ML models were eligible for SROC analysis. Pooled sensitivity and specificity for any prehospital stroke scale were 0.72 (95% CI, 0.68-0.75) and 0.77 (95% CI, 0.72-0.81), respectively; the SROC curve AUC was 0.80 (95% CI, 0.76-0.83). For any ML model, pooled sensitivity for LVO was 0.73 (95% CI, 0.64-0.79), specificity was 0.85 (95% CI, 0.80-0.89), and the SROC curve AUC was 0.87 (95% CI, 0.83-0.89).

    Conclusions: Both prehospital stroke scales and ML models demonstrated varying accuracy in predicting LVO. Despite the potential of ML for improved LVO detection in the prehospital setting, its application remains limited by the absence of prospective external validation, limited sample sizes, and the lack of real-world prehospital performance data.
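    The review pools per-study accuracy estimates into the summary sensitivities and specificities quoted above. As a rough illustration of how such pooling works, the sketch below applies inverse-variance weighting on the logit scale to hypothetical per-study sensitivities; the review itself used formal bivariate meta-analysis and SROC models, so this is a simplified stand-in, not the authors' method.

```python
# Minimal sketch: inverse-variance pooling of diagnostic sensitivities on
# the logit scale. All study inputs below are hypothetical; the review
# used bivariate random-effects models and SROC curves.
import math

def pool_logit(proportions, ns):
    """Pool proportions (e.g. per-study sensitivities) with inverse-variance
    weights on the logit scale; return the pooled estimate and a 95% CI
    back-transformed to the probability scale."""
    logits, weights = [], []
    for p, n in zip(proportions, ns):
        logits.append(math.log(p / (1 - p)))
        weights.append(n * p * (1 - p))  # 1 / delta-method variance of logit(p)
    w_sum = sum(weights)
    pooled = sum(l * w for l, w in zip(logits, weights)) / w_sum
    se = math.sqrt(1.0 / w_sum)
    expit = lambda x: 1.0 / (1.0 + math.exp(-x))
    return expit(pooled), (expit(pooled - 1.96 * se), expit(pooled + 1.96 * se))

# Hypothetical sensitivities and sample sizes from three studies:
est, (lo, hi) = pool_logit([0.70, 0.75, 0.68], [120, 250, 90])
print(f"pooled sensitivity ~ {est:.2f} (95% CI {lo:.2f}-{hi:.2f})")
```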

    Multi-granularity learning of explicit geometric constraint and contrast for label-efficient medical image segmentation and differentiable clinical function assessment

    Automated segmentation is a challenging task in medical image analysis that usually requires a large amount of manually labeled data. However, most current supervised learning based algorithms suffer from insufficient manual annotations, posing a significant difficulty for accurate and robust segmentation. In addition, most current semi-supervised methods lack explicit representations of geometric structure and semantic information, restricting segmentation accuracy. In this work, we propose a hybrid framework to learn polygon vertices, region masks, and their boundaries in a weakly/semi-supervised manner that significantly advances geometric and semantic representations. Firstly, we propose multi-granularity learning of explicit geometric structure constraints via polygon vertices (PolyV) and pixel-wise region (PixelR) segmentation masks in a semi-supervised manner. Secondly, we propose eliminating boundary ambiguity by using an explicit contrastive objective to learn a discriminative feature space of boundary contours at the pixel level with limited annotations. Thirdly, we exploit task-specific clinical domain knowledge to make the clinical function assessment differentiable end-to-end. The ground truth of the clinical function assessment, in turn, can serve as auxiliary weak supervision for PolyV and PixelR learning. We evaluate the proposed framework on two tasks: optic disc (OD) and cup (OC) segmentation along with vertical cup-to-disc ratio (vCDR) estimation in fundus images, and left ventricle (LV) segmentation at end-diastolic and end-systolic frames along with ejection fraction (LVEF) estimation in two-dimensional echocardiography images. Experiments on nine large-scale datasets across the two tasks under different label settings demonstrate our model's superior performance on segmentation and clinical function assessment.
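    One of the clinical measurements the framework learns end-to-end is the vertical cup-to-disc ratio. As a hypothetical illustration (the function names, mask shapes, and values below are assumptions, not the paper's code), vCDR can be computed from binary OD and OC masks as the ratio of their vertical extents:

```python
# Hypothetical sketch of vertical cup-to-disc ratio (vCDR) from binary
# optic cup (OC) and optic disc (OD) segmentation masks; the paper's own
# differentiable implementation is not reproduced here.
import numpy as np

def vertical_extent(mask: np.ndarray) -> int:
    """Vertical span (in pixels) of a binary mask."""
    rows = np.where(np.any(mask, axis=1))[0]
    return int(rows[-1] - rows[0] + 1) if rows.size else 0

def vcdr(cup_mask: np.ndarray, disc_mask: np.ndarray) -> float:
    """vCDR = vertical cup diameter / vertical disc diameter."""
    disc = vertical_extent(disc_mask)
    return vertical_extent(cup_mask) / disc if disc else float("nan")

# Toy example: a 10-pixel-tall disc containing a 4-pixel-tall cup.
disc = np.zeros((64, 64), dtype=bool)
disc[20:30, 20:40] = True
cup = np.zeros_like(disc)
cup[23:27, 25:35] = True
print(f"vCDR ~ {vcdr(cup, disc):.2f}")  # -> 0.40
```

    Note that this hard pixel-counting version is not differentiable; the point of a differentiable clinical function assessment, as in the abstract, is to compute such measurements in a form that lets clinical labels back-propagate as weak supervision for the segmentation network.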

    Evaluation of Huawei Smart Wearables for Detection of Atrial Fibrillation in Patients Following Ischaemic Stroke: The Liverpool-Huawei Stroke Study

    Atrial fibrillation (AF) often remains undetected following stroke. Documenting AF is critical to initiate oral anticoagulation, which has proven benefit in reducing recurrent stroke and mortality in patients with AF. The accuracy and acceptability of using smart wearable technology to detect AF in patients following stroke is unknown. The aims of the Liverpool-Huawei Stroke Study are to determine the effectiveness, cost-effectiveness, and patient and staff acceptability of using Huawei smart wearables to detect AF following ischaemic stroke. The study plans to recruit 1000 adults aged ≥18 years following ischaemic stroke from participating hospitals over 12 months. All participants will be asked to wear a Huawei smart band for four weeks post-discharge. If participants do not have access to a compatible smartphone required for the study, they will be provided with a smartphone for the four-week AF monitoring period. Participants with suspected AF detected by the smart wearables, without previously known AF, will be referred for further evaluation. The effectiveness of the Huawei smart wearables in detecting AF will be assessed using the positive predictive value. Patient acceptability of using this technology will also be examined. Additional follow-up assessments will be conducted at six and 12 months, and clinical outcomes recorded in relation to prevalent and incident AF post-stroke. The study opened for recruitment on 30/05/2022 and is currently open at four participating hospitals; the first 106 participants have been recruited. One further hospital is preparing to open for recruitment. This prospective study will examine the effectiveness and acceptability of the use of smart wearables in patients following ischaemic stroke. This could have important implications for detection of AF and, therefore, earlier prophylaxis against recurrent stroke. The study is registered on https://www.isrctn.com/ (Identifier ISRCTN30693819).
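    The study's stated accuracy measure is the positive predictive value of wearable-flagged AF. As a minimal illustration (the counts below are invented, not study data), PPV is simply the fraction of device alerts that are confirmed on clinical follow-up:

```python
# Minimal sketch of positive predictive value (PPV); counts are illustrative.
def positive_predictive_value(true_positives: int, false_positives: int) -> float:
    """PPV = TP / (TP + FP): the fraction of wearable AF alerts later
    confirmed as AF on clinical evaluation."""
    flagged = true_positives + false_positives
    return true_positives / flagged if flagged else float("nan")

# Illustrative: of 40 participants flagged by the smart band, 32 confirmed.
print(f"PPV ~ {positive_predictive_value(32, 8):.2f}")  # -> 0.80
```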