Prediction of atrial fibrillation and stroke using machine learning models in UK Biobank
Objective: Atrial fibrillation (AF) is the most common cardiac arrhythmia and is associated with an increased risk of ischemic stroke; this risk is underestimated because AF can be asymptomatic. The aim of this study was to develop optimal ML models for prediction of AF in the general population and, secondly, of ischemic stroke in AF patients. Methods: To develop ML models for prediction of 1) AF in the general population and 2) ischemic stroke in patients with AF, we constructed XGBoost, LightGBM, Random Forest, Deep Neural Network, Support Vector Machine and Lasso-penalised logistic regression models using UK Biobank's extensive real-world clinical, questionnaire, biochemical and genetic data, and compared their predictive performances. The ranking and contribution of the different features were assessed by SHapley Additive exPlanations (SHAP) analysis. The clinical tool CHA2DS2-VASc for prediction of ischemic stroke among AF patients was used for comparison with the best-performing ML model. Findings: The best-performing model for AF prediction was LightGBM, with an area under the receiver operating characteristic curve (AUROC) of 0.729 (95% confidence interval (CI): 0.719, 0.738). The best-performing model for ischemic stroke prediction in AF patients was XGBoost, with an AUROC of 0.631 (95% CI: 0.604, 0.657). The improvement in AUROC of the XGBoost model over CHA2DS2-VASc was statistically significant by DeLong's test (p-value = 2.20E-06). In addition, the SHAP analysis showed that several peripheral blood biomarkers (e.g. creatinine, glycated haemoglobin, monocytes) were associated with ischemic stroke but are not considered by CHA2DS2-VASc. Implications: The best-performing ML models presented here have the potential for clinical use, but further validation in independent studies is required. Our results endorse the incorporation of some routinely measured blood biomarkers into ischemic stroke prediction in AF patients.
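A minimal sketch of the modelling workflow the abstract describes: train a gradient-boosted classifier on an imbalanced binary outcome and report AUROC with a bootstrapped 95% CI. This is not the authors' pipeline: scikit-learn's GradientBoostingClassifier stands in for LightGBM/XGBoost, the data are synthetic, and all sizes and parameters are illustrative.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.ensemble import GradientBoostingClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic stand-in for the cohort: 10% positive class, 20 features.
X, y = make_classification(n_samples=2000, n_features=20,
                           weights=[0.9, 0.1], random_state=0)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

model = GradientBoostingClassifier(random_state=0).fit(X_tr, y_tr)
scores = model.predict_proba(X_te)[:, 1]
auroc = roc_auc_score(y_te, scores)

# Percentile bootstrap over the test set for a 95% CI on the AUROC.
rng = np.random.default_rng(0)
boots = []
for _ in range(200):
    idx = rng.integers(0, len(y_te), len(y_te))
    if len(np.unique(y_te[idx])) == 2:  # need both classes to score
        boots.append(roc_auc_score(y_te[idx], scores[idx]))
ci_lo, ci_hi = np.percentile(boots, [2.5, 97.5])
print(f"AUROC {auroc:.3f} (95% CI {ci_lo:.3f}, {ci_hi:.3f})")
```

A DeLong-style comparison against a baseline score (as in the abstract) would replace the bootstrap with a paired test on the two models' per-subject scores.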
Applications of machine and deep learning to thyroid cytology and histopathology: a review.
This review synthesises past research into how machine and deep learning can improve the cyto- and histopathology processing pipelines for thyroid cancer diagnosis. The current gold-standard preoperative technique of fine-needle aspiration cytology has high interobserver variability, often returns indeterminate samples and cannot reliably identify some pathologies; histopathology analysis addresses these issues to an extent, but it requires surgical resection of the suspicious lesions so cannot influence preoperative decisions. Motivated by these issues, as well as by the chronic shortage of trained pathologists, much research has been conducted into how artificial intelligence could improve current pipelines and reduce the pressure on clinicians. Many past studies have indicated the significant potential of automated image analysis in classifying thyroid lesions, particularly those of papillary thyroid carcinoma, but these have generally been retrospective, so questions remain about both the practical efficacy of these automated tools and the realities of integrating them into clinical workflows. Furthermore, the nature of thyroid lesion classification is significantly more nuanced in practice than many current studies have addressed, and this, along with the heterogeneous nature of processing pipelines in different laboratories, means that no solution has proven itself robust enough for clinical adoption. There are, therefore, multiple avenues for future research: examine the practical implementation of these algorithms as pathologist decision-support systems; improve interpretability, which is necessary for developing trust with clinicians and regulators; and investigate multiclassification on diverse multicentre datasets, aiming for methods that demonstrate high performance in a process- and equipment-agnostic manner.
Generating a 3d hand model from frontal color and range scans
Realistic 3D modeling of human hand anatomy has a number of important applications, including real-time tracking, pose estimation, and human-computer interaction. However, the use of RGB-D sensors to accurately capture the full 3D shape of a hand is limited by self-occlusions, the relatively small size of the hand, and the need to capture multiple images. In this paper, we propose a method for generating a detailed, realistic hand model from a single frontal range scan and registered color image. In essence, our method converts this 2.5D data into a fully 3D model. The proposed approach extracts joint locations from the color image using a fingertip and interfinger region detector with a Naive Bayes probabilistic model. Direct correspondences between these joint locations in the range scan and a synthetic hand model are used to perform rigid registration, followed by a thin-plate-spline deformation that non-rigidly registers the synthetic model. This reconstructed model maintains geometric properties similar to the range scan, but also includes the back side of the hand. Experimental results demonstrate the promise of the method to produce detailed and realistic 3D hand geometry.
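A minimal sketch of the thin-plate-spline (TPS) step the abstract describes: given corresponding landmarks on a synthetic model and on a scan, learn a smooth non-rigid warp and apply it to all model points. SciPy's RBFInterpolator with a thin-plate kernel stands in for the paper's deformation; the landmark and vertex coordinates here are made up.

```python
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
# Hypothetical correspondences, e.g. joint locations on model and scan.
model_landmarks = rng.uniform(0, 1, size=(10, 3))
scan_landmarks = model_landmarks + 0.05 * rng.standard_normal((10, 3))

# One TPS map from model space to scan space (vector-valued output).
tps = RBFInterpolator(model_landmarks, scan_landmarks,
                      kernel="thin_plate_spline")

# Warp every vertex of a (synthetic) full hand mesh with the same map.
model_vertices = rng.uniform(0, 1, size=(500, 3))
warped = tps(model_vertices)

# With zero smoothing the warp interpolates the landmarks exactly.
print(np.abs(tps(model_landmarks) - scan_landmarks).max())
```

The key property used here is that the TPS is fit only on sparse correspondences but extends smoothly to the whole mesh, which is what lets the back side of the hand deform plausibly.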
Quantized Census for Stereoscopic Image Matching
Current depth-capturing devices show serious drawbacks in certain applications, for example ego-centric depth recovery: they are cumbersome, have high power requirements, and do not provide high resolution at near distances. Stereo-matching techniques are a suitable alternative, but while the idea behind these techniques is simple, it is well known that recovering an accurate disparity map by stereo matching requires overcoming three main problems: occluded regions causing the absence of corresponding pixels; noise in the image-capturing sensor; and inconsistent color and brightness in the captured images. We propose a modified version of the Census-Hamming cost function that allows more robust matching, with an emphasis on improving performance under radiometric variations of the input images.
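A minimal NumPy sketch of the census/Hamming matching cost this work builds on: each pixel is described by a bit-string comparing it with its neighbourhood, and candidate matches are scored by the Hamming distance between bit-strings. The window size and images are illustrative, and the paper's quantized variant is not reproduced here.

```python
import numpy as np

def census_transform(img, r=1):
    """Census bit-planes for each pixel over a (2r+1)x(2r+1) window."""
    h, w = img.shape
    pad = np.pad(img, r, mode="edge")
    bits = []
    for dy in range(-r, r + 1):
        for dx in range(-r, r + 1):
            if dy == 0 and dx == 0:
                continue
            shifted = pad[r + dy:r + dy + h, r + dx:r + dx + w]
            bits.append((shifted < img).astype(np.uint8))
    return np.stack(bits, axis=-1)  # h x w x (window size - 1)

def hamming_cost(census_a, census_b):
    """Per-pixel Hamming distance between two census images."""
    return (census_a != census_b).sum(axis=-1)

left = np.array([[10, 20, 30], [40, 50, 60], [70, 80, 90]], dtype=float)
right = left.copy()
cost = hamming_cost(census_transform(left), census_transform(right))
print(cost)
```

Because the census descriptor depends only on intensity *orderings* within the window, it is naturally robust to the radiometric (brightness/color) variations the abstract highlights.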
Data-driven Recovery of Hand Depth using Conditional Regressive Random Forest on Stereo Images
Hand pose is emerging as an important interface for human-computer interaction. This paper presents a data-driven method to estimate a high-quality depth map of a hand from a stereoscopic camera input by introducing a novel superpixel-based regression framework that takes advantage of the smoothness of the depth surface of the hand. To this end, we introduce Conditional Regressive Random Forest (CRRF), a method that combines a Conditional Random Field (CRF) and a Regressive Random Forest (RRF) to model the mapping from a stereo RGB image pair to a depth image. The RRF provides a unary term that adaptively selects different stereo-matching measures as it implicitly determines matching pixels in a coarse-to-fine manner. While the RRF predicts depth for each superpixel independently, the CRF unifies the prediction of depth by modeling pairwise interactions between adjacent superpixels. Experimental results show that CRRF can generate a depth image more accurately than the leading contemporary techniques using an inexpensive stereo camera.
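A toy sketch of the unary-plus-pairwise structure described above: each superpixel gets an independent (noisy) depth estimate, and a pairwise smoothness term pulls neighbouring estimates together. A few Jacobi-style averaging iterations on a 1-D chain of superpixels stand in for CRF inference; the graph, weights and values are illustrative, not the paper's model.

```python
import numpy as np

true_depth = np.linspace(1.0, 2.0, 20)              # smooth surface, 20 superpixels
rng = np.random.default_rng(0)
unary = true_depth + 0.1 * rng.standard_normal(20)  # independent per-superpixel estimates

depth = unary.copy()
lam = 0.8                                           # strength of the pairwise term
for _ in range(50):
    # Mean of each superpixel's chain neighbours (endpoints have one).
    neighbour_mean = depth.copy()
    neighbour_mean[1:-1] = 0.5 * (depth[:-2] + depth[2:])
    neighbour_mean[0], neighbour_mean[-1] = depth[1], depth[-2]
    # Balance fidelity to the unary estimate against smoothness.
    depth = (unary + lam * neighbour_mean) / (1 + lam)

print(np.abs(np.diff(unary)).mean(), np.abs(np.diff(depth)).mean())
```

The print compares neighbour-to-neighbour jumps before and after smoothing: the pairwise term suppresses the independent estimation noise, which is the role the CRF plays over the RRF's unary predictions.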
Conditional Regressive Random Forest Stereo-based Hand Depth Recovery
This paper introduces Conditional Regressive Random Forest (CRRF), a novel method that combines a closed-form Conditional Random Field (CRF), using learned weights, and a Regressive Random Forest (RRF) that employs adaptively selected expert trees. CRRF is used to estimate a depth image of a hand given stereo RGB inputs. CRRF uses a novel superpixel-based regression framework that takes advantage of the smoothness of the hand's depth surface. An RRF unary term adaptively selects different stereo-matching measures as it implicitly determines matching pixels in a coarse-to-fine manner. CRRF also includes a pairwise term that encourages smoothness between similar adjacent superpixels. Experimental results show that CRRF can produce high-quality depth maps, even using an inexpensive RGB stereo camera, and produces state-of-the-art results for hand depth estimation.
Hand Pose Estimation Using Deep Stereovision and Markov-chain Monte Carlo
Hand pose is emerging as an important interface for human-computer interaction. The problem of hand pose estimation from passive stereo inputs has received less attention in the literature than active depth sensors. This paper seeks to address this gap by presenting a data-driven method to estimate a hand pose from a stereoscopic camera input, introducing a stochastic approach that proposes potential depth solutions for the observed stereo capture and evaluates these proposals using two convolutional neural networks (CNNs). The first CNN, configured in a Siamese network architecture, evaluates how consistent the proposed depth solution is with the observed stereo capture. The second CNN estimates a hand pose given the proposed depth. Unlike sequential approaches that reconstruct pose from a known depth, our method jointly optimizes the hand pose and depth estimation through Markov-chain Monte Carlo (MCMC) sampling. This way, pose estimation can correct for errors in depth estimation, and vice versa. Experimental results using an inexpensive stereo camera show that the proposed system estimates pose more accurately than competing methods.
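A toy sketch of the Metropolis-Hastings loop underlying the joint depth/pose search described above. The two CNN scores are replaced by simple analytic stand-ins, a "stereo consistency" term and a "pose plausibility" term, both Gaussian; only the propose/accept mechanics correspond to the paper, and all numbers are invented.

```python
import numpy as np

def log_score(depth):
    stereo_term = -0.5 * ((depth - 0.62) / 0.05) ** 2  # stand-in for the Siamese CNN
    pose_term = -0.5 * ((depth - 0.60) / 0.10) ** 2    # stand-in for the pose CNN
    return stereo_term + pose_term

rng = np.random.default_rng(0)
depth, samples = 0.5, []
for _ in range(5000):
    proposal = depth + 0.02 * rng.standard_normal()    # random-walk proposal
    # Metropolis rule: accept with probability min(1, score ratio).
    if np.log(rng.uniform()) < log_score(proposal) - log_score(depth):
        depth = proposal
    samples.append(depth)

posterior_mean = np.mean(samples[1000:])               # discard burn-in
print(round(posterior_mean, 3))
```

The sampler settles between the two terms' optima, illustrating how the joint formulation lets the pose score pull the depth estimate (and vice versa) rather than committing to one sequentially.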
Generating Magnetic Resonance Spectroscopy Imaging Data of Brain Tumours from Linear, Non-Linear and Deep Learning Models.
Magnetic Resonance Spectroscopy (MRS) provides valuable information to help with the identification and understanding of brain tumours, yet MRS is not a widely available medical imaging modality. Aiming to counter this issue, this research draws on advancements in machine learning techniques in other fields for the generation of artificial data. The generative methods were tested by evaluating their output against a real-world labelled MRS brain tumour dataset. Furthermore, the outputs from the generative techniques were each used to train separate traditional classifiers, which were tested on a subset of the real MRS brain tumour dataset. The results suggest that there exist methods capable of producing accurate, ground-truth-based MRS voxels. These findings indicate that through generative techniques, large datasets can be made available for training deep learning models for use in brain tumour diagnosis.
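A minimal sketch of the "generate, then train" evaluation described above: fit a simple class-conditional Gaussian (a linear generative model) to a small labelled set, sample a larger artificial set from it, train a classifier on the synthetic data, and score it on the real data. The data here are a synthetic stand-in, not MRS voxels, and this is only one of the linear techniques the abstract alludes to.

```python
import numpy as np
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression

# Stand-in for the labelled "real" dataset.
X, y = make_classification(n_samples=300, n_features=10, random_state=0)

# Fit a per-class mean/covariance, then sample new labelled examples.
rng = np.random.default_rng(0)
X_gen, y_gen = [], []
for cls in np.unique(y):
    Xc = X[y == cls]
    mu, cov = Xc.mean(axis=0), np.cov(Xc, rowvar=False)
    X_gen.append(rng.multivariate_normal(mu, cov, size=1000))
    y_gen.append(np.full(1000, cls))
X_gen, y_gen = np.vstack(X_gen), np.concatenate(y_gen)

# Train on generated data only, evaluate on the real set.
clf = LogisticRegression(max_iter=1000).fit(X_gen, y_gen)
acc = clf.score(X, y)
print(f"accuracy on real data: {acc:.2f}")
```

If a classifier trained purely on generated samples performs well on held-out real data, the generator has captured the class-relevant structure, which is the test the abstract applies to each generative technique.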
Pathotype-specific QTL for stem rust resistance in Lolium perenne
A genetic map populated with RAD and SSR markers was created from F1 progeny of a stem rust-susceptible and stem rust-resistant parent of perennial ryegrass (Lolium perenne). The map supplements a previous map of this population by having markers in common with several other Lolium spp. maps including EST-SSR anchor markers from a consensus map published by other researchers. A QTL analysis was conducted with disease severity and infection type data obtained by controlled inoculation of the population with each of two previously characterized pathotypes of Puccinia graminis subsp. graminicola that differ in virulence to different host plant genotypes in the F1 population. Each pathotype activated a specific QTL on one linkage group (LG): qLpPg1 on LG7 for pathotype 101, or qLpPg2 on LG1 for pathotype 106. Both pathotypes also activated a third QTL in common, qLpPg3 on LG6. Anchor markers, present on a consensus map, were located in proximity to each of the three QTL. These QTL had been detected also in previous experiments in which a genetically heterogeneous inoculum of the stem rust pathogen activated all three QTL together. The results of this and a previous study are consistent with the involvement of the pathotype-specific QTL in pathogen recognition and the pathotype-nonspecific QTL in a generalized resistance response. By aligning the markers common to other published reports, it appears that two and possibly all three of the stem rust QTL reported here are in the same general genomic regions containing some of the L. perenne QTL reported to be activated in response to the crown rust pathogen (P. coronata).
Keywords: Ryegrass population, Fescue (Festuca pratensis), Disease resistance, Puccinia coronata resistance, SSR markers, Multiflorum Lam., Crown rust, Self-incompatibility, f. sp. lolii, Linkage map
DeepFMRI: An End-to-End Deep Network for Classification of fMRI Data
With recent advancements in machine learning, the research community has made tremendous advances towards the classification of neurological disorders from time-series functional MRI signals. However, existing classification techniques rely on hand-crafted features and classical machine learning models. In this paper, we propose an end-to-end model that utilizes the representation learning capability of deep learning to classify a neurological disorder from fMRI data. The proposed DeepFMRI model comprises three networks, namely (1) a feature extractor, (2) a similarity network, and (3) a classification network. The model takes raw fMRI time-series signals as input, outputs the predicted labels, and is trained end-to-end using back-propagation. Experimental results on the publicly available ADHD-200 dataset demonstrate that this model outperforms the previous state-of-the-art.