Improving utility of brain tumor confocal laser endomicroscopy: objective value assessment and diagnostic frame detection with convolutional neural networks
Confocal laser endomicroscopy (CLE), although capable of obtaining images at
cellular resolution during surgery of brain tumors in real time, creates as
many non-diagnostic as diagnostic images. Non-useful images are often distorted
due to relative motion between the probe and the brain, or by blood artifacts.
Many images, however, simply lack diagnostic features immediately informative
to the physician. Examining the hundreds or thousands of images from a single
case to discriminate diagnostic images from non-diagnostic ones can be tedious.
Providing a real-time diagnostic value assessment of images (fast enough to be
used during the surgical acquisition process and accurate enough for the
pathologist to rely on) to automatically detect diagnostic frames would
streamline the analysis of images and filter useful images for the
pathologist/surgeon. We sought to automatically classify images as diagnostic
or non-diagnostic. AlexNet, a deep-learning architecture, was trained and
evaluated with 4-fold cross-validation. Our dataset includes 16,795 images
(8,572 non-diagnostic and 8,223 diagnostic) from 74 CLE-aided brain tumor
surgery patients. The ground truth for all images was provided by the
pathologist. Average model accuracy on test data was 91% overall (90.79%
accuracy, 90.94% sensitivity, and 90.87% specificity). To evaluate model
reliability we also performed receiver operating characteristic (ROC) analysis,
yielding an average area under the ROC curve (AUC) of 0.958. These results
demonstrate that a trained AlexNet network can reliably and quickly recognize
diagnostic CLE images.
Comment: SPIE Medical Imaging: Computer-Aided Diagnosis 201
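The headline numbers in this abstract (accuracy, sensitivity, specificity, AUC) can be reproduced mechanically for any binary frame classifier. A minimal sketch with scikit-learn follows; the labels and scores are synthetic stand-ins, not the paper's CLE data or model:

```python
# Hedged sketch: computing accuracy, sensitivity, specificity, and ROC AUC
# for a binary diagnostic/non-diagnostic frame classifier. Synthetic data only.
import numpy as np
from sklearn.metrics import confusion_matrix, roc_auc_score

rng = np.random.default_rng(0)
y_true = rng.integers(0, 2, size=1000)               # 1 = diagnostic frame
# Synthetic scores that correlate with the true label (overlapping classes).
y_score = np.clip(0.6 * y_true + rng.normal(0.2, 0.25, 1000), 0, 1)
y_pred = (y_score >= 0.5).astype(int)

tn, fp, fn, tp = confusion_matrix(y_true, y_pred).ravel()
accuracy = (tp + tn) / (tp + tn + fp + fn)
sensitivity = tp / (tp + fn)   # true-positive rate on diagnostic frames
specificity = tn / (tn + fp)   # true-negative rate on non-diagnostic frames
auc = roc_auc_score(y_true, y_score)
```

The same four quantities are what a k-fold protocol would average over held-out folds.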
Detection of atrial fibrillation episodes in long-term heart rhythm signals using a support vector machine
Atrial fibrillation (AF) is a serious heart arrhythmia that significantly increases the risk of ischemic stroke. Clinically, an AF episode is recognized in an electrocardiogram. However, detection of asymptomatic AF, which requires long-term monitoring, is more efficient when based on the irregularity of beat-to-beat intervals estimated by heart rate (HR) features. Automated classification of heartbeats into AF and non-AF by means of the Lagrangian Support Vector Machine (LSVM) is proposed. The classifier input vector consists of sixteen features, including four coefficients very sensitive to beat-to-beat heart rate changes, taken from fetal heart rate analysis in perinatal medicine. The effectiveness of the proposed classifier was verified on the MIT-BIH Atrial Fibrillation Database. Designing the LSVM classifier with a very large number of feature vectors requires extreme computational effort. Therefore, an original approach is proposed to determine a training set of the smallest possible size that still guarantees high-quality AF detection; it yields satisfactory results using only 1.39% of all heartbeats as training data. A post-processing stage based on aggregating classified heartbeats into AF episodes is applied to provide more reliable information on patient risk. Results obtained during the testing phase showed a sensitivity of 98.94%, a positive predictive value of 98.39%, and a classification accuracy of 98.86%.
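The pipeline described above (beat-level classification on a small training subset, then aggregation of consecutive AF beats into episodes) can be sketched as follows. This is a hedged stand-in, not the paper's method: scikit-learn's LinearSVC substitutes for the Lagrangian SVM solver, and the 16-dimensional feature vectors and labels are synthetic rather than MIT-BIH data:

```python
# Hedged sketch: beat classification with a linear SVM (LinearSVC as a
# stand-in for the paper's Lagrangian SVM) trained on ~1.4% of beats,
# followed by aggregation of consecutive AF beats into episodes.
import numpy as np
from sklearn.svm import LinearSVC

rng = np.random.default_rng(1)
n_beats, n_features = 5000, 16                    # 16 HR features per beat
y = (rng.random(n_beats) < 0.3).astype(int)       # 1 = AF beat (synthetic)
X = rng.normal(0.0, 1.0, (n_beats, n_features)) + 1.5 * y[:, None]

# Train on a small fraction of beats, echoing the reduced training set idea.
train_idx = rng.choice(n_beats, size=int(0.014 * n_beats), replace=False)
clf = LinearSVC(C=1.0, random_state=0).fit(X[train_idx], y[train_idx])
pred = clf.predict(X)

def beats_to_episodes(labels, min_len=5):
    """Merge runs of consecutive AF-labelled beats into (start, end) episodes."""
    episodes, start = [], None
    for i, v in enumerate(labels):
        if v and start is None:
            start = i
        elif not v and start is not None:
            if i - start >= min_len:
                episodes.append((start, i))
            start = None
    if start is not None and len(labels) - start >= min_len:
        episodes.append((start, len(labels)))
    return episodes

episodes = beats_to_episodes(pred)
```

The `min_len` threshold is an assumed illustrative parameter; episode-level reporting is what makes the output clinically interpretable.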
Flexible combination of multiple diagnostic biomarkers to improve diagnostic accuracy
In medical research, it is common to collect information of multiple
continuous biomarkers to improve the accuracy of diagnostic tests. Combining
the measurements of these biomarkers into one single score is a popular
practice to integrate the collected information, where the accuracy of the
resultant diagnostic test is usually improved. To measure the accuracy of a
diagnostic test, the Youden index has been widely used in the literature.
Various parametric and nonparametric methods have been proposed to linearly
combine biomarkers so that the corresponding Youden index is optimized. Yet
there seems to be little justification for enforcing such a linear combination.
This paper proposes a flexible approach that allows both linear and nonlinear
combinations of biomarkers. The proposed approach formulates the problem in a
large margin classification framework, where the combination function is
embedded in a flexible reproducing kernel Hilbert space. Advantages of the
proposed approach are demonstrated in a variety of simulated experiments as
well as in a real application to a liver disorder study.
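The objective being optimized here can be made concrete. Below is a hedged sketch of the Youden index, J = max over thresholds t of (sensitivity(t) + specificity(t) - 1), together with a crude grid search over linear combination weights of two synthetic biomarkers. This illustrates only the baseline linear setting; the paper's contribution is learning the combination nonlinearly in an RKHS via large-margin classification, which this sketch does not implement:

```python
# Hedged sketch: Youden index of a combined biomarker score, plus a grid
# search over 2-D linear combination weights. Synthetic biomarkers only.
import numpy as np

def youden_index(scores, labels):
    """max_t [sensitivity(t) + specificity(t) - 1] over observed thresholds."""
    pos = (labels == 1).sum()
    neg = (labels == 0).sum()
    best = 0.0
    for t in np.unique(scores):
        pred = scores > t
        sens = (pred & (labels == 1)).sum() / pos
        spec = (~pred & (labels == 0)).sum() / neg
        best = max(best, sens + spec - 1)
    return best

rng = np.random.default_rng(2)
n = 500
y = rng.integers(0, 2, n)
b1 = rng.normal(0.0, 1.0, n) + 1.0 * y   # biomarker 1: informative
b2 = rng.normal(0.0, 1.0, n) + 0.5 * y   # biomarker 2: weakly informative

# Grid search over the direction of the linear combination w1*b1 + w2*b2.
best_J, best_w = -1.0, None
for theta in np.linspace(0.0, np.pi, 90):
    w = np.array([np.cos(theta), np.sin(theta)])
    J = youden_index(w[0] * b1 + w[1] * b2, y)
    if J > best_J:
        best_J, best_w = J, w
```

Since the grid includes theta = 0 (using biomarker 1 alone), the combined score can only match or improve on the best single biomarker.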
Choosing the proper link function for binary data
Since the generalized linear model (GLM) with a binary response variable is widely used in many disciplines, many efforts have been made to construct a well-fitting model. However, little attention is paid to the link function, which plays a critical role in the GLM. In this article, we compare three link functions and evaluate different model-selection methods based on these three link functions. We also provide suggestions on how to choose the proper link function for binary data.
Multi-line Adaptive Perimetry (MAP): A New Procedure for Quantifying Visual Field Integrity for Rapid Assessment of Macular Diseases.
Purpose: In order to monitor visual defects associated with macular degeneration (MD), we present a new psychophysical assessment called multi-line adaptive perimetry (MAP) that measures visual field integrity by simultaneously estimating regions associated with perceptual distortions (metamorphopsia) and visual sensitivity loss (scotoma).
Methods: We first ran simulations of MAP with a computerized model of a human observer to determine optimal test design characteristics. In experiment 1, predictions of the model were assessed by simulating metamorphopsia with an eye-tracking device in 20 participants with healthy vision. In experiment 2, eight patients (16 eyes) with macular disease completed two MAP assessments separated by about 12 weeks, while a subset (10 eyes) also completed repeated Macular Integrity Assessment (MAIA) microperimetry and Amsler grid exams.
Results: Results revealed strong repeatability of MAP and high accuracy, sensitivity, and specificity (0.89, 0.81, and 0.90, respectively) in classifying patient eyes with severe visual impairment. We also found a significant relationship between the spatial patterns of performance across visual field loci derived from MAP and MAIA microperimetry. However, there was a lack of correspondence between MAP and subjective Amsler grid reports in isolating perceptually distorted regions.
Conclusions: These results highlight the validity and efficacy of MAP in producing quantitative maps of visual field disturbances, including simultaneous mapping of metamorphopsia and sensitivity impairment.
Translational relevance: Future work will be needed to assess the applicability of this examination for potential early detection of MD symptoms and/or portable assessment on a home device or computer.
Noise reduction in muon tomography for detecting high density objects
The muon tomography technique, based on multiple Coulomb scattering of cosmic
ray muons, has been proposed as a tool to detect the presence of high density
objects inside closed volumes. In this paper an innovative method is presented
to handle the density fluctuations (noise) of reconstructed images, a
well known problem of this technique. The effectiveness of our method is
evaluated using experimental data obtained with a muon tomography prototype
located at the Legnaro National Laboratories (LNL) of the Istituto Nazionale di
Fisica Nucleare (INFN). The results reported in this paper, obtained with real
cosmic ray data, show that with appropriate image filtering and muon momentum
classification the muon tomography technique can detect high-density materials
such as lead in short times, even when the lead is surrounded by light- or
medium-density material. A comparison with algorithms published in the
literature is also presented.
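The physics behind the technique is standard and not specific to this paper: the RMS multiple-Coulomb-scattering angle grows with material thickness measured in radiation lengths, and dense materials like lead have very short radiation lengths. A hedged illustration using the well-known PDG Highland parameterization (the X0 values quoted in comments are standard tabulated numbers, not from the abstract):

```python
# Hedged illustration: PDG Highland formula for the RMS projected scattering
# angle of a charged particle crossing thickness x of material with radiation
# length X0. Dense, short-X0 materials (lead) scatter muons far more than
# light materials (water), which is what muon tomography exploits.
import math

def highland_theta0(p_mev, beta, x_over_X0, z=1):
    """RMS projected scattering angle in radians (Highland parameterization)."""
    return (13.6 / (beta * p_mev)) * z * math.sqrt(x_over_X0) * (
        1.0 + 0.038 * math.log(x_over_X0)
    )

# A 3 GeV muon (beta ~ 1) through 10 cm of lead (X0 ~ 0.56 cm)
# versus 10 cm of water (X0 ~ 36.1 cm).
theta_pb = highland_theta0(3000.0, 1.0, 10.0 / 0.56)
theta_h2o = highland_theta0(3000.0, 1.0, 10.0 / 36.1)
```

The momentum dependence (1/p) is also why the abstract's muon momentum classification helps: low-momentum muons scatter more in everything, diluting the density contrast.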
Kernel matrix regression
We address the problem of filling missing entries in a kernel Gram matrix,
given a related full Gram matrix. We attack this problem from the viewpoint of
regression, assuming that the two kernel matrices can be considered as
explanatory variables and response variables, respectively. We propose a
variant of the regression model based on the underlying features in the
reproducing kernel Hilbert space by modifying the idea of kernel canonical
correlation analysis, and we estimate the missing entries by fitting this model
to the existing samples. We obtain promising experimental results on gene
network inference and protein 3D structure prediction from genomic datasets. We
also discuss the relationship with the em-algorithm based on information
geometry.
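The setting can be made concrete with a much-simplified sketch. This is a hedged stand-in, not the paper's estimator: given a fully observed Gram matrix K1 and a related Gram matrix K2 observed only on a subset of samples, one natural regression-style estimate takes the form K1 A K1 with coefficients A fitted on the observed block (with ridge regularization, here an assumed value):

```python
# Hedged, simplified sketch of kernel Gram matrix completion by regression:
# fit K2 ~ K1 A K1 on the observed block, then predict the missing entries.
import numpy as np

rng = np.random.default_rng(4)
n, n_obs = 60, 40                            # 40 samples observed under both kernels
Z = rng.normal(0.0, 1.0, (n, 5))             # latent features generating both kernels
K1 = Z @ Z.T                                 # fully observed Gram matrix
K2 = (Z @ Z.T) ** 2 / 25 + 0.01 * np.eye(n)  # related Gram matrix, partly missing

obs = slice(0, n_obs)
lam = 1e-2                                   # ridge regularization (assumed value)
Finv = np.linalg.inv(K1[obs, obs] + lam * np.eye(n_obs))
A = Finv @ K2[obs, obs] @ Finv               # coefficients from the observed block
K2_hat = K1[:, obs] @ A @ K1[obs, :]         # estimates for all entries, incl. missing
err = np.linalg.norm(K2_hat[obs, obs] - K2[obs, obs]) / np.linalg.norm(K2[obs, obs])
```

Because A and K1 are symmetric, the completed matrix K2_hat is symmetric by construction; the quality of the missing-entry estimates depends on how well the geometry of K1 predicts that of K2.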