Adaptive Segmentation of Knee Radiographs for Selecting the Optimal ROI in Texture Analysis
The purposes of this study were to investigate: 1) the effect of placement of
region-of-interest (ROI) for texture analysis of subchondral bone in knee
radiographs, and 2) the ability of several texture descriptors to distinguish
between knees with and without radiographic osteoarthritis (OA). Bilateral
posterior-anterior knee radiographs were analyzed from the baseline of the OAI
and MOST datasets.
MOST datasets. A fully automatic method to locate the most informative region
from subchondral bone using adaptive segmentation was developed. We used an
oversegmentation strategy for partitioning knee images into compact regions
that follow natural texture boundaries. Local Binary Patterns (LBP), fractal
dimension (FD), Haralick features, Shannon entropy, and histogram of oriented
gradients (HOG) descriptors were computed within the standard
ROI and within the proposed adaptive ROIs. Subsequently, we built logistic
regression models to identify and compare the performances of each texture
descriptor and each ROI placement method in a 5-fold cross-validation setting.
Importantly, we also investigated the generalizability of our approach by
training the models on OAI and testing them on the MOST dataset. We used the area under
the receiver operating characteristic (ROC) curve (AUC) and average precision
(AP) obtained from the precision-recall (PR) curve to compare the results. We
found that the adaptive ROI improves the classification performance (OA vs.
non-OA) over the commonly used standard ROI (up to a 9% increase in AUC).
We also observed that, of all the texture descriptors, LBP yielded the best
performance in all settings, with a best AUC of 0.840 [0.825, 0.852] and an
associated AP of 0.804 [0.786, 0.820]. Compared to current state-of-the-art
approaches, our results suggest that the proposed adaptive ROI approach in
texture analysis of subchondral bone can increase the diagnostic performance
for detecting the presence of radiographic OA.
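As a rough illustration of the evaluation pipeline described above (not the authors' code), the sketch below computes a basic LBP histogram per patch and scores a logistic regression with 5-fold cross-validated AUC; the synthetic "coarse vs. fine" patches are a hypothetical stand-in for OA vs. non-OA subchondral bone texture, and all sizes and counts are illustrative.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def lbp_histogram(img):
    """Normalized histogram of basic 8-neighbour LBP codes (no rotation invariance)."""
    h, w = img.shape
    center = img[1:-1, 1:-1]
    code = np.zeros(center.shape, dtype=np.uint8)
    offsets = [(-1, -1), (-1, 0), (-1, 1), (0, 1),
               (1, 1), (1, 0), (1, -1), (0, -1)]
    for bit, (dy, dx) in enumerate(offsets):
        neighbour = img[1 + dy:h - 1 + dy, 1 + dx:w - 1 + dx]
        code |= (neighbour >= center).astype(np.uint8) << bit
    hist, _ = np.histogram(code, bins=256, range=(0, 256))
    return hist / hist.sum()

rng = np.random.default_rng(0)

def make_patch(coarse):
    # Hypothetical stand-in for a subchondral bone ROI: coarse vs. fine texture.
    if coarse:
        return np.repeat(np.repeat(rng.random((8, 8)), 4, axis=0), 4, axis=1)
    return rng.random((32, 32))

y = np.array([0] * 60 + [1] * 60)
X = np.array([lbp_histogram(make_patch(bool(label))) for label in y])
auc = cross_val_score(LogisticRegression(max_iter=1000), X, y,
                      cv=5, scoring="roc_auc").mean()
print(f"5-fold AUC: {auc:.3f}")
```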
A supervised clustering approach for fMRI-based inference of brain states
We propose a method that combines signals from many brain regions observed in
functional Magnetic Resonance Imaging (fMRI) to predict the subject's behavior
during a scanning session. Such predictions suffer from the huge number of
brain regions sampled on the voxel grid of standard fMRI data sets: the curse
of dimensionality. Dimensionality reduction is thus needed, but it is often
performed using a univariate feature selection procedure, that handles neither
the spatial structure of the images, nor the multivariate nature of the signal.
By introducing a hierarchical clustering of the brain volume that incorporates
connectivity constraints, we reduce the span of the possible spatial
configurations to a single tree of nested regions tailored to the signal. We
then prune the tree in a supervised setting, hence the name supervised
clustering, in order to extract a parcellation (division of the volume) such
that parcel-based signal averages best predict the target information.
Dimensionality reduction is thus achieved by feature agglomeration, and the
constructed features now provide a multi-scale representation of the signal.
Comparisons with reference methods on both simulated and real data show that
our approach yields higher prediction accuracy than standard voxel-based
approaches. Moreover, the method infers an explicit weighting of the regions
involved in the regression or classification task.
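The pipeline above, connectivity-constrained agglomeration of voxels followed by prediction from parcel averages, can be sketched with scikit-learn's FeatureAgglomeration. Note that this uses unsupervised Ward agglomeration rather than the paper's supervised tree pruning, and the grid size, planted signal, and cluster count are illustrative assumptions.

```python
import numpy as np
from sklearn.cluster import FeatureAgglomeration
from sklearn.feature_extraction.image import grid_to_graph
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
nx, ny = 10, 10                       # toy 2-D "brain slice": 100 voxels
n_samples = 120
X = rng.standard_normal((n_samples, nx * ny))
y = rng.integers(0, 2, n_samples)
# Plant a discriminative signal in one compact 3x3 patch of voxels.
patch = np.arange(nx * ny).reshape(nx, ny)[3:6, 3:6].ravel()
X[:, patch] += 1.5 * y[:, None]

connectivity = grid_to_graph(nx, ny)  # voxel adjacency graph on the grid
model = make_pipeline(
    FeatureAgglomeration(n_clusters=20, connectivity=connectivity),
    LogisticRegression(max_iter=1000),
)
acc = cross_val_score(model, X, y, cv=5).mean()
print(f"parcel-based accuracy: {acc:.3f}")
```

Because the agglomeration respects grid connectivity, the correlated signal voxels tend to be pooled into a few spatially compact parcels before the classifier sees them.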
Classification of EEG recordings in auditory brain activity via a logistic functional linear regression model
We want to analyse EEG recordings in order to investigate the phonemic
categorization at a very early stage of auditory processing. This problem can
be modelled by a supervised classification of functional data. Discrimination
is explored via a logistic functional linear model, using a wavelet
representation of the data. Several procedures are investigated, based on
penalized likelihood combined with dimension reduction via principal components
or partial least squares.
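A minimal sketch of the wavelet-plus-penalized-logistic idea, assuming a Haar basis and an L2 (ridge) penalty; the simulated ERP-like curves and all parameter values are illustrative, and the paper's actual procedures (penalized likelihood with PCA or PLS reduction) differ in detail.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

def haar_dwt(signal):
    """Full Haar wavelet decomposition of a length-2**k signal."""
    levels = []
    approx = signal.astype(float)
    while approx.size > 1:
        levels.append((approx[0::2] - approx[1::2]) / np.sqrt(2))  # detail coeffs
        approx = (approx[0::2] + approx[1::2]) / np.sqrt(2)        # approximation
    levels.append(approx)
    return np.concatenate(levels[::-1])

rng = np.random.default_rng(1)
n, length = 200, 64
t = np.linspace(0, 1, length)
y = rng.integers(0, 2, n)
# Two classes of noisy ERP-like curves differing in the amplitude of one bump.
X = np.stack([np.exp(-((t - 0.3) ** 2) / 0.01) * (1.0 + 0.8 * label)
              + 0.3 * rng.standard_normal(length) for label in y])
W = np.apply_along_axis(haar_dwt, 1, X)       # wavelet-domain design matrix
clf = LogisticRegression(penalty="l2", C=0.5, max_iter=2000)
auc = cross_val_score(clf, W, y, cv=5, scoring="roc_auc").mean()
print(f"wavelet-domain AUC: {auc:.3f}")
```

The wavelet transform concentrates the localized bump into a few coefficients, which is what makes the penalized linear classifier effective on functional data.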
Tile2Vec: Unsupervised representation learning for spatially distributed data
Geospatial analysis lacks methods like the word vector representations and
pre-trained networks that significantly boost performance across a wide range
of natural language and computer vision tasks. To fill this gap, we introduce
Tile2Vec, an unsupervised representation learning algorithm that extends the
distributional hypothesis from natural language -- words appearing in similar
contexts tend to have similar meanings -- to spatially distributed data. We
demonstrate empirically that Tile2Vec learns semantically meaningful
representations on three datasets. Our learned representations significantly
improve performance in downstream classification tasks and, similar to word
vectors, visual analogies can be obtained via simple arithmetic in the latent
space.
Comment: 8 pages, 4 figures in main text; 9 pages, 11 figures in appendix.
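The distributional hypothesis described above is typically trained with a triplet margin objective: tiles sampled near an anchor are pulled together in embedding space while distant tiles are pushed apart. A schematic version follows; the squared Euclidean distances and the margin value are assumptions, and the paper's exact loss and CNN encoder differ in detail.

```python
import numpy as np

def triplet_loss(anchor, positive, negative, margin=1.0):
    """Schematic Tile2Vec-style triplet margin loss on embedding vectors.
    `positive` is the embedding of a geographically nearby tile and
    `negative` that of a distant one; distances and margin are illustrative."""
    d_pos = np.sum((anchor - positive) ** 2)
    d_neg = np.sum((anchor - negative) ** 2)
    return max(0.0, d_pos - d_neg + margin)

a = np.array([0.0, 0.0])
p = np.array([0.1, 0.0])   # nearby tile: already close in embedding space
n = np.array([2.0, 0.0])   # distant tile: already far away
print(triplet_loss(a, p, n))   # 0.0 -- the margin constraint is satisfied
print(triplet_loss(a, n, p))   # positive loss when neighbours are embedded far apart
```

Minimizing this loss over many sampled triplets is what yields embeddings in which simple latent-space arithmetic, as with word vectors, becomes meaningful.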