Complex-valued Time Series Modeling for Improved Activation Detection in fMRI Studies
A complex-valued data-based model with pth-order autoregressive errors and a general real/imaginary error covariance structure is proposed as an alternative to the commonly used magnitude-only data-based autoregressive model for fMRI time series. Likelihood-ratio-test-based activation statistics are derived for both models and compared on experimental and simulated data. For a dataset from a right-hand finger-tapping experiment, the activation map obtained using complex-valued modeling identifies the primary activation region (left functional central sulcus) more clearly than the magnitude-only model. Such improved accuracy in mapping the left functional central sulcus has important implications for neurosurgical planning in tumor and epilepsy patients. Additionally, we develop magnitude and phase detrending procedures for complex-valued time series and examine the effect of spatial smoothing. These methods improve the power of complex-valued data-based activation statistics. Our results advocate the use of complex-valued data, and the modeling of its dependence structures, as a more efficient and reliable tool in fMRI experiments than the current practice of using only magnitude-valued datasets.
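As a toy illustration of the likelihood-ratio idea, the statistic for a single voxel can be sketched as below. This is a sketch under white Gaussian errors, without the paper's AR(p) structure or general real/imaginary covariance; all variable names are ours.

```python
import numpy as np

def complex_activation_stat(y, x):
    """Likelihood-ratio activation statistic for one complex-valued voxel
    time series y (length n) and task regressor x. Illustrative sketch only:
    white Gaussian errors, not the AR(p) error structure of the full model."""
    n = len(y)
    X = np.column_stack([np.ones(n), x])           # design: intercept + task
    Y = np.concatenate([y.real, y.imag])           # stack real and imaginary parts
    Z = np.zeros_like(X)
    Xc = np.block([[X, Z], [Z, X]])                # separate real/imag coefficients
    beta, *_ = np.linalg.lstsq(Xc, Y, rcond=None)
    rss1 = np.sum((Y - Xc @ beta) ** 2)            # full-model residual sum of squares
    X0 = np.ones((n, 1))                           # reduced model: intercepts only
    Z0 = np.zeros_like(X0)
    X0c = np.block([[X0, Z0], [Z0, X0]])
    beta0, *_ = np.linalg.lstsq(X0c, Y, rcond=None)
    rss0 = np.sum((Y - X0c @ beta0) ** 2)
    return 2 * n * np.log(rss0 / rss1)             # -2 log likelihood ratio over 2n observations
```

Large values of the statistic indicate task-related activation; the magnitude-only analogue would apply the same contrast to |y| alone, discarding phase information.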
Monte Carlo-based Noise Compensation in Coil Intensity Corrected Endorectal MRI
Background: Prostate cancer is one of the most common forms of cancer found
in males, making early diagnosis important. Magnetic resonance imaging (MRI) has
been useful in visualizing and localizing tumor candidates and with the use of
endorectal coils (ERC), the signal-to-noise ratio (SNR) can be improved. The
coils introduce intensity inhomogeneities and the surface coil intensity
correction built into MRI scanners is used to reduce these inhomogeneities.
However, the correction typically performed at the MRI scanner level leads to
noise amplification and noise level variations. Methods: In this study, we
introduce a new Monte Carlo-based noise compensation approach for coil
intensity corrected endorectal MRI which allows for effective noise
compensation and preservation of details within the prostate. The approach
accounts for the ERC SNR profile via a spatially-adaptive noise model for correcting non-stationary noise variations. Such a method is particularly useful for improving the image quality of coil intensity corrected
endorectal MRI data performed at the MRI scanner level and when the original
raw data is not available. Results: SNR and contrast-to-noise ratio (CNR) analysis in patient experiments demonstrates average improvements of 11.7 dB and 11.2 dB, respectively, over uncorrected endorectal MRI, and the method provides strong performance when compared to existing approaches. Conclusions: A new noise
compensation method was developed for the purpose of improving the quality of
coil intensity corrected endorectal MRI data performed at the MRI scanner
level. We illustrate that promising noise compensation performance can be achieved with the proposed approach, which is particularly important for processing coil intensity corrected endorectal MRI data performed at the MRI scanner level and when the original raw data is not available. Comment: 23 pages
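A minimal sketch of the Monte Carlo flavor of such compensation, assuming a known spatially varying noise level (`sigma_map`) and Gaussian statistics rather than the paper's ERC SNR profile model; the sampling and weighting scheme here is our own illustrative choice, not the published estimator.

```python
import numpy as np

def mc_noise_compensation(img, sigma_map, n_samples=200, seed=0):
    """Monte Carlo noise compensation sketch: each pixel's compensated value
    is an average over randomly sampled pixels, weighted by the likelihood of
    their intensities under the local, spatially varying noise level.
    Illustrative only; not the paper's estimator."""
    rng = np.random.default_rng(seed)
    flat = img.ravel()
    sig = sigma_map.ravel()
    out = np.empty_like(flat)
    for i in range(flat.size):
        idx = rng.integers(0, flat.size, n_samples)   # Monte Carlo candidate draws
        w = np.exp(-0.5 * ((flat[idx] - flat[i]) / sig[i]) ** 2)
        out[i] = np.sum(w * flat[idx]) / (np.sum(w) + 1e-12)
    return out.reshape(img.shape)
```

Because the weights are normalized by the local sigma, pixels in high-noise regions (far from the coil after intensity correction) are averaged more aggressively than pixels in low-noise regions.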
MR image reconstruction using deep density priors
Algorithms for Magnetic Resonance (MR) image reconstruction from undersampled
measurements exploit prior information to compensate for missing k-space data.
Deep learning (DL) provides a powerful framework for extracting such
information from existing image datasets, through learning, and then using it
for reconstruction. Leveraging this, recent methods employed DL to learn mappings from undersampled to fully sampled images using paired datasets of undersampled images and their fully sampled counterparts, integrating prior knowledge implicitly. In this article, we propose an alternative approach
that learns the probability distribution of fully sampled MR images using
unsupervised DL, specifically Variational Autoencoders (VAE), and uses this as
an explicit prior term in reconstruction, completely decoupling the encoding
operation from the prior. The resulting reconstruction algorithm enjoys a
powerful image prior to compensate for missing k-space data without requiring
paired datasets for training or being prone to associated sensitivities, such as deviations between the undersampling patterns used at training and test time, or coil
settings. We evaluated the proposed method with T1 weighted images from a
publicly available dataset, multi-coil complex images acquired from healthy
volunteers (N=8) and images with white matter lesions. The proposed algorithm,
using the VAE prior, produced visually high-quality reconstructions and
achieved low RMSE values, outperforming most of the alternative methods on the
same dataset. On multi-coil complex data, the algorithm yielded accurate
magnitude and phase reconstruction results. In the experiments on images with
white matter lesions, the method faithfully reconstructed the lesions.
Keywords: Reconstruction, MRI, prior probability, machine learning, deep
learning, unsupervised learning, density estimation. Comment: Published in IEEE TMI; main text and supplementary material, 19 pages total
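The decoupling idea can be sketched structurally as alternating a prior (denoising) step with a k-space data-consistency projection. Here a 3x3 median filter stands in for the trained VAE prior; that substitution is our simplifying assumption, since the paper's actual prior is a learned density.

```python
import numpy as np

def median3(x):
    """3x3 median filter: a crude, hand-crafted stand-in for the learned
    (VAE) prior/denoising step. Purely illustrative."""
    stack = np.stack([np.roll(np.roll(x, i, 0), j, 1)
                      for i in (-1, 0, 1) for j in (-1, 0, 1)])
    return np.median(stack, axis=0)

def pocs_reconstruct(y, mask, n_iter=20):
    """Alternate a prior (denoising) step with a k-space data-consistency
    projection; the prior is fully decoupled from the encoding operation.
    y: measured k-space (zeros off the sampling mask), mask: boolean array."""
    x = np.fft.ifft2(y).real                  # zero-filled starting estimate
    for _ in range(n_iter):
        x = median3(x)                        # prior step (stand-in denoiser)
        k = np.fft.fft2(x)
        k[mask] = y[mask]                     # re-impose measured samples
        x = np.fft.ifft2(k).real
    return x
```

Because the measured k-space samples are re-imposed every iteration, swapping in a different prior (e.g., the VAE log-density gradient) changes only the denoising line, not the encoding model.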
Direct exoplanet detection and characterization using the ANDROMEDA method: Performance on VLT/NaCo data
Context. The direct detection of exoplanets with high-contrast imaging
requires advanced data processing methods to disentangle potential planetary
signals from bright quasi-static speckles. Among them, angular differential
imaging (ADI) permits potential planetary signals with a known rotation rate to
be separated from instrumental speckles that are either static or slowly
variable. The method presented in this paper, called ANDROMEDA (ANgular Differential OptiMal Exoplanet Detection Algorithm), is based on a maximum
likelihood approach to ADI and is used to estimate the position and the flux of
any point source present in the field of view. Aims. In order to optimize and
experimentally validate this previously proposed method, we applied ANDROMEDA
to real VLT/NaCo data. In addition to its pure detection capability, we
investigated the possibility of defining simple and efficient criteria for
automatic point source extraction able to support the processing of large
surveys. Methods. To assess the performance of the method, we applied ANDROMEDA to VLT/NaCo data of TYC-8979-1683-1, which is surrounded by numerous bright stars and to which we added synthetic planets of known position and flux in the
field. In order to accommodate the real data properties, it was necessary to
develop additional pre-processing and post-processing steps to the initially
proposed algorithm. We then investigated its performance in the challenging case of the well-known target β Pictoris, whose companion is close to the detection limit, and we compared our results to those obtained by another method based on
principal component analysis (PCA). Results. Application on VLT/NaCo data
demonstrates the ability of ANDROMEDA to automatically detect and characterize
point sources present in the image field. The result is a robust method that delivers consistent results, with sensitivity similar to recently published algorithms and only two parameters to be fine-tuned. Moreover, the
companion flux estimates are not biased by the algorithm parameters and do not
require a posteriori corrections. Conclusions. ANDROMEDA is an attractive
alternative to current standard image processing methods that can be readily
applied to on-sky data.
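In the idealized case of white noise and a single shift-invariant PSF (a simplification of ANDROMEDA's setting, which works on ADI difference images with field-dependent planet signatures), the maximum-likelihood position/flux estimation reduces to a matched filter:

```python
import numpy as np

def ml_flux_map(image, psf, sigma):
    """Maximum-likelihood flux estimate and S/N at every test position for a
    single point source in white noise of std sigma. Sketch of the matched
    filter at the heart of ML detection schemes such as ANDROMEDA, without
    the ADI differencing of the paper. `psf` is assumed centered at index
    (0, 0), wrap-around style."""
    corr = np.real(np.fft.ifft2(np.fft.fft2(image) * np.conj(np.fft.fft2(psf))))
    norm = np.sum(psf ** 2)
    flux = corr / norm                      # ML amplitude: <d, p_s> / <p, p>
    snr = corr / (sigma * np.sqrt(norm))    # standardized detection statistic
    return flux, snr
```

The S/N map supports the automatic-extraction criterion discussed above (threshold the map, read off position and unbiased flux at each peak).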
Objectively measuring signal detectability, contrast, blur and noise in medical images using channelized joint observers
To improve imaging systems and image processing techniques, objective image quality assessment is essential. Model observers, which adopt a task-based quality assessment strategy by estimating signal detectability measures, have been shown to be quite successful to this end. At the same time, costly and time-consuming human observer experiments can be avoided. However, optimizing images in terms of signal detectability alone still allows a lot of freedom in terms of the imaging parameters. More specifically, fixing the signal detectability defines a manifold in the imaging parameter space on which different “possible” solutions reside. In this article, we present measures that can be used to distinguish these possible solutions from each other in terms of image quality factors such as signal blur, noise, and signal contrast. Our approach is based on an extended channelized joint observer (CJO) that simultaneously estimates the signal amplitude, scale, and detectability. As an application, we use this technique to design k-space trajectories for MRI acquisition. Our technique allows comparison of different spiral trajectories in terms of blur, noise, and contrast, even when the signal detectability is estimated to be equal.
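A simplified, Hotelling-style version of channelized detectability estimation can serve as a sketch of the core computation; it stands in for the paper's extended CJO, which additionally estimates amplitude and scale, and the Gaussian channel choice below is an arbitrary assumption of ours.

```python
import numpy as np

def channelized_detectability(signal_imgs, noise_imgs, channels):
    """Channelized (Hotelling-style) detectability index d': project images
    onto a small set of channels, then compute the SNR of the optimal linear
    discriminant in channel space. Simplified stand-in for the extended CJO."""
    U = channels.reshape(channels.shape[0], -1)            # n_ch x n_pix
    vs = signal_imgs.reshape(len(signal_imgs), -1) @ U.T   # channel outputs, signal present
    vn = noise_imgs.reshape(len(noise_imgs), -1) @ U.T     # channel outputs, signal absent
    dmean = vs.mean(0) - vn.mean(0)
    S = 0.5 * (np.cov(vs.T) + np.cov(vn.T))                # pooled channel covariance
    w = np.linalg.solve(S, dmean)                          # Hotelling template
    return float(np.sqrt(dmean @ w))                       # detectability d'
```

Fixing d' while varying acquisition parameters traces out the manifold of "possible" solutions described above; the blur/contrast/noise measures then discriminate among them.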
Near-Surface Interface Detection for Coal Mining Applications Using Bispectral Features and GPR
The use of ground penetrating radar (GPR) for detecting the presence of near-surface interfaces is a scenario of special interest to the underground coal mining industry. The problem is difficult to solve in practice because the radar echo from the near-surface interface is often dominated by unwanted components such as antenna crosstalk and ringing, ground-bounce effects, clutter, and severe attenuation. These nuisance components are also highly sensitive to subtle variations in ground conditions, rendering the application of standard signal pre-processing techniques such as background subtraction largely ineffective in the unsupervised case. As a solution to this detection problem, we develop a novel pattern recognition-based algorithm which utilizes a neural network to classify features derived from the bispectrum of 1D early-time radar data. The binary classifier is used to decide between two key cases, namely whether an interface is within, for example, 5 cm of the surface or not. This go/no-go detection capability is highly valuable for underground coal mining operations, such as longwall mining, where the need to leave a remnant coal section is essential for geological stability. The classifier was trained and tested using real GPR data with ground truth measurements. The real data were acquired from a testbed with coal-clay, coal-shale and shale-clay interfaces, which represents a test mine site. We show that, unlike traditional second-order correlation-based methods such as matched filtering, which can fail even in known conditions, the new method reliably allows the detection of interfaces using GPR to be applied in the near-surface region. In this work, we are not addressing the problem of depth estimation, rather confining ourselves to detecting an interface within a particular depth range.
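A sketch of the kind of bispectral features such a classifier could consume; the exact feature set of the paper is not reproduced here, and the low-frequency log-magnitude selection is our assumption. A useful property of the bispectrum, B(f1, f2) = X(f1) X(f2) X*(f1+f2), is that it is invariant to circular time shifts of the trace, so alignment jitter does not perturb the features.

```python
import numpy as np

def bispectral_features(x, n_feat=8):
    """Magnitude-bispectrum features of a 1D early-time radar trace (sketch).
    B(f1, f2) = X(f1) X(f2) X*(f1 + f2) suppresses additive Gaussian noise
    and retains phase-coupling information; only a low-frequency block is
    kept (our illustrative choice)."""
    X = np.fft.fft(x)
    n = len(x)
    k = np.arange(n_feat)
    B = X[k][:, None] * X[k][None, :] * np.conj(X[(k[:, None] + k[None, :]) % n])
    feats = np.log1p(np.abs(B[np.triu_indices(n_feat)]))   # symmetric: keep upper triangle
    return feats / (np.linalg.norm(feats) + 1e-12)         # scale-normalize
```

The normalized feature vector would then be fed to the neural-network binary classifier described in the abstract.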
Statistical Region Based Segmentation of Ultrasound Images
Segmentation of ultrasound images is a challenging problem due to speckle, which
corrupts the image and can result in weak or missing image boundaries, poor signal-to-noise ratio, and diminished contrast resolution. Speckle is a random interference pattern
that is characterized by an asymmetric distribution as well as significant spatial correlation. These attributes of speckle are challenging to model in a segmentation approach, so
many previous ultrasound segmentation methods simplify the problem by assuming that
the speckle is white and/or Gaussian distributed. Unlike these methods, in this paper
we present an ultrasound-specific segmentation approach that addresses both the spatial
correlation of the data as well as its intensity distribution. We first decorrelate the image
and then apply a region-based active contour whose motion is derived from an appropriate parametric distribution for maximum likelihood image segmentation. We consider
zero-mean complex Gaussian, Rayleigh, and Fisher-Tippett flows, which are designed
to model fully formed speckle in the in-phase/quadrature (IQ), envelope detected, and
display (log compressed) images, respectively. We present experimental results demonstrating the effectiveness of our method, and compare the results to other parametric and non-parametric active contours.
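The statistical core of such a maximum-likelihood flow (without the active-contour evolution itself) can be sketched as alternating ML Rayleigh scale estimation per region with per-pixel ML reassignment. This is an illustrative simplification of the paper's region-based flows, with names of our own choosing.

```python
import numpy as np

def rayleigh_ml_segment(env, init_mask, n_iter=10):
    """Two-region ML segmentation of an envelope-detected ultrasound image
    under Rayleigh speckle (sketch of the statistical decision, without the
    curve-evolution machinery): alternate ML scale estimation per region
    with per-pixel ML reassignment."""
    mask = init_mask.copy()
    for _ in range(n_iter):
        a = np.mean(env[mask] ** 2) / 2.0      # ML Rayleigh scale^2, region 1
        b = np.mean(env[~mask] ** 2) / 2.0     # ML Rayleigh scale^2, region 2
        # Rayleigh log-likelihoods, dropping the log(I) term common to both
        ll_in = -np.log(a) - env ** 2 / (2 * a)
        ll_out = -np.log(b) - env ** 2 / (2 * b)
        mask = ll_in > ll_out
    return mask
```

The zero-mean complex Gaussian (IQ) and Fisher-Tippett (log-compressed) cases mentioned above would swap in the corresponding likelihood expressions.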
Spherical deconvolution of multichannel diffusion MRI data with non-Gaussian noise models and spatial regularization
Spherical deconvolution (SD) methods are widely used to estimate the
intra-voxel white-matter fiber orientations from diffusion MRI data. However,
while some of these methods assume a zero-mean Gaussian distribution for the
underlying noise, its real distribution is known to be non-Gaussian and to
depend on the methodology used to combine multichannel signals. Indeed, the two
prevailing methods for multichannel signal combination lead to Rician and
noncentral Chi noise distributions. Here we develop a Robust and Unbiased
Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with
realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to
Rician and noncentral Chi likelihood models. To quantify the benefits of using
proper noise models, RUMBA-SD was compared with dRL-SD, a well-established
method based on the RL algorithm for Gaussian noise. Another aim of the study
was to quantify the impact of including a total variation (TV) spatial
regularization term in the estimation framework. To do this, we developed TV
spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The
evaluation was performed by comparing various quality metrics on 132
three-dimensional synthetic phantoms involving different inter-fiber angles and
volume fractions, which were contaminated with noise mimicking patterns
generated by data processing in multichannel scanners. The results demonstrate
that the inclusion of proper likelihood models leads to an increased ability to
resolve fiber crossings with smaller inter-fiber angles and to better detect
non-dominant fibers. The inclusion of TV regularization dramatically improved
the resolution power of both techniques. The above findings were also verified
in brain data.
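The Richardson-Lucy core that RUMBA-SD adapts can be sketched in its classic multiplicative form (Poisson-type likelihood). The paper's contribution is precisely to replace this likelihood with Rician and noncentral-chi versions, which this sketch does not do; the dictionary K would map fiber-orientation weights to predicted diffusion signals.

```python
import numpy as np

def richardson_lucy(y, K, n_iter=200):
    """Classic Richardson-Lucy multiplicative update
        x <- x * (K^T (y / (K x))) / (K^T 1),
    which preserves nonnegativity of the estimate x at every iteration.
    Sketch only: Poisson-type likelihood, not the Rician / noncentral-chi
    likelihoods derived in the paper. y and K must be nonnegative."""
    x = np.full(K.shape[1], y.mean() / K.shape[1] + 1e-6)  # flat nonnegative start
    denom = K.T @ np.ones_like(y)                          # K^T 1
    for _ in range(n_iter):
        x = x * (K.T @ (y / (K @ x + 1e-12))) / (denom + 1e-12)
    return x
```

A TV-regularized variant, as in the paper, would multiply the update by an extra factor derived from the local total-variation gradient.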
A feasible and automatic free tool for T1 and ECV mapping
Purpose: Cardiac magnetic resonance (CMR) is a useful non-invasive tool for characterizing tissues and detecting myocardial fibrosis and edema. Estimation of extracellular volume fraction (ECV) using T1 sequences is emerging as an accurate biomarker in cardiac diseases associated with diffuse fibrosis. In this study, automatic software for T1 and ECV map generation, provided as a standalone executable, was developed and validated using phantom and human data.
Methods: T1 mapping was performed in phantoms and 30 subjects (22 patients and 8 healthy subjects) on a 1.5T MR scanner using the modified Look-Locker inversion-recovery (MOLLI) sequence prototype before and 15 min after contrast agent administration. T1 maps were generated using a Fast Nonlinear
Least Squares algorithm. Myocardial ECV maps were generated using both pre- and post-contrast T1 image registration and automatic extraction of blood relaxation rates.
Results: Using our software, pre- and post-contrast T1 maps were obtained in phantoms and healthy subjects resulting in a robust and reliable quantification as compared to reference software. Coregistration of pre- and post-contrast images improved the quality of ECV maps. Mean ECV value in healthy subjects was
24.5% ± 2.5%.
Conclusions: This study demonstrated that it is possible to obtain accurate T1 maps and informative ECV maps using our software. Pixel-wise ECV maps obtained with this automatic software made it possible to visualize and evaluate the extent and severity of ECV alterations.
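The ECV computation behind such maps follows the standard formula ECV = (1 - Hct) * ΔR1_myo / ΔR1_blood, with R1 = 1/T1. A minimal sketch (per-pixel or per-ROI T1 values in ms, measured hematocrit; variable names are ours):

```python
def ecv(t1_myo_pre, t1_myo_post, t1_blood_pre, t1_blood_post, hct):
    """Standard extracellular-volume formula used in T1-mapping pipelines:
    ECV = (1 - Hct) * dR1_myo / dR1_blood, with R1 = 1/T1 (T1 values in ms,
    hct as a fraction). Returns ECV as a fraction."""
    d_r1_myo = 1.0 / t1_myo_post - 1.0 / t1_myo_pre       # contrast-induced R1 change, myocardium
    d_r1_blood = 1.0 / t1_blood_post - 1.0 / t1_blood_pre  # contrast-induced R1 change, blood pool
    return (1.0 - hct) * d_r1_myo / d_r1_blood
```

With typical pre/post-contrast values this yields ECV in the mid-20% range, consistent with the healthy-subject mean of 24.5% reported above.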