
    Bayesian computation in imaging inverse problems with partially unknown models

    Many imaging problems require solving a high-dimensional inverse problem that is ill-conditioned or ill-posed. Imaging methods typically address this difficulty by regularising the estimation problem to make it well-posed. This often requires setting the value of the so-called regularisation parameters that control the amount of regularisation enforced. These parameters are notoriously difficult to set a priori and can have a dramatic impact on the recovered estimates. In this thesis, we propose a general empirical Bayesian method for setting regularisation parameters in imaging problems that are convex w.r.t. the unknown image. Our method calibrates regularisation parameters directly from the observed data by maximum marginal likelihood estimation, and can simultaneously estimate multiple regularisation parameters. A main novelty is that this maximum marginal likelihood estimation problem is efficiently solved by using a stochastic proximal gradient algorithm that is driven by two proximal Markov chain Monte Carlo samplers, thus intimately combining modern high-dimensional optimisation and stochastic sampling techniques. Furthermore, the proposed algorithm uses the same basic operators as proximal optimisation algorithms, namely gradient and proximal operators, and it is therefore straightforward to apply to problems that are currently solved by using proximal optimisation techniques. We also present a detailed theoretical analysis of the proposed methodology, and demonstrate it with a range of experiments and comparisons with alternative approaches from the literature. The considered experiments include image denoising, non-blind image deconvolution, and hyperspectral unmixing, using synthesis and analysis priors involving the ℓ1, total-variation, total-variation and ℓ1, and total-generalised-variation pseudo-norms.
Moreover, we explore some other applications of the proposed method, including maximum marginal likelihood estimation in Bayesian logistic regression and audio compressed sensing, as well as an application to model selection based on residuals.
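The optimisation-within-sampling scheme described above can be illustrated on a toy problem. The sketch below is not the thesis's algorithm (which handles non-smooth priors via proximal MCMC samplers); it assumes a smooth Gaussian (ridge) regulariser so that everything stays tractable, and couples an unadjusted Langevin sampler with a projected stochastic gradient update of the regularisation parameter theta:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy denoising model: y = x + n, n ~ N(0, sigma^2 I), with a Gaussian
# (ridge) regulariser p(x | theta) ∝ exp(-theta * ||x||^2 / 2).
N, sigma, theta_true = 200, 1.0, 1.0
y = rng.normal(0.0, np.sqrt(sigma**2 + 1.0 / theta_true), N)  # marginal of y

def grad_U(x, theta):
    # Gradient of U(x) = ||y - x||^2 / (2 sigma^2) + theta * ||x||^2 / 2.
    return (x - y) / sigma**2 + theta * x

theta, x = 0.2, y.copy()
gamma = 0.05                        # Langevin step size
for k in range(5000):
    # One unadjusted Langevin (ULA) step targeting p(x | y, theta).
    x = x - gamma * grad_U(x, theta) + np.sqrt(2 * gamma) * rng.normal(size=N)
    # Stochastic estimate of d/dtheta log p(y | theta): N/(2 theta) - ||x||^2/2,
    # using the 2-homogeneity of the ridge regulariser.
    delta = 1.0 / (50.0 + k)        # decaying step size
    theta = np.clip(theta + delta * (N / (2 * theta) - 0.5 * x @ x), 1e-3, 1e3)

# For this conjugate model the maximum marginal likelihood estimate has a
# closed form, shown for comparison.
theta_closed = 1.0 / (y @ y / N - sigma**2)
print(theta, theta_closed)
```

The stochastic iterate approaches the closed-form estimate up to Langevin discretisation bias; the thesis's setting replaces the gradient of the non-smooth terms with proximal operators.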

    l0 Sparse signal processing and model selection with applications

    Sparse signal processing has far-reaching applications including compressed sensing, media compression/denoising/deblurring, microarray analysis and medical imaging. The main reason for its popularity is that many signals have a sparse representation given that the basis is suitably selected. However, the difficulty lies in developing an efficient method of recovering such a representation. To this end, two efficient sparse signal recovery algorithms are developed in the first part of this thesis. The first method is based on direct minimization of the l0 norm via cyclic descent, which is called the L0LS-CD (l0 penalized least squares via cyclic descent) algorithm. The other method minimizes smooth approximations of sparsity measures including those of the l0 norm via the majorization minimization (MM) technique, which is called the QC (quadratic concave) algorithm. The L0LS-CD algorithm is developed further by extending it to its multivariate (V-L0LS-CD (vector L0LS-CD)) and group (gL0LS-CD (group L0LS-CD)) regression variants. Computational speed-ups to the basic cyclic descent algorithm are discussed and a greedy version of L0LS-CD is developed. The stability of these algorithms is analyzed, and the impact of the penalty parameter and of proper initialization on algorithm performance is highlighted. A suitable method for performance comparison of sparse approximating algorithms in the presence of noise is established. Simulations compare L0LS-CD and V-L0LS-CD with a range of alternatives on under-determined as well as over-determined systems. The QC algorithm is applicable to a class of penalties that are neither convex nor concave but have what we call the quadratic concave property. Convergence proofs of this algorithm are presented and it is compared with the Newton algorithm, the concave convex (CC) procedure, as well as with the class of proximity algorithms.
Simulations focus on the smooth approximations of the l0 norm and compare them with other l0 denoising algorithms. Next, two applications of sparse modeling are considered. In the first application the L0LS-CD algorithm is extended to recover a sparse transfer function in the presence of coloured noise. The second uses gL0LS-CD to recover the topology of a sparsely connected network of dynamic systems. Both applications use Laguerre basis functions for model expansion. The role of model selection in sparse signal processing is widely neglected in the literature. The tuning/penalty parameter of a sparse approximating problem should be selected using a model selection criterion which minimizes a desired discrepancy measure. Compared to the commonly used model selection methods, the SURE (Stein's unbiased risk estimator) approach stands out as one which does not suffer from the limitations of other methods. Most model selection criteria are developed based on signal or prediction mean squared error. The last section of this thesis instead develops a SURE criterion for parameter mean squared error and applies this result to the l1 penalized least squares problem with grouped variables. Simulations based on topology identification of a sparse network are presented to illustrate and compare with alternative model selection criteria.
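The coordinate update at the heart of L0LS-CD admits a compact sketch. The following is a simplified re-implementation from the description above (not the thesis code): each cyclic pass solves every single-coordinate subproblem of the l0 penalized least squares cost exactly, which reduces to a hard-thresholding rule:

```python
import numpy as np

def l0ls_cd(A, y, lam, n_sweeps=50):
    """Cyclic descent for min_beta ||y - A beta||^2 + lam * ||beta||_0.

    Each coordinate update is exact: setting beta_j to its least-squares
    value z reduces the residual sum of squares by z^2 * ||a_j||^2, so
    beta_j is kept nonzero only when that reduction exceeds lam.
    """
    n, p = A.shape
    beta = np.zeros(p)
    r = y - A @ beta                      # current residual
    col_sq = np.sum(A**2, axis=0)
    for _ in range(n_sweeps):
        for j in range(p):
            r_j = r + A[:, j] * beta[j]   # residual with beta_j removed
            z = A[:, j] @ r_j / col_sq[j]
            beta_j_new = z if z**2 * col_sq[j] > lam else 0.0  # hard threshold
            r = r_j - A[:, j] * beta_j_new
            beta[j] = beta_j_new
    return beta

# Illustrative over-determined system with a 3-sparse ground truth.
rng = np.random.default_rng(1)
A = rng.normal(size=(50, 20))
beta_true = np.zeros(20)
beta_true[[2, 7, 15]] = [5.0, -4.0, 3.0]
y = A @ beta_true + 0.1 * rng.normal(size=50)
beta_hat = l0ls_cd(A, y, lam=1.0)
print(np.nonzero(beta_hat)[0])
```

With this signal-to-noise ratio the recovered support should coincide with the true one; the multivariate and group variants replace the scalar threshold with vector and group-norm analogues.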

    Model Based Principal Component Analysis with Application to Functional Magnetic Resonance Imaging.

    Functional Magnetic Resonance Imaging (fMRI) has allowed better understanding of human brain organization and function by making it possible to record either autonomous or stimulus induced brain activity. After appropriate preprocessing fMRI produces a large spatio-temporal data set, which requires sophisticated signal processing. The aim of the signal processing is usually to produce spatial maps of statistics that capture the effects of interest, e.g., brain activation, time delay between stimulation and activation, or connectivity between brain regions. Two broad signal processing approaches have been pursued: univoxel methods and multivoxel methods. This proposal will focus on multivoxel methods and review Principal Component Analysis (PCA), and other closely related methods, and describe their advantages and disadvantages in fMRI research. These existing multivoxel methods have in common that they are exploratory, i.e., they are not based on a statistical model. A crucial observation, which is central to this thesis, is that there is in fact an underlying model behind PCA, which we call noisy PCA (nPCA). In the main part of this thesis, we use nPCA to develop methods that solve three important problems in fMRI. 1) We introduce a novel nPCA based spatio-temporal model that combines the standard univoxel regression model with nPCA and automatically recognizes the temporal smoothness of the fMRI data. Furthermore, unlike standard univoxel methods, it can handle non-stationary noise. 2) We introduce a novel sparse variable PCA (svPCA) method that automatically excludes whole voxel timeseries, and yields sparse eigenimages. This is achieved by optimizing a novel nonlinear penalized likelihood function. An iterative estimation algorithm is proposed that makes use of geodesic descent methods.
3) We introduce a novel method based on Stein's Unbiased Risk Estimator (SURE) and Random Matrix Theory (RMT) to select the number of principal components for the increasingly important case where the number of observations is of similar order as the number of variables.
Ph.D. Electrical Engineering: Systems, University of Michigan, Horace H. Rackham School of Graduate Studies. http://deepblue.lib.umich.edu/bitstream/2027.42/57638/2/mulfarss_1.pd
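The random-matrix-theory ingredient of point 3 can be illustrated in isolation. The sketch below is a generic illustration, not the SURE-based criterion developed in the thesis: it keeps the sample-covariance eigenvalues that exceed the Marchenko-Pastur bulk edge, the classical RMT cutoff for the regime where the number of observations n and the number of variables p are of similar order; the small slack factor is an ad hoc finite-sample safety margin:

```python
import numpy as np

def mp_rank(X, sigma=1.0, slack=0.05):
    """Count sample-covariance eigenvalues above the Marchenko-Pastur
    bulk edge sigma^2 * (1 + sqrt(p/n))^2; `slack` is a small ad hoc
    finite-sample safety margin against edge fluctuations."""
    n, p = X.shape
    evals = np.linalg.eigvalsh(X.T @ X / n)
    edge = sigma**2 * (1.0 + np.sqrt(p / n))**2
    return int(np.sum(evals > edge * (1.0 + slack)))

# Synthetic data: a rank-3 signal buried in unit-variance noise, n ~ 5p.
rng = np.random.default_rng(2)
n, p, r = 500, 100, 3
signal = rng.normal(size=(n, r)) @ (3.0 * rng.normal(size=(r, p)) / np.sqrt(p))
X = signal + rng.normal(size=(n, p))
print(mp_rank(X))   # → 3
```

The signal eigenvalues sit far above the bulk edge here; the SURE-based criterion in the thesis refines this kind of cutoff for the same n ≈ p regime.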

    Learning defects from aircraft NDT data

    Non-destructive evaluation of aircraft production is optimised and digitalised with Industry 4.0. Aircraft structures produced using fibre metal laminate are traditionally inspected using water-coupled ultrasound scans and evaluated manually. This article proposes Machine Learning models to examine the defects in ultrasonic scans of A380 aircraft components. The proposed approach includes embedded image feature extraction methods and classifiers to learn defects in the scan images. The proposed algorithm is evaluated by benchmarking embedded classifiers and further assessed through an industry-based certification process. The HoG-Linear SVM classifier outperformed SURF-Decision Fine Tree in detecting potential defects. The certification process uses the Probability of Detection function, substantiating that the HoG-Linear SVM classifier detects minor defects. The experimental trials show that the proposed method will be helpful to examiners in the quality control and assurance of aircraft production, thus leading to significant contributions to non-destructive evaluation 4.0.

    Landmark Localization, Feature Matching and Biomarker Discovery from Magnetic Resonance Images

    The work presented in this thesis proposes several methods that can be roughly divided into three different categories: I) landmark localization in medical images, II) feature matching for image registration, and III) biomarker discovery in neuroimaging. The first part deals with the identification of anatomical landmarks. The motivation stems from the fact that the manual identification and labeling of these landmarks is very time consuming and prone to observer errors, especially when large datasets must be analyzed. In this thesis we present three methods to tackle this challenge: a landmark descriptor based on local self-similarities (SS), a subspace building framework based on manifold learning, and a sparse coding landmark descriptor based on data-specific learned dictionary basis. The second part of this thesis deals with finding matching features between a pair of images. These matches can be used to perform a registration between them. Registration is a powerful tool that allows mapping images in a common space in order to aid in their analysis. Accurate registration can be challenging to achieve using intensity based registration algorithms. Here, a framework is proposed for learning correspondences in pairs of images by matching SS features, and random sample consensus (RANSAC) is employed as a robust model estimator to learn a deformation model based on feature matches. Finally, the third part of the thesis deals with biomarker discovery using machine learning. In this section a framework for feature extraction from learned low-dimensional subspaces that represent inter-subject variability is proposed. The manifold subspace is built using data-driven regions of interest (ROI). These regions are learned via sparse regression, with stability selection.
Also, probabilistic distribution models for different stages in the disease trajectory are estimated for different class populations in the low-dimensional manifold and used to construct a probabilistic scoring function.
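The RANSAC step can be sketched for the simplest deformation model, a pure translation. The code below is a generic illustration (the thesis learns richer deformation models from SS feature matches): a minimal sample of one match proposes a translation, the proposal supported by the most inliers wins, and a refit over its inliers gives the final estimate:

```python
import numpy as np

def ransac_translation(src, dst, n_iter=200, tol=2.0, rng=None):
    """RANSAC for a translation-only model between matched feature
    locations: fit to one random match, keep the candidate translation
    supported by the largest inlier set, then refit on its inliers."""
    rng = rng or np.random.default_rng()
    best_t, best_count = None, -1
    for _ in range(n_iter):
        k = rng.integers(len(src))
        t = dst[k] - src[k]                       # candidate translation
        err = np.linalg.norm(src + t - dst, axis=1)
        count = int(np.sum(err < tol))
        if count > best_count:
            best_t, best_count = t, count
    err = np.linalg.norm(src + best_t - dst, axis=1)
    inl = err < tol
    return np.mean(dst[inl] - src[inl], axis=0)   # refit on inliers

# 40 noisy inlier matches under a known translation, plus 20 gross outliers.
rng = np.random.default_rng(4)
src = rng.uniform(0, 100, (60, 2))
dst = src + np.array([7.0, -3.0]) + rng.normal(0, 0.5, (60, 2))
dst[:20] = rng.uniform(0, 100, (20, 2))
t_hat = ransac_translation(src, dst, rng=rng)
print(t_hat)
```

The same sample-score-refit loop extends to affine or thin-plate-spline models by drawing as many matches as the model has degrees of freedom.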

    Advances in Analysis and Exploration in Medical Imaging

    With an ever-increasing life expectancy, we see a concomitant increase in diseases capable of disrupting normal cognitive processes. Their diagnoses are difficult, and occur usually after daily living activities have already been compromised. This dissertation proposes machine learning methods for the study of the neurological implications of brain lesions. It addresses the analysis and exploration of medical imaging data, with particular emphasis on (f)MRI. Two main research directions are proposed. In the first, a brain tissue segmentation approach is detailed. In the second, a document mining framework, applied to reports of neuroscientific studies, is described. Both directions are based on retrieving consistent information from multi-modal data. A contribution in this dissertation is the application of a semi-supervised method, discriminative clustering, to identify different brain tissues and their partial volume information. The proposed method relies on variations of tissue distributions in multi-spectral MRI, and reduces the need for a priori information. This methodology was successfully applied to the study of multiple sclerosis and age related white matter diseases. It was also shown that early-stage changes of normal-appearing brain tissue can already predict decline in certain cognitive processes. Another contribution in this dissertation is in neuroscience meta-research. One limitation in neuroimage processing relates to data availability. Through document mining of neuroscientific reports, using images as source of information, one can harvest research results dealing with brain lesions. The context of such results can be extracted from textual information, allowing for an intelligent categorisation of images. This dissertation proposes new principles, and a combination of several techniques, for the study of published fMRI reports. These principles are based on a number of distance measures, to compare various brain activity sites.
Application to studies of the default mode network validated the proposed approach. The aforementioned methodologies rely on clustering approaches. When dealing with such strategies, most results depend on the choice of initialisation and parameter settings. By defining distance measures that search for clusters of consistent elements, one can estimate a degree of reliability for each data grouping. In this dissertation, it is shown that such principles can be applied to multiple runs of various clustering algorithms, allowing for a more robust estimation of data agglomeration.
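The consensus idea in the final paragraph can be sketched with an evidence-accumulation (co-association) matrix. The snippet below is a generic illustration of that principle, not the dissertation's specific distance measures: it re-runs a minimal k-means with different random initialisations and records how often each pair of points is grouped together, so that stable pairings score near 1:

```python
import numpy as np

def kmeans(X, k, rng, n_iter=20):
    """Minimal k-means with random initial centres drawn from the data."""
    centres = X[rng.choice(len(X), k, replace=False)]
    for _ in range(n_iter):
        labels = np.argmin(((X[:, None] - centres[None])**2).sum(-1), axis=1)
        for j in range(k):
            if np.any(labels == j):
                centres[j] = X[labels == j].mean(axis=0)
    return labels

def coassociation(X, k, n_runs=20, seed=0):
    """Fraction of runs in which each pair of points lands in the same
    cluster, averaged over repeated randomly initialised runs."""
    rng = np.random.default_rng(seed)
    C = np.zeros((len(X), len(X)))
    for _ in range(n_runs):
        labels = kmeans(X, k, rng)
        C += labels[:, None] == labels[None, :]
    return C / n_runs

# Two well-separated blobs: within-blob pairs should co-associate strongly.
rng = np.random.default_rng(5)
X = np.vstack([rng.normal(0, 0.3, (30, 2)), rng.normal(5, 0.3, (30, 2))])
C = coassociation(X, k=2)
within = C[:30, :30].mean()    # pairs from the same true blob
between = C[:30, 30:].mean()   # pairs from different blobs
print(within, between)
```

Thresholding or re-clustering the co-association matrix then yields a consensus grouping that is less sensitive to any single run's initialisation.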

    Adaptive processing of thin structures to augment segmentation of dual-channel structural MRI of the human brain

    This thesis presents a method for the segmentation of dual-channel structural magnetic resonance imaging (MRI) volumes of the human brain into four tissue classes. The state-of-the-art FSL FAST segmentation software (Zhang et al., 2001) is in widespread clinical use, and so it is considered a benchmark. A significant proportion of FAST’s errors has been shown to be localised to cortical sulci and blood vessels; this issue has driven the developments in this thesis, rather than any particular clinical demand. The original theme lies in preserving and even restoring these thin structures, poorly resolved in typical clinical MRI. Bright plate-shaped sulci and dark tubular vessels are best contrasted from the other tissues using the T2- and PD-weighted data, respectively. A contrasting tube detector algorithm (based on Frangi et al., 1998) was adapted to detect both structures, with smoothing (based on Westin and Knutsson, 2006) of an intermediate tensor representation to ensure smoothness and fuller coverage of the maps. The segmentation strategy required the MRI volumes to be upscaled to an artificial high resolution where a small partial volume label set would be valid and the segmentation process would be simplified. A resolution enhancement process (based on Salvado et al., 2006) was significantly modified to smooth homogeneous regions and sharpen their boundaries in dual-channel data. In addition, it was able to preserve the mapped thin structures’ intensities or restore them to pure tissue values. Finally, the segmentation phase employed a relaxation-based labelling optimisation process (based on Li et al., 1997) to improve accuracy, rather than more efficient greedy methods which are typically used. The thin structure location prior maps and the resolution-enhanced data also helped improve the labelling accuracy, particularly around sulci and vessels. 
Testing was performed on the aged LBC1936 clinical dataset and on younger brain volumes acquired at the SHEFC Brain Imaging Centre (Western General Hospital, Edinburgh, UK), as well as the BrainWeb phantom. Overall, the proposed methods rivalled and often improved segmentation accuracy compared to FAST, where the ground truth was produced by a radiologist using software designed for this project. The performance in pathological and atrophied brain volumes, and the differences with the original segmentation algorithm on which it was based (van Leemput et al., 2003), were also examined. Suggestions for future development include a soft labelling consensus formation framework to mitigate rater bias in the ground truth, and contour-based models of the brain parenchyma to provide additional structural constraints.
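The tube-detection component can be sketched in 2-D. The code below is a simplified Frangi-style vesselness filter for bright tubular structures only (the thesis adapts the detector to both bright plate-like sulci and dark vessels, with tensor smoothing of the maps); the parameter values are illustrative:

```python
import numpy as np
from scipy.ndimage import gaussian_filter

def vesselness2d(img, sigma=2.0, beta=0.5, c=0.1):
    """Frangi-style 2-D vesselness for bright tubes: combines the Hessian
    eigenvalue ratio (blobness) with second-order structure energy."""
    # Scale-normalised Hessian entries via Gaussian derivative filters.
    Hxx = gaussian_filter(img, sigma, order=(0, 2)) * sigma**2
    Hyy = gaussian_filter(img, sigma, order=(2, 0)) * sigma**2
    Hxy = gaussian_filter(img, sigma, order=(1, 1)) * sigma**2
    # Eigenvalues of the 2x2 Hessian, sorted so that |l1| <= |l2|.
    tmp = np.sqrt((Hxx - Hyy)**2 + 4 * Hxy**2)
    l1 = 0.5 * (Hxx + Hyy - tmp)
    l2 = 0.5 * (Hxx + Hyy + tmp)
    swap = np.abs(l1) > np.abs(l2)
    l1, l2 = np.where(swap, l2, l1), np.where(swap, l1, l2)
    Rb = l1 / (l2 + 1e-12)             # blobness ratio
    S = np.sqrt(l1**2 + l2**2)         # second-order structure energy
    v = np.exp(-Rb**2 / (2 * beta**2)) * (1 - np.exp(-S**2 / (2 * c**2)))
    return np.where(l2 < 0, v, 0.0)    # bright ridges have l2 < 0

img = np.zeros((64, 64))
img[30:33, :] = 1.0                    # a bright horizontal tube
v = vesselness2d(img)
print(v[31, 32], v[10, 32])            # on-tube vs. background response
```

Detecting dark tubes or bright plates amounts to flipping the sign test on the eigenvalues and changing which eigenvalue pattern is rewarded.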

    Superresolution Reconstruction for Magnetic Resonance Spectroscopic Imaging Exploiting Low-Rank Spatio-Spectral Structure

    Magnetic resonance spectroscopic imaging (MRSI) is a rapidly developing medical imaging modality, capable of conferring both spatial and spectral information content, and has become a powerful clinical tool. The ability to non-invasively observe spatial maps of metabolite concentrations, for instance, in the human brain, can offer functional, as well as pathological insights, perhaps even before structural aberrations or behavioral symptoms are evinced. Despite its lofty clinical prospects, MRSI has traditionally remained encumbered by a number of practical limitations. Of primary concern are the vastly reduced concentrations of tissue metabolites when compared to that of water, which forms the basis for conventional MR imaging. Moreover, the protracted exam durations required by MRSI routinely approach the limits for patient compliance. Taken in conjunction, the above considerations effectively circumscribe the data collection process, ultimately translating to coarse image resolutions that are of diminished clinical utility. Such shortcomings are compounded by spectral contamination artifacts due to the system point-spread function, which arise as a natural consequence when reconstructing non-band-limited data by the inverse Fourier transform. These artifacts are especially pronounced near regions characterized by substantial discrepancies in signal intensity, for example, the interface between normal brain and adipose tissue, whereby the metabolite signals are inundated by the dominant lipid resonances. In recent years, concerted efforts have been made to develop alternative, non-Fourier MRSI reconstruction strategies that aim to surmount the aforementioned limitations. In this dissertation, we build upon the burgeoning medley of innovative and promising techniques, proffering a novel superresolution reconstruction framework predicated on the recent interest in low-rank signal modeling, along with state-of-the-art regularization methods.
The proposed framework is founded upon a number of key tenets. Firstly, we proclaim that the underlying spatio-spectral distribution of the investigated object admits a bilinear representation, whereby spatial and spectral signal components can be effectively segregated. We further maintain that the dimensionality of the subspace spanned by the components is, in principle, bounded by a modest number of observable metabolites. Secondly, we assume that local susceptibility effects represent the primary sources of signal corruption that tend to disallow such representations. Finally, we assert that the spatial components belong to a class of real-valued, non-negative, and piecewise linear functions, compelled in part through the use of a total variation regularization penalty. After demonstrating superior spatial and spectral localization properties in both numerical and physical phantom data when compared against standard Fourier methods, we proceed to evaluate reconstruction performance in typical in vivo settings, whereby the method is extended in order to promote the recovery of signal variations throughout the MRSI slice thickness. Aside from the various technical obstacles, one of the cardinal prospective challenges for high-resolution MRSI reconstruction is the shortfall of reliable ground truth data suitable for validation, thereby prompting reservations surrounding the resulting experimental outcomes. [...]
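The bilinear, low-rank assumption in the first tenet can be sketched numerically. The snippet below illustrates the modelling assumption only, with synthetic damped-oscillation "spectra" rather than real MRSI data: a spatio-spectral Casorati matrix built from R spatial/spectral component pairs is, up to noise, captured by a rank-R truncated SVD, with R bounded by the number of observable metabolites:

```python
import numpy as np

rng = np.random.default_rng(6)
n_vox, n_freq, R = 400, 256, 4
f = np.linspace(0.0, 1.0, n_freq)
U = rng.normal(size=(n_vox, R))                      # spatial components
V = (np.exp(-np.outer(rng.uniform(5, 20, R), f))     # synthetic "spectra":
     * np.cos(np.outer(rng.uniform(20, 80, R), f)))  # damped oscillations
C = U @ V + 0.01 * rng.normal(size=(n_vox, n_freq))  # Casorati matrix + noise

s = np.linalg.svd(C, compute_uv=False)
energy = np.cumsum(s**2) / np.sum(s**2)
print(energy[R - 1])   # fraction of energy captured by a rank-R model
```

In a reconstruction setting, the truncated factors play the role of the unknowns, with the total variation penalty and the non-negativity constraint described above imposed on the spatial factor.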