280 research outputs found

    Bayesian Nonparametric Unmixing of Hyperspectral Images

    Full text link
    Hyperspectral imaging is an important tool in remote sensing, allowing for accurate analysis of vast areas. Due to the low spatial resolution, a pixel of a hyperspectral image rarely represents a single material, but rather a mixture of different spectra. Hyperspectral unmixing (HSU) aims at estimating the pure spectra present in the scene of interest, referred to as endmembers, and their fractions in each pixel, referred to as abundances. Today, many HSU algorithms have been proposed, based on either a geometrical or a statistical model. While most methods assume that the number of endmembers present in the scene is known, there is little work on estimating this number from the observed data. In this work, we propose a Bayesian nonparametric framework that jointly estimates the number of endmembers, the endmembers themselves, and their abundances, using the Indian Buffet Process as a prior for the endmembers. Simulation results and experiments on real data demonstrate the effectiveness of the proposed algorithm, yielding results comparable with state-of-the-art methods while reliably inferring the number of endmembers. In scenarios with strong noise, where other algorithms provide only poor results, the proposed approach tends to slightly overestimate the number of endmembers. The additional endmembers, however, are often simply noisy replicas of endmembers already present and can easily be merged in a post-processing step.
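
    As a rough illustration of the nonparametric ingredient, the sketch below (not the authors' sampler) draws a binary activation matrix from the Indian Buffet Process: the number of columns, which here stand in for candidate endmembers, is not fixed in advance but grows with the data. Function and parameter names are illustrative only.

    ```python
    import numpy as np

    def sample_ibp(n_pixels, alpha, rng=None):
        """Draw a binary activation matrix Z from the Indian Buffet Process.

        Z[i, k] = 1 means column (candidate endmember) k is active for point i.
        The number of columns is not fixed in advance, which is what lets an
        unmixing model infer the number of endmembers from the data.
        """
        rng = np.random.default_rng(rng)
        columns = []                          # each column is a list of 0/1 entries
        for i in range(1, n_pixels + 1):
            # existing columns: reused with probability m_k / i
            for col in columns:
                m_k = sum(col)
                col.append(int(rng.random() < m_k / i))
            # brand-new columns: Poisson(alpha / i) of them for customer i
            for _ in range(rng.poisson(alpha / i)):
                columns.append([0] * (i - 1) + [1])
        K = len(columns)
        return np.array(columns).T if K else np.zeros((n_pixels, 0), dtype=int)

    Z = sample_ibp(n_pixels=50, alpha=3.0, rng=0)
    print("number of active columns drawn from the prior:", Z.shape[1])
    ```

    In an unmixing model, a matrix like Z would indicate which endmembers contribute to each pixel; the joint inference over endmembers and abundances described in the abstract builds on this kind of prior.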

    Compressed Sensing for Big Data Over Complex Networks

    Get PDF
    Transductive semi-supervised learning methods aim at automatically labeling large datasets by leveraging information provided by a few manually labeled data points and the intrinsic structure of the dataset. Many such methods based on a graph signal representation of a dataset have been proposed, in which the nodes correspond to the data points, the edges connect similar points, and the graph signal is the mapping between the nodes and the labels. Most existing methods use deterministic signal models and try to recover the graph signal using a regularized or constrained convex optimization approach, where the regularization/constraint term enforces some form of smoothness of the graph signal. This thesis takes a different route and investigates a probabilistic graphical modeling approach in which the graph signal is considered a Markov random field defined over the underlying network structure. The measurement process, modeling the initial manually obtained labels, and the smoothness assumptions are imposed by a probability distribution defined over the Markov network corresponding to the data graph. Various approximate inference methods, such as loopy belief propagation and mean field methods, are studied by means of numerical experiments involving both synthetic and real-world datasets.
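
    The following is a minimal sketch, under simplifying assumptions, of the kind of mean field update the abstract refers to: each node's label distribution is repeatedly pulled toward a weighted average of its neighbours' distributions while the manually labeled nodes stay clamped. The Potts-style coupling and all names are illustrative, not the thesis's exact model.

    ```python
    import numpy as np

    def mean_field_labels(adj, labeled, n_classes, coupling=1.0, n_iter=50):
        """Approximate per-node label marginals of a Potts-style MRF on a data graph.

        adj     : (n, n) symmetric similarity / adjacency matrix
        labeled : dict {node_index: class_index} of manually labeled points
        Returns q of shape (n, n_classes), the per-node label distributions.
        """
        n = adj.shape[0]
        q = np.full((n, n_classes), 1.0 / n_classes)
        for i, c in labeled.items():              # clamp observed labels
            q[i] = np.eye(n_classes)[c]
        for _ in range(n_iter):
            # each node is pulled toward the (soft) labels of its neighbours
            field = coupling * adj @ q
            q_new = np.exp(field - field.max(axis=1, keepdims=True))
            q_new /= q_new.sum(axis=1, keepdims=True)
            for i, c in labeled.items():          # keep labeled nodes fixed
                q_new[i] = np.eye(n_classes)[c]
            q = q_new
        return q
    ```

    Unlabeled nodes are then assigned the class with the largest entry in their row of q; loopy belief propagation would replace the single distribution per node with messages passed along each edge.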

    Vision-Aided Navigation for GPS-Denied Environments Using Landmark Feature Identification

    Get PDF
    In recent years, unmanned autonomous vehicles have been used in diverse applications because of their multifaceted capabilities. In most cases, the navigation systems for these vehicles are dependent on Global Positioning System (GPS) technology. Many applications of interest, however, entail operations in environments in which GPS is intermittent or completely denied. These applications include operations in complex urban or indoor environments as well as missions in adversarial environments where GPS might be denied using jamming technology. This thesis investigates the development of vision-aided navigation algorithms that utilize processed images from a monocular camera as an alternative to GPS. The vision-aided navigation approach explored in this thesis entails defining a set of inertial landmarks, the locations of which are known within the environment, and employing image processing algorithms to detect these landmarks in image frames collected from an onboard monocular camera. These vision-based landmark measurements effectively serve as surrogate GPS measurements that can be incorporated into a navigation filter. Several image processing algorithms were considered for landmark detection, and this thesis focuses in particular on two approaches: the continuous adaptive mean shift (CAMSHIFT) algorithm and the adaptable compressive (ADCOM) tracking algorithm. These algorithms are discussed in detail and applied for the detection and tracking of landmarks in monocular camera images. Navigation filters are then designed that employ sensor fusion of accelerometer and rate gyro data from an inertial measurement unit (IMU) with vision-based measurements of the centroids of one or more landmarks in the scene. These filters are tested in simulated navigation scenarios subject to varying levels of sensor and measurement noise and varying numbers of landmarks. Finally, conclusions and recommendations are provided regarding the implementation of this vision-aided navigation approach for autonomous vehicle navigation systems.
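
    The sketch below is a deliberately simplified one-dimensional stand-in for the navigation filter described above: the state is propagated with IMU acceleration and corrected with a position fix assumed to be derived from a detected landmark. The actual filters in the thesis operate in three dimensions with full camera measurement models; all matrices and noise values here are illustrative.

    ```python
    import numpy as np

    # Minimal 1-D constant-acceleration filter: propagate with IMU acceleration,
    # correct with a position fix assumed to come from a known landmark's image
    # location (an illustrative surrogate-GPS measurement).

    dt = 0.01
    F = np.array([[1.0, dt], [0.0, 1.0]])     # state transition for [pos, vel]
    B = np.array([[0.5 * dt**2], [dt]])       # how acceleration enters the state
    H = np.array([[1.0, 0.0]])                # landmark fix observes position only
    Q = 1e-3 * np.eye(2)                      # process noise (IMU errors)
    R = np.array([[0.05]])                    # measurement noise (vision fix)

    x = np.zeros((2, 1))                      # state estimate [pos, vel]
    P = np.eye(2)                             # state covariance

    def predict(x, P, accel):
        x = F @ x + B * accel
        P = F @ P @ F.T + Q
        return x, P

    def update(x, P, z):
        y = z - H @ x                         # innovation
        S = H @ P @ H.T + R
        K = P @ H.T @ np.linalg.inv(S)        # Kalman gain
        return x + K @ y, (np.eye(2) - K @ H) @ P

    x, P = predict(x, P, accel=0.2)
    x, P = update(x, P, z=np.array([[0.001]]))
    ```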

    Expectation-Maximization Gaussian-Mixture Approximate Message Passing

    Full text link
    When recovering a sparse signal from noisy compressive linear measurements, the distribution of the signal's non-zero coefficients can have a profound effect on recovery mean-squared error (MSE). If this distribution were known a priori, then one could use computationally efficient approximate message passing (AMP) techniques for nearly minimum MSE (MMSE) recovery. In practice, though, the distribution is unknown, motivating the use of robust algorithms like LASSO---which is nearly minimax optimal---at the cost of significantly larger MSE for non-least-favorable distributions. As an alternative, we propose an empirical-Bayesian technique that simultaneously learns the signal distribution while MMSE-recovering the signal---according to the learned distribution---using AMP. In particular, we model the non-zero distribution as a Gaussian mixture, and learn its parameters through expectation maximization, using AMP to implement the expectation step. Numerical experiments on a wide range of signal classes confirm the state-of-the-art performance of our approach, in both reconstruction error and runtime, in the high-dimensional regime, for most (but not all) sensing operators.
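
    For context, a bare-bones AMP iteration with a soft-thresholding denoiser (the LASSO-style baseline mentioned above) is sketched below; the proposed EM-GM-AMP approach instead uses an MMSE denoiser under a Gaussian-mixture prior whose parameters are re-estimated by EM. The threshold choice and names here are illustrative.

    ```python
    import numpy as np

    def soft(x, t):
        """Soft-thresholding denoiser (stand-in for a learned GM-MMSE denoiser)."""
        return np.sign(x) * np.maximum(np.abs(x) - t, 0.0)

    def amp(A, y, theta=0.1, n_iter=30):
        """Basic AMP for sparse recovery from y = A x + noise."""
        m, n = A.shape
        x = np.zeros(n)
        z = y.copy()
        for _ in range(n_iter):
            r = x + A.T @ z                    # pseudo-data: signal plus effective noise
            x_new = soft(r, theta)
            onsager = z * np.count_nonzero(x_new) / m
            z = y - A @ x_new + onsager        # residual with Onsager correction
            x = x_new
        return x
    ```

    The Onsager correction term is what makes the effective noise in r approximately Gaussian, which is also what allows the EM step in the proposed method to fit the Gaussian-mixture parameters from simple scalar statistics.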

    Pseudo-Marginal MCMC for Parameter Estimation in α-Stable Distributions

    Get PDF
    The α-stable distribution is very useful for modelling data with extreme values and skewed behaviour. The distribution is governed by two key parameters, tail thickness and skewness, in addition to scale and location. Inferring these parameters is difficult due to the lack of a closed-form expression for the probability density. We develop a Bayesian method, based on the pseudo-marginal MCMC approach, that requires only unbiased estimates of the intractable likelihood. To compute these estimates we build an adaptive importance sampler for a latent-variable representation of the α-stable density. This representation has previously been used in the literature for conditional MCMC sampling of the parameters, and we compare our method with this approach. This is the author accepted manuscript. The final version is available from Elsevier via http://dx.doi.org/10.1016/j.ifacol.2015.12.17
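
    A generic pseudo-marginal Metropolis-Hastings loop is sketched below to illustrate the mechanism: the intractable likelihood is replaced by an unbiased, noisy estimate (here an abstract callable standing in for the adaptive importance sampler over the latent-variable representation), and the estimate attached to the current state is reused rather than refreshed. The proposal and all names are illustrative.

    ```python
    import numpy as np

    def pseudo_marginal_mh(log_lik_hat, log_prior, theta0, prop_scale, n_steps, rng=None):
        """Pseudo-marginal Metropolis-Hastings.

        log_lik_hat(theta) must return the log of an *unbiased* (noisy) estimate
        of the intractable likelihood, e.g. from an importance sampler over the
        latent-variable representation of the alpha-stable density.
        """
        rng = np.random.default_rng(rng)
        theta = np.asarray(theta0, dtype=float)
        log_post = log_prior(theta) + log_lik_hat(theta)    # noisy estimate, stored
        samples = []
        for _ in range(n_steps):
            prop = theta + prop_scale * rng.standard_normal(theta.shape)
            log_post_prop = log_prior(prop) + log_lik_hat(prop)
            if np.log(rng.random()) < log_post_prop - log_post:
                theta, log_post = prop, log_post_prop
            samples.append(theta.copy())                    # on rejection, the old
        return np.array(samples)                            # noisy estimate is kept
    ```

    Keeping the stored noisy estimate for the current state, instead of re-estimating it at every step, is what makes the chain target the exact posterior despite the noise in the likelihood estimates.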

    Machine Learning for Fluid Mechanics

    Full text link
    The field of fluid mechanics is rapidly advancing, driven by unprecedented volumes of data from field measurements, experiments, and large-scale simulations at multiple spatiotemporal scales. Machine learning offers a wealth of techniques to extract information from data that could be translated into knowledge about the underlying fluid mechanics. Moreover, machine learning algorithms can augment domain knowledge and automate tasks related to flow control and optimization. This article presents an overview of the history, current developments, and emerging opportunities of machine learning for fluid mechanics. It outlines fundamental machine learning methodologies and discusses their uses for understanding, modeling, optimizing, and controlling fluid flows. The strengths and limitations of these methods are addressed from the perspective of scientific inquiry that considers data as an inherent part of modeling, experimentation, and simulation. Machine learning provides a powerful information processing framework that can enrich, and possibly even transform, current lines of fluid mechanics research and industrial applications. Comment: To appear in the Annual Reviews of Fluid Mechanics, 202

    Efficient uncertainty quantification methodologies for high-dimensional climate land models

    Full text link