111 research outputs found

    Classification of chirp signals using hierarchical bayesian learning and MCMC methods

    This paper addresses the problem of classifying chirp signals using hierarchical Bayesian learning together with Markov chain Monte Carlo (MCMC) methods. Bayesian learning consists of estimating the distribution of the observed data conditional on each class from a set of training samples. Unfortunately, this estimation requires the evaluation of intractable multidimensional integrals. This paper studies an original implementation of hierarchical Bayesian learning that estimates the class-conditional probability densities using MCMC methods. The performance of this implementation is first studied via an academic example for which the class-conditional densities are known. The problem of classifying chirp signals is then addressed using a similar hierarchical Bayesian learning implementation based on a Metropolis-within-Gibbs algorithm.
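The Metropolis-within-Gibbs idea can be illustrated with a minimal, self-contained sketch, assuming a toy one-dimensional Gaussian class model with a vague 1/&sigma;&sup2; prior (an assumption for illustration; the paper's chirp model and priors are not reproduced here): each parameter is updated in turn with a Metropolis step, and the retained samples give a Monte Carlo estimate of the class-conditional density.

```python
import math
import random

def metropolis_within_gibbs(data, n_iter=5000, seed=0):
    """Posterior sampling for a toy Gaussian class model N(mu, sigma2),
    updating mu and sigma2 in turn with Metropolis steps (Gibbs scan)."""
    rng = random.Random(seed)
    n = len(data)
    mu, sigma2 = 0.0, 1.0

    def log_post(m, s2):
        if s2 <= 0.0:
            return -math.inf
        sse = sum((x - m) ** 2 for x in data)
        # Gaussian likelihood with a vague 1/sigma2 prior (assumption)
        return -0.5 * n * math.log(s2) - 0.5 * sse / s2 - math.log(s2)

    samples = []
    for _ in range(n_iter):
        for which in (0, 1):
            if which == 0:   # propose a move for mu, sigma2 held fixed
                prop_mu, prop_s2 = mu + rng.gauss(0.0, 0.5), sigma2
            else:            # propose a move for sigma2, mu held fixed
                prop_mu, prop_s2 = mu, sigma2 + rng.gauss(0.0, 0.5)
            diff = log_post(prop_mu, prop_s2) - log_post(mu, sigma2)
            if rng.random() < math.exp(min(0.0, diff)):
                mu, sigma2 = prop_mu, prop_s2
        samples.append((mu, sigma2))
    return samples[n_iter // 2:]   # discard burn-in

def class_conditional_density(x, samples):
    """Bayesian learning step: estimate p(x | class) by averaging the
    Gaussian likelihood over the retained posterior samples."""
    vals = [math.exp(-0.5 * (x - m) ** 2 / s2) / math.sqrt(2.0 * math.pi * s2)
            for m, s2 in samples]
    return sum(vals) / len(vals)
```

A test point is then assigned to whichever class gives the largest estimated conditional density.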

    Generative Supervised Classification Using Dirichlet Process Priors.

    Choosing the appropriate parameter prior distributions associated with a given Bayesian model is a challenging problem. Conjugate priors can be selected for reasons of simplicity. However, conjugate priors can be too restrictive to accurately model the available prior information. This paper studies a new generative supervised classifier which assumes that the parameter prior distributions conditioned on each class are mixtures of Dirichlet processes. The motivation for using mixtures of Dirichlet processes is their known ability to accurately model a large class of probability distributions. A Monte Carlo method allowing one to sample from the resulting class-conditional posterior distributions is then studied. The parameters appearing in the class-conditional densities can then be estimated from these generated samples (following Bayesian learning). The proposed supervised classifier is applied to the classification of altimetric waveforms backscattered from different surfaces (oceans, ice, forests, and deserts). This classification is a first step toward developing tools for extracting useful geophysical information from altimetric waveforms backscattered from non-oceanic surfaces.
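A draw from a Dirichlet process prior can be made concrete with the standard truncated stick-breaking construction; the concentration parameter, truncation level, and standard-normal base measure below are illustrative assumptions, not the paper's mixture setup.

```python
import random

def stick_breaking(alpha, base_sampler, n_atoms=100, seed=0):
    """Truncated stick-breaking draw from a Dirichlet process DP(alpha, G0).
    Returns atom locations and weights; the weights sum to slightly less
    than 1, the leftover mass being the truncation error."""
    rng = random.Random(seed)
    atoms, weights, remaining = [], [], 1.0
    for _ in range(n_atoms):
        v = rng.betavariate(1.0, alpha)    # Beta(1, alpha) stick fraction
        weights.append(remaining * v)      # break off a piece of the stick
        atoms.append(base_sampler(rng))    # atom drawn from base measure G0
        remaining *= (1.0 - v)
    return atoms, weights
```

The resulting discrete distribution (atoms with weights) is one random probability measure drawn from the process; mixing such draws yields the flexible class-conditional priors the paper exploits.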

    Sub-aperture SAR Imaging with Uncertainty Quantification

    In the problem of spotlight-mode airborne synthetic aperture radar (SAR) image formation, it is well known that data collected over a wide azimuthal angle violate the isotropic scattering property typically assumed. Many techniques have been proposed to account for this issue, including both full-aperture and sub-aperture methods based on filtering, regularized least squares, and Bayesian methods. A full-aperture method that uses a hierarchical Bayesian prior to incorporate appropriate speckle modeling and reduction was recently introduced to produce samples of the posterior density rather than a single image estimate. This uncertainty quantification information is more robust, as it can generate a variety of statistics for the scene. As proposed, however, the method was not well suited for large problems because the sampling was inefficient. Moreover, the method was not explicitly designed to mitigate the effects of the faulty isotropic scattering assumption. In this work we therefore propose a new sub-aperture SAR imaging method that uses a sparse Bayesian learning-type algorithm to more efficiently produce approximate posterior densities for each sub-aperture window. These estimates may be useful in and of themselves, or, when of interest, the statistics from these distributions can be combined to form a composite image. Furthermore, unlike the often-employed ℓp-regularized least squares methods, no user-defined parameters are required. Application-specific adjustments are made to reduce the typically burdensome runtime and storage requirements so that appropriately large images can be generated. Finally, this paper focuses on incorporating these techniques into the SAR image formation process, that is, starting from SAR phase history data, so that no additional processing errors are incurred.
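The flavor of a sparse Bayesian learning (automatic relevance determination) iteration can be shown in the identity-dictionary special case, where every update becomes scalar. This is a toy stand-in, not the paper's sub-aperture SAR forward operator, and the noise precision `beta` is an assumed constant rather than a learned quantity.

```python
def sparse_bayesian_denoise(y, beta=25.0, n_iter=50):
    """Sparse Bayesian learning (ARD) for the special case y = w + noise
    with known noise precision beta: each coefficient w_i gets its own prior
    precision alpha_i, and coefficients unsupported by the data are driven
    to (numerically) zero -- no user-tuned regularization parameter."""
    n = len(y)
    alpha = [1.0] * n           # per-coefficient prior precisions
    mu = [0.0] * n              # posterior means of the coefficients
    for _ in range(n_iter):
        for i in range(n):
            s = 1.0 / (alpha[i] + beta)      # posterior variance of w_i
            mu[i] = beta * s * y[i]          # posterior mean of w_i
            gamma = 1.0 - alpha[i] * s       # "well-determinedness" of w_i
            alpha[i] = gamma / (mu[i] ** 2 + 1e-12)   # ARD re-estimate
    return mu, alpha
```

Coefficients whose alpha diverges are pruned from the model, which is the mechanism behind the sparsity of the reconstructed sub-aperture images.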

    A measure-theoretic variational Bayesian algorithm for large dimensional problems

    In this paper we provide an algorithm that solves the variational Bayesian problem as a functional optimization problem. The main contribution of this paper is to transpose a classical iterative optimization algorithm to the metric space of probability densities involved in the Bayesian methodology. The main advantage of this methodology is that it allows large-dimensional inverse problems to be addressed by unsupervised algorithms. The interest of our algorithm is enhanced by its application to large-dimensional linear inverse problems involving sparse objects. Finally, we provide simulation results. First, we demonstrate the good numerical performance of our method by comparing it with classical approaches on a small tomographic problem. We then treat a large-dimensional dictionary learning problem and compare our method with a wavelet-based one.

    Statistical signal processing for echo signals from ultrasound linear and nonlinear scatterers


    A Study of Mexican Free-Tailed Bat Chirp Syllables: Bayesian Functional Mixed Models for Nonstationary Acoustic Time Series

    We describe a new approach to analyzing chirp syllables of free-tailed bats from two regions of Texas in which they are predominant: Austin and College Station. Our goal is to characterize any systematic regional differences in the mating chirps and to assess whether individual bats have signature chirps. The data are analyzed by modeling spectrograms of the chirps as responses in a Bayesian functional mixed model. Given the variable chirp lengths, we compute the spectrograms on a relative time scale interpretable as the relative chirp position, using a variable window overlap based on chirp length. We use 2D wavelet transforms to capture correlation within the spectrogram in our modeling and obtain adaptive regularization of the estimates and inference for the region-specific spectrograms. Our model includes random-effect spectrograms at the bat level to account for correlation among chirps from the same bat and to assess the relative variability in chirp spectrograms within and between bats. The modeling of spectrograms using functional mixed models is a general approach for the analysis of replicated nonstationary time series, such as our acoustic signals, relating aspects of the signals to various predictors while accounting for between-signal structure. This can be done on raw spectrograms when all signals are of the same length, and on spectrograms defined on a relative time scale for signals of variable length, in settings where defining correspondence across signals based on relative position is sensible.
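The relative-time-scale spectrogram can be sketched as follows: the hop size is derived from each signal's length so that every syllable yields the same number of columns, making spectrograms comparable position by position. The window length, frame count, and naive DFT below are illustrative choices, not the paper's settings.

```python
import math

def relative_time_spectrogram(signal, win_len=64, n_frames=20):
    """Spectrogram on a relative time scale: the hop between windows is
    (len - win_len) / (n_frames - 1), so signals of different lengths all
    produce exactly n_frames columns (variable window overlap)."""
    hop = (len(signal) - win_len) / max(n_frames - 1, 1)
    frames = []
    for j in range(n_frames):
        start = int(round(j * hop))
        seg = signal[start:start + win_len]
        # magnitude of a naive DFT (stdlib-only stand-in for an FFT)
        mags = []
        for k in range(win_len // 2):
            re = sum(s * math.cos(2.0 * math.pi * k * i / win_len)
                     for i, s in enumerate(seg))
            im = -sum(s * math.sin(2.0 * math.pi * k * i / win_len)
                      for i, s in enumerate(seg))
            mags.append(math.hypot(re, im))
        frames.append(mags)
    return frames   # n_frames columns, win_len // 2 frequency bins each
```

Because every syllable maps to the same grid, column j of one spectrogram corresponds to the same relative chirp position as column j of another, which is what makes the functional mixed model across syllables well defined.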

    Measuring the Population Properties of Merging Compact Binaries with Gravitational Wave Observations

    Since the Laser Interferometer Gravitational-Wave Observatory (LIGO) made the first direct detection of gravitational waves in 2015, the era of gravitational wave astronomy has begun. LIGO and its counterpart Virgo are detecting an ever-growing sample of merging compact binaries: binary black holes, binary neutron stars, and neutron star-black hole binaries. Each individual detection can be compared against simulated signals with known properties in order to measure the binary's properties. To understand the sample of detections as a whole, however, ensemble methods are needed. The properties measured from these binary systems have large measurement errors, and the sensitivity of gravitational wave detectors is highly property-dependent, resulting in large selection biases. This dissertation applies the technique of hierarchical Bayesian modeling to constrain the underlying, unbiased population of merging compact binaries. We use a number of models to constrain the merger rate, mass distribution, and spin distribution for binary black holes and binary neutron stars. We also use tidal information present in binary neutron star signals to self-consistently constrain the nuclear equation of state.

    DETECTION AND INFERENCE IN GRAVITATIONAL WAVE ASTRONOMY

    We explore the detection and astrophysical modeling of gravitational waves detected by the Advanced Laser Interferometer Gravitational-Wave Observatory (LIGO) and Virgo. We discuss the techniques used in the PyCBC search pipeline to discover the first gravitational wave detection, GW150914, and estimate the statistical significance of GW150914 and the marginal trigger LVT151012. During Advanced LIGO's first observing run there were no detections of mergers from binary neutron star and neutron star-black hole binaries. We use Bayesian inference to place upper limits on the rate of coalescence of these binaries. We use developments made in the PyCBC search pipeline during Advanced LIGO and Virgo's second observing run to re-analyze Advanced LIGO's first observing run and re-estimate the statistical significance of LVT151012. We present sufficient evidence to claim LVT151012 as a gravitational wave event. In Advanced LIGO and Virgo's second observing run a gravitational wave from the merger of two neutron stars, known as GW170817, was discovered. We develop tools for Bayesian hypothesis testing so that we can investigate the interior dynamics of neutron stars using the GW170817 signal. Finally, we use Bayesian parameter estimation from PyCBC with tools of Bayesian hypothesis testing to investigate the presence of nonlinear tidal dynamics from a pressure-gravity mode instability in GW170817. We find that significant waveform degeneracies allow the effect of nonlinear tides to be compatible with the data at the level of nonsignificance (a Bayes factor of unity). We also investigate further constraints on these nonlinear tides.
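A Bayes factor of the kind quoted above can be computed, in a toy setting, by marginalising a signal amplitude over its prior and comparing against the noise-only model; the template, Gaussian noise model, and discrete uniform amplitude prior below are illustrative assumptions, not the PyCBC machinery.

```python
import math

def log_bayes_factor(data, template, sigma, prior_amps):
    """log Bayes factor for H1 ('template present with amplitude A,
    A marginalised over a discrete uniform prior grid') versus
    H0 ('noise only', i.e. A = 0), under iid Gaussian noise."""
    def loglike(A):
        return sum(-0.5 * ((d - A * t) / sigma) ** 2
                   for d, t in zip(data, template))

    l0 = loglike(0.0)
    # marginal likelihood under H1: average the likelihood over the prior,
    # computed stably relative to its maximum
    lls = [loglike(A) for A in prior_amps]
    mx = max(lls)
    marg = sum(math.exp(l - mx) for l in lls) / len(lls)
    return math.log(marg) + mx - l0
```

A value near zero (Bayes factor of unity, as found for the nonlinear-tide hypothesis) means the data do not prefer either model; the marginalisation automatically penalises the extra amplitude parameter when the data contain no signal.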