
    Generalizing Informed Sampling for Asymptotically Optimal Sampling-based Kinodynamic Planning via Markov Chain Monte Carlo

    Full text link
    Asymptotically-optimal motion planners such as RRT* have been shown to incrementally approximate the shortest path between start and goal states. Once an initial solution is found, their performance can be dramatically improved by restricting subsequent samples to regions of the state space that can potentially improve the current solution. When the motion planning problem lies in a Euclidean space, this region X_{inf}, called the informed set, can be sampled directly. However, when planning with differential constraints in non-Euclidean state spaces, no analytic solution exists for sampling X_{inf} directly. State-of-the-art approaches to sampling X_{inf} in such domains, such as Hierarchical Rejection Sampling (HRS), may still be slow in high-dimensional state spaces. This may cause the planning algorithm to spend most of its time trying to produce samples in X_{inf} rather than exploring it. In this paper, we suggest an alternative approach to producing samples in the informed set X_{inf} for a wide range of settings. Our main insight is to recast this problem as one of sampling uniformly within the sub-level set of an implicit non-convex function. This recasting enables us to apply Monte Carlo sampling methods, used very effectively in the machine learning and optimization communities, to solve our problem. We show for a wide range of scenarios that using our sampler can accelerate the convergence rate to high-quality solutions in high-dimensional problems.
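
    The core computational step this abstract describes, drawing samples (approximately) uniformly from the sub-level set of a cost function once a solution of cost c_best is known, can be illustrated with a small random-walk Metropolis chain. The sketch below is a hypothetical Python toy, not the paper's planner: the cost function, start/goal, step size, and chain length are placeholder assumptions, and a practical sampler would need problem-specific proposals and mixing diagnostics.

```python
# Minimal sketch: Metropolis random walk targeting the uniform density on the
# informed sub-level set {x : cost(x) <= c_best}. All constants are illustrative.
import numpy as np

def mcmc_informed_samples(cost, c_best, x0, n_samples, step=0.1, rng=None):
    """Random-walk Metropolis; a proposal is accepted iff it stays in the sub-level set."""
    rng = np.random.default_rng() if rng is None else rng
    assert cost(x0) <= c_best, "the chain must start inside the informed set"
    x, out = np.asarray(x0, dtype=float), []
    while len(out) < n_samples:
        proposal = x + step * rng.standard_normal(x.shape)
        # Uniform target restricted to the sub-level set: accept only inside points.
        if cost(proposal) <= c_best:
            x = proposal
        out.append(x.copy())
    return np.array(out)

if __name__ == "__main__":
    # Toy non-convex "cost": path length through x plus a wavy penalty (illustrative only).
    start, goal = np.zeros(2), np.array([3.0, 0.0])
    cost = lambda x: (np.linalg.norm(x - start) + np.linalg.norm(goal - x)
                      + 0.5 * np.sin(x[0]) ** 2)
    samples = mcmc_informed_samples(cost, c_best=4.0, x0=np.array([1.5, 0.0]), n_samples=500)
    print(samples.mean(axis=0))
```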

    fMRI activation detection with EEG priors

    Get PDF
    The purpose of brain mapping techniques is to advance the understanding of the relationship between structure and function in the human brain in so-called activation studies. In this work, an advanced statistical model for combining functional magnetic resonance imaging (fMRI) and electroencephalography (EEG) recordings is developed to fuse complementary information about the location of neuronal activity. More precisely, a new Bayesian method is proposed for enhancing fMRI activation detection through EEG-based spatial prior information in stimulus-based experimental paradigms. That is, we model and analyse stimulus influence by a spatial Bayesian variable selection scheme, and extend existing high-dimensional regression methods by incorporating prior information on the binary selection indicators via a latent probit regression with either a spatially-varying or a constant EEG effect. Spatially-varying effects are regularized by intrinsic Markov random field priors. Inference is based on a fully Bayesian Markov chain Monte Carlo (MCMC) approach. Whether the proposed algorithm is able to increase the sensitivity of fMRI-only models is examined in both a real-world application and a simulation study. We observed that carefully selected EEG prior information additionally increases sensitivity in activation regions that have been distorted by a low signal-to-noise ratio.
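
    To make the role of an EEG-driven prior concrete, the hypothetical Python sketch below Gibbs-samples binary activation indicators on a 1-D voxel lattice: a probit-style prior probability driven by an EEG score, a simple Ising-type neighbour coupling standing in for the intrinsic Markov random field prior, and a per-voxel Gaussian likelihood ratio standing in for the full fMRI regression model. Every likelihood, constant, and variable name here is an illustrative assumption, not the paper's model.

```python
import numpy as np
from scipy.stats import norm

def gibbs_activation(y, eeg, n_iter=2000, eeg_effect=1.5, smooth=0.5, rng=None):
    """Gibbs sampler for binary activation indicators on a 1-D voxel lattice (toy model)."""
    rng = np.random.default_rng() if rng is None else rng
    v = y.shape[0]
    gamma = np.zeros(v)                                    # current activation indicators
    counts = np.zeros(v)
    # Per-voxel log-likelihood ratio: "active" N(1, 1) vs "inactive" N(0, 1) summary statistic.
    llr = norm.logpdf(y, loc=1.0) - norm.logpdf(y, loc=0.0)
    # Probit-style prior probability driven by the EEG score (constant EEG effect).
    p_eeg = norm.cdf(-1.0 + eeg_effect * eeg)
    prior_logodds = np.log(p_eeg) - np.log1p(-p_eeg)
    for _ in range(n_iter):
        for j in range(v):
            left = gamma[j - 1] if j > 0 else 0.0
            right = gamma[j + 1] if j < v - 1 else 0.0
            # Ising-type spatial coupling: reward agreement with active neighbours.
            logit = prior_logodds[j] + smooth * (left + right - 1.0) + llr[j]
            gamma[j] = rng.random() < 1.0 / (1.0 + np.exp(-logit))
        counts += gamma
    return counts / n_iter                                 # posterior activation probabilities

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    truth = (rng.random(100) < 0.3).astype(float)
    y = truth + rng.standard_normal(100)                   # fMRI-like summary statistics
    eeg = truth + 0.5 * rng.standard_normal(100)           # EEG score, correlated with truth
    print(gibbs_activation(y, eeg, rng=rng)[:10].round(2))
```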

    Discussion of "Geodesic Monte Carlo on Embedded Manifolds"

    Full text link
    Contributed discussion and rejoinder to "Geodesic Monte Carlo on Embedded Manifolds" (arXiv:1301.6064). To appear in the Scandinavian Journal of Statistics. 18 pages.

    Segmentation of skin lesions in 2D and 3D ultrasound images using a spatially coherent generalized Rayleigh mixture model

    Get PDF
    This paper addresses the problem of jointly estimating the statistical distribution and segmenting lesions in multiple-tissue high-frequency skin ultrasound images. The distribution of multiple-tissue images is modeled as a spatially coherent finite mixture of heavy-tailed Rayleigh distributions. Spatial coherence inherent to biological tissues is modeled by enforcing local dependence between the mixture components. An original Bayesian algorithm combined with a Markov chain Monte Carlo method is then proposed to jointly estimate the mixture parameters and a label vector associating each voxel with a tissue. More precisely, a hybrid Metropolis-within-Gibbs sampler is used to draw samples that are asymptotically distributed according to the posterior distribution of the Bayesian model. The Bayesian estimators of the model parameters are then computed from the generated samples. Simulations are conducted on synthetic data to illustrate the performance of the proposed estimation strategy. The method is then successfully applied to the segmentation of in vivo skin tumors in high-frequency 2-D and 3-D ultrasound images.
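
    The following hypothetical Python skeleton shows the Metropolis-within-Gibbs pattern the abstract refers to, for a two-component mixture and with two simplifications flagged in the comments: it uses the standard Rayleigh distribution rather than the heavy-tailed variant, and it omits the spatial coupling of the labels. Scales, priors, and step sizes are illustrative assumptions.

```python
import numpy as np
from scipy.stats import rayleigh

def mwg_rayleigh_mixture(x, n_iter=2000, rng=None):
    """Metropolis-within-Gibbs for a two-component Rayleigh mixture (toy, no spatial prior)."""
    rng = np.random.default_rng() if rng is None else rng
    sigma = np.array([0.5 * x.mean(), 1.5 * x.mean()])      # component scale parameters
    w = np.array([0.5, 0.5])                                # mixture weights
    for _ in range(n_iter):
        # Gibbs step: draw labels from their full conditional (no spatial coupling here).
        lik = w * rayleigh.pdf(x[:, None], scale=sigma)
        z = (rng.random(x.size) < lik[:, 1] / lik.sum(axis=1)).astype(int)
        # Gibbs step: weights from a Beta full conditional (uniform Dirichlet prior).
        w1 = rng.beta(1 + (z == 1).sum(), 1 + (z == 0).sum())
        w = np.array([1.0 - w1, w1])
        # Metropolis step: random walk on log(sigma_k); with a flat prior on log(sigma_k)
        # the acceptance ratio reduces to the likelihood ratio.
        for k in range(2):
            prop = sigma[k] * np.exp(0.1 * rng.standard_normal())
            xk = x[z == k]
            log_ratio = (rayleigh.logpdf(xk, scale=prop).sum()
                         - rayleigh.logpdf(xk, scale=sigma[k]).sum())
            if np.log(rng.random()) < log_ratio:
                sigma[k] = prop
    return sigma, w, z

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    data = np.concatenate([rayleigh.rvs(scale=1.0, size=400, random_state=2),
                           rayleigh.rvs(scale=3.0, size=200, random_state=3)])
    sigma, w, _ = mwg_rayleigh_mixture(data, rng=rng)
    print("scales:", sigma.round(2), "weights:", w.round(2))
```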

    Learning-based attacks in cyber-physical systems

    Get PDF
    We introduce the problem of learning-based attacks in a simple abstraction of cyber-physical systems: the case of a discrete-time, linear, time-invariant plant that may be subject to an attack that overrides the sensor readings and the controller actions. The attacker attempts to learn the dynamics of the plant and subsequently override the controller's actuation signal, to destroy the plant without being detected. The attacker can feed fictitious sensor readings to the controller using its estimate of the plant dynamics and mimic the legitimate plant operation. The controller, on the other hand, is constantly on the lookout for an attack; once the controller detects an attack, it immediately shuts the plant off. In the case of scalar plants, we derive an upper bound on the attacker's deception probability for any measurable control policy when the attacker uses an arbitrary learning algorithm to estimate the system dynamics. We then derive lower bounds on the attacker's deception probability for both scalar and vector plants by assuming a specific authentication test that inspects the empirical variance of the system disturbance. We also show how the controller can improve the security of the system by superimposing a carefully crafted privacy-enhancing signal on top of the "nominal control policy." Finally, for nonlinear scalar dynamics that belong to a Reproducing Kernel Hilbert Space (RKHS), we investigate the performance of attacks based on nonlinear Gaussian process (GP) learning algorithms.
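
    As an illustration of the variance-based authentication idea mentioned above, the hypothetical Python sketch below reconstructs the disturbance sequence implied by the reported state and applied control for a scalar plant x_{t+1} = a x_t + u_t + w_t, and flags an attack when the empirical disturbance variance drifts too far from its nominal value. The plant gain, controller, and threshold are illustrative assumptions, not the paper's test or bounds.

```python
import numpy as np

def variance_test(x_reported, u_applied, a, sigma2, threshold=0.3):
    """Return True if the disturbance implied by the reported states is inconsistent with sigma2."""
    x = np.asarray(x_reported, dtype=float)
    u = np.asarray(u_applied, dtype=float)
    # Disturbance implied by the scalar dynamics x_{t+1} = a x_t + u_t + w_t.
    w = x[1:] - a * x[:-1] - u[:-1]
    return abs(w.var() - sigma2) > threshold * sigma2

if __name__ == "__main__":
    rng = np.random.default_rng(1)
    a, sigma2, T = 0.9, 1.0, 500
    x, u = np.zeros(T), np.zeros(T)
    for t in range(T - 1):
        u[t] = -a * x[t]                                    # simple stabilising controller
        x[t + 1] = a * x[t] + u[t] + rng.normal(scale=np.sqrt(sigma2))
    print("attack flagged on a legitimate run:", variance_test(x, u, a, sigma2))
```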

    Approximate Inference for Constructing Astronomical Catalogs from Images

    Full text link
    We present a new, fully generative model for constructing astronomical catalogs from optical telescope image sets. Each pixel intensity is treated as a random variable with parameters that depend on the latent properties of stars and galaxies. These latent properties are themselves modeled as random. We compare two procedures for posterior inference. One procedure is based on Markov chain Monte Carlo (MCMC) while the other is based on variational inference (VI). The MCMC procedure excels at quantifying uncertainty, while the VI procedure is 1000 times faster. On a supercomputer, the VI procedure efficiently uses 665,000 CPU cores to construct an astronomical catalog from 50 terabytes of images in 14.6 minutes, demonstrating the scaling characteristics necessary to construct catalogs for upcoming astronomical surveys. Accepted to the Annals of Applied Statistics.
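
    A hypothetical one-parameter toy version of such a pixel-level generative model can contrast the two inference procedures: random-walk MCMC over a latent brightness versus a fast mode-based (Laplace-style) fit standing in for the paper's variational inference. The point-spread function, prior, and data in the Python sketch below are illustrative assumptions only.

```python
import numpy as np
from scipy.stats import poisson, norm
from scipy.optimize import minimize_scalar

rng = np.random.default_rng(2)
psf = np.array([0.05, 0.2, 0.5, 0.2, 0.05])        # fixed, illustrative point-spread function
true_brightness = 40.0
pixels = rng.poisson(true_brightness * psf)          # observed Poisson pixel counts

def log_post(b):
    """Log posterior of a single star's brightness given the pixel counts."""
    if b <= 0:
        return -np.inf
    return poisson.logpmf(pixels, b * psf).sum() + norm.logpdf(np.log(b), 3.0, 1.0)

# Procedure 1: random-walk Metropolis over the brightness.
b, chain = 20.0, []
for _ in range(5000):
    prop = b + rng.normal(scale=2.0)
    if np.log(rng.random()) < log_post(prop) - log_post(b):
        b = prop
    chain.append(b)

# Procedure 2: fast mode-based fit (a Laplace-style stand-in for variational inference).
fit = minimize_scalar(lambda v: -log_post(v), bounds=(1.0, 200.0), method="bounded")
print("MCMC posterior mean:", round(np.mean(chain[1000:]), 1), "mode-based fit:", round(fit.x, 1))
```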

    New product development in an emerging economy: analysing the role of supplier involvement practices by using Bayesian Markov chain Monte Carlo technique

    Get PDF
    The research question is whether the positive relationship found between supplier involvement practices and new product development performance in developed economies also holds in emerging economies. The role of supplier involvement practices in new product development performance has yet to be substantially investigated in emerging economies other than China. This premise was examined by distributing a survey instrument (Jayaram's (2008) published instrument, previously used in developed economies) to Malaysian manufacturing companies. To gauge the relationship between supplier involvement practices and new product development (NPD) project performance across 146 companies, structural equation modelling was adopted. Our findings show that supplier involvement practices have a significant positive impact on NPD project performance in an emerging economy with respect to quality, design, cost, and time-to-market objectives. Further analysis using a Bayesian Markov chain Monte Carlo algorithm, which yields a more credible and feasible differentiation, confirmed these results for an emerging economy and indicated that these practices explain 28% of the variance in NPD project performance. This considerable effect implies that supplier involvement is a must-have, although further research is needed to identify the contingencies for its practices.
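
    As a hypothetical illustration of what a Bayesian MCMC analysis of such survey data might look like, the Python sketch below Gibbs-samples a linear regression of an NPD performance score on practice scores and reports the share of variance explained by the posterior-mean fit. The simulated data, priors, and coefficients are placeholders, not the study's Malaysian sample or structural equation model.

```python
import numpy as np

def gibbs_regression(X, y, n_iter=5000, tau2=10.0, a0=2.0, b0=2.0, rng=None):
    """Gibbs sampler for Bayesian linear regression with N(0, tau2 I) and Inverse-Gamma priors."""
    rng = np.random.default_rng() if rng is None else rng
    n, p = X.shape
    beta, sigma2, draws = np.zeros(p), 1.0, []
    for _ in range(n_iter):
        # beta | sigma2, y  ~  N(mu, V)
        V = np.linalg.inv(X.T @ X / sigma2 + np.eye(p) / tau2)
        mu = V @ (X.T @ y / sigma2)
        beta = rng.multivariate_normal(mu, V)
        # sigma2 | beta, y  ~  Inverse-Gamma
        resid = y - X @ beta
        sigma2 = 1.0 / rng.gamma(a0 + n / 2, 1.0 / (b0 + 0.5 * resid @ resid))
        draws.append(beta)
    return np.array(draws)

if __name__ == "__main__":
    rng = np.random.default_rng(3)
    n = 146                                     # sample size quoted in the abstract
    X = np.column_stack([np.ones(n), rng.normal(size=(n, 3))])   # intercept + 3 practice scores
    y = X @ np.array([0.5, 0.4, 0.3, 0.2]) + rng.normal(scale=1.0, size=n)
    beta_hat = gibbs_regression(X, y, rng=rng)[1000:].mean(axis=0)
    r2 = 1 - np.var(y - X @ beta_hat) / np.var(y)               # share of explained variance
    print("posterior-mean coefficients:", beta_hat.round(2), "R^2:", round(r2, 2))
```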

    Bayesian inference and non-linear extensions of the CIRCE method for quantifying the uncertainty of closure relationships integrated into thermal-hydraulic system codes

    Full text link
    Uncertainty quantification of closure relationships integrated into thermal-hydraulic system codes is a critical prerequisite in applying the Best-Estimate Plus Uncertainty (BEPU) methodology for nuclear safety and licensing processes. The purpose of the CIRCE method is to estimate the (log-)Gaussian probability distribution of a multiplicative factor applied to a reference closure relationship in order to assess its uncertainty. Even though this method has been implemented with success in numerous physical scenarios, it still suffers from substantial limitations, such as the linearity assumption and the difficulty of properly taking into account the inherent statistical uncertainty. In this paper, we extend the CIRCE method in two respects. On the one hand, we adopt a Bayesian setting, putting prior probability distributions on the parameters of the (log-)Gaussian distribution. The posterior distribution of the parameters is then computed with respect to an experimental database by means of Markov chain Monte Carlo (MCMC) algorithms. On the other hand, we tackle the more general setting where the simulations do not vary linearly with the multiplicative factor(s). MCMC algorithms become time-prohibitive when the thermal-hydraulic simulations exceed a few minutes. This handicap is overcome by using Gaussian process (GP) emulators, which yield both reliable and fast predictions of the simulations. The GP-based MCMC algorithms are applied to quantify the uncertainty of two condensation closure relationships at a safety injection with respect to a database of experimental tests. The thermal-hydraulic simulations are run with the CATHARE 2 computer code. 37 pages, 5 figures.
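
    The GP-emulator idea can be sketched in a hypothetical Python toy: a handful of runs of a stand-in for the expensive code train a Gaussian-process surrogate over the multiplicative factor, and a random-walk MCMC then targets the posterior of the factor's log-normal parameters, marginalising the factor by simple Monte Carlo through the emulator. The toy simulator, priors, noise level, and the use of scikit-learn's GaussianProcessRegressor are assumptions for illustration only, not the CIRCE implementation or CATHARE 2.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF
from scipy.stats import norm

rng = np.random.default_rng(4)

def expensive_code(alpha):                       # stand-in for a slow thermal-hydraulic run
    return 10.0 * np.tanh(alpha) + 0.5 * alpha ** 2

# 1) Train the emulator on a handful of code runs over the multiplicative factor.
alpha_train = np.linspace(0.2, 3.0, 8)[:, None]
gp = GaussianProcessRegressor(kernel=RBF(length_scale=1.0), normalize_y=True)
gp.fit(alpha_train, expensive_code(alpha_train.ravel()))

# 2) Synthetic "experimental" data: the true factor is log-normal around log(1.2).
obs = expensive_code(rng.lognormal(np.log(1.2), 0.2, size=20)) + rng.normal(0, 0.3, 20)

z = rng.standard_normal(200)                     # fixed draws for the Monte Carlo marginalisation

def log_post(mu, log_s):
    """Log posterior of the log-normal parameters, marginalising the factor via the emulator."""
    s = np.exp(log_s)
    alphas = np.exp(mu + s * z)
    preds = gp.predict(alphas[:, None])
    like = norm.pdf(obs[:, None], loc=preds[None, :], scale=0.3).mean(axis=1)
    return np.log(like + 1e-300).sum() + norm.logpdf(mu, 0, 1) + norm.logpdf(log_s, -1, 1)

theta = np.array([0.0, -1.0])                    # (mu, log sigma) of the log-normal factor
for _ in range(2000):                            # random-walk Metropolis on the parameters
    prop = theta + 0.05 * rng.standard_normal(2)
    if np.log(rng.random()) < log_post(*prop) - log_post(*theta):
        theta = prop
print("final sample of (mu, sigma):", round(theta[0], 2), round(np.exp(theta[1]), 2))
```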