
    Robot docking using mixtures of Gaussians

    This paper applies the Mixture of Gaussians probabilistic model, combined with Expectation Maximization optimization, to the task of summarizing three-dimensional range data for a mobile robot. This provides a flexible way of dealing with uncertainties in sensor information, and allows the introduction of prior knowledge into low-level perception modules. Problems with the basic approach were solved in two ways: the mixture of Gaussians was reparameterized to reflect the types of objects expected in the scene, and priors on model parameters were included in the optimization process. Both approaches force the optimization to find 'interesting' objects, given the sensor and object characteristics. A higher-level classifier was used to interpret the results provided by the model, and to reject spurious solutions.
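    As a minimal sketch of the basic idea (a plain EM-fitted Gaussian mixture, not the paper's reparameterized model or its parameter priors), the snippet below summarizes synthetic 3-D range points with two Gaussian components; the scene layout and component count are assumptions for illustration.

    ```python
    # Sketch: summarize synthetic 3-D range points with a Gaussian mixture fitted by EM.
    import numpy as np
    from sklearn.mixture import GaussianMixture

    rng = np.random.default_rng(0)
    # Hypothetical scene: a flat "wall" cluster and a compact "object" cluster.
    wall = rng.normal(loc=[0.0, 0.0, 2.0], scale=[1.0, 1.0, 0.02], size=(500, 3))
    obj = rng.normal(loc=[0.5, -0.3, 1.0], scale=0.05, size=(100, 3))
    points = np.vstack([wall, obj])

    # EM fit; n_components is a guess at how many scene structures to summarize.
    gmm = GaussianMixture(n_components=2, covariance_type="full", random_state=0)
    gmm.fit(points)

    print("component means:\n", gmm.means_)
    print("component weights:", gmm.weights_)
    ```

    Each fitted component's mean and covariance then act as a compact summary of one scene structure, which a higher-level classifier could accept or reject.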

    Optimization and Abstraction: A Synergistic Approach for Analyzing Neural Network Robustness

    In recent years, the notion of local robustness (or robustness for short) has emerged as a desirable property of deep neural networks. Intuitively, robustness means that small perturbations to an input do not cause the network to perform misclassifications. In this paper, we present a novel algorithm for verifying robustness properties of neural networks. Our method synergistically combines gradient-based optimization methods for counterexample search with abstraction-based proof search to obtain a sound and (δ-)complete decision procedure. Our method also employs a data-driven approach to learn a verification policy that guides abstract interpretation during proof search. We have implemented the proposed approach in a tool called Charon and experimentally evaluated it on hundreds of benchmarks. Our experiments show that the proposed approach significantly outperforms three state-of-the-art tools, namely AI^2, Reluplex, and Reluval.
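    The sketch below illustrates only the counterexample-search half of such a procedure: a single signed-gradient (FGSM-style) step searching for a misclassifying perturbation inside an L-infinity ball. The toy network, the radius, and the one-step search are assumptions for illustration; the abstraction-based proof search and learned verification policy described in the paper are not shown.

    ```python
    # Sketch: gradient-based counterexample search for local robustness of a toy network.
    import torch
    import torch.nn as nn

    torch.manual_seed(0)
    net = nn.Sequential(nn.Linear(4, 16), nn.ReLU(), nn.Linear(16, 3))  # toy model

    x = torch.randn(1, 4)                 # input under analysis
    label = net(x).argmax(dim=1)          # class the network currently predicts
    eps = 0.1                             # assumed robustness radius

    x_adv = x.clone().requires_grad_(True)
    loss = nn.functional.cross_entropy(net(x_adv), label)
    loss.backward()
    # One signed-gradient step, kept inside the eps-ball around x.
    x_adv = (x + eps * x_adv.grad.sign()).detach()

    if net(x_adv).argmax(dim=1) != label:
        print("counterexample found: not locally robust at radius", eps)
    else:
        print("no counterexample from this search (robustness not yet proved)")
    ```

    A failed search proves nothing by itself, which is why a complete procedure pairs it with a sound proof-search component.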

    Using Prior Knowledge and Learning from Experience in Estimation of Distribution Algorithms

    Estimation of distribution algorithms (EDAs) are stochastic optimization techniques that explore the space of potential solutions by building and sampling explicit probabilistic models of promising candidate solutions. One of the primary advantages of EDAs over many other stochastic optimization techniques is that after each run they leave behind a sequence of probabilistic models describing useful decompositions of the problem. This sequence of models can be seen as a roadmap of how the EDA solves the problem. While this roadmap holds a great deal of information about the problem, until recently this information has largely been ignored. My thesis is that it is possible to exploit this information to speed up problem solving in EDAs in a principled way. The main contribution of this dissertation will be to show that there are multiple ways to exploit this problem-specific knowledge. Most importantly, it can be done in a principled way such that these methods lead to substantial speedups without requiring parameter tuning or hand-inspection of models.
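    As a minimal sketch of the build-and-sample loop an EDA performs (a univariate, UMDA-style model on the OneMax toy problem, chosen here for illustration; the dissertation's techniques for reusing the learned models are not shown), the sequence of `p` vectors produced below is exactly the kind of model trace the abstract refers to.

    ```python
    # Sketch: univariate EDA (UMDA-style) on OneMax.
    import numpy as np

    rng = np.random.default_rng(0)
    n_bits, pop_size, n_select, n_gens = 30, 100, 50, 40
    p = np.full(n_bits, 0.5)                                    # model: P(bit = 1)

    for gen in range(n_gens):
        pop = (rng.random((pop_size, n_bits)) < p).astype(int)  # sample the model
        fitness = pop.sum(axis=1)                                # OneMax fitness
        best = pop[np.argsort(fitness)[-n_select:]]              # keep promising solutions
        p = best.mean(axis=0).clip(0.05, 0.95)                   # rebuild the model

    print("final model P(bit=1):", np.round(p, 2))
    print("best fitness:", fitness.max(), "of", n_bits)
    ```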