    Picture: A Probabilistic Programming Language for Scene Perception

    Recent progress in probabilistic modeling and statistical learning, coupled with the availability of large training datasets, has led to remarkable progress in computer vision. Generative probabilistic models, or “analysis-by-synthesis” approaches, can capture rich scene structure but have been less widely applied than their discriminative counterparts, as they often require considerable problem-specific engineering in modeling and inference, and inference is typically seen as requiring slow, hypothesize-and-test Monte Carlo methods. Here we present Picture, a probabilistic programming language for scene understanding that allows researchers to express complex generative vision models, while automatically solving them using fast general-purpose inference machinery. Picture provides a stochastic scene language that can express generative models for arbitrary 2D/3D scenes, as well as a hierarchy of representation layers for comparing scene hypotheses with observed images by matching not simply pixels, but also more abstract features (e.g., contours, deep neural network activations). Inference can flexibly integrate advanced Monte Carlo strategies with fast bottom-up data-driven methods. Thus both representations and inference strategies can build directly on progress in discriminatively trained systems to make generative vision more robust and efficient. We use Picture to write programs for 3D face analysis, 3D human pose estimation, and 3D object reconstruction, each competitive with specially engineered baselines.
    Funding: Norman B. Leventhal Fellowship; United States Office of Naval Research (Award N000141310333); United States Army Research Office, Multidisciplinary University Research Initiative (W911NF-13-1-2012); National Science Foundation (U.S.) Science and Technology Centers (Center for Brains, Minds and Machines, Award CCF-1231216).
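    The hypothesize-and-test loop that Picture automates can be sketched in a few lines. The following is not Picture code; it is a hypothetical Python illustration of analysis-by-synthesis inference, in which a placeholder render function and a simple Gaussian pixel likelihood stand in for Picture's stochastic scene language and its feature-level comparison layers.

        import numpy as np

        def analysis_by_synthesis(observed, render, log_prior, theta0,
                                  n_iters=1000, step=0.05):
            # Metropolis-Hastings over scene parameters: propose, render, score.
            # `render` maps a parameter vector to a synthesized image; the
            # Gaussian pixel likelihood here is a stand-in for richer feature
            # comparisons (contours, deep-network activations).
            def log_post(theta):
                diff = render(theta) - observed
                return log_prior(theta) - 0.5 * np.sum(diff ** 2)

            theta = theta0.copy()
            lp = log_post(theta)
            for _ in range(n_iters):
                proposal = theta + step * np.random.randn(*theta.shape)
                lp_new = log_post(proposal)
                if np.log(np.random.rand()) < lp_new - lp:  # accept/reject
                    theta, lp = proposal, lp_new
            return theta

    Per the abstract, Picture's contribution is to combine such Monte Carlo moves with fast bottom-up, data-driven proposals; a plain random-walk proposal like the one above is exactly the slow baseline it aims to improve on.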

    A Conversation with Eugenio Regazzini

    Eugenio Regazzini was born on August 12, 1946 in Cremona (Italy), and took his degree in 1969 at the University "L. Bocconi" of Milano. He has held positions at the universities of Torino, Bologna and Milano, and at the University "L. Bocconi" as assistant professor and lecturer from 1974 to 1980, and then as professor since 1980. He is currently professor of probability and mathematical statistics at the University of Pavia. In the periods 1989-2001 and 2006-2009 he was head of the Institute for Applications of Mathematics and Computer Science of the Italian National Research Council (C.N.R.) in Milano and head of the Department of Mathematics at the University of Pavia, respectively. For twelve years between 1989 and 2006, he served as a member of the Scientific Board of the Italian Mathematical Union (U.M.I.). In 2007, he was elected Fellow of the IMS and, in 2001, Fellow of the "Istituto Lombardo - Accademia di Scienze e Lettere." His research activity in probability and statistics has covered a wide spectrum of topics, including finitely additive probabilities, foundations of the Bayesian paradigm, exchangeability and partial exchangeability, distributions of functionals of random probability measures, stochastic integration, and the history of probability and statistics. Overall, he has been one of the most authoritative developers of de Finetti's legacy. In the last five years, he has extended his scientific interests to probabilistic methods in mathematical physics; in particular, he has studied the asymptotic behavior of solutions of equations of interest in the kinetic theory of gases. The present interview was conducted on the occasion of his 65th birthday.
    Comment: Published at http://dx.doi.org/10.1214/11-STS362 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Distinct counting with a self-learning bitmap

    Counting the number of distinct elements (cardinality) in a dataset is a fundamental problem in database management. In recent years, driven by many modern applications, there has been significant interest in addressing the distinct counting problem in a data stream setting, where each incoming data item can be seen only once and cannot be stored for long periods of time. Many probabilistic approaches based on either sampling or sketching have been proposed in the computer science literature that require only limited computing and memory resources. However, the performance of these methods is not scale-invariant, in the sense that their relative root mean square estimation error (RRMSE) depends on the unknown cardinality. This is undesirable in many applications where cardinalities can be very dynamic or inhomogeneous and many cardinalities need to be estimated. In this paper, we develop a novel approach, called the self-learning bitmap (S-bitmap), that is scale-invariant for cardinalities in a specified range. S-bitmap uses a binary vector whose entries are updated from 0 to 1 by an adaptive sampling process for inferring the unknown cardinality, where the sampling rates are reduced sequentially as more and more entries change from 0 to 1. We prove rigorously that the S-bitmap estimate is not only unbiased but scale-invariant. We demonstrate that to achieve a small RRMSE value of ϵ or less, our approach requires significantly less memory and similar or fewer operations than state-of-the-art methods across many cardinality scales common in practice. Both simulation and experimental studies are reported.
    Comment: Journal of the American Statistical Association (accepted).
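    The adaptive sampling idea can be made concrete with a small sketch. The code below is a simplified illustration, not the paper's exact construction: the geometric decay of the sampling rate and the waiting-time estimator are assumptions made for brevity, whereas the paper derives the rate schedule that makes the estimate provably unbiased and scale-invariant.

        import hashlib

        class SimpleSBitmap:
            # Simplified self-learning bitmap: entries flip from 0 to 1 via an
            # adaptive sampling process whose rate drops as the bitmap fills.
            def __init__(self, m=1024, r=0.99):
                self.m = m         # bitmap size
                self.r = r         # assumed per-fill decay of the sampling rate
                self.bits = [0] * m
                self.k = 0         # number of entries currently set to 1

            def _hash(self, item):
                h = hashlib.sha1(str(item).encode()).digest()
                bucket = int.from_bytes(h[:4], "big") % self.m
                u = int.from_bytes(h[4:8], "big") / 2 ** 32  # deterministic "uniform" in [0, 1)
                return bucket, u

            def add(self, item):
                bucket, u = self._hash(item)
                if self.bits[bucket] == 0 and u < self.r ** self.k:
                    self.bits[bucket] = 1
                    self.k += 1

            def estimate(self):
                # Sum of expected numbers of distinct items needed to set each
                # successive bit (a waiting-time argument; illustrative only).
                return sum(self.m / ((self.r ** j) * (self.m - j))
                           for j in range(self.k))

    Because both the bucket and the sampling decision are derived from a hash of the item itself, duplicates can never flip additional bits, which is what makes the structure a distinct counter rather than a plain counter.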

    A Bayesian approach to data-driven discovery of nonlinear dynamic equations

    Dynamic systems parameterized by differential equations are used to represent a variety of real-world processes. The equations used to describe these processes are generally derived from physical principles and a scientific understanding of the process. Statisticians have embedded these physically inspired differential equations into probabilistic frameworks, providing uncertainty quantification for parameter estimates and model specification. These statistical models typically rely on a predefined differential equation or class of models to represent the dynamics of the system. Recently, methods have been developed to discover the governing equations of complex systems. However, these approaches rarely account for uncertainty in the discovered equations, and when uncertainty is accounted for, it is not for the complete system. This dissertation begins with a statistical model for the seasonal temperature cycle over North America, where the dynamics of the system are parameterized by a specified functional form. The model highlights how the seasonal cycle is changing in space and time, motivating the need to better understand the driving mechanisms of such systems. Then, a statistical approach to data-driven discovery is proposed, in which uncertainty is incorporated throughout the complete modeling process. The novelty of the approach is that the dynamics are treated as a random process, which has not been considered previously in the data-driven discovery literature. The proposed approach sits at the junction between the statistical practice of embedding dynamic equations in a probabilistic framework and the data-driven discovery methods proposed in computer science, physics, and applied mathematics. The proposed method is put into context within the broader literature, highlighting its contribution to the field of data-driven discovery.
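    Although the dissertation's specific contribution is Bayesian, treating the dynamics as a random process, the generic data-driven discovery step it builds on is commonly implemented as sparse regression over a library of candidate terms (as in SINDy). The sketch below illustrates that generic step only; the polynomial library and thresholding rule are assumptions, and nothing here captures the dissertation's uncertainty quantification.

        import numpy as np

        def discover_dynamics(X, dXdt, threshold=0.1, n_sweeps=10):
            # Sequential thresholded least squares over a polynomial library:
            # fit dX/dt = Theta(X) @ Xi, then repeatedly zero out small
            # coefficients and refit, leaving a sparse dynamic equation.
            # X and dXdt are (T x d) arrays of states and derivatives.
            T, d = X.shape
            cols = [np.ones(T)] + [X[:, i] for i in range(d)]
            cols += [X[:, i] * X[:, j] for i in range(d) for j in range(i, d)]
            Theta = np.column_stack(cols)

            Xi = np.linalg.lstsq(Theta, dXdt, rcond=None)[0]
            for _ in range(n_sweeps):
                small = np.abs(Xi) < threshold
                Xi[small] = 0.0
                for k in range(d):               # refit each equation on survivors
                    big = ~small[:, k]
                    if big.any():
                        Xi[big, k] = np.linalg.lstsq(Theta[:, big], dXdt[:, k],
                                                     rcond=None)[0]
            return Xi  # one sparse coefficient column per state dimension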

    Capacitated vehicle routing system applying Monte Carlo methods

    The Vehicle Routing Problem (VRP) is one of the most studied combinatorial optimization problems in computer science and is of great relevance to logistics and transport. This paper presents a new algorithm for solving the Capacitated Vehicle Routing Problem (CVRP) using Monte Carlo methods, which are statistical methods that use random sampling to solve probabilistic and deterministic problems. The proposed algorithm combines Monte Carlo simulation with the Clarke and Wright savings heuristic. It achieves results comparable to the best existing algorithms in the literature and improves on previous work based on Monte Carlo methods. The comparison, analysis, and evaluation of the algorithm were based on established benchmarks from the literature.
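    One way to combine Monte Carlo sampling with the Clarke and Wright savings heuristic, in the spirit of (though not identical to) the paper's algorithm, is to repeat a randomized savings construction many times and keep the best feasible solution. In the sketch below, the merge order is sampled with probabilities increasing in the savings value via a Gumbel-perturbed sort; the weighting scheme, the temperature parameter and the number of trials are all assumptions.

        import math, random

        def savings_monte_carlo(dist, demand, capacity, n_trials=200, temp=1.0):
            # Randomized Clarke-Wright savings for the CVRP; node 0 is the depot.
            n = len(demand)
            pairs = [(i, j) for i in range(1, n) for j in range(i + 1, n)]

            def route_cost(route):
                tour = [0] + route + [0]
                return sum(dist[a][b] for a, b in zip(tour, tour[1:]))

            best, best_cost = None, math.inf
            for _ in range(n_trials):
                # Gumbel-perturbed savings give a weighted random merge order;
                # `temp` should be scaled to the magnitude of the distances.
                scored = []
                for i, j in pairs:
                    s = dist[0][i] + dist[0][j] - dist[i][j]
                    u = max(random.random(), 1e-12)            # guard log(0)
                    scored.append((s / temp - math.log(-math.log(u)), i, j))
                scored.sort(reverse=True)                      # high savings first

                routes = {i: [i] for i in range(1, n)}         # one route per customer
                load = {i: demand[i] for i in range(1, n)}
                route_of = {i: i for i in range(1, n)}
                for _, i, j in scored:
                    ri, rj = route_of[i], route_of[j]
                    if ri == rj or load[ri] + load[rj] > capacity:
                        continue
                    a, b = routes[ri], routes[rj]
                    # Merge only when i and j sit at joinable route ends.
                    if a[-1] == i and b[0] == j:
                        merged = a + b
                    elif b[-1] == j and a[0] == i:
                        merged = b + a
                    elif a[0] == i and b[0] == j:
                        merged = a[::-1] + b
                    elif a[-1] == i and b[-1] == j:
                        merged = a + b[::-1]
                    else:
                        continue
                    routes[ri], load[ri] = merged, load[ri] + load[rj]
                    for c in merged:
                        route_of[c] = ri
                    del routes[rj], load[rj]

                cost = sum(route_cost(r) for r in routes.values())
                if cost < best_cost:
                    best, best_cost = [r[:] for r in routes.values()], cost
            return best, best_cost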

    Statistical Model Checking : An Overview

    Quantitative properties of stochastic systems are usually specified in logics that allow one to compare the measure of executions satisfying certain temporal properties against thresholds. The model checking problem for stochastic systems with respect to such logics is typically solved by a numerical approach that iteratively computes (or approximates) the exact measure of paths satisfying relevant subformulas; the algorithms themselves depend on the class of systems being analyzed as well as the logic used for specifying the properties. Another approach to solving the model checking problem is to simulate the system for finitely many runs, and to use hypothesis testing to infer whether the samples provide statistical evidence for the satisfaction or violation of the specification. In this short paper, we survey the statistical approach and outline its main advantages in terms of efficiency, uniformity, and simplicity.
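    The hypothesis-testing approach surveyed here is often instantiated with Wald's sequential probability ratio test (SPRT): simulate one run at a time and stop as soon as the accumulated evidence favors "the satisfaction probability is at least p1" or "at most p0". The sketch below is a generic illustration; the indifference region (p0, p1) and the error bounds alpha and beta are parameters the user must choose.

        import math

        def sprt(simulate_run, p0, p1, alpha=0.05, beta=0.05, max_runs=100_000):
            # Wald's SPRT deciding between H0: P(property) <= p0 and
            # H1: P(property) >= p1, with 0 < p0 < p1 < 1 (the indifference
            # region). `simulate_run` returns True iff one sampled execution
            # of the system satisfies the temporal property.
            upper = math.log((1 - beta) / alpha)   # accept H1 above this
            lower = math.log(beta / (1 - alpha))   # accept H0 below this
            llr = 0.0                              # running log-likelihood ratio
            for n in range(1, max_runs + 1):
                if simulate_run():
                    llr += math.log(p1 / p0)
                else:
                    llr += math.log((1 - p1) / (1 - p0))
                if llr >= upper:
                    return "H1", n                 # evidence of satisfaction
                if llr <= lower:
                    return "H0", n                 # evidence of violation
            return "undecided", max_runs

    The efficiency advantage mentioned in the abstract shows up here directly: the number of simulated runs n is determined by the evidence rather than fixed in advance, and no state-space representation of the system is ever built.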

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite their often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove certain precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and on its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification, but who lack expertise in formal verification or modelling.
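    A representative computation behind guarantees such as "the probability of both sensors failing simultaneously is less than 0.001" is reachability analysis on a probabilistic model. The sketch below computes, by value iteration, the probability of eventually reaching a set of target (e.g. failure) states in a discrete-time Markov chain; it is a generic illustration with an assumed dense-matrix input, not the interface of any particular verification tool.

        import numpy as np

        def reachability_probability(P, targets, tol=1e-10, max_iter=100_000):
            # Probability of eventually reaching `targets` from every state of
            # a discrete-time Markov chain with row-stochastic matrix P.
            # Value iteration from x = 0 converges to the least fixed point of
            # x = 1 on target states, x = P @ x elsewhere.
            n = P.shape[0]
            x = np.zeros(n)
            x[list(targets)] = 1.0
            rest = [s for s in range(n) if s not in targets]
            for _ in range(max_iter):
                x_new = x.copy()
                x_new[rest] = P[rest] @ x
                if np.max(np.abs(x_new - x)) < tol:
                    return x_new
                x = x_new
            return x

    For instance, with three states (initial, safe, failed) where the initial state moves to safe with probability 0.7 and to failed with probability 0.3, and both other states are absorbing, the result for the initial state is 0.3; a model checker would additionally compare such a value against the specified threshold.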