
    Optimization with Discrete Simultaneous Perturbation Stochastic Approximation Using Noisy Loss Function Measurements

    Discrete stochastic optimization considers the problem of minimizing (or maximizing) loss functions defined on discrete sets, where only noisy measurements of the loss functions are available. The discrete stochastic optimization problem is widely applicable in practice, and many algorithms have been proposed to solve it. Motivated by the efficient simultaneous perturbation stochastic approximation (SPSA) algorithm for continuous stochastic optimization problems, we introduce the middle-point discrete simultaneous perturbation stochastic approximation (DSPSA) algorithm for the stochastic optimization of a loss function defined on a p-dimensional grid of points in Euclidean space. We show that the sequence generated by DSPSA converges to the optimal point under certain conditions. Consistent with other stochastic approximation methods, DSPSA formally accommodates noisy measurements of the loss function. We also analyze the rate of convergence of DSPSA by deriving an upper bound on the mean squared error of the generated sequence. In order to compare the performance of DSPSA with other algorithms such as the stochastic ruler (SR) and stochastic comparison (SC) algorithms, we set up a bridge between DSPSA and the other two algorithms by comparing, in a big-O sense, the probability of not achieving the optimal solution. We present theoretical and numerical comparisons of DSPSA, SR, and SC. In addition, we consider an application of DSPSA to developing optimal public health strategies for containing the spread of influenza given limited societal resources.
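    To make the update rule concrete, the following is a minimal Python sketch of a middle-point DSPSA-style iteration for a noisy loss on the integer grid; the gain-sequence constants, the function names, and the quadratic test loss are illustrative assumptions rather than the paper's tuned settings.

```python
import numpy as np

def dspsa_minimize(loss, theta0, num_iters=1000, a=1.0, A=100, alpha=0.602, rng=None):
    """Sketch of a middle-point DSPSA-style iteration for minimizing a noisy loss
    over the integer grid. `loss` returns a noisy measurement at an integer-valued
    point; `theta0` is the (possibly non-integer) starting point. The gain-sequence
    constants (a, A, alpha) are illustrative choices only."""
    rng = np.random.default_rng() if rng is None else rng
    theta = np.asarray(theta0, dtype=float)
    p = theta.size
    for k in range(num_iters):
        a_k = a / (k + 1 + A) ** alpha                # decaying step size
        pi = np.floor(theta) + 0.5                    # middle point of the unit hypercube
        delta = rng.choice([-1.0, 1.0], size=p)       # Bernoulli +/-1 perturbation
        y_plus = loss(pi + delta / 2.0)               # both evaluation points are on the grid
        y_minus = loss(pi - delta / 2.0)
        g_hat = (y_plus - y_minus) * delta            # 1/delta_i = delta_i for +/-1 entries
        theta = theta - a_k * g_hat                   # stochastic-approximation update
    return np.round(theta).astype(int)                # report the nearest grid point

# Toy usage: noisy quadratic loss with optimum at the grid point (3, 3, 3, 3)
if __name__ == "__main__":
    rng = np.random.default_rng(0)
    noisy_loss = lambda x: np.sum((x - 3) ** 2) + rng.normal(scale=0.1)
    print(dspsa_minimize(noisy_loss, theta0=np.zeros(4), rng=rng))
```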

    Probabilistic Constraint Logic Programming

    This paper addresses two central problems for probabilistic processing models: parameter estimation from incomplete data and efficient retrieval of the most probable analyses. These questions have been answered satisfactorily only for probabilistic regular and context-free models. We address these problems for a more expressive probabilistic constraint logic programming model. We present a log-linear probability model for probabilistic constraint logic programming. On top of this model, we define an algorithm to estimate the parameters and to select the properties of log-linear models from incomplete data. This algorithm is an extension of the improved iterative scaling algorithm of Della Pietra, Della Pietra, and Lafferty (1995). Our algorithm applies to log-linear models in general and is accompanied by suitable approximation methods when applied to large data spaces. Furthermore, we present an approach for searching for the most probable analyses of the probabilistic constraint logic programming model. This method can be applied to the ambiguity resolution problem in natural language processing applications.
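    As a rough illustration of the underlying model class, the sketch below fits a log-linear distribution p(x) proportional to exp(lambda · f(x)) over a small finite set of candidate analyses. For simplicity it uses plain gradient ascent on the expected log-likelihood with complete data, rather than the extension of improved iterative scaling to incomplete data that the paper develops; the function name and the toy feature matrix are assumptions.

```python
import numpy as np

def fit_loglinear(features, target_counts, num_iters=500, lr=0.5):
    """Fit a log-linear model p(x) ~ exp(lambda . f(x)) over a finite candidate set.
    `features` is an (n_analyses, n_features) array of property counts f(x);
    `target_counts` are the empirical expected feature counts to reproduce.
    Uses gradient ascent: gradient = empirical minus model expectations."""
    F = np.asarray(features, dtype=float)
    lam = np.zeros(F.shape[1])
    for _ in range(num_iters):
        scores = F @ lam
        probs = np.exp(scores - scores.max())
        probs /= probs.sum()                    # model distribution over analyses
        expected = probs @ F                    # model expectations E_lambda[f_i]
        lam += lr * (target_counts - expected)  # move toward the empirical expectations
    return lam

# Toy example: three candidate analyses described by two binary properties
F = np.array([[1.0, 0.0], [0.0, 1.0], [1.0, 1.0]])
print(fit_loglinear(F, target_counts=np.array([0.7, 0.5])))
```

    Retrieving the most probable analysis under the fitted model then amounts to an argmax of the scores F @ lam over the candidate set.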

    Curriculum Guidelines for Undergraduate Programs in Data Science

    The Park City Math Institute (PCMI) 2016 Summer Undergraduate Faculty Program met for the purpose of composing guidelines for undergraduate programs in Data Science. The group consisted of 25 undergraduate faculty from a variety of institutions in the U.S., primarily from the disciplines of mathematics, statistics, and computer science. These guidelines are meant to provide some structure for institutions planning for or revising a major in Data Science.

    Play selection in football: a case study in neuro-dynamic programming

    Includes bibliographical references (p. 34-35). Supported by the US Army Research Office under grant AASERT-DAAH04-93-GD169. Stephen D. Patek, Dimitri P. Bertsekas.

    Nudging the particle filter

    We investigate a new sampling scheme aimed at improving the performance of particle filters whenever (a) there is a significant mismatch between the assumed model dynamics and the actual system, or (b) the posterior probability tends to concentrate in relatively small regions of the state space. The proposed scheme pushes some particles towards specific regions where the likelihood is expected to be high, an operation known as nudging in the geophysics literature. We re-interpret nudging in a form that can be applied to any particle filtering scheme, since it does not involve any changes in the rest of the algorithm. Because the particles are modified but the importance weights do not account for this modification, the use of nudging introduces additional bias in the resulting estimators. However, we prove analytically that nudged particle filters can still attain asymptotic convergence with the same error rates as conventional particle methods. A simple analysis also yields an alternative interpretation of the nudging operation that explains its robustness to model errors. Finally, we show numerical results that illustrate the improvements that can be attained using the proposed scheme. In particular, we present nonlinear tracking examples with synthetic data and a model inference example using real-world financial data.
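    As a concrete illustration of the idea, here is a minimal Python sketch of a bootstrap particle filter with a nudging step applied to a small fraction of the particles. The gradient-step form of the nudge, the function names, and the scalar linear-Gaussian example are assumptions made for illustration, not the paper's exact operators or experiments.

```python
import numpy as np

def nudged_bootstrap_pf(y_obs, x0_sampler, transition, loglik, grad_loglik,
                        n_particles=500, nudge_frac=0.1, nudge_step=0.05, rng=None):
    """Bootstrap particle filter with a simple gradient-nudging step: after
    propagation, a fraction of the particles is pushed a short distance up the
    observation log-likelihood before weighting. The importance weights are
    deliberately NOT corrected for the nudge, matching the scheme described above."""
    rng = np.random.default_rng() if rng is None else rng
    x = x0_sampler(n_particles, rng)                   # initial particle cloud
    n_nudge = int(nudge_frac * n_particles)
    means = []
    for y in y_obs:
        x = transition(x, rng)                         # propagate through the model dynamics
        idx = rng.choice(n_particles, size=n_nudge, replace=False)
        x[idx] += nudge_step * grad_loglik(y, x[idx])  # nudge a subset toward high likelihood
        logw = loglik(y, x)                            # weights ignore the nudging step
        w = np.exp(logw - logw.max())
        w /= w.sum()
        means.append(np.sum(w * x))                    # filtered state estimate
        x = x[rng.choice(n_particles, size=n_particles, p=w)]   # multinomial resampling
    return np.array(means)

# Toy scalar example: x_t = 0.9 x_{t-1} + noise, y_t = x_t + noise
if __name__ == "__main__":
    rng = np.random.default_rng(1)
    xs, ys = [0.0], []
    for _ in range(50):
        xs.append(0.9 * xs[-1] + rng.normal(scale=0.5))
        ys.append(xs[-1] + rng.normal(scale=0.5))
    est = nudged_bootstrap_pf(
        np.array(ys),
        x0_sampler=lambda n, r: r.normal(scale=1.0, size=n),
        transition=lambda x, r: 0.9 * x + r.normal(scale=0.5, size=x.shape),
        loglik=lambda y, x: -0.5 * ((y - x) / 0.5) ** 2,
        grad_loglik=lambda y, x: (y - x) / 0.5 ** 2,
        rng=rng,
    )
    print(est[:5])
```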