73 research outputs found

    Semiparametric inference in mixture models with predictive recursion marginal likelihood

    Predictive recursion is an accurate and computationally efficient algorithm for nonparametric estimation of mixing densities in mixture models. In semiparametric mixture models, however, the algorithm fails to account for any uncertainty in the additional unknown structural parameter. As an alternative to existing profile likelihood methods, we treat predictive recursion as a filter approximation to fitting a fully Bayes model, whereby an approximate marginal likelihood of the structural parameter emerges and can be used for inference. We call this the predictive recursion marginal likelihood. Convergence properties of predictive recursion under model mis-specification also lead to an attractive construction of this new procedure. We show pointwise convergence of a normalized version of this marginal likelihood function. Simulations compare the performance of this new marginal likelihood approach with that of existing profile likelihood methods, as well as with Dirichlet process mixtures, in density estimation. Mixed-effects models and an empirical Bayes multiple testing application in time series analysis are also considered.
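    The recursion itself is simple to state: starting from a guess of the mixing density, each observation updates it to a convex combination of itself and its Bayes update, and the normalizing constants produced along the way multiply into the marginal likelihood of the structural parameter. The Python sketch below illustrates this under assumed choices (a normal kernel whose scale plays the role of the structural parameter, a fixed grid, a flat initial guess, and weights w_i = (i+1)^(-0.67)); it is an illustration of the general recursion, not the paper's implementation.

```python
import numpy as np
from scipy.stats import norm

def pr_marginal_loglik(x, sigma, grid, gamma=0.67):
    """Predictive recursion over a fixed grid of mixing locations.

    Returns the log of the (sketched) PR marginal likelihood of the
    structural parameter `sigma`: the sum of the log normalizing
    constants m_i accumulated along the recursion.
    """
    du = grid[1] - grid[0]
    f = np.ones_like(grid) / (grid[-1] - grid[0])   # flat initial mixing density
    logL = 0.0
    for i, xi in enumerate(x, start=1):
        w = (i + 1.0) ** (-gamma)                   # slowly vanishing weight sequence
        k = norm.pdf(xi, loc=grid, scale=sigma)     # kernel k(x_i | u)
        m = np.sum(k * f) * du                      # m_i = integral of k(x_i|u) f_{i-1}(u) du
        f = (1.0 - w) * f + w * k * f / m           # PR update of the mixing density
        logL += np.log(m)
    return logL

# toy two-component normal location mixture; scan the structural parameter
rng = np.random.default_rng(0)
x = np.concatenate([rng.normal(-2, 1, 100), rng.normal(2, 1, 100)])
grid = np.linspace(-6.0, 6.0, 400)
for sigma in (0.5, 1.0, 2.0):
    print(sigma, round(pr_marginal_loglik(x, sigma, grid), 2))
```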

    Data Assimilation: A Mathematical Introduction

    These notes provide a systematic mathematical treatment of the subject of data assimilation.

    Learning the Structure and Parameters of Large-Population Graphical Games from Behavioral Data

    We consider learning, from strictly behavioral data, the structure and parameters of linear influence games (LIGs), a class of parametric graphical games introduced by Irfan and Ortiz (2014). LIGs facilitate causal strategic inference (CSI): making inferences from causal interventions on stable behavior in strategic settings. Applications include the identification of the most influential individuals in large (social) networks. Such tasks can also support policy-making analysis. Motivated by the computational work on LIGs, we cast the learning problem as maximum-likelihood estimation (MLE) of a generative model defined by pure-strategy Nash equilibria (PSNE). Our simple formulation uncovers the fundamental interplay between goodness-of-fit and model complexity: good models capture equilibrium behavior within the data while controlling the true number of equilibria, including those unobserved. We provide a generalization bound establishing the sample complexity for MLE in our framework. We propose several algorithms including convex loss minimization (CLM) and sigmoidal approximations. We prove that the number of exact PSNE in LIGs is small, with high probability; thus, CLM is sound. We illustrate our approach on synthetic data and real-world U.S. congressional voting records. We briefly discuss our learning framework's generality and potential applicability to general graphical games.
    Comment: Journal of Machine Learning Research (accepted, pending publication). Last conference version: submitted March 30, 2012 to UAI 2012. First conference version, entitled "Learning Influence Games", initially submitted on June 1, 2010 to NIPS 201
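    As a rough illustration of the convex loss minimization idea, the sketch below fits each player's influence weights and threshold independently by minimizing an L1-regularized logistic surrogate of the pure-strategy equilibrium condition x_i (w_i . x_{-i} - b_i) >= 0 over the observed joint actions. The function name, loss, and hyperparameters are assumptions for illustration and do not reproduce the paper's estimator.

```python
import numpy as np

def fit_lig_clm(X, l1=0.1, lr=0.05, epochs=500):
    """Hedged sketch of per-player convex loss minimization for a linear influence game.

    X: (m, n) array of observed joint actions in {-1, +1}.
    For each player i we fit influence weights W[i, :] and a threshold b[i]
    by gradient descent on a logistic surrogate of the equilibrium condition
    x_i * (W[i] . x_{-i} - b_i) >= 0, plus an L1 penalty encouraging sparsity.
    """
    m, n = X.shape
    W = np.zeros((n, n))
    b = np.zeros(n)
    for i in range(n):
        others = np.delete(np.arange(n), i)
        A = X[:, others]                              # opponents' actions
        y = X[:, i]                                   # player i's action
        w, bi = np.zeros(n - 1), 0.0
        for _ in range(epochs):
            margin = y * (A @ w - bi)
            dmargin = -1.0 / (1.0 + np.exp(margin))   # d(logistic loss)/d(margin)
            grad_w = A.T @ (dmargin * y) / m + l1 * np.sign(w)
            grad_b = -np.mean(dmargin * y)
            w -= lr * grad_w
            bi -= lr * grad_b
        W[i, others] = w
        b[i] = bi
    return W, b
```

    Per-player fits keep the optimization convex; the paper's generative model additionally reasons about the full set of equilibria (including unobserved ones), which this sketch ignores.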

    Stochastic Event-Based Control and Estimation

    Digital controllers are traditionally implemented using periodic sampling, computation, and actuation events. As more control systems are implemented to share limited network and CPU bandwidth with other tasks, it is becoming increasingly attractive to use some form of event-based control instead, where the precious events are used only when needed. Forms of event-based control have been used in practice for a very long time, but mostly in an ad hoc way. Though optimal solutions to most event-based control problems are unknown, it should still be possible to compare the performance of suggested approaches in a reasonable manner.

    This thesis investigates an event-based variation on the stochastic linear-quadratic (LQ) control problem, with a fixed cost per control event. The sporadic constraint of an enforced minimum inter-event time is introduced, yielding a mixed continuous-/discrete-time formulation. The quantitative trade-off between event rate and control performance is compared between periodic and sporadic control. Example problems for first-order plants are investigated, for a single control loop and for multiple loops closed over a shared medium.

    Path constraints are introduced to model and analyze higher-order event-based control systems. This component-based approach to stochastic hybrid systems makes it possible to express continuous- and discrete-time dynamics, state and switching constraints, control laws, and stochastic disturbances in the same model. Sum-of-squares techniques are then used to find bounds on control objectives using convex semidefinite programming.

    The thesis also considers state estimation for discrete-time linear stochastic systems from measurements with convex set uncertainty. The Bayesian observer is considered given log-concave process disturbances and measurement likelihoods. Strong log-concavity is introduced, and it is shown that the observer preserves log-concavity and propagates strong log-concavity like inverse covariance in a Kalman filter. A recursive state estimator is developed for systems with both stochastic and set-bounded process and measurement noise terms. A time-varying linear filter gain is optimized using convex semidefinite programming and ellipsoidal over-approximation, given a relative weight on the two kinds of error.
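    The rate-versus-performance trade-off for a first-order plant can be illustrated with a small Monte Carlo experiment: the sketch below compares periodic resets of an integrator driven by white noise against a threshold-based (sporadic) rule with an enforced minimum inter-event time, reporting the stationary mean-square state and the event rate. The plant, thresholds, and impulse-style control are illustrative assumptions, not the thesis's exact problem formulation.

```python
import numpy as np

rng = np.random.default_rng(1)

def simulate(policy, steps=200_000, dt=0.01, t_min=0.0):
    """Monte Carlo sketch: integrator state driven by white noise, impulsively reset to 0.

    `policy(x, since_last)` returns True when a (costly) control event fires,
    and events are blocked until `t_min` has elapsed since the last one.
    Returns (mean-square state, event rate per unit time).
    """
    x, since, cost, events = 0.0, 1e9, 0.0, 0
    for _ in range(steps):
        x += np.sqrt(dt) * rng.standard_normal()   # unit-intensity white noise
        since += dt
        if since >= t_min and policy(x, since):
            x, since, events = 0.0, 0.0, events + 1
        cost += x * x * dt
    T = steps * dt
    return cost / T, events / T

# periodic control: an event every h seconds regardless of the state
period_h = 0.5
var_p, rate_p = simulate(lambda x, s: s >= period_h)

# sporadic control: an event when |x| exceeds a threshold, with minimum inter-event time
var_e, rate_e = simulate(lambda x, s: abs(x) > 0.35, t_min=0.05)

print(f"periodic : mean-square state = {var_p:.3f}, rate = {rate_p:.2f} events/s")
print(f"sporadic : mean-square state = {var_e:.3f}, rate = {rate_e:.2f} events/s")
```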

    Data Assimilation: A Mathematical Introduction

    This book provides a systematic treatment of the mathematical underpinnings of work in data assimilation, covering both theoretical and computational approaches. Specifically, the authors develop a unified mathematical framework in which a Bayesian formulation of the problem provides the bedrock for the derivation, development, and analysis of algorithms; the many examples used in the text, together with the algorithms which are introduced and discussed, are all illustrated by the MATLAB software detailed in the book and made freely available online. The book is organized into nine chapters: the first contains a brief introduction to the mathematical tools around which the material is organized; the next four are concerned with discrete-time dynamical systems and discrete-time data; the last four are concerned with continuous-time dynamical systems and continuous-time data and are organized analogously to the corresponding discrete-time chapters. This book is aimed at mathematical researchers interested in a systematic development of this interdisciplinary field, and at researchers from the geosciences and a variety of other scientific fields who use tools from data assimilation to combine data with time-dependent models. The numerous examples and illustrations make the theoretical underpinnings of data assimilation accessible. Furthermore, the examples, exercises, and MATLAB software make the book suitable for students in applied mathematics, either through a lecture course or through self-study.
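    As a toy instance of the Bayesian formulation for discrete-time dynamics and discrete-time data, the sketch below implements the standard Kalman filter forecast/analysis cycle in Python; the book's own examples use MATLAB, and the model matrices here are placeholders rather than examples taken from the text.

```python
import numpy as np

def kalman_filter(ys, M, H, Sigma, Gamma, m0, C0):
    """Minimal discrete-time Kalman filter sketch (linear-Gaussian data assimilation).

    State model:  v_{j+1} = M v_j + xi_j,       xi_j ~ N(0, Sigma)
    Data model:   y_{j+1} = H v_{j+1} + eta_j,  eta_j ~ N(0, Gamma)
    Returns the sequence of posterior (analysis) means.
    """
    m, C = m0.copy(), C0.copy()
    means = []
    for y in ys:
        # forecast step: push the posterior through the dynamics
        m_hat = M @ m
        C_hat = M @ C @ M.T + Sigma
        # analysis step: Bayesian update with the new observation
        S = H @ C_hat @ H.T + Gamma
        K = C_hat @ H.T @ np.linalg.inv(S)
        m = m_hat + K @ (y - H @ m_hat)
        C = (np.eye(len(m)) - K @ H) @ C_hat
        means.append(m)
    return np.array(means)

# toy usage (assumed model): noisy planar rotation, observing the first coordinate
th = 0.1
M = np.array([[np.cos(th), np.sin(th)], [-np.sin(th), np.cos(th)]])
H = np.array([[1.0, 0.0]])
Sigma, Gamma = 0.01 * np.eye(2), 0.1 * np.eye(1)
rng = np.random.default_rng(0)
v, ys = np.array([1.0, 0.0]), []
for _ in range(50):
    v = M @ v + rng.multivariate_normal(np.zeros(2), Sigma)
    ys.append(H @ v + rng.multivariate_normal(np.zeros(1), Gamma))
means = kalman_filter(ys, M, H, Sigma, Gamma, m0=np.zeros(2), C0=np.eye(2))
print(means[-1])
```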

    Event-triggered near optimal adaptive control of interconnected systems

    Increased interest in complex interconnected systems such as the smart grid and cyber manufacturing has attracted researchers to develop optimal adaptive control schemes that elicit a desired performance when the complex system dynamics are uncertain. In this dissertation, motivated by the fact that aperiodic event sampling saves network resources while ensuring system stability, a suite of novel event-sampled distributed near-optimal adaptive control schemes is introduced for uncertain linear and affine nonlinear interconnected systems in a forward-in-time and online manner. First, a novel stochastic hybrid Q-learning scheme is proposed to generate an optimal adaptive control law and to accelerate the learning process in the presence of random delays and packet losses resulting from the communication network for an uncertain linear interconnected system. Subsequently, a novel online reinforcement learning (RL) approach is proposed to solve the Hamilton-Jacobi-Bellman (HJB) equation by using neural networks (NNs) for generating distributed optimal control of nonlinear interconnected systems using state and output feedback. To relax the state vector measurements, distributed observers are introduced. Next, using RL, an improved NN learning rule is derived to solve the HJB equation for uncertain nonlinear interconnected systems with event-triggered feedback. Distributed NN identifiers are introduced both to approximate the uncertain nonlinear dynamics and to serve as a model for online exploration. Next, the control policy and the event-sampling errors are considered as non-cooperative players, and a min-max optimization problem is formulated for linear and affine nonlinear systems by using a zero-sum game approach for simultaneous optimization of both the control policy and the event-based sampling instants. The net result is the development of optimal adaptive event-triggered control of uncertain dynamic systems. --Abstract, page iv
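    A generic relative-threshold triggering rule gives the flavor of event-sampled feedback: the control input is recomputed only when the error accumulated since the last sampling instant exceeds a fraction of the current state norm. The plant, gain, and threshold in the sketch below are assumptions chosen for illustration and do not reproduce the dissertation's adaptive, learning-based schemes.

```python
import numpy as np

# Hedged sketch of event-triggered state feedback: sample and update the control
# only when ||x - x_sampled|| > sigma * ||x||; hold the last input otherwise.
A = np.array([[0.0, 1.0], [-1.0, -0.5]])     # assumed example plant
B = np.array([[0.0], [1.0]])
K = np.array([[1.0, 1.2]])                   # assumed stabilizing feedback gain
sigma, dt, steps = 0.3, 0.01, 3000

x = np.array([1.0, 0.0])                     # initial state
x_sampled = x.copy()                         # state at the last sampling event
events = 0
for _ in range(steps):
    if np.linalg.norm(x - x_sampled) > sigma * np.linalg.norm(x):
        x_sampled = x.copy()                 # event: transmit state, update control
        events += 1
    u = -K @ x_sampled                       # control held constant between events
    x = x + dt * (A @ x + B @ u)             # Euler step of the plant dynamics
print(f"events: {events} out of {steps} steps, final state: {x}")
```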