    Functional data analysis in an operator-based mixed-model framework

    Functional data analysis in a mixed-effects model framework is carried out using operator calculus. In this approach the functional parameters are treated as serially correlated random effects, giving an alternative to the penalized likelihood approach, where the functional parameters are treated as fixed effects. Operator approximations for the necessary matrix computations are proposed, and semi-explicit, numerically stable formulae of linear computational complexity are derived for likelihood analysis. The operator approach renders the use of a functional basis unnecessary and clarifies the role of the boundary conditions.
    Comment: Published at http://dx.doi.org/10.3150/11-BEJ389 in Bernoulli (http://isi.cbs.nl/bernoulli/) by the International Statistical Institute/Bernoulli Society (http://isi.cbs.nl/BS/bshome.htm).
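
    A minimal sketch of the computational idea (not the paper's operator calculus): modelling the functional parameter as a serially correlated effect yields a Markovian state-space representation, so the likelihood can be evaluated in linear time with a Kalman filter. The local-level model and the variance parameters below are illustrative assumptions.

        import numpy as np

        def kalman_loglik(y, sigma_e=1.0, sigma_u=0.1):
            """O(n) log-likelihood of a local-level model:
            x_t = x_{t-1} + u_t (serially correlated effect), y_t = x_t + e_t."""
            x, P, ll = 0.0, 1e6, 0.0           # diffuse initialization
            for yt in y:
                P += sigma_u**2                # predict
                S = P + sigma_e**2             # innovation variance
                v = yt - x                     # innovation
                ll += -0.5 * (np.log(2 * np.pi * S) + v**2 / S)
                K = P / S                      # Kalman gain
                x += K * v                     # update
                P *= (1 - K)
            return ll

    Maximizing this likelihood over the variance parameters plays the role that smoothing-parameter selection plays in the fixed-effects, penalized-likelihood formulation.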

    Application of velocity-based gain-scheduling to lateral auto-pilot design for an agile missile

    In this paper a modern gain-scheduling methodology is proposed which exploits recently developed velocity-based techniques to resolve many of the deficiencies of classical gain-scheduling approaches (restriction to near-equilibrium operation and to slowly varying scheduling variables). This is achieved while maintaining continuity with linear methods and providing an open design framework (any linear synthesis approach may be used) which supports divide-and-conquer design strategies. The velocity-based gain-scheduling techniques are demonstrated on a demanding, highly nonlinear missile control design task. Scheduling on instantaneous incidence (a rapidly varying quantity) is well known to cause considerable difficulties for classical gain-scheduling methods. It is shown that the methods proposed here can nevertheless be used to design an effective and robust gain-scheduled controller.
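
    An illustrative sketch (not the paper's design) of two ingredients involved: interpolating gains designed offline at a grid of operating points, and applying them incrementally so that gain variation acts on state increments rather than absolute states, a loose discrete-time analogue of a velocity-form implementation. The incidence grid and gain values are hypothetical.

        import numpy as np

        # Gains designed offline at a grid of incidence angles (hypothetical values).
        alpha_grid = np.array([0.0, 5.0, 10.0, 15.0, 20.0])        # incidence, deg
        K_grid = np.array([[1.2, 0.4],
                           [1.5, 0.5],
                           [2.1, 0.7],
                           [3.0, 1.0],
                           [4.2, 1.4]])                            # state-feedback gains

        def scheduled_gain(alpha):
            """Interpolate each gain entry at the current incidence angle."""
            return np.array([np.interp(alpha, alpha_grid, K_grid[:, j])
                             for j in range(K_grid.shape[1])])

        def velocity_form_step(alpha, x, x_prev, u_prev):
            """Incremental update u_k = u_{k-1} - K(alpha_k) (x_k - x_{k-1}):
            the scheduled gain multiplies state increments, so gain variation
            does not itself inject jumps into the control signal."""
            return u_prev - scheduled_gain(alpha) @ (x - x_prev)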

    Iterative Updating of Model Error for Bayesian Inversion

    In computational inverse problems, it is common that a detailed and accurate forward model is approximated by a computationally less challenging substitute. The model reduction may be necessary to meet constraints in computing time when optimization algorithms are used to find a single estimate, or to speed up Markov chain Monte Carlo (MCMC) calculations in the Bayesian framework. The use of an approximate model introduces a discrepancy, or modeling error, that may have a detrimental effect on the solution of the ill-posed inverse problem, or it may severely distort the estimate of the posterior distribution. In the Bayesian paradigm, the modeling error can be considered as a random variable, and by using an estimate of the probability distribution of the unknown, one may estimate the probability distribution of the modeling error and incorporate it into the inversion. We introduce an algorithm which iterates this idea to update the distribution of the model error, leading to a sequence of posterior distributions that are demonstrated empirically to capture the underlying truth with increasing accuracy. Since the algorithm is not based on rejections, it requires only a limited number of full-model evaluations. We show analytically that, in the linear Gaussian case, the algorithm converges geometrically fast with respect to the number of iterations. For more general models, we introduce particle approximations of the iteratively generated sequence of distributions; we also prove that each element of the sequence converges in the large-particle limit. We show numerically that, as in the linear case, rapid convergence occurs with respect to the number of iterations. Additionally, we show through computed examples that point estimates obtained from this iterative algorithm are superior to those obtained by neglecting the model error.
    Comment: 39 pages, 9 figures
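
    A minimal linear-Gaussian sketch of the iteration (illustrative models and parameters, not the paper's examples): the reduced model's error statistics are re-estimated from samples of the current posterior and fed back into the next Gaussian update.

        import numpy as np

        rng = np.random.default_rng(0)

        # Accurate and reduced linear forward models (illustrative 1-D data).
        A_full = np.array([[1.0, 0.5]])
        A_red = np.array([[1.0, 0.0]])             # reduced model drops a component
        sigma = 0.05                               # observation noise std
        x_true = np.array([1.0, -0.8])
        y = A_full @ x_true + sigma * rng.standard_normal(1)

        m0, C0 = np.zeros(2), np.eye(2)            # Gaussian prior
        mu_eps, var_eps = 0.0, 0.0                 # model-error statistics

        for it in range(10):
            # Gaussian update under y = A_red x + eps + noise, eps ~ N(mu_eps, var_eps).
            S = (A_red @ C0 @ A_red.T + var_eps + sigma**2).item()
            K = C0 @ A_red.T / S
            m = m0 + (K * (y - A_red @ m0 - mu_eps)).ravel()
            C = C0 - K @ A_red @ C0
            # Re-estimate the model-error distribution from posterior samples.
            xs = rng.multivariate_normal(m, C, size=500)
            eps = (xs @ A_full.T - xs @ A_red.T).ravel()
            mu_eps, var_eps = eps.mean(), eps.var()

        print("posterior mean:", m, " truth:", x_true)

    In this linear Gaussian setting the error statistics stabilize after a few sweeps, consistent with the geometric convergence the paper proves for this case.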

    Combined analysis of transient delay characteristics and delay autocorrelation function in the Geo(X)/G/1 queue

    We perform a discrete-time analysis of customer delay in a buffer with batch arrivals. The delay of the kth customer that enters the FIFO buffer is characterized under the assumption that the numbers of arrivals per slot are independent and identically distributed. By using supplementary variables and generating functions, z-transforms of the transient delays are calculated. Numerical inversion of these transforms leads to results for the moments of the delay of the kth customer. For computational reasons k cannot be too large; these numerical inversion results are therefore complemented by explicit analytic expressions for the asymptotics for large k. We further show how the results allow us to characterize jitter-related variables, such as the autocorrelation of the delay in steady state.
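
    A sketch of the generic numerical step (not the paper's specific transforms): coefficients of a probability generating function can be recovered by sampling it on a circle inside its region of convergence and applying an inverse FFT, i.e. the Cauchy coefficient formula. The geometric batch-size example is an assumption for illustration.

        import numpy as np

        def invert_pgf(pgf, n_terms=64, radius=0.9):
            """Recover p_k, k < n_terms, from P(z) = sum_k p_k z^k via the
            Cauchy coefficient formula, sampled on |z| = radius and inverted
            with an FFT; radius must lie inside the region of convergence."""
            z = radius * np.exp(2j * np.pi * np.arange(n_terms) / n_terms)
            vals = np.array([pgf(zi) for zi in z])
            return (np.fft.ifft(vals) / radius ** np.arange(n_terms)).real

        # Illustrative check: geometric distribution, P(z) = (1-q)/(1-q z).
        q = 0.3
        pk = invert_pgf(lambda z: (1 - q) / (1 - q * z))
        print(pk[:4], [(1 - q) * q**k for k in range(4)])  # should agree closely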

    A geometrical analysis of global stability in trained feedback networks

    Recurrent neural networks have been extensively studied in the context of neuroscience and machine learning due to their ability to implement complex computations. While substantial progress in designing effective learning algorithms has been achieved in recent years, a full understanding of trained recurrent networks is still lacking. Specifically, the mechanisms that allow computations to emerge from the underlying recurrent dynamics are largely unknown. Here we focus on a simple yet underexplored computational setup: a feedback architecture trained to associate a stationary output to a stationary input. As a starting point, we derive an approximate analytical description of global dynamics in trained networks which assumes uncorrelated connectivity weights in the feedback and in the random bulk. The resulting mean-field theory suggests that the task admits several classes of solutions, which imply different stability properties. Different classes are characterized in terms of the geometrical arrangement of the readout with respect to the input vectors, defined in the high-dimensional space spanned by the network population. We find that this approximate theoretical approach can be used to understand how standard training techniques implement the input-output task in finite-size feedback networks. In particular, our simplified description captures the local and the global stability properties of the target solution, and thus predicts training performance.
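
    An illustrative sketch of the setup (hypothetical parameters, tanh rate units; not the paper's trained networks): a random bulk plus a rank-one feedback loop driven by the readout, integrated to a candidate fixed point whose local stability is read off the Jacobian spectrum.

        import numpy as np

        rng = np.random.default_rng(1)
        N, g = 200, 0.8                                     # network size, bulk gain
        J = g * rng.standard_normal((N, N)) / np.sqrt(N)    # random bulk
        m = rng.standard_normal(N) / np.sqrt(N)             # feedback vector
        n_vec = rng.standard_normal(N)                      # readout vector
        I_vec = rng.standard_normal(N)                      # input vector
        u = 1.0                                             # stationary input

        def dx(x):
            """Feedback architecture: readout z = n.phi(x) is fed back along m."""
            r = np.tanh(x)
            return -x + J @ r + m * (n_vec @ r) + I_vec * u

        x = np.zeros(N)
        for _ in range(5000):                               # forward Euler to a fixed point
            x = x + 0.05 * dx(x)

        # Local stability: Jacobian spectrum at the candidate fixed point.
        phi_p = 1.0 - np.tanh(x)**2                         # tanh'
        Jac = -np.eye(N) + (J + np.outer(m, n_vec)) * phi_p
        print("max Re(eig) at fixed point:", np.linalg.eigvals(Jac).real.max())

    A negative maximal real part indicates local stability of the stationary solution; the geometrical arrangement of m, n_vec, and I_vec determines which class of solution is reached.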