
    Stochastic optimization methods for the simultaneous control of parameter-dependent systems

    We address the application of stochastic optimization methods for the simultaneous control of parameter-dependent systems. In particular, we focus on the classical Stochastic Gradient Descent (SGD) approach of Robbins and Monro, and on the recently developed Continuous Stochastic Gradient (CSG) algorithm. We consider the problem of computing simultaneous controls through the minimization of a cost functional defined as the superposition of individual costs for each realization of the system. We compare the performance of these stochastic approaches, in terms of their computational complexity, with that of the more classical Gradient Descent (GD) and Conjugate Gradient (CG) algorithms, and we discuss the advantages and disadvantages of each methodology. In agreement with well-established results in the machine learning context, we show how the SGD and CSG algorithms can significantly reduce the computational burden when treating control problems depending on a large number of parameters. This is corroborated by numerical experiments.
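    To make the comparison concrete, the sketch below illustrates the Robbins-Monro SGD idea on a toy version of such a problem: the cost is an average of quadratic tracking costs over parameter realizations, and each iteration samples a single realization instead of evaluating the full gradient as GD or CG would. The linear systems, dimensions, and step-size schedule are illustrative assumptions, not the paper's setup.

```python
import numpy as np

# Toy setup: each parameter realization k gives a linear map A_k, and the
# individual cost is a quadratic tracking term. This is a sketch of the
# Robbins-Monro SGD idea in the abstract, not the authors' control problem.
rng = np.random.default_rng(0)
K, n = 500, 20                          # number of realizations, control dimension
A = [np.eye(n) + 0.1 * rng.standard_normal((n, n)) for _ in range(K)]
y_target = np.ones(n)

def grad_k(u, k):
    """Gradient of the individual cost J_k(u) = 0.5 * ||A_k u - y_target||^2."""
    return A[k].T @ (A[k] @ u - y_target)

def full_grad(u):
    """Full gradient of the averaged cost (what plain GD uses: K solves per step)."""
    return sum(grad_k(u, k) for k in range(K)) / K

# SGD: one sampled realization per iteration, Robbins-Monro step eta_j = c / (j + 1).
u = np.zeros(n)
c = 0.5
for j in range(5000):
    k = rng.integers(K)                 # sample a single parameter realization
    u -= (c / (j + 1)) * grad_k(u, k)   # cost of one realization per step, not K

print("norm of averaged-cost gradient after SGD:", np.linalg.norm(full_grad(u)))
```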

    Sensor-based robot deployment algorithms

    In robot deployment problems, the fundamental issue is to optimize a steady-state performance measure that depends on the spatial configuration of a group of robots. For such problems, a classical way of designing high-level feedback motion planners is to implement a gradient descent scheme on a suitably chosen objective function. This can lead to computationally expensive algorithms that may not be adaptive to uncertain dynamic environments. We address these challenges by showing that algorithms for a variety of deployment scenarios in uncertain stochastic environments and with noisy sensor measurements can be designed as stochastic gradient descent algorithms, and their convergence properties analyzed via the theory of stochastic approximations. This approach often yields surprisingly simple algorithms that can accommodate complicated objective functions and work without a detailed model of the environment. To illustrate the richness of the framework, we discuss two applications, namely source seeking with realistic stochastic wireless connectivity constraints, and coverage with heterogeneous sensors.
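    A minimal sketch of this design pattern, assuming a simple coverage-style objective and additive sensor noise: each robot follows a noisy gradient estimate that pulls it toward the targets it currently covers, with a Robbins-Monro (decreasing) step size. The objective, noise model, and constants are illustrative assumptions rather than the paper's deployment scenarios.

```python
import numpy as np

rng = np.random.default_rng(1)

targets = rng.uniform(0, 10, size=(50, 2))   # points of interest to cover
robots = rng.uniform(0, 10, size=(4, 2))     # initial robot positions

def noisy_coverage_gradient(robots, sigma=0.2):
    """Noisy gradient of a coverage-style cost: each robot is pulled toward the
    centroid of the targets it is currently closest to, plus sensor noise."""
    grad = np.zeros_like(robots)
    d = np.linalg.norm(targets[:, None, :] - robots[None, :, :], axis=2)
    nearest = d.argmin(axis=1)               # assign each target to its nearest robot
    for r in range(len(robots)):
        mine = targets[nearest == r]
        if len(mine):
            grad[r] = 2 * (robots[r] - mine).mean(axis=0)
    return grad + sigma * rng.standard_normal(robots.shape)

# Stochastic approximation iteration with Robbins-Monro step sizes:
# x_{j+1} = x_j - eta_j * (noisy gradient measurement).
for j in range(2000):
    eta = 1.0 / (j + 10)
    robots -= eta * noisy_coverage_gradient(robots)

print("final robot positions:\n", np.round(robots, 2))
```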

    A Molecular Implementation of the Least Mean Squares Estimator

    In order to function reliably, synthetic molecular circuits require mechanisms that allow them to adapt to environmental disturbances. Least mean squares (LMS) schemes, such as those commonly encountered in signal processing and control, provide a powerful means to accomplish that goal. In this paper we show how the traditional LMS algorithm can be implemented at the molecular level using only a few elementary biomolecular reactions. We demonstrate our approach using several simulation studies and discuss its relevance to synthetic biology.
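    For reference, the traditional discrete-time LMS update the abstract refers to looks as follows; the paper's contribution is realizing this update with a few elementary biomolecular reactions, which is not reproduced here. Signal dimensions, step size, and noise level are assumptions made for illustration.

```python
import numpy as np

rng = np.random.default_rng(2)

n_features, n_steps, mu = 4, 2000, 0.01
w_true = np.array([0.5, -1.0, 2.0, 0.3])   # unknown parameters to estimate
w = np.zeros(n_features)                    # LMS estimate

for _ in range(n_steps):
    x = rng.standard_normal(n_features)            # input sample
    d = w_true @ x + 0.05 * rng.standard_normal()  # noisy desired output
    e = d - w @ x                                  # instantaneous error
    w += mu * e * x                                # LMS update: w <- w + mu * e * x

print("LMS estimate:", np.round(w, 3))
```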

    Distributed Learning for Stochastic Generalized Nash Equilibrium Problems

    This work examines a stochastic formulation of the generalized Nash equilibrium problem (GNEP) where agents are subject to randomness in the environment of unknown statistical distribution. We focus on fully-distributed online learning by agents and employ penalized individual cost functions to deal with coupled constraints. Three stochastic gradient strategies are developed with constant step-sizes. We allow the agents to use heterogeneous step-sizes and show that the penalty solution is able to approach the Nash equilibrium in a stable manner within $O(\mu_\text{max})$, for small step-size values $\mu_\text{max}$ and sufficiently large penalty parameters. The operation of the algorithm is illustrated by considering the network Cournot competition problem.
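    A rough sketch of a penalized, constant-step-size stochastic gradient iteration for a stochastic Cournot game, in the spirit of the abstract: the demand intercept is random and only sampled, the shared capacity constraint is handled by a quadratic penalty, and each firm updates its own quantity with its own step size. The game parameters, penalty form, and step sizes are assumptions, not the paper's exact algorithm.

```python
import numpy as np

rng = np.random.default_rng(3)

N = 5                                     # number of firms (agents)
b = 1.0                                   # price sensitivity: p = a - b * sum(q)
c = rng.uniform(1.0, 2.0, N)              # individual marginal production costs
Q_max = 6.0                               # coupled (shared) capacity constraint
rho = 10.0                                # penalty parameter (taken large)
mu = rng.uniform(0.005, 0.015, N)         # constant, heterogeneous step sizes

q = np.ones(N)                            # production quantities (decision variables)
for _ in range(20000):
    a = 10.0 + rng.standard_normal()      # sampled demand intercept, distribution unknown to agents
    total = q.sum()
    # Stochastic gradient of each agent's penalized cost
    #   J_i(q) = c_i q_i - q_i (a - b * total) + rho * max(0, total - Q_max)^2
    grad = c - a + b * total + b * q + 2.0 * rho * max(0.0, total - Q_max)
    q = np.maximum(q - mu * grad, 0.0)    # fully distributed update, projected to q_i >= 0

print("approximate penalized equilibrium quantities:", np.round(q, 3))
```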

    SuperSpike: Supervised learning in multi-layer spiking neural networks

    The vast majority of computation in the brain is performed by spiking neural networks. Despite the ubiquity of such spiking, we currently lack an understanding of how biological spiking neural circuits learn and compute in vivo, as well as how we can instantiate such capabilities in artificial spiking circuits in silico. Here we revisit the problem of supervised learning in temporally coding multi-layer spiking neural networks. First, by using a surrogate gradient approach, we derive SuperSpike, a nonlinear voltage-based three-factor learning rule capable of training multi-layer networks of deterministic integrate-and-fire neurons to perform nonlinear computations on spatiotemporal spike patterns. Second, inspired by recent results on feedback alignment, we compare the performance of our learning rule under different credit assignment strategies for propagating output errors to hidden units. Specifically, we test uniform, symmetric, and random feedback, finding that simpler tasks can be solved with any type of feedback, while more complex tasks require symmetric feedback. In summary, our results open the door to obtaining a better scientific understanding of learning and computation in spiking neural networks by advancing our ability to train them to solve nonlinear problems involving transformations between different spatiotemporal spike-time patterns.
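    A heavily simplified sketch of the surrogate-gradient, three-factor idea described above, for a single layer of leaky integrate-and-fire neurons: the weight change combines an output error, a surrogate derivative of the spike nonlinearity, and a filtered presynaptic trace. All constants, the spike trains, and the filtering details are illustrative assumptions and not the exact SuperSpike rule (which, among other things, also filters the resulting update).

```python
import numpy as np

rng = np.random.default_rng(4)

n_in, n_out, T = 30, 5, 500
dt, tau_mem, tau_syn, threshold, lr = 1e-3, 20e-3, 5e-3, 1.0, 1e-3

w = 0.5 * rng.standard_normal((n_out, n_in))
spikes_in = (rng.random((T, n_in)) < 0.05).astype(float)   # random input spike trains
target = (rng.random((T, n_out)) < 0.02).astype(float)     # desired output spike trains

def surrogate_deriv(u):
    """Fast-sigmoid surrogate derivative of the hard spiking threshold."""
    return 1.0 / (1.0 + np.abs(u - threshold)) ** 2

v = np.zeros(n_out)       # membrane potentials
trace = np.zeros(n_in)    # filtered presynaptic activity (third factor)
for t in range(T):
    trace = trace * np.exp(-dt / tau_syn) + spikes_in[t]
    v = v * np.exp(-dt / tau_mem) + w @ spikes_in[t]       # leak + weighted input spikes
    out = (v >= threshold).astype(float)                   # deterministic spikes
    sd = surrogate_deriv(v)                                # surrogate derivative (second factor)
    v = v * (1.0 - out)                                    # reset neurons that fired
    err = target[t] - out                                  # output error (first factor)
    w += lr * np.outer(err * sd, trace)                    # three-factor weight update

print("mean |w| after training:", round(float(np.abs(w).mean()), 4))
```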