120 research outputs found

    Universal Convexification via Risk-Aversion

    We develop a framework for convexifying a fairly general class of optimization problems. Under additional assumptions, we analyze the suboptimality of the solution to the convexified problem relative to the original nonconvex problem and prove additive approximation guarantees. We then develop algorithms based on stochastic gradient methods to solve the resulting optimization problems and show bounds on convergence rates. We show a simple application of this framework to supervised learning, where one can perform integration explicitly and can use standard (non-stochastic) optimization algorithms with better convergence guarantees. We then extend this framework to apply to a general class of discrete-time dynamical systems. In this context, our convexification approach falls under the well-studied paradigm of risk-sensitive Markov Decision Processes. We derive the first known model-based and model-free policy gradient optimization algorithms with guaranteed convergence to the optimal solution. Finally, we present numerical results validating our formulation in different applications.
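
    As a rough illustration of how such a scheme can be implemented, the sketch below runs stochastic gradient descent on an assumed risk-averse surrogate F(x) = (1/kappa) * log E_w[exp(kappa * f(x + w))] with Gaussian perturbations w, estimating the gradient with a Monte-Carlo likelihood-ratio estimator. The surrogate form, the noise model, and all hyper-parameters (kappa, sigma, step size) are illustrative assumptions, not the paper's exact formulation.

```python
import numpy as np

def risk_averse_grad(f, x, kappa=2.0, sigma=0.5, n_samples=256, rng=None):
    """Monte-Carlo gradient of the assumed risk-averse surrogate
    F(x) = (1/kappa) * log E_{w ~ N(0, sigma^2 I)}[exp(kappa * f(x + w))],
    via the likelihood-ratio identity; only zeroth-order evaluations of f are used."""
    rng = np.random.default_rng() if rng is None else rng
    w = rng.normal(scale=sigma, size=(n_samples, x.size))   # Gaussian perturbations
    vals = np.array([f(x + wi) for wi in w])                 # f(x + w_i)
    weights = np.exp(kappa * (vals - vals.max()))            # numerically stable exp weights
    weights /= weights.sum()
    return (weights[:, None] * w).sum(axis=0) / (kappa * sigma**2)

# Usage: stochastic gradient descent on a simple nonconvex 1-D cost.
f = lambda z: float(np.sin(3.0 * z[0]) + 0.1 * z[0] ** 2)
x = np.array([2.0])
for _ in range(300):
    x = x - 0.1 * risk_averse_grad(f, x)
print(x)  # drifts toward a low-cost region of the smoothed objective
```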

    Iterative Linear Quadratic Regulator Design for Nonlinear Biological Movement

    This paper presents an Iterative Linear Quadratic Regulator (ILQR) method for locally optimal feedback control of nonlinear dynamical systems. The method is applied to a musculo-skeletal arm model with 10 state dimensions and 6 controls, and is used to compute energy-optimal reaching movements. Numerical comparisons with three existing methods demonstrate that the new method converges substantially faster and finds slightly better solutions.
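
    As a minimal illustration of the generic ILQR loop (not the paper's implementation or its arm model), the sketch below iterates three steps on an assumed discrete-time system x_{t+1} = f(x_t, u_t): build a local linear-quadratic approximation along the current trajectory, run a backward Riccati-style pass for feedforward and feedback gains, and update the trajectory with a line-searched forward pass. Finite-difference derivatives, the omitted l_ux cross term, and the regularization and step-size schedule are simplifying assumptions.

```python
import numpy as np

def ilqr(f, cost, cost_final, x0, U, n_iters=30, reg=1e-6):
    """Minimal ILQR sketch for dynamics x_{t+1} = f(x_t, u_t) (assumed forms).

    f          : dynamics f(x, u) -> next state
    cost       : running cost l(x, u) -> scalar
    cost_final : terminal cost lf(x) -> scalar
    x0         : initial state (n,);  U : initial control sequence (T, m)
    """
    def jac(fun, z, eps=1e-5):                     # finite-difference Jacobian
        y0 = np.atleast_1d(fun(z))
        J = np.zeros((y0.size, z.size))
        for i in range(z.size):
            dz = np.zeros_like(z); dz[i] = eps
            J[:, i] = (np.atleast_1d(fun(z + dz)) - y0) / eps
        return J

    def total_cost(X, U):
        return sum(cost(x, u) for x, u in zip(X[:-1], U)) + cost_final(X[-1])

    T, m, n = U.shape[0], U.shape[1], x0.size
    X = [x0]
    for u in U:                                    # initial rollout
        X.append(f(X[-1], u))
    X = np.array(X)

    for _ in range(n_iters):
        # Backward pass: local quadratic value-function recursion.
        Vx = jac(cost_final, X[-1]).ravel()
        Vxx = jac(lambda x_: jac(cost_final, x_).ravel(), X[-1])
        k, K = np.zeros((T, m)), np.zeros((T, m, n))
        for t in reversed(range(T)):
            x, u = X[t], U[t]
            fx = jac(lambda x_: f(x_, u), x)
            fu = jac(lambda u_: f(x, u_), u)
            lx = jac(lambda x_: cost(x_, u), x).ravel()
            lu = jac(lambda u_: cost(x, u_), u).ravel()
            lxx = jac(lambda x_: jac(lambda x2: cost(x2, u), x_).ravel(), x)
            luu = jac(lambda u_: jac(lambda u2: cost(x, u2), u_).ravel(), u)
            Qx, Qu = lx + fx.T @ Vx, lu + fu.T @ Vx
            Qxx = lxx + fx.T @ Vxx @ fx
            Quu = luu + fu.T @ Vxx @ fu + reg * np.eye(m)
            Qux = fu.T @ Vxx @ fx                  # l_ux cross term omitted for brevity
            k[t] = -np.linalg.solve(Quu, Qu)
            K[t] = -np.linalg.solve(Quu, Qux)
            Vx = Qx + K[t].T @ Quu @ k[t] + K[t].T @ Qu + Qux.T @ k[t]
            Vxx = Qxx + K[t].T @ Quu @ K[t] + K[t].T @ Qux + Qux.T @ K[t]
        # Forward pass with a simple backtracking line search.
        J0 = total_cost(X, U)
        for alpha in (1.0, 0.5, 0.25, 0.1, 0.05):
            Xn, Un = [x0], []
            for t in range(T):
                u = U[t] + alpha * k[t] + K[t] @ (Xn[-1] - X[t])
                Un.append(u)
                Xn.append(f(Xn[-1], u))
            Xn, Un = np.array(Xn), np.array(Un)
            if total_cost(Xn, Un) < J0:
                X, U = Xn, Un
                break
    return X, U

# Usage sketch: energy-penalized reaching for a 2-D double integrator.
dt = 0.1
A = np.block([[np.eye(2), dt * np.eye(2)], [np.zeros((2, 2)), np.eye(2)]])
B = np.vstack([0.5 * dt**2 * np.eye(2), dt * np.eye(2)])
X, U = ilqr(f=lambda x, u: A @ x + B @ u,
            cost=lambda x, u: 1e-3 * (u @ u),
            cost_final=lambda x: 10.0 * (x @ x),
            x0=np.array([1.0, 1.0, 0.0, 0.0]),
            U=np.zeros((30, 2)))
print(X[-1])  # final state driven toward the origin
```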

    Vector-Based Integration of Local and Long-Range Information in Visual Cortex

    Integration of inputs by cortical neurons provides the basis for the complex information processing performed in the cerebral cortex. Here, we propose a new analytic framework for understanding integration within cortical neuronal receptive fields. Based on the synaptic organization of cortex, we argue that neuronal integration is a systems-level process better studied in terms of local cortical circuitry than at the level of single neurons, and we present a method for constructing self-contained modules which capture (nonlinear) local circuit interactions. In this framework, receptive field elements naturally have a dual (rather than the traditional unitary) influence, since they drive both excitatory and inhibitory cortical neurons. This vector-based analysis, in contrast to scalar approaches, greatly simplifies integration by permitting linear summation of inputs from both "classical" and "extraclassical" receptive field regions. We illustrate this by explaining two complex visual cortical phenomena, which are incompatible with scalar notions of neuronal integration.
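
    Purely as a toy illustration of the vector-based view (not the authors' model), the snippet below gives each receptive-field element a two-component influence vector, its drive to the local excitatory and inhibitory populations, sums the vectors linearly across classical and extraclassical regions, and maps the summed drive to a response through an assumed rectified E - I readout. All numbers and names are hypothetical.

```python
import numpy as np

def local_circuit_response(total_drive, gain=1.0, threshold=0.0):
    """Assumed local-circuit module: rectified difference of summed E and I drive."""
    e, i = total_drive
    return gain * max(e - i - threshold, 0.0)

# Influence vectors: each row is (excitatory drive, inhibitory drive) for one element.
classical = np.array([[1.0, 0.2], [0.8, 0.1]])        # "classical" (center) elements
extraclassical = np.array([[0.1, 0.6], [0.05, 0.5]])  # "extraclassical" (surround) elements

# Linear vector summation of dual influences, then a nonlinear readout.
center_only = classical.sum(axis=0)
center_plus_surround = center_only + extraclassical.sum(axis=0)

print(local_circuit_response(center_only))            # response to center alone
print(local_circuit_response(center_plus_surround))   # surround input suppresses the response
```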