
    Domain Decomposition for Stochastic Optimal Control

    This work proposes a method for solving linear stochastic optimal control (SOC) problems using sum of squares and semidefinite programming. Previous work had used polynomial optimization to approximate the value function, requiring a high polynomial degree to capture local phenomena. To improve the scalability of the method to problems of interest, a domain decomposition scheme is presented. By using local approximations, lower degree polynomials become sufficient, and both local and global properties of the value function are captured. The domain of the problem is split into a non-overlapping partition, with added constraints ensuring $C^1$ continuity. The Alternating Direction Method of Multipliers (ADMM) is used to optimize over each domain in parallel and ensure convergence on the boundaries of the partitions. This results in improved conditioning of the problem and allows for much larger and more complex problems to be addressed with improved performance. Comment: 8 pages. Accepted to CDC 201
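
    A minimal, hypothetical illustration of the interface-coupled ADMM idea described above: two cubic least-squares fits on adjacent subintervals are coupled through consensus on the value and slope at the shared boundary, mimicking the $C^1$ interface constraints. The toy target function and all parameters are assumptions; the paper's actual sum-of-squares/semidefinite formulation is not reproduced here.

```python
# Consensus-ADMM sketch: two local cubic fits on [-1, 0] and [0, 1] must agree
# on value and slope at the shared boundary x = 0 (a stand-in for C^1 continuity).
# Toy data and parameters are illustrative assumptions, not the paper's setup.
import numpy as np

def vandermonde(x, deg=3):
    return np.vander(x, deg + 1, increasing=True)   # columns: 1, x, x^2, x^3

target = lambda x: np.sin(3 * x) + 0.5 * x**2       # stand-in "value function"

# local data on each subdomain
xs = [np.linspace(-1.0, 0.0, 40), np.linspace(0.0, 1.0, 40)]
As = [vandermonde(x) for x in xs]
ys = [target(x) for x in xs]

# B c extracts [p(0), p'(0)] for a cubic with coefficients c (interface at x = 0)
B = np.array([[1.0, 0.0, 0.0, 0.0],
              [0.0, 1.0, 0.0, 0.0]])

rho = 10.0
cs = [np.zeros(4), np.zeros(4)]      # local polynomial coefficients
z = np.zeros(2)                      # shared boundary value/slope (consensus variable)
us = [np.zeros(2), np.zeros(2)]      # scaled dual variables

for _ in range(200):
    # local solves (one per subdomain, could run in parallel)
    for i in range(2):
        lhs = 2 * As[i].T @ As[i] + rho * B.T @ B
        rhs = 2 * As[i].T @ ys[i] + rho * B.T @ (z - us[i])
        cs[i] = np.linalg.solve(lhs, rhs)
    # consensus update: average the boundary traces
    z = np.mean([B @ cs[i] + us[i] for i in range(2)], axis=0)
    # dual update penalises disagreement at the interface
    for i in range(2):
        us[i] += B @ cs[i] - z

print("boundary mismatch:", B @ cs[0] - B @ cs[1])   # should be ~0 after convergence
```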

    Hierarchical Linearly-Solvable Markov Decision Problems

    We present a hierarchical reinforcement learning framework that formulates each task in the hierarchy as a special type of Markov decision process for which the Bellman equation is linear and has an analytical solution. Problems of this type, called linearly-solvable MDPs (LMDPs), have interesting properties that can be exploited in a hierarchical setting, such as efficient learning of the optimal value function or task compositionality. The proposed hierarchical approach can also be seen as a novel alternative to solving LMDPs with large state spaces. We derive a hierarchical version of the so-called Z-learning algorithm that learns different tasks simultaneously and show empirically that it significantly outperforms state-of-the-art learning methods in two classical hierarchical reinforcement learning domains: the taxi domain and an autonomous guided vehicle task. Comment: 11 pages, 6 figures, 26th International Conference on Automated Planning and Scheduling
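
    The flat (non-hierarchical) Z-learning update mentioned in the abstract can be sketched on a toy chain as follows; the domain, costs, and learning rate are illustrative assumptions, and the hierarchical extension proposed in the paper is not shown.

```python
# Z-learning sketch on a 1-D chain. In the LMDP formulation the desirability
# z(s) = exp(-v(s)) satisfies a linear Bellman equation, and Z-learning updates
# it from transitions sampled under the passive dynamics.
import numpy as np

rng = np.random.default_rng(0)
N = 10                      # chain states 0..N-1, absorbing goal at N-1
q = np.ones(N)              # state cost per step
q[N - 1] = 0.0              # no cost at the goal
z = np.ones(N)              # desirability estimate; z(goal) stays exp(-q(goal)) = 1

def passive_step(s):
    """Passive dynamics: unbiased random walk with reflection at the left end."""
    return min(N - 1, max(0, s + rng.choice([-1, 1])))

alpha = 0.1
for episode in range(2000):
    s = 0
    while s != N - 1:
        s_next = passive_step(s)
        # Z-learning update: z(s) <- (1 - a) z(s) + a * exp(-q(s)) * z(s')
        z[s] = (1 - alpha) * z[s] + alpha * np.exp(-q[s]) * z[s_next]
        s = s_next

v = -np.log(z)              # estimated optimal cost-to-go
print(np.round(v, 2))       # should increase roughly with distance to the goal
```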

    Price decomposition in large-scale stochastic optimal control

    We are interested in optimally driving a dynamical system that can be influenced by exogenous noises. This is generally called a Stochastic Optimal Control (SOC) problem, and the Dynamic Programming (DP) principle is the natural way of solving it. Unfortunately, DP faces the so-called curse of dimensionality: the complexity of solving the DP equations grows exponentially with the dimension of the information variable that is sufficient to take optimal decisions (the state variable). For a large class of SOC problems, which includes important practical problems, we propose an original way of obtaining strategies to drive the system. The algorithm we introduce is based on Lagrangian relaxation, whose application to decomposition is well known in the deterministic framework. However, its application to such closed-loop problems is not straightforward, and an additional statistical approximation concerning the dual process is needed. We give a convergence proof that derives directly from classical results on duality in optimization, and we highlight the error made by our approximation. Numerical results on a large-scale SOC problem are also provided. This idea extends the original DADP algorithm presented by Barty, Carpentier and Girardeau (2010).
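
    The basic price-decomposition mechanism underlying this approach can be sketched on a small deterministic toy problem; the stochastic, closed-loop ingredients of DADP (in particular the statistical approximation of the dual process) are deliberately omitted, and the data below are made up.

```python
# Price decomposition by Lagrangian relaxation on a toy resource-sharing problem:
#   min  sum_i a_i/2 * (x_i - c_i)^2   s.t.   sum_i x_i = D.
# The coupling constraint is dualised; each subproblem is solved independently
# given the price, and the price follows a subgradient step.
import numpy as np

a = np.array([1.0, 2.0, 4.0])   # local curvatures (assumed toy data)
c = np.array([3.0, 1.0, 2.0])   # local preferred set-points
D = 4.0                          # shared resource to be met exactly

lam, step = 0.0, 0.5
for _ in range(200):
    # each subproblem given the price lam:
    #   min_x a_i/2 * (x - c_i)^2 + lam * x   =>   x_i = c_i - lam / a_i
    x = c - lam / a
    # price update: raise the price when the coupling constraint is exceeded
    lam += step * (x.sum() - D)

print("allocation:", np.round(x, 3), " total:", round(float(x.sum()), 3))  # total ~ D
```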

    Scalable Approach to Uncertainty Quantification and Robust Design of Interconnected Dynamical Systems

    Development of robust dynamical systems and networks, such as autonomous aircraft systems capable of accomplishing complex missions, faces challenges due to dynamically evolving uncertainties coming from model uncertainties, the necessity to operate in a hostile, cluttered urban environment, and the distributed and dynamic nature of the communication and computation resources. Model-based robust design is difficult because of the complexity of the hybrid dynamic models, which include continuous vehicle dynamics and discrete models of computation and communication, and because of the size of the problem. We will overview recent advances in methodology and tools to model, analyze, and design robust autonomous aerospace systems operating in uncertain environments, with an emphasis on efficient uncertainty quantification and robust design, using case studies of missions that include model-based target tracking and search, and trajectory planning in an uncertain urban environment. To show that the methodology is generally applicable to uncertain dynamical systems, we will also show examples of applying the new methods to efficient uncertainty quantification of energy usage in buildings and to stability assessment of interconnected power networks.
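
    As a generic, much-simplified illustration of propagating parametric uncertainty through a dynamical system (not the structured methods surveyed in the paper), a plain Monte Carlo study of a damped oscillator with an uncertain damping ratio might look like this; the model, parameter range, and sample size are assumptions.

```python
# Monte Carlo uncertainty quantification sketch: sample an uncertain damping
# coefficient, simulate a small linear system, and report output statistics.
import numpy as np

rng = np.random.default_rng(1)

def simulate(zeta, T=10.0, dt=0.01):
    """Damped oscillator x'' + 2*zeta*x' + x = 0, forward Euler, x(0)=1, x'(0)=0."""
    x, v = 1.0, 0.0
    for _ in range(int(T / dt)):
        x, v = x + dt * v, v + dt * (-2 * zeta * v - x)
    return x

# uncertain damping ratio, uniform on [0.05, 0.3] (assumed range)
samples = rng.uniform(0.05, 0.3, size=2000)
outputs = np.array([simulate(z) for z in samples])

print("mean x(T):", outputs.mean().round(4), " std:", outputs.std().round(4))
```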

    Solving optimal control problems governed by random Navier-Stokes equations using low-rank methods

    Many problems in computational science and engineering are simultaneously characterized by the following challenging issues: uncertainty, nonlinearity, nonstationarity and high dimensionality. Existing numerical techniques for such models would typically require considerable computational and storage resources. This is the case, for instance, for an optimization problem governed by time-dependent Navier-Stokes equations with uncertain inputs. In particular, the stochastic Galerkin finite element method often leads to a prohibitively high dimensional saddle-point system with tensor product structure. In this paper, we approximate the solution by the low-rank Tensor Train decomposition, and present a numerically efficient algorithm to solve the optimality equations directly in the low-rank representation. We show that the solution of the vorticity minimization problem with a distributed control admits a representation with ranks that depend modestly on model and discretization parameters even for high Reynolds numbers. For lower Reynolds numbers this is also the case for a boundary control. This opens the way for a reduced-order modeling of the stochastic optimal flow control with a moderate cost at all stages. Comment: 29 pages
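
    The basic Tensor Train (TT) compression step used as a building block here can be sketched with a plain sequential-SVD (TT-SVD) routine on a small synthetic tensor; this is a generic low-rank illustration under assumed data, not the paper's solver for the stochastic Galerkin saddle-point system.

```python
# TT-SVD sketch: decompose a small 4-way tensor into Tensor Train cores with a
# fixed maximum rank, then check the reconstruction error.
import numpy as np

def tt_svd(tensor, max_rank):
    """Sequential-SVD Tensor Train decomposition with rank truncation."""
    dims = tensor.shape
    cores, r_prev = [], 1
    mat = np.asarray(tensor)
    for k in range(len(dims) - 1):
        mat = mat.reshape(r_prev * dims[k], -1)
        U, S, Vt = np.linalg.svd(mat, full_matrices=False)
        r = min(max_rank, len(S))
        cores.append(U[:, :r].reshape(r_prev, dims[k], r))
        mat = S[:r, None] * Vt[:r, :]        # carry the remainder to the next core
        r_prev = r
    cores.append(mat.reshape(r_prev, dims[-1], 1))
    return cores

def tt_reconstruct(cores):
    """Contract the TT cores back into the full tensor."""
    full = cores[0]                          # shape (1, n1, r1)
    for core in cores[1:]:
        full = np.tensordot(full, core, axes=([-1], [0]))
    return full[0, ..., 0]

# a smooth (hence compressible) synthetic 4-way tensor on a tensor-product grid
grids = np.meshgrid(*[np.linspace(0.0, 1.0, 12)] * 4, indexing="ij")
T = np.sin(grids[0] + grids[1]) * np.exp(-(grids[2] + grids[3]))

cores = tt_svd(T, max_rank=4)
err = np.linalg.norm(tt_reconstruct(cores) - T) / np.linalg.norm(T)
print("relative reconstruction error:", err)   # ~machine precision for this tensor
```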

    Spectral proper orthogonal decomposition and its relationship to dynamic mode decomposition and resolvent analysis

    We consider the frequency domain form of proper orthogonal decomposition (POD) called spectral proper orthogonal decomposition (SPOD). Spectral POD is derived from a space-time POD problem for statistically stationary flows and leads to modes that each oscillate at a single frequency. This form of POD goes back to the original work of Lumley (Stochastic tools in turbulence, Academic Press, 1970), but has been overshadowed by a space-only form of POD since the 1990s. We clarify the relationship between these two forms of POD and show that SPOD modes represent structures that evolve coherently in space and time while space-only POD modes in general do not. We also establish a relationship between SPOD and dynamic mode decomposition (DMD); we show that SPOD modes are in fact optimally averaged DMD modes obtained from an ensemble DMD problem for stationary flows. Accordingly, SPOD modes represent structures that are dynamic in the same sense as DMD modes but also optimally account for the statistical variability of turbulent flows. Finally, we establish a connection between SPOD and resolvent analysis. The key observation is that the resolvent-mode expansion coefficients must be regarded as statistical quantities to ensure convergent approximations of the flow statistics. When the expansion coefficients are uncorrelated, we show that SPOD and resolvent modes are identical. Our theoretical results and the overall utility of SPOD are demonstrated using two example problems: the complex Ginzburg-Landau equation and a turbulent jet.
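
    A bare-bones version of the Welch-style SPOD estimation described above might look as follows; the window, scaling, and synthetic data are assumptions, and the properly weighted estimator is given in the paper.

```python
# SPOD sketch: split the snapshot matrix into overlapping windowed blocks, FFT
# each block in time, and at each frequency take the SVD of the matrix of
# Fourier realizations (equivalently, eigendecompose the CSD estimate).
import numpy as np

rng = np.random.default_rng(2)

# synthetic data: one spatial structure oscillating near 0.5 Hz plus noise
nx, nt, dt = 64, 2048, 0.1
x = np.linspace(0.0, 2.0 * np.pi, nx)
t = np.arange(nt) * dt
Q = np.outer(np.sin(x), np.cos(2.0 * np.pi * 0.5 * t)) \
    + 0.1 * rng.standard_normal((nx, nt))

# Welch-style estimation: 50%-overlapping, Hann-windowed blocks, FFT in time
n_fft, n_overlap = 256, 128
window = np.hanning(n_fft)
starts = range(0, nt - n_fft + 1, n_fft - n_overlap)
blocks = np.stack([np.fft.rfft(Q[:, s:s + n_fft] * window, axis=1) for s in starts])
n_blocks = blocks.shape[0]
freqs = np.fft.rfftfreq(n_fft, d=dt)

# SPOD modes at each frequency are the left singular vectors of the matrix of
# Fourier realizations gathered across blocks; here we keep only the energies
energies = np.zeros((len(freqs), min(n_blocks, nx)))
for k in range(len(freqs)):
    Qk = blocks[:, :, k].T / np.sqrt(n_blocks)      # shape (nx, n_blocks)
    _, S, _ = np.linalg.svd(Qk, full_matrices=False)
    energies[k, :] = S**2

print("dominant SPOD frequency:", round(freqs[np.argmax(energies[:, 0])], 3))  # ~0.5
```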