
    A dynamic inverse for nonlinear maps

    We consider the problem of estimating the time-varying root of a time-dependent nonlinear map. We introduce a "dynamic inverse" of a map, another generally time-dependent map that one composes with the original map to form a nonlinear vector field. The flow of this vector field decays exponentially to the root. We then show how a dynamic inverse may be determined dynamically while being used simultaneously to find a root. We construct a continuous-time analog computational paradigm around the dynamic inverse.
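The flow described above can be sketched numerically. This is a minimal illustration with an invented toy map whose root drifts over time; here the "dynamic inverse" B is simply the exact inverse Jacobian, whereas the paper's point is estimating B dynamically rather than computing it.

```python
import math

# Toy time-dependent map: its root drifts as sqrt(2 + sin t).
def f(x, t):
    return x**2 - (2.0 + math.sin(t))

def df_dx(x, t):
    return 2.0 * x

# Newton-like flow x' = -k * B(t) * f(x, t), integrated by forward Euler.
x, t, dt, k = 1.0, 0.0, 1e-3, 50.0   # k: flow gain (sets the decay rate)
for _ in range(20000):
    B = 1.0 / df_dx(x, t)            # exact "dynamic inverse" in this sketch
    x += dt * (-k * B * f(x, t))     # one Euler step of the flow
    t += dt
# x now tracks the moving root sqrt(2 + sin t) up to a small lag
```

A larger gain k tightens the tracking error at the cost of a stiffer integration, mirroring the exponential-decay property stated in the abstract.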

    Gaussian process based model predictive control : a thesis submitted in partial fulfillment of the requirements for the degree of Doctor of Philosophy in Engineering, School of Engineering and Advanced Technology, Massey University, New Zealand

    The performance of Model Predictive Control (MPC) techniques is highly dependent on a model that accurately represents the dynamical system. Data-driven modelling techniques are usually used as an alternative approach to obtain such a model when first-principles techniques are not applicable. However, it is not easy to assess the quality of learnt models when using traditional data-driven models such as Artificial Neural Networks (ANN) and Fuzzy Models (FM). This thesis addresses this issue by using probabilistic Gaussian Process (GP) models. One key issue in using GP models is accurately learning the hyperparameters. Conjugate Gradient (CG) algorithms are conventionally used to maximize the Log-Likelihood (LL) function to obtain these hyperparameters. In this thesis, we propose a hybrid Particle Swarm Optimization (PSO) algorithm to cope with the problem of learning hyperparameters. We also explore using the Mean Squared Error (MSE) of outputs as the fitness function in the optimization problem, which provides a quality indication of intermediate solutions. GP-based MPC approaches for unknown systems have been studied in the past decade; however, most of them are not generally formulated. In addition, the optimization solutions in existing GP-based MPC algorithms are either not clearly given or are computationally demanding. In this thesis, we first study the use of GP-based MPC approaches in unconstrained problems. Compared to existing works, the proposed approach is generally formulated and the corresponding optimization problem is efficiently solved by using the analytical gradients of GP models w.r.t. outputs and control inputs. The GPMPC1 and GPMPC2 algorithms are subsequently proposed to handle general constrained problems.
In addition, through the proposed basic and extended GP-based local dynamical models, the constrained MPC problem is effectively solved in the GPMPC1 and GPMPC2 algorithms. The proposed algorithms are verified on the trajectory-tracking problem of a quadrotor. The issue of closed-loop stability in the proposed GPMPC algorithms is addressed by means of the terminal cost and constraint technique. A stability-guaranteed GPMPC algorithm is subsequently proposed for the constrained problem. By using the extended GP-based local dynamical model, the corresponding MPC problem is effectively solved.
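The hyperparameter-learning step described above amounts to minimizing the GP's negative log marginal likelihood. The sketch below uses the standard expression for a squared-exponential kernel; the crude random search over log-hyperparameters is an illustrative stand-in for the thesis' PSO-based optimizer, and the data are synthetic.

```python
import numpy as np

# Standard GP negative log marginal likelihood, squared-exponential kernel.
def neg_log_likelihood(theta, X, y):
    ell, sf2, sn2 = np.exp(theta)        # lengthscale, signal var, noise var
    d2 = (X[:, None] - X[None, :]) ** 2
    K = sf2 * np.exp(-0.5 * d2 / ell**2) + sn2 * np.eye(len(X))
    L = np.linalg.cholesky(K)
    alpha = np.linalg.solve(L.T, np.linalg.solve(L, y))
    return (0.5 * y @ alpha + np.log(np.diag(L)).sum()
            + 0.5 * len(X) * np.log(2 * np.pi))

rng = np.random.default_rng(0)
X = np.linspace(0.0, 5.0, 30)
y = np.sin(X) + 0.1 * rng.standard_normal(30)
# Crude global search over log-hyperparameters; a PSO would iteratively
# refine a swarm of such candidates instead of sampling once.
best_nll, best_theta = min((neg_log_likelihood(th, X, y), tuple(th))
                           for th in rng.uniform(-2.0, 2.0, size=(200, 3)))
```

Population-based searches like PSO are attractive here because the LL surface is multimodal in the hyperparameters, which is where gradient-based CG runs can stall in poor local optima.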

    On Robustness Analysis of a Dynamic Average Consensus Algorithm to Communication Delay

    This paper studies the robustness of a dynamic average consensus algorithm to communication delay over strongly connected and weight-balanced (SCWB) digraphs. Under delay-free communication, the algorithm of interest achieves practical asymptotic tracking of the dynamic average of the agents' time-varying reference signals. For this algorithm, in both its continuous-time and discrete-time implementations, we characterize the admissible communication delay range and study the effect of the delay on the rate of convergence and the tracking error bound. Our study also establishes a relationship between the admissible delay bound and the maximum degree of the SCWB digraph. We also show that, for delays within the admissible bound, the algorithms achieve perfect tracking of static signals. Moreover, when the interaction topology is a connected undirected graph, we show that the discrete-time implementation is guaranteed to tolerate at least one step of delay. Simulations demonstrate our results.
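A discrete-time dynamic average consensus iteration of this family can be sketched as below. This is a standard first-order scheme on an undirected path graph (not necessarily the paper's exact algorithm, and with no delay), illustrating the perfect-tracking property for static signals mentioned in the abstract.

```python
import numpy as np

# Undirected 4-agent path graph and its Laplacian.
A = np.array([[0, 1, 0, 0],
              [1, 0, 1, 0],
              [0, 1, 0, 1],
              [0, 0, 1, 0]], dtype=float)
Lap = np.diag(A.sum(axis=1)) - A
eps = 0.3                                  # step size, small enough for stability

def reference(k):
    return np.array([1.0, 3.0, 5.0, 7.0])  # static reference signals here

x = reference(0).copy()                    # each agent starts at its own signal
for k in range(200):
    # consensus step plus feedforward of the signal's increment
    x = x - eps * (Lap @ x) + (reference(k + 1) - reference(k))
# for static signals every agent converges to the exact average, 4.0
```

Because the feedforward term preserves the sum of the states, the consensus value is exactly the average; a communication delay would enter by replacing `Lap @ x` with delayed neighbor states, shrinking the admissible range of `eps`.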

    Distributive Network Utility Maximization (NUM) over Time-Varying Fading Channels

    Distributed network utility maximization (NUM) has received increasing interest over the past few years. Distributed solutions (e.g., the primal-dual gradient method) have been intensively investigated under fading channels. As such distributed solutions involve iterative updates and explicit message passing, it is unrealistic to assume that the wireless channel remains unchanged during the iterations. Unfortunately, the behavior of these distributed solutions under time-varying channels is in general unknown. In this paper, we investigate the convergence behavior and tracking errors of the iterative primal-dual scaled gradient algorithm (PDSGA) with dynamic scaling matrices (DSC) for solving distributive NUM problems under time-varying fading channels. We also study a specific application example, namely the multi-commodity flow control and multi-carrier power allocation problem in multi-hop ad hoc networks. Our analysis shows that the PDSGA converges to a limit region, rather than a single point, under finite-state Markov chain (FSMC) fading channels. We also show that the order of growth of the tracking errors is given by O(T/N), where T and N are the update interval and the average sojourn time of the FSMC, respectively. Based on this analysis, we derive a low-complexity distributive adaptation algorithm for determining the adaptive scaling matrices, which can be implemented distributively at each transmitter. Numerical results show the superior performance of the proposed dynamic scaling matrix algorithm over several baseline schemes, such as the regular primal-dual gradient algorithm.
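The primal-dual gradient iteration underlying this line of work can be sketched on a toy NUM: maximize a sum of log utilities subject to a single shared-capacity constraint. The channel is held fixed here (the paper's question is precisely what happens when the constraint data change between iterations), and the problem instance is invented for illustration.

```python
import numpy as np

# Toy NUM: maximize sum(log x_i) subject to sum(x_i) <= c.
n, c, step = 3, 6.0, 0.05
x = np.ones(n)                 # source rates (primal variables)
lam = 1.0                      # dual price of the shared capacity
for _ in range(5000):
    # primal ascent on the Lagrangian: grad = 1/x_i - lam
    x = np.clip(x + step * (1.0 / x - lam), 1e-6, None)
    # dual descent: price rises when capacity is over-used
    lam = max(lam + step * (x.sum() - c), 0.0)
# log utilities split capacity equally: x_i -> c/n = 2.0, lam -> 1/2
```

Under a time-varying c, this iteration can only chase the moving saddle point, which is why the paper characterizes a limit region and an O(T/N) tracking error rather than exact convergence.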

    Human-Machine Collaborative Optimization via Apprenticeship Scheduling

    Coordinating agents to complete a set of tasks with intercoupled temporal and resource constraints is computationally challenging, yet human domain experts can solve these difficult scheduling problems using paradigms learned through years of apprenticeship. A process for manually codifying this domain knowledge within a computational framework is necessary to scale beyond the "single-expert, single-trainee" apprenticeship model. However, human domain experts often have difficulty describing their decision-making processes, causing the codification of this knowledge to become laborious. We propose a new approach for capturing domain-expert heuristics through a pairwise ranking formulation. Our approach is model-free and does not require enumerating or iterating through a large state space. We empirically demonstrate that this approach accurately learns multifaceted heuristics on a synthetic data set incorporating job-shop scheduling and vehicle routing problems, as well as on two real-world data sets consisting of demonstrations of experts solving a weapon-to-target assignment problem and a hospital resource allocation problem. We also demonstrate that policies learned from human scheduling demonstrations via apprenticeship learning can substantially improve the efficiency of a branch-and-bound search for an optimal schedule. We employ this human-machine collaborative optimization technique on a variant of the weapon-to-target assignment problem. We demonstrate that this technique generates solutions substantially superior to those produced by human domain experts, at a rate up to 9.5 times faster than an optimization approach, and can be applied to optimally solve problems twice as complex as those solved by a human demonstrator.
Comment: Portions of this paper were published in the Proceedings of the International Joint Conference on Artificial Intelligence (IJCAI) in 2016 and in the Proceedings of Robotics: Science and Systems (RSS) in 2016. The paper consists of 50 pages with 11 figures and 4 tables.
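The pairwise ranking formulation described above reduces to a binary classification problem on feature differences: the scheduling action the expert chose should outscore each rejected alternative. The sketch below is a minimal synthetic version; the features, the hidden preference vector, and the logistic-loss learner are all illustrative assumptions, not the paper's exact model.

```python
import numpy as np

rng = np.random.default_rng(1)
w_true = np.array([2.0, -1.0, 0.5])          # hidden "expert" preference
F = rng.standard_normal((200, 2, 3))         # 200 decisions, 2 candidates each
chose_first = (F @ w_true)[:, 0] > (F @ w_true)[:, 1]
# difference features: chosen candidate minus rejected candidate
d = np.where(chose_first[:, None], F[:, 0] - F[:, 1], F[:, 1] - F[:, 0])

w = np.zeros(3)
for _ in range(500):                          # logistic loss on the pairs
    p = 1.0 / (1.0 + np.exp(-(d @ w)))
    w += 0.1 * d.T @ (1.0 - p) / len(d)
# w aligns with w_true's direction (only the direction is identifiable)
```

Because only score differences matter, this learner needs no model of the scheduling state space, matching the model-free claim in the abstract; the learned scorer can then prioritize branches inside a branch-and-bound search.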