2,705 research outputs found

    A survey on gain-scheduled control and filtering for parameter-varying systems

    Get PDF
    Copyright © 2014 Guoliang Wei et al. This is an open access article distributed under the Creative Commons Attribution License, which permits unrestricted use, distribution, and reproduction in any medium, provided the original work is properly cited.
    This paper presents an overview of the recent developments in gain-scheduled control and filtering problems for parameter-varying systems. First, we recall several important algorithms suitable for the gain-scheduling method, including gain-scheduled proportional-integral-derivative (PID) control, H2, H∞, and mixed H2/H∞ gain-scheduling methods, as well as fuzzy gain-scheduling techniques. Secondly, various important parameter-varying system models are reviewed, for which gain-scheduled control and filtering issues are usually dealt with. In particular, in view of randomly occurring phenomena with time-varying probability distributions, some results of our recent work based on probability-dependent gain-scheduling methods are reviewed. Furthermore, some of the latest progress in this area is discussed. Finally, conclusions are drawn and several potential future research directions are outlined.
    Funding: the National Natural Science Foundation of China under Grants 61074016, 61374039, 61304010, and 61329301; the Natural Science Foundation of Jiangsu Province of China under Grant BK20130766; the Program for Professor of Special Appointment (Eastern Scholar) at Shanghai Institutions of Higher Learning; the Program for New Century Excellent Talents in University under Grant NCET-11-1051; the Leverhulme Trust of the U.K.; the Alexander von Humboldt Foundation of Germany.
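    For orientation only (this sketch is not reproduced from the surveyed paper), the basic polytopic gain-scheduling idea can be written as a state-feedback law whose gain is interpolated with the same scheduling weights as the plant:

        % Illustrative polytopic gain-scheduled state feedback; the vertex gains K_i
        % would typically be computed offline from a set of LMIs.
        \[
        x_{k+1} = A(\theta_k)\,x_k + B(\theta_k)\,u_k, \qquad
        A(\theta_k) = \sum_{i=1}^{N} \theta_k^{(i)} A_i, \quad
        B(\theta_k) = \sum_{i=1}^{N} \theta_k^{(i)} B_i, \quad
        \theta_k^{(i)} \ge 0, \ \sum_{i=1}^{N} \theta_k^{(i)} = 1,
        \]
        \[
        u_k = K(\theta_k)\,x_k = \Big( \sum_{i=1}^{N} \theta_k^{(i)} K_i \Big) x_k .
        \]

    Roughly speaking, the probability-dependent variants reviewed in the paper schedule the gain on a time-varying occurrence probability rather than on a measured plant parameter.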

    A non-uniform multi-rate control strategy for a Markov chain-driven Networked Control System

    Full text link
    In this work, a non-uniform multi-rate control strategy is applied to a kind of Networked Control System (NCS) in which wireless path tracking control for an Unmanned Ground Vehicle (UGV) is carried out. The main aims of the proposed strategy are to cope with time-varying network-induced delays and to avoid packet disorder. A Markov chain-driven NCS scenario is considered, where different network load situations, and consequently different probability density functions for the network delay, are assumed. In order to assure mean-square stability for the considered NCS, a decay-rate-based sufficient condition is stated in terms of probabilistic Linear Matrix Inequalities (LMIs). Simulation results show better control performance, and more accurate path tracking, for the scheduled (delay-dependent) controller than for the non-scheduled one (i.e., the nominal controller when delays appear). Finally, the control strategy is validated on an experimental test-bed.
    This work was supported in part by Grant TEC2012-31506 from the Spanish Ministry of Education, Grant DPI2011-28507-C02-01 from the Spanish Ministry of Economy, and Grant PAID-00-12 from the Technical University of Valencia (Spain). In addition, this research was developed as a result of a mobility stay funded by the Erasmus Mundus Programme of the European Commission under the Transatlantic Partnership for Excellence in Engineering (TEE Project).
    Cuenca Lacruz, ÁM.; Ojha, U.; Salt Llobregat, JJ.; Chow, M. (2015). A non-uniform multi-rate control strategy for a Markov chain-driven Networked Control System. Information Sciences. 321:31-47. https://doi.org/10.1016/J.INS.2015.05.035
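    As a generic illustration of the kind of condition involved (the paper's actual LMIs account for the non-uniform multi-rate, delay-dependent structure and differ in detail), a decay-rate mean-square stability test for a discrete-time Markov jump closed loop with modes i = 1, ..., M, transition probabilities p_ij, and decay rate α in (0, 1) reads:

        % Illustrative decay-rate condition for a Markov jump linear closed loop
        % x_{k+1} = A_i x_k when the chain is in mode i; the P_i are mode-dependent
        % Lyapunov matrices and alpha is the prescribed decay rate.
        \[
        \sum_{j=1}^{M} p_{ij}\, A_i^{\top} P_j A_i - (1-\alpha) P_i \prec 0,
        \qquad P_i \succ 0, \quad i = 1, \dots, M,
        \]
        \[
        \text{which enforces } \
        \mathbb{E}\!\left[ x_{k+1}^{\top} P_{r_{k+1}} x_{k+1} \mid x_k,\, r_k = i \right]
        \le (1-\alpha)\, x_k^{\top} P_i x_k ,
        \]

    and hence mean-square stability of the jump system.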

    A Human Driver Model for Autonomous Lane Changing in Highways: Predictive Fuzzy Markov Game Driving Strategy

    Get PDF
    This study presents an integrated hybrid solution to the mandatory lane changing problem, aimed at accident avoidance by choosing a safe gap in highway driving. To manage this, a comprehensive treatment of lane change active safety design is proposed from the dynamics, control, and decision making aspects. My effort first focuses on driver behaviors, relating human reasoning about threat in driving to the modeling of a decision making strategy. It consists of two main parts: threat assessment of the traffic participants' (TVs) states, and decision making. The first part performs a complementary threat assessment of the TVs, relative to the subject vehicle (SV), by evaluating the traffic quantities. I then propose a decision strategy based on Markov decision processes (MDPs) that abstracts the traffic environment with a set of actions, transition probabilities, and corresponding utility rewards. Further, the interactions of the TVs are employed to set up a realistic traffic condition using a game theoretic approach. The question addressed here is how an autonomous vehicle should optimally interact with the surrounding vehicles during gap selection so that more effective performance of the overall traffic flow can be captured. Finding a safe gap is performed by maximizing an objective function among several candidates. A future prediction engine is thus embedded in the design; it simulates candidate maneuvers and seeks a solution that maximizes the objective function at each time step over a horizon. The combined system therefore forms a predictive fuzzy Markov game (FMG), since it performs a predictive, interactive driving strategy to avoid accidents in a given traffic environment. I show the effect of interactions in the decision making process by proposing both cooperative and non-cooperative Markov game strategies for enhanced traffic safety and mobility. This level is called the higher-level controller. I further focus on generating a driver controller to complement the automated car's safe driving; for this, a model predictive controller (MPC) is utilized. The success of the combined decision process and trajectory generation is evaluated with a set of different traffic scenarios in the dSPACE virtual driving environment. A gap-selection sketch is given after this abstract. Next, I consider designing active front steering (AFS) and direct yaw moment control (DYC) as the lower-level controller that performs the lane change task with enhanced handling performance in the presence of varying front and rear cornering stiffnesses. I propose a new control scheme that integrates active front steering and direct yaw moment control to enhance vehicle handling and stability. I obtain the nonlinear tire forces with the Pacejka model and convert the nonlinear tire stiffnesses to parameter space to design a linear parameter-varying (LPV) controller for combined AFS and DYC to perform a commanded lane change task. Further, the nonlinear vehicle lateral dynamics is modeled within the Takagi-Sugeno (T-S) framework. A state-feedback fuzzy H∞ controller is designed for both stability and reference tracking. The simulation study confirms that the performance of the proposed methods is quite satisfactory.
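    The gap-selection step described above can be illustrated with a small sketch (all class names, weights, and the constant-rate prediction model are assumptions for illustration, not the dissertation's actual design): candidate gaps are rolled forward over a short horizon and the one with the highest accumulated utility is chosen.

        # Illustrative gap selection by predicted utility maximization.
        # All names, weights, and the constant-rate prediction model are hypothetical;
        # they only mirror the structure described in the abstract.
        from dataclasses import dataclass, replace

        @dataclass
        class Gap:
            lead_headway: float    # time headway to the vehicle ahead of the gap [s]
            lag_headway: float     # time headway to the vehicle behind the gap [s]
            closing_rate: float    # rate at which the gap shrinks [1/s]; > 0 means shrinking

        def utility(gap: Gap, w_safety: float = 1.0, w_mobility: float = 0.3) -> float:
            # Reward the smaller of the two headways (safety margin) and
            # penalize a gap that is closing (collision/mobility risk).
            return w_safety * min(gap.lead_headway, gap.lag_headway) \
                   - w_mobility * max(gap.closing_rate, 0.0)

        def predicted_utility(gap: Gap, horizon: int = 10, dt: float = 0.2,
                              discount: float = 0.95) -> float:
            # Roll the gap forward assuming a constant closing rate and sum
            # discounted utilities, mimicking the prediction engine over a horizon.
            total, weight, g = 0.0, 1.0, gap
            for _ in range(horizon):
                g = replace(g,
                            lead_headway=g.lead_headway - g.closing_rate * dt,
                            lag_headway=g.lag_headway - g.closing_rate * dt)
                total += weight * utility(g)
                weight *= discount
            return total

        def choose_gap(candidates: list) -> Gap:
            return max(candidates, key=predicted_utility)

        if __name__ == "__main__":
            gaps = [Gap(2.5, 1.8, 0.5), Gap(1.2, 2.0, -0.3), Gap(3.0, 0.8, 1.1)]
            print(choose_gap(gaps))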

    Engineering evaluations and studies. Volume 3: Exhibit C

    Get PDF
    High-rate multiplexer asymmetry and jitter, data-dependent amplitude variations, and transition density are discussed.

    Joint University Program for Air Transportation Research, 1991-1992

    Get PDF
    This report summarizes the research conducted during the academic year 1991-1992 under the FAA/NASA sponsored Joint University Program for Air Transportation Research. The year end review was held at Ohio University, Athens, Ohio, June 18-19, 1992. The Joint University Program is a coordinated set of three grants sponsored by the Federal Aviation Administration and NASA Langley Research Center, one each with the Massachusetts Institute of Technology (NGL-22-009-640), Ohio University (NGR-36-009-017), and Princeton University (NGL-31-001-252). Completed works, status reports, and annotated bibliographies are presented for research topics, which include navigation, guidance and control theory and practice, intelligent flight control, flight dynamics, human factors, and air traffic control processes. An overview of the year's activities for each university is also presented

    Process Control and Optimization Using Model-Based Reinforcement Learning

    Get PDF
    Ph.D. dissertation, Department of Chemical and Biological Engineering, College of Engineering, Seoul National University, February 2020. Advisor: Jong Min Lee.
    Sequential decision making is a crucial technology for plant-wide process optimization. The dominant numerical method is forward-in-time direct optimization, but it is limited to open-loop solutions and has difficulty handling uncertainty. Dynamic programming overcomes these limitations in principle; however, the associated optimization must be carried out in an infinite-dimensional function space rather than a finite-dimensional vector space, and so it suffers from the curse of dimensionality. The sample-based approach for approximating dynamic programming, referred to as reinforcement learning (RL), can resolve this issue and is investigated throughout this thesis. Methods that account for the system model explicitly are of particular interest. Model-based RL is exploited to solve three representative sequential decision-making problems in process optimization: scheduling, supervisory optimization, and regulatory control. The problems are formulated as a partially observable Markov decision process (POMDP), a control-affine state space model, and a general state space model, respectively, and the associated model-based RL algorithms are point-based value iteration (PBVI), globalized dual heuristic programming (GDHP), and differential dynamic programming (DDP). The contributions for each problem are as follows. First, for the scheduling problem, a closed-loop feedback policy is obtained, a form of solution that direct optimization cannot provide and that highlights the strength of RL. Second, the regulatory control problem is tackled by a function approximation method that relaxes the functional optimization of dynamic programming to a finite-dimensional vector space optimization; deep neural networks (DNNs) are used as the approximator, and their advantages as well as a convergence analysis are presented. Finally, for the supervisory dynamic optimization problem, a novel constrained RL framework based on a primal-dual DDP method is developed. Various process examples are presented to validate the developed model-based RL algorithms and to support the thesis statement that dynamic programming can serve as a complementary method to direct optimization.
    Contents:
    1. Introduction: 1.1 Motivation and previous work; 1.2 Statement of contributions; 1.3 Outline of the thesis.
    2. Background and preliminaries: 2.1 Optimization problem formulation and the principle of optimality (2.1.1 Markov decision process; 2.1.2 State space model); 2.2 Overview of the developed RL algorithms (2.2.1 Point based value iteration; 2.2.2 Globalized dual heuristic programming; 2.2.3 Differential dynamic programming).
    3. A POMDP framework for integrated scheduling of infrastructure maintenance and inspection: 3.1 Introduction; 3.2 POMDP solution algorithm (3.2.1 General point based value iteration; 3.2.2 GapMin algorithm; 3.2.3 Receding horizon POMDP); 3.3 Problem formulation for infrastructure scheduling (3.3.1 State; 3.3.2 Maintenance and inspection actions; 3.3.3 State transition function; 3.3.4 Cost function; 3.3.5 Observation set and observation function; 3.3.6 State augmentation); 3.4 Illustrative example and simulation result (3.4.1 Structural point for the analysis of a high dimensional belief space; 3.4.2 Infinite horizon policy under the natural deterioration process; 3.4.3 Receding horizon POMDP; 3.4.4 Validation of POMDP policy via Monte Carlo simulation).
    4. A model-based deep reinforcement learning method applied to finite-horizon optimal control of nonlinear control-affine system: 4.1 Introduction; 4.2 Function approximation and learning with deep neural networks (4.2.1 GDHP with a function approximator; 4.2.2 Stable learning of DNNs; 4.2.3 Overall algorithm); 4.3 Results and discussions (4.3.1 Example 1: Semi-batch reactor; 4.3.2 Example 2: Diffusion-Convection-Reaction (DCR) process).
    5. Convergence analysis of the model-based deep reinforcement learning for optimal control of nonlinear control-affine system: 5.1 Introduction; 5.2 Convergence proof of globalized dual heuristic programming (GDHP); 5.3 Function approximation with deep neural networks (5.3.1 Function approximation and gradient descent learning; 5.3.2 Forward and backward propagations of DNNs); 5.4 Convergence analysis in the deep neural networks space (5.4.1 Lyapunov analysis of the neural network parameter errors; 5.4.2 Lyapunov analysis of the closed-loop stability; 5.4.3 Overall Lyapunov function); 5.5 Simulation results and discussions (5.5.1 System description; 5.5.2 Algorithmic settings; 5.5.3 Control result).
    6. Primal-dual differential dynamic programming for constrained dynamic optimization of continuous system: 6.1 Introduction; 6.2 Primal-dual differential dynamic programming for constrained dynamic optimization (6.2.1 Augmented Lagrangian method; 6.2.2 Primal-dual differential dynamic programming algorithm; 6.2.3 Overall algorithm); 6.3 Results and discussions.
    7. Concluding remarks: 7.1 Summary of the contributions; 7.2 Future works.
    Bibliography.
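    For reference, the backward-pass recursion that DDP-type methods (including primal-dual extensions such as the one in Chapter 6) build on can be sketched as follows; this is the standard Gauss-Newton (iLQR-style) form, not the thesis's specific formulation:

        % Local quadratic expansion of the state-action value around a nominal
        % trajectory (second-order dynamics terms omitted), followed by the
        % resulting affine control update and value-function recursion.
        \[
        Q_x = \ell_x + f_x^{\top} V_x', \qquad
        Q_u = \ell_u + f_u^{\top} V_x', \qquad
        Q_{xx} = \ell_{xx} + f_x^{\top} V_{xx}' f_x,
        \]
        \[
        Q_{uu} = \ell_{uu} + f_u^{\top} V_{xx}' f_u, \qquad
        Q_{ux} = \ell_{ux} + f_u^{\top} V_{xx}' f_x,
        \]
        \[
        \delta u^{*} = -Q_{uu}^{-1}\left( Q_u + Q_{ux}\, \delta x \right), \qquad
        V_x = Q_x - Q_{ux}^{\top} Q_{uu}^{-1} Q_u, \qquad
        V_{xx} = Q_{xx} - Q_{ux}^{\top} Q_{uu}^{-1} Q_{ux},
        \]

    where ℓ is the stage cost, f the dynamics, and V' the value function at the next time step.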

    Scalable Control Strategies and a Customizable Swarm Robotic Platform for Boundary Coverage and Collective Transport Tasks

    Get PDF
    Swarms of low-cost, autonomous robots can potentially be used to collectively perform tasks over large domains and long time scales. The design of decentralized, scalable swarm control strategies will enable the development of robotic systems that can execute such tasks with a high degree of parallelism and redundancy, enabling effective operation even in the presence of unknown environmental factors and individual robot failures. Social insect colonies provide a rich source of inspiration for these types of control approaches, since they can perform complex collective tasks under a range of conditions. To validate swarm robotic control strategies, experimental testbeds with large numbers of robots are required; however, existing low-cost robots are specialized and can lack the necessary sensing, navigation, control, and manipulation capabilities. To address these challenges, this thesis presents a formal approach to designing biologically-inspired swarm control strategies for spatially-confined coverage and payload transport tasks, as well as a novel low-cost, customizable robotic platform for testing swarm control approaches. Stochastic control strategies are developed that provably allocate a swarm of robots around the boundaries of multiple regions of interest or payloads to be transported. These strategies account for spatially-dependent effects on the robots' physical distribution and are largely robust to environmental variations. In addition, a control approach based on reinforcement learning is presented for collective payload towing that accommodates robots with heterogeneous maximum speeds. For both types of collective transport tasks, rigorous approaches are developed to identify and translate observed group retrieval behaviors in Novomessor cockerelli ants to swarm robotic control strategies. These strategies can replicate features of ant transport and inherit its properties of robustness to different environments and to varying team compositions. The approaches incorporate dynamical models of the swarm that are amenable to analysis and control techniques, and therefore provide theoretical guarantees on the system's performance. Implementation of these strategies on robotic swarms offers a way for biologists to test hypotheses about the individual-level mechanisms that drive collective behaviors. Finally, this thesis presents Pheeno, a new swarm robotic platform with a three-degree-of-freedom manipulator arm, and describes its use in validating a variety of swarm control strategies. Doctoral Dissertation, Mechanical Engineering, 201
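    As a toy illustration of the rate-based allocation idea (the rates, the two-boundary setup, and the code structure are assumptions, not the dissertation's strategies): robots switch stochastically between searching and being attached to one of two boundaries, and the steady-state split is governed by the ratios of attachment to detachment probabilities.

        # Toy stochastic boundary-allocation simulation. The per-step attachment and
        # detachment probabilities are hypothetical; the point is that the steady-state
        # population on each boundary scales with the attach/detach rate ratio.
        import random

        def simulate(num_robots: int = 100, steps: int = 5000,
                     attach=(0.02, 0.04), detach=(0.01, 0.01)) -> list:
            states = [-1] * num_robots  # -1 = searching, 0/1 = attached to boundary 0/1
            for _ in range(steps):
                for r in range(num_robots):
                    if states[r] == -1:
                        for i, p in enumerate(attach):
                            if random.random() < p:
                                states[r] = i
                                break
                    elif random.random() < detach[states[r]]:
                        states[r] = -1
            return [states.count(i) for i in range(len(attach))]

        if __name__ == "__main__":
            # With these rates, boundary 1 ends up holding roughly twice as many
            # robots as boundary 0 (attach/detach ratios of about 2 and 4).
            print(simulate())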