1,738 research outputs found

    μƒλŒ€μ  μ•ˆμ „λΉ„ν–‰μ˜μ—­κ³Ό μƒλŒ€μ  λ²ˆμŠ€νƒ€μΈ 닀항식을 μ΄μš©ν•œ λ‹€μˆ˜ μΏΌλ“œλ‘œν„°μ˜ 경둜 κ³„νš

    Thesis (Master's) -- Seoul National University Graduate School: Department of Mechanical and Aerospace Engineering, College of Engineering, 2020. 8. Advisor: H. Jin Kim.
    Multi-agent systems consisting of unmanned aerial vehicles (UAVs) are receiving attention from many industrial domains due to their mobility and applicability. To operate these multi-agent systems safely, a path planning algorithm that can generate safe, dynamically feasible trajectories is required. However, existing multi-agent trajectory planning methods may fail to generate multi-agent trajectories in obstacle-dense environments due to deadlock or optimization failure caused by infeasible collision constraints. In this paper, we present a new efficient algorithm which guarantees a solution for a class of multi-agent trajectory planning problems in obstacle-dense environments. Our algorithm combines the advantages of both grid-based and optimization-based approaches, and generates safe, dynamically feasible trajectories without suffering from an erroneous optimization setup such as imposing infeasible collision constraints. We adopt a sequential optimization method with dummy agents to improve the scalability of the algorithm, and utilize the convex hull property of Bernstein polynomials to replace non-convex collision avoidance constraints with convex ones. We validate the proposed algorithm through comparison with our previous work and an SCP-based method. The proposed method reduces the objective cost by more than 50% compared to our previous work, and reduces the computation time by more than 75% compared to the SCP-based method. Furthermore, the proposed method can compute trajectories for 64 agents in 6.36 seconds on average on an Intel Core i7-7700 @ 3.60GHz CPU with 16 GB RAM.
    Contents:
    1 Introduction
        1.1 Literature review
        1.2 Thesis contribution
        1.3 Thesis outline
    2 Bernstein polynomial
        2.1 Definition
        2.2 Properties
            2.2.1 Convex hull property
            2.2.2 Endpoint interpolation property
            2.2.3 Arithmetic operations and derivatives
    3 Multi-agent trajectory optimization
        3.1 Problem formulation
            3.1.1 Assumption
            3.1.2 Trajectory representation
            3.1.3 Objective function
            3.1.4 Convex constraints
            3.1.5 Non-convex collision avoidance constraints
        3.2 Collision constraints construction
            3.2.1 Initial trajectory planning
            3.2.2 Safe flight corridor
            3.2.3 Relative safe flight corridor
        3.3 Trajectory optimization
    4 Sequential optimization with dummy agents
    5 Experimental results
        5.1 Comparison with the previous work
            5.1.1 Success rate
            5.1.2 Solution quality
            5.1.3 Scalability analysis
        5.2 Comparison with SCP-based method
        5.3 Flight test
    6 Conclusion

    Reinforcement Learning and Planning for Preference Balancing Tasks

    Robots are often highly non-linear dynamical systems with many degrees of freedom, making solving motion problems computationally challenging. One solution has been reinforcement learning (RL), which learns through experimentation to automatically perform the near-optimal motions that complete a task. However, high-dimensional problems and task formulation often prove challenging for RL. We address these problems with PrEference Appraisal Reinforcement Learning (PEARL), which solves Preference Balancing Tasks (PBTs). PBTs define a problem as a set of preferences that the system must balance to achieve a goal. The method is appropriate for acceleration-controlled systems with continuous state-space and either discrete or continuous action spaces with unknown system dynamics. We show that PEARL learns a sub-optimal policy on a subset of states and actions, and transfers the policy to the expanded domain to produce a more refined plan on a class of robotic problems. We establish convergence to task goal conditions, and even when preconditions are not verifiable, show that this is a valuable method to use before other more expensive approaches. Evaluation is done on several robotic problems, such as Aerial Cargo Delivery, Multi-Agent Pursuit, Rendezvous, and Inverted Flying Pendulum, both in simulation and experimentally. Additionally, PEARL is leveraged outside of robotics as an array sorting agent. The results demonstrate high accuracy and fast learning times on a large set of practical applications.
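The idea of a Preference Balancing Task can be pictured as an action that blends several competing preference signals with learned weights. A toy sketch of that idea only (a generic illustration under assumed features and weights, not PEARL's actual feature set or training procedure):

```python
import numpy as np

def pbt_action(state, goal, weights):
    """Blend competing preference signals into one acceleration command
    for an acceleration-controlled planar agent (toy illustration)."""
    pos, vel = state[:2], state[2:]
    prefer_goal = goal - pos   # preference 1: move toward the goal
    prefer_calm = -vel         # preference 2: damp velocity for smooth motion
    features = np.stack([prefer_goal, prefer_calm])
    return weights @ features  # learned weights balance the preferences

state = np.array([0.0, 0.0, 1.0, 0.0])  # x, y, vx, vy
accel = pbt_action(state, np.array([5.0, 0.0]), np.array([0.4, 0.6]))
# accel mixes "reach goal" against "keep motion steady"
```

In PEARL the weights would come from learning on a small sub-domain and then be transferred; here they are fixed by hand purely to show the balancing structure.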

    Optimal Guidance and Control with Nonlinear Dynamics Using Sequential Convex Programming

    This paper presents a novel method for expanding the use of sequential convex programming (SCP) to the domain of optimal guidance and control problems with nonlinear dynamics constraints. SCP is a useful tool in obtaining real-time solutions to direct optimal control, but it is unable to adequately model nonlinear dynamics due to the linearization and discretization required. As nonlinear program solvers are not yet functioning in real-time, a tool is needed to bridge the gap between satisfying the nonlinear dynamics and completing execution fast enough to be useful. Two methods are proposed, sequential convex programming with nonlinear dynamics correction (SCPn) and modified SCPn (M-SCPn), which mixes SCP and SCPn to reduce runtime and improve algorithmic robustness. Both methods are proven to generate optimal state and control trajectories that satisfy the nonlinear dynamics. Simulations are presented to validate the efficacy of the methods as compared to SCP.
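The linearization step that SCP hinges on (and that the proposed correction steps then reconcile with the true nonlinear dynamics) can be sketched as building the Jacobians of the dynamics about a reference point; each SCP iteration then solves a convex subproblem subject to these linearized dynamics. A minimal numerical sketch (function names and the pendulum dynamics are illustrative assumptions, not the paper's formulation):

```python
import numpy as np

def linearize(f, x_ref, u_ref, eps=1e-6):
    """Numerically linearize continuous dynamics x_dot = f(x, u) about a
    reference, returning A, B, c with f(x, u) ~= A(x - x_ref) + B(u - u_ref) + c."""
    x_ref, u_ref = np.asarray(x_ref, float), np.asarray(u_ref, float)
    c = f(x_ref, u_ref)
    # Forward-difference Jacobians, one column per state/control component.
    A = np.column_stack([(f(x_ref + eps * e, u_ref) - c) / eps
                         for e in np.eye(len(x_ref))])
    B = np.column_stack([(f(x_ref, u_ref + eps * e) - c) / eps
                         for e in np.eye(len(u_ref))])
    return A, B, c

# Example: torque-controlled pendulum, state [theta, omega], control [torque].
f = lambda x, u: np.array([x[1], -9.81 * np.sin(x[0]) + u[0]])
A, B, c = linearize(f, [0.1, 0.0], [0.0])
```

The gap the paper targets is exactly the mismatch between trajectories feasible for (A, B, c) and trajectories feasible for the original nonlinear f.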

    UAVs for Enhanced Communication and Computation


    Proximal operators for multi-agent path planning

    We address the problem of planning collision-free paths for multiple agents using optimization methods known as proximal algorithms. Recently this approach was explored in Bento et al. 2013, which demonstrated its ease of parallelization and decentralization, the speed with which the algorithms generate good quality solutions, and its ability to incorporate different proximal operators, each ensuring that paths satisfy a desired property. Unfortunately, the operators derived only apply to paths in 2D and require that any intermediate waypoints we might want agents to follow be preassigned to specific agents, limiting their range of applicability. In this paper we resolve these limitations. We introduce new operators to deal with agents moving in arbitrary dimensions that are faster to compute than their 2D predecessors and we introduce landmarks, space-time positions that are automatically assigned to the set of agents under different optimality criteria. Finally, we report the performance of the new operators in several numerical experiments.
    Comment: See movie at http://youtu.be/gRnsjd_ocx
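The separation-enforcing operators the abstract generalizes to arbitrary dimensions can be illustrated with a minimal projection-style step: given two agent positions closer than a required clearance, move each the minimum amount along the line between them. This is a simplified stand-in under assumed names, not the paper's actual operator:

```python
import numpy as np

def prox_min_separation(p1, p2, r):
    """Minimal-displacement correction pushing two agent positions to be at
    least distance r apart; works in any dimension (toy projection step)."""
    p1, p2 = np.asarray(p1, float), np.asarray(p2, float)
    d = p2 - p1
    dist = np.linalg.norm(d)
    if dist >= r:
        return p1, p2  # already separated, operator is the identity
    # Pick an arbitrary axis if the agents exactly coincide.
    direction = d / dist if dist > 0 else np.eye(len(p1))[0]
    shift = 0.5 * (r - dist) * direction  # split the deficit symmetrically
    return p1 - shift, p2 + shift

q1, q2 = prox_min_separation([0.0, 0.0, 0.0], [1.0, 0.0, 0.0], 2.0)
```

A full proximal algorithm would apply such operators to entire discretized paths and iterate them to consensus; this sketch only shows the per-pair separation step acting on single positions.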