9 research outputs found

    Imitation Learning-Based Autonomous Navigation Method Using Look-ahead Points in Semi-Structured Environments

    Doctoral dissertation, Seoul National University, Graduate School of Convergence Science and Technology, Department of Convergence Science (Intelligent Convergence Systems), February 2023. Advisor: Jaeheung Park.
    This thesis proposes methods for performing autonomous navigation with a topological map and a vision sensor in a parking lot. Such methods are needed to complete fully autonomous driving and offer clear convenience to users. The common approach is to generate a path and track it with localization data. In a parking lot, however, the roads are narrow and obstacles are distributed in complex patterns, so localization data are inaccurate; the resulting offset between the actual and tracked paths increases the possibility of collisions between the vehicle and obstacles. Therefore, instead of tracking a path with localization data, a method is proposed in which the vehicle drives toward a drivable area perceived with a low-cost vision sensor. Because a parking lot has no lanes and contains a complex mix of static and dynamic obstacles, an occupancy grid map must be obtained by segmenting drivable and non-drivable areas. To navigate intersections, only the branch road selected by the global plan is labeled as drivable. The branch road is detected as a rotated bounding box by a multi-task network that simultaneously segments the drivable area. For driving, imitation learning is used: compared with model-based motion-planning algorithms, it handles varied and complex environments without parameter tuning and is more robust to inaccurate perception results. In addition, unlike existing imitation learning methods that map an image directly to control commands, a new imitation learning method is proposed that learns a look-ahead point the vehicle should reach on the occupancy grid map. By using this point, the data aggregation (DAgger) algorithm, which improves the performance of imitation learning, can be applied to autonomous navigation without a separate joystick, and the expert can reliably select the optimal action during human-in-the-loop DAgger training. DAgger variants further improve performance by sampling data from unsafe or near-collision situations; however, if the proportion of such data in the entire training dataset is small, additional DAgger iterations and human effort are required. To address this problem, a new DAgger training method using a weighted loss function, WeightDAgger, is proposed; it imitates the expert action in these situations more accurately with fewer DAgger iterations. To extend DAgger to dynamic situations, an adversarial agent policy that competes with the ego agent is proposed, together with a training framework for applying it to DAgger. The agent can be trained on situations not covered in previous DAgger steps and can be trained progressively from easy to difficult situations. Vehicle navigation experiments in real indoor and outdoor parking lots analyze the limitations of model-based motion-planning algorithms and the effectiveness of the proposed imitation learning method in dealing with them. Simulation experiments further show that WeightDAgger requires fewer DAgger iterations and less human effort than existing DAgger algorithms, and that the vehicle can safely avoid dynamic obstacles when trained with the adversarial-agent DAgger framework. Finally, the appendix introduces a vision-based autonomous parking system and a method to quickly generate parking paths, completing a vision-based autonomous valet parking system that performs both driving and parking.
    Contents (chapter level): 1 Introduction; 2 Multi-Task Perception Network for Vision-Based Navigation; 3 Data Aggregation (DAgger) Algorithm with Look-ahead Point for Autonomous Driving in Semi-Structured Environment; 4 WeightDAgger Algorithm for Reducing Imitation Learning Iterations; 5 DAgger Using Adversarial Agent Policy for Dynamic Situations; 6 Conclusions; Appendix A: Vision-based Re-plannable Autonomous Parking System, and Biased Target-tree* with RRT* Algorithm for Fast Parking Path Planning.
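
    The core of the WeightDAgger idea summarized above is a per-sample weighted imitation loss that emphasizes unsafe or near-collision samples while the policy learns a look-ahead point from an occupancy grid. Below is a minimal sketch under assumed names: the network architecture, the weight-update rule, and the hyperparameters (tau, alpha) are illustrative assumptions, not the thesis implementation.

```python
# Hypothetical sketch of a WeightDAgger-style weighted imitation loss (PyTorch).
import torch
import torch.nn as nn

class LookAheadPolicy(nn.Module):
    """Maps a single-channel occupancy-grid image to a 2-D look-ahead point (x, y)."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(1, 16, 5, stride=2), nn.ReLU(),
            nn.Conv2d(16, 32, 5, stride=2), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
        )
        self.head = nn.Linear(32, 2)

    def forward(self, grid):
        return self.head(self.encoder(grid))

def weighted_imitation_loss(pred_pt, expert_pt, sample_weight):
    """Per-sample weighted squared error between predicted and expert look-ahead points."""
    per_sample = ((pred_pt - expert_pt) ** 2).sum(dim=1)   # shape (B,)
    return (sample_weight * per_sample).mean()

def update_weights(weights, pred_pt, expert_pt, tau=0.5, alpha=2.0):
    """Increase the weight of samples whose imitation error exceeds the threshold tau."""
    with torch.no_grad():
        err = (pred_pt - expert_pt).norm(dim=1)
        weights = torch.where(err > tau, weights * alpha, weights)
    return weights
```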

    Validation of robotic navigation strategies in unstructured environments: from autonomous to reactive

    The main topic of this master's thesis is the validation of a navigation algorithm designed to operate autonomously in unstructured environments. Computer simulations and experimental tests with a mobile robot made it possible to reach the established objective: the presented approach is effective, consistent, and able to achieve safe navigation with both static and dynamic obstacle configurations. This work contains a survey of the principal navigation strategies and components. A brief recap of the history of robotics follows, emphasizing mobile robotics and locomotion. The thesis then presents the development of an algorithm for autonomous navigation of mobile robots through an unknown environment. The algorithm seeks to compute trajectories that lead to a target position in the unknown environment without falling into recurrent loops. The code has been entirely written and tested in MATLAB, using randomly generated obstacles of different sizes. The developed algorithm is used as a benchmark to analyze different predictive strategies for mobile-robot navigation in environments that are not known a priori and are densely populated with obstacles. Then an innovative navigation algorithm, called NAPVIG, is described and analyzed. It has been built using ROS and tested in the Gazebo real-time simulator. To achieve high performance, optimal parameters were found by tuning and simulating the algorithm in different environmental configurations. Finally, an experimental campaign in the SPARCS laboratory of the University of Padua enabled the validation of the chosen parameters.
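
    As a rough illustration of the loop-avoidance idea mentioned above (not the thesis's MATLAB algorithm, which is not reproduced here), the sketch below keeps a memory of visited grid cells and greedily steps toward the goal while skipping occupied or already-visited cells; the grid representation and all names are assumptions.

```python
# Hypothetical greedy grid navigation with visited-cell memory to avoid recurrent loops.
from math import hypot

def navigate(start, goal, occupied, max_steps=10_000):
    """Greedy 8-connected walk toward goal that never revisits a cell."""
    visited = {start}
    path, current = [start], start
    moves = [(dx, dy) for dx in (-1, 0, 1) for dy in (-1, 0, 1) if (dx, dy) != (0, 0)]
    for _ in range(max_steps):
        if current == goal:
            return path
        candidates = [
            (current[0] + dx, current[1] + dy) for dx, dy in moves
            if (current[0] + dx, current[1] + dy) not in occupied
            and (current[0] + dx, current[1] + dy) not in visited
        ]
        if not candidates:          # dead end: no loop, but no progress either
            return None
        current = min(candidates, key=lambda c: hypot(goal[0] - c[0], goal[1] - c[1]))
        visited.add(current)
        path.append(current)
    return None
```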

    Vision-based methods for state estimation and control of robotic systems with application to mobile and surgical robots

    For autonomous systems that need to perceive the surrounding environment to accomplish a given task, vision is a highly informative exteroceptive sensory source. Among the available sensors, the richness of visual data makes it possible to provide a complete description of the environment, collecting geometrical and semantic information (e.g., object pose, distances, shapes, colors, lights). The large amount of collected data allows methods that exploit all of it (dense approaches) or a reduced set obtained from feature-extraction procedures (sparse approaches). This manuscript presents dense and sparse vision-based methods for control and sensing of robotic systems. First, a safe navigation scheme is presented for mobile robots moving in unknown environments populated by obstacles. For this task, dense visual information is used to perceive the environment (i.e., detect the ground plane and obstacles) and, in combination with other sensory sources, to estimate the robot motion with a linear observer. Sparse visual data, on the other hand, are extracted as geometric primitives in order to implement a visual servoing control scheme that realizes the desired navigation behaviours. This controller relies on visually estimated information and is designed to guarantee safety during navigation. In addition, redundant structures are exploited to rearrange the internal configuration of the robot and reduce its encumbrance when the workspace is highly cluttered. Vision-based estimation methods are relevant in other contexts as well. In surgical robotics, reliable data about unmeasurable quantities are both important and critical. This manuscript presents a Kalman-based observer to estimate the 3D pose of a suturing needle held by a surgical manipulator for robot-assisted suturing. The method exploits images acquired by the endoscope of the robot platform to extract the relevant geometrical information and obtain projected measurements of the tool pose. It has also been validated with a novel simulator designed for the da Vinci robotic platform, built to ease interfacing and use in ideal conditions for testing and validation. The Kalman-based observers mentioned above are classical passive estimators whose system inputs are theoretically arbitrary; they offer no way to actively adapt the input trajectories so as to optimize specific requirements on estimation performance. For this purpose, the active estimation paradigm is introduced and some related strategies are presented. More specifically, a novel active sensing algorithm employing dense visual information is described for a typical Structure-from-Motion (SfM) problem. The algorithm generates an optimal estimate of a scene observed by a moving camera while minimizing the maximum uncertainty of the estimation. This approach can be applied to any robotic platform and has been validated with a manipulator arm equipped with a monocular camera.
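
    The Kalman-based pose observer described above is not specified in detail in this abstract, so the following is only a generic linear Kalman predict/update step of the kind such observers build on; the matrix names, shapes, and the assumption of linear dynamics and measurements are illustrative, not the manuscript's formulation.

```python
# Hypothetical linear Kalman filter predict/update step (NumPy).
import numpy as np

def kalman_step(x, P, z, F, H, Q, R):
    """One predict/update cycle: state x, covariance P, measurement z."""
    # Predict
    x_pred = F @ x
    P_pred = F @ P @ F.T + Q
    # Update
    y = z - H @ x_pred                      # innovation
    S = H @ P_pred @ H.T + R                # innovation covariance
    K = P_pred @ H.T @ np.linalg.inv(S)     # Kalman gain
    x_new = x_pred + K @ y
    P_new = (np.eye(len(x)) - K @ H) @ P_pred
    return x_new, P_new
```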

    Autonomous driving support systems for a 4-wheel skid-steer robotic platform: perception, motion, and simulation

    Mobile robotics competitions play an important role in bringing science and engineering to the general public. They also provide a space for testing and comparing different strategies and approaches to the many challenges of mobile robotics. One of the formats that has attracted the most interest from the promoters of such initiatives and from the general public is the autonomous driving competition. Autonomous Driving Competitions (CCAs) typically try to replicate an environment similar to a traditional road network, in which autonomous systems must respond to a wide variety of challenges, from lane detection and interaction with the distinct elements of a typical road structure to trajectory planning and localization. The aim of this master's thesis is to document the process of designing and equipping a 4-wheel skid-steer mobile robotic platform to carry out autonomous driving tasks in a structured environment, on a track that replicates a roadway with basic signage and some obstacles. In parallel, the dissertation makes a qualitative comparison between the simulation process and the transfer of the developed algorithm to the physical robotic platform, analysing the differences in performance and behavior.
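
    For context on the 4-wheel skid-steer platform discussed above, a common approximation treats it as a differential drive with left/right wheel-group speeds; the sketch below uses that standard kinematic model with assumed parameter names, not the thesis's own motion controller.

```python
# Hypothetical skid-steer (differential-drive approximation) forward kinematics.
from math import cos, sin

def skid_steer_step(x, y, theta, v_left, v_right, track_width, dt):
    """Integrate the robot pose one time step from left/right wheel-group speeds (m/s)."""
    v = (v_right + v_left) / 2.0              # linear velocity
    omega = (v_right - v_left) / track_width  # yaw rate
    x += v * cos(theta) * dt
    y += v * sin(theta) * dt
    theta += omega * dt
    return x, y, theta
```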

    Vision based navigation in a dynamic environment

    This thesis addresses the autonomous long-range navigation of wheeled mobile robots in dynamic environments. It was carried out within the FUI Air-Cobot project, led by Akka Technologies in collaboration with several companies (Airbus, 2MORROW, Sterela) and two research laboratories, LAAS and Mines Albi. The project aims at designing a collaborative robot (cobot) able to perform the pre-flight inspection of an aircraft, covering non-destructive testing, the navigation strategy, and the development and instrumentation of the robotic system; this thesis addresses the navigation problem. The considered environment is highly structured (airport runways and hangars), is subject to strict traffic rules (restricted zones, etc.), and may be cluttered with both static and dynamic unknown obstacles (luggage or refueling trucks, pedestrians, etc.) that must be avoided to guarantee the safety of people and equipment. The navigation framework relies on previous works and switches between different control laws (go-to-goal, visual servoing, obstacle avoidance) depending on the context. The contribution is twofold. First, a visual servoing controller has been designed that makes the robot move over long distances (around the aircraft or inside a hangar) thanks to a topological map and the choice of suitable targets; multi-camera visual servoing control laws have also been built to exploit the image data provided by the different cameras embedded on the Air-Cobot system. The second contribution concerns safety and obstacle avoidance. A control law based on equiangular spirals has been designed to guarantee non-collision; it is purely sensor-based, relying only on the data provided by the embedded lasers, and can avoid any obstacle, whether static or moving, thus providing a general solution to the collision problem. Experimental results, obtained both at LAAS and at the Airbus site in Blagnac, show the efficiency of the developed strategy.
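
    The obstacle-avoidance law above is built on equiangular (logarithmic) spirals, whose radius grows exponentially with angle, r(theta) = r0 * exp(theta * cot(alpha)). The sketch below only generates waypoints on such a spiral around an obstacle center; it is an illustrative assumption, not the thesis's sensor-based control law.

```python
# Hypothetical generator of equiangular (logarithmic) spiral waypoints around an obstacle,
# r(theta) = r0 * exp((theta - theta_start) * cot(alpha)).
from math import cos, sin, tan, exp, radians

def equiangular_spiral(center, r0, alpha_deg, theta_start, theta_end, n=50):
    """Return n (x, y) waypoints on a logarithmic spiral around `center`."""
    cx, cy = center
    k = 1.0 / tan(radians(alpha_deg))       # cot(alpha): spiral pitch
    pts = []
    for i in range(n):
        theta = theta_start + (theta_end - theta_start) * i / (n - 1)
        r = r0 * exp(k * (theta - theta_start))
        pts.append((cx + r * cos(theta), cy + r * sin(theta)))
    return pts
```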

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty

    Advanced Mobile Robotics: Volume 3

    Mobile robotics is a challenging field with great potential. It spans disciplines including electrical engineering, mechanical engineering, computer science, cognitive science, and social science, and it is essential to the design of automated robots in combination with artificial intelligence, vision, and sensor technologies. Mobile robots are widely used for surveillance, guidance, transportation and entertainment tasks, as well as medical applications. This Special Issue concentrates on recent developments concerning mobile robots and the surrounding research, with the aim of advancing work on the fundamental problems these robots face. Multidisciplinary and integrative contributions, including navigation, learning and adaptation, networked systems, biologically inspired robots, and cognitive methods, are welcome, from both a research and an application perspective.