289 research outputs found

    Mobile Robots Navigation

    Mobile robot navigation includes several interrelated activities: (i) perception, obtaining and interpreting sensory information; (ii) exploration, the strategy that guides the robot in selecting the next direction to go; (iii) mapping, the construction of a spatial representation using the sensory information perceived; (iv) localization, the strategy for estimating the robot's position within the spatial map; (v) path planning, the strategy for finding a path towards a goal location, optimal or not; and (vi) path execution, where motor actions are determined and adapted to environmental changes. The book addresses these activities by integrating results from the research work of several authors around the world. Research cases are documented in 32 chapters organized into seven categories, described next.
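    As one concrete illustration of the path-planning activity listed above, the following is a minimal A* planner on a toy occupancy grid; the grid, start, goal, and Manhattan heuristic are assumptions for illustration and are not taken from the book.

```python
import heapq

def astar(grid, start, goal):
    """Minimal A* on a 2D occupancy grid (0 = free, 1 = occupied)."""
    rows, cols = len(grid), len(grid[0])
    h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])  # Manhattan heuristic
    open_set = [(h(start), 0, start, None)]   # (f-cost, g-cost, node, parent)
    came_from, g_cost = {}, {start: 0}
    while open_set:
        _, g, node, parent = heapq.heappop(open_set)
        if node in came_from:                 # already expanded with a better cost
            continue
        came_from[node] = parent
        if node == goal:                      # walk parents back to reconstruct the path
            path = []
            while node is not None:
                path.append(node)
                node = came_from[node]
            return path[::-1]
        r, c = node
        for nr, nc in ((r + 1, c), (r - 1, c), (r, c + 1), (r, c - 1)):
            if 0 <= nr < rows and 0 <= nc < cols and grid[nr][nc] == 0:
                ng = g + 1
                if ng < g_cost.get((nr, nc), float("inf")):
                    g_cost[(nr, nc)] = ng
                    heapq.heappush(open_set, (ng + h((nr, nc)), ng, (nr, nc), node))
    return None                               # no path found

grid = [[0, 0, 0, 0],
        [1, 1, 0, 1],
        [0, 0, 0, 0],
        [0, 1, 1, 0]]
print(astar(grid, (0, 0), (3, 3)))
```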

    Secure indoor navigation and operation of mobile robots

    In future work environments, robots will navigate and work side by side with humans. This raises major challenges related to the safety of these robots. In this dissertation, three tasks have been realized: 1) implementing a localization and navigation system based on the StarGazer sensor and a Kalman filter; 2) realizing a human-robot interaction system that uses a Kinect sensor together with BPNN and SVM models to recognize gestures; and 3) realizing a new collision avoidance system. The system generates collision-free paths based on the interaction between the human and the robot.
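    As a rough illustration of the localization component, below is a minimal, self-contained Kalman filter that fuses noisy 2D position fixes (treating the StarGazer simply as a position sensor) with a constant-velocity motion model; the state layout and noise values are assumptions, not the dissertation's actual parameters.

```python
import numpy as np

# Illustrative constant-velocity Kalman filter for 2D indoor localization.
# Noise parameters and the motion model are assumptions, not values from the
# dissertation; the StarGazer is treated simply as a noisy (x, y) position sensor.
dt = 0.1
F = np.array([[1, 0, dt, 0],        # state: [x, y, vx, vy]
              [0, 1, 0, dt],
              [0, 0, 1, 0],
              [0, 0, 0, 1]])
H = np.array([[1, 0, 0, 0],         # sensor observes position only
              [0, 1, 0, 0]])
Q = 0.01 * np.eye(4)                # process noise (assumed)
R = 0.05 * np.eye(2)                # measurement noise (assumed)

x = np.zeros(4)                     # initial state estimate
P = np.eye(4)                       # initial covariance

def kalman_step(x, P, z):
    # Predict
    x = F @ x
    P = F @ P @ F.T + Q
    # Update with position measurement z = [x_meas, y_meas]
    y = z - H @ x                   # innovation
    S = H @ P @ H.T + R
    K = P @ H.T @ np.linalg.inv(S)  # Kalman gain
    x = x + K @ y
    P = (np.eye(4) - K @ H) @ P
    return x, P

for z in [np.array([0.10, 0.00]), np.array([0.22, 0.05]), np.array([0.29, 0.11])]:
    x, P = kalman_step(x, P, z)
print(x[:2])                        # filtered position estimate
```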

    Adaptive and learning-based formation control of swarm robots

    Autonomous aerial and wheeled mobile robots play a major role in tasks such as search and rescue, transportation, monitoring, and inspection. However, these operations face open challenges including robust autonomy and adaptive coordination based on the environment and operating conditions, particularly in swarm robots with limited communication and perception capabilities. Furthermore, the computational complexity increases exponentially with the number of robots in the swarm. This thesis examines two different aspects of the formation control problem. On the one hand, we investigate how formation could be performed by swarm robots with limited communication and perception (e.g., the Crazyflie nano quadrotor). On the other hand, we explore human-swarm interaction (HSI) and different shared-control mechanisms between humans and swarm robots (e.g., BristleBot) for artistic creation. In particular, we combine bio-inspired (i.e., flocking, foraging) techniques with learning-based control strategies (using artificial neural networks) for adaptive control of multi-robot systems. We first review how learning-based control and networked dynamical systems can be used to assign distributed and decentralized policies to individual robots such that the desired formation emerges from their collective behavior. We proceed by presenting a novel flocking control for UAV swarms using deep reinforcement learning. We formulate the flocking formation problem as a partially observable Markov decision process (POMDP) and consider a leader-follower configuration, where consensus among all UAVs is used to train a shared control policy and each UAV performs actions based on the local information it collects. In addition, to avoid collisions among UAVs and guarantee flocking and navigation, the reward function combines a global flocking-maintenance term, a mutual reward, and a collision penalty. We adapt deep deterministic policy gradient (DDPG) with centralized training and decentralized execution to obtain the flocking control policy using actor-critic networks and a global state-space matrix. In the context of swarm robotics in the arts, we investigate how the formation paradigm can serve as an interaction modality for artists to aesthetically utilize swarms. In particular, we explore particle swarm optimization (PSO) and random walks to control the communication between a team of robots with swarming behavior for musical creation.
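    To make the reward shaping described above more concrete, here is an illustrative per-UAV reward that combines a flocking-maintenance term, a mutual (cohesion) reward, and a collision penalty; the specific functional forms, weights, and thresholds are assumptions rather than the thesis's exact formulation.

```python
import numpy as np

def flocking_reward(positions, i, d_ref=1.0, d_safe=0.3,
                    w_flock=1.0, w_mutual=0.5, w_collision=5.0):
    """Illustrative per-UAV reward combining the three terms mentioned in the
    abstract: flocking maintenance, a mutual (cohesion) reward, and a collision
    penalty. Weights and distance thresholds are assumptions."""
    others = np.delete(positions, i, axis=0)
    dists = np.linalg.norm(others - positions[i], axis=1)
    # Flocking maintenance: penalize deviation from a reference inter-agent spacing.
    r_flock = -w_flock * np.mean((dists - d_ref) ** 2)
    # Mutual reward: encourage staying close to the swarm centroid.
    r_mutual = -w_mutual * np.linalg.norm(positions[i] - others.mean(axis=0))
    # Collision penalty: large negative reward if any neighbor is too close.
    r_collision = -w_collision if np.any(dists < d_safe) else 0.0
    return r_flock + r_mutual + r_collision

positions = np.array([[0.0, 0.0], [1.1, 0.0], [0.5, 0.9]])
print([round(flocking_reward(positions, i), 3) for i in range(len(positions))])
```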

    Exploiting Heterogeneity in Networks of Aerial and Ground Robotic Agents

    By taking advantage of complementary communication technologies, distinct sensing functionalities and varied motion dynamics present in a heterogeneous multi-robotic network, it is possible to accomplish a main mission objective by assigning specialized sub-tasks to specific members of a robotic team. An adequate selection of the team members and effective coordination are among the challenges in fully exploiting the unique capabilities that these types of systems can offer. Motivated by real-world applications, we focus on a multi-robotic network consisting of aerial and ground agents, which has the potential to provide critical support to humans in complex settings. For instance, aerial robotic relays are capable of transporting small ground mobile sensors to expand the communication range and the situational awareness of first responders in hazardous environments. In the first part of this dissertation, we extend work on manipulation of cable-suspended loads using aerial robots by solving the problem of lifting the cable-suspended load from the ground before proceeding to transport it. Since the suspended load-quadrotor system experiences switching conditions during this critical maneuver, we define a hybrid system and show that it is differentially flat. This property facilitates the design of a nonlinear controller which tracks a waypoint-based trajectory associated with the discrete states of the hybrid system. In addition, we address the case of unknown payload mass by combining a least-squares estimation method with the designed controller. Second, we focus on the coordination of a heterogeneous team formed by a group of ground mobile sensors and a flying communication router which is deployed to sense areas of interest in a cluttered environment. Using potential field methods, we propose a controller for the coordinated mobility of the team that guarantees inter-robot and obstacle collision avoidance as well as connectivity maintenance among the ground agents while the main goal of sensing is carried out. For the aerial communication relays, we combine antenna diversity with reinforcement learning to dynamically relocate these relays so that the received signal strength is maintained above a desired threshold. Motivated by the recent interest in combining radio frequency and optical wireless communications, we envision the implementation of an optical link between micro-scale aerial and ground robots. This type of link requires maintaining a sufficient relative transmitter-receiver position for reliable communication. In the third part of this thesis, we tackle this problem. Based on the link model, we define a connectivity cone where a minimum transmission rate is guaranteed, so the aerial robot has to track the ground vehicle to stay inside this cone. The control must be robust to noisy measurements. Thus, we use particle filters to obtain a better estimate of the receiver position and design a control algorithm for the flying robot to enhance the transmission rate. We also consider the problem of pairing a ground sensor with an aerial vehicle, both equipped with a hybrid radio-frequency/optical wireless communication system. A challenge is positioning the flying robot within optical range when the sensor location is unknown. Thus, we take advantage of the hybrid communication scheme by developing a control strategy that uses the radio signal to guide the aerial platform to the ground sensor.
Once the optical signal strength reaches a certain threshold, the robot hovers within optical range. Finally, we investigate the problem of building an alliance of agents with different skills in order to satisfy the requirements imposed by a given task. We find this alliance, also known as a coalition, by using a bipartite graph in which edges represent the relation between agent capabilities and the resources required for task execution. Using this graph, we build a coalition whose total capability resources can satisfy the task's resource requirements. We also study the heterogeneity of the formed coalition to analyze how it is affected, for instance, by the amount of capability resources present in the agents.
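    As a sketch of the estimation step mentioned above, the following toy particle filter estimates a ground receiver's position from noisy range measurements taken at a few aerial waypoints; the measurement model, noise levels, and resampling scheme are assumptions, not the thesis's link model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Illustrative particle filter estimating a ground receiver position from noisy
# range measurements taken by the aerial robot at a few waypoints. The measurement
# model and noise levels are assumptions, not the thesis's actual link model.
true_pos = np.array([2.0, -1.0])
particles = rng.uniform(-5, 5, size=(500, 2))        # initial belief over (x, y)
weights = np.ones(len(particles)) / len(particles)
sigma = 0.2                                          # assumed range-noise std

for uav_pos in [np.array([0.0, 0.0]), np.array([3.0, 0.0]), np.array([0.0, -3.0])]:
    z = np.linalg.norm(true_pos - uav_pos) + rng.normal(0, sigma)  # noisy range
    pred = np.linalg.norm(particles - uav_pos, axis=1)
    weights *= np.exp(-0.5 * ((z - pred) / sigma) ** 2)            # Gaussian likelihood
    weights /= weights.sum()
    # Multinomial resampling plus small jitter to concentrate particles on likely positions.
    idx = rng.choice(len(particles), size=len(particles), p=weights)
    particles = particles[idx] + rng.normal(0, 0.05, particles.shape)
    weights = np.ones(len(particles)) / len(particles)

print(particles.mean(axis=0))  # estimate should approach true_pos
```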

    Imitation Learning-Based Autonomous Navigation Method Using a Look-ahead Point in Semi-Structured Environments

    Doctoral dissertation, Graduate School of Convergence Science and Technology (Intelligent Convergence Systems), Seoul National University, February 2023. Advisor: Jaeheung Park. This thesis proposes methods for performing autonomous navigation with a topological map and a vision sensor in a parking lot. These methods are necessary to complete fully autonomous driving and can be conveniently used by humans. To implement them, a method of generating a path and tracking it with localization data is commonly studied. However, in such environments the localization data are inaccurate because the distance between roads is narrow and obstacles are distributed in complex ways, which increases the possibility of collisions between the vehicle and obstacles. Therefore, instead of tracking a path with localization data, a method is proposed in which the vehicle drives toward a drivable area obtained with a low-cost vision sensor. A parking lot contains various static and dynamic obstacles and no lanes, so it is necessary to obtain an occupancy grid map by segmenting drivable and non-drivable areas. To navigate intersections, only the branch road that follows the global plan is designated as the drivable area. The branch road is detected as a rotated bounding box and is obtained through a multi-task network that simultaneously recognizes the drivable area. For driving, imitation learning is used, which can handle varied and complex environments without parameter tuning and is more robust to inaccurate perception results than model-based motion-planning algorithms. In addition, unlike existing imitation learning methods that obtain control commands from an image, a new imitation learning method is proposed that learns a look-ahead point the vehicle will reach on the occupancy grid map. By using this point, the data aggregation (DAgger) algorithm, which improves the performance of imitation learning, can be applied to autonomous navigation without a separate joystick, and the expert can select the optimal action reliably even in the human-in-the-loop DAgger training process. Additionally, DAgger variants improve DAgger's performance by sampling data from unsafe or near-collision situations. However, if the proportion of such data in the entire training dataset is small, additional DAgger iterations and human effort are required. To deal with this problem, a new DAgger training method using a weighted loss function (WeightDAgger) is proposed, which can imitate the expert action in the aforementioned situations more accurately with fewer DAgger iterations. To extend DAgger to dynamic situations, an adversarial agent policy that competes with the ego agent is proposed, together with a training framework for applying this policy within DAgger. The agent can be trained on a variety of situations not covered in previous DAgger training steps, and can be trained progressively from easy to difficult situations. Through vehicle navigation experiments in real indoor and outdoor parking lots, the limitations of model-based motion-planning algorithms and the effectiveness of the proposed imitation learning method in dealing with them are analyzed. It is also shown that the proposed WeightDAgger requires fewer DAgger iterations and less human effort than existing DAgger algorithms, and that the vehicle can safely avoid dynamic obstacles with the DAgger training framework using the adversarial agent policy. Additionally, the appendix introduces a vision-based autonomous parking system and a method to quickly generate parking paths, completing a vision-based autonomous valet parking system that performs driving as well as parking.
Contents: 1 Introduction; 2 Multi-Task Perception Network for Vision-Based Navigation; 3 Data Aggregation (DAgger) Algorithm with Look-ahead Point for Autonomous Driving in Semi-Structured Environments; 4 WeightDAgger Algorithm for Reducing Imitation Learning Iterations; 5 DAgger Using Adversarial Agent Policy for Dynamic Situations; 6 Conclusions; Appendix A (Vision-based Re-plannable Autonomous Parking System; Biased Target-tree* with RRT* Algorithm for Fast Parking Path Planning).
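    The following toy sketch conveys the flavor of DAgger with a per-sample weighted loss, in the spirit of the look-ahead-point formulation and WeightDAgger summarized above; the linear policy, synthetic features, and expert function are purely illustrative assumptions (the thesis trains a neural network on occupancy grid maps).

```python
import numpy as np

rng = np.random.default_rng(0)

# Conceptual sketch of DAgger with a per-sample weighted loss. The linear policy,
# the synthetic "occupancy features", and the expert function are assumptions for
# illustration only.

def expert_action(state):
    # Hypothetical expert: look-ahead point as a fixed linear map of the state.
    return state @ np.array([0.8, -0.3])

def rollout(policy_w, n=50):
    states = rng.normal(size=(n, 2))          # stand-in for occupancy-grid features
    return states, states @ policy_w          # learner's look-ahead points

def weighted_fit(states, targets, weights):
    # Weighted least squares: emphasizes unsafe / near-collision samples.
    W = np.diag(weights)
    return np.linalg.solve(states.T @ W @ states, states.T @ W @ targets)

w = rng.normal(size=2)                        # initial policy (behavior-cloning seed)
data_s, data_a, data_w = [], [], []
for it in range(5):                           # DAgger iterations
    s, learner_a = rollout(w)
    expert_a = expert_action(s)               # expert labels the visited states
    error = np.abs(learner_a - expert_a)
    weight = 1.0 + 4.0 * (error > 0.5)        # up-weight large-error ("unsafe") samples
    data_s.append(s); data_a.append(expert_a); data_w.append(weight)
    w = weighted_fit(np.vstack(data_s), np.concatenate(data_a), np.concatenate(data_w))

print(w)  # converges toward the expert's mapping [0.8, -0.3]
```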

    Urban environment perception and navigation using robotic vision: design and implementation applied to an autonomous vehicle

    Advisors: Janito Vaqueiro Ferreira, Alessandro Corrêa Victorino. Doctoral thesis, Universidade Estadual de Campinas, Faculdade de Engenharia Mecânica. The development of autonomous vehicles capable of getting around on urban roads can provide important benefits in reducing accidents, increasing life comfort and providing cost savings. Intelligent vehicles, for example, often base their decisions on observations obtained from various sensors such as LIDAR, GPS and cameras. Camera sensors have been receiving particular attention because they are cheap, easy to deploy and provide rich data. Inner-city environments represent an interesting but also very challenging scenario in this context: the road layout may be very complex; the presence of objects such as trees, bicycles and cars may generate partial observations; and these observations are often noisy or even missing due to heavy occlusions. Thus, the perception process by nature needs to be able to deal with uncertainty in the knowledge of the world around the car. While highway navigation and autonomous driving using prior knowledge of the environment have been demonstrated successfully, understanding and navigating general inner-city scenarios with little prior knowledge remains an unsolved problem. In this thesis, this perception problem is analyzed for driving in inner-city environments, together with the capacity to perform a safe displacement based on a decision-making process for autonomous navigation. A perception system is designed that allows robotic cars to drive autonomously on roads, without the need to adapt the infrastructure, without requiring previous knowledge of the environment, and considering the presence of dynamic objects such as cars. A novel method based on machine learning is proposed to extract the semantic context from a pair of stereo images, which is merged into an evidential occupancy grid to model the uncertainties of an unknown urban environment, applying Dempster-Shafer theory. For path-planning decisions, the virtual-tentacle approach is applied to generate possible paths starting from the ego-referenced car, and based on it two new strategies are proposed: first, a strategy to select the correct path to better avoid obstacles and follow the local task in the context of hybrid navigation; and second, a closed-loop control based on visual odometry and the virtual tentacle for path-following execution. Finally, a complete automotive system integrating the perception, path-planning and control modules is implemented and experimentally validated in real situations using an experimental autonomous car; the results show that the developed approach successfully performs safe local navigation based on camera sensors. Doctorate in Mechanical Engineering, concentration in Solid Mechanics and Mechanical Design.
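    As a small worked example of the evidential fusion mentioned above, the following applies Dempster's rule of combination to a single grid cell with masses on Free, Occupied, and the ignorance set; the numerical masses are assumed stand-ins for the classifier's per-cell output.

```python
# Illustrative Dempster-Shafer fusion for a single cell of an evidential occupancy
# grid, with mass assigned to Free (F), Occupied (O), and the ignorance set
# Omega = {F, O}. The numerical masses are assumptions, standing in for what the
# semantic-context classifier would output per cell.

def combine(m1, m2):
    """Dempster's rule of combination for the frame {F, O}."""
    # Conflict: one source says Free while the other says Occupied.
    k = m1["F"] * m2["O"] + m1["O"] * m2["F"]
    if k >= 1.0:
        raise ValueError("total conflict, sources cannot be combined")
    norm = 1.0 - k
    fused = {
        "F": (m1["F"] * m2["F"] + m1["F"] * m2["Omega"] + m1["Omega"] * m2["F"]) / norm,
        "O": (m1["O"] * m2["O"] + m1["O"] * m2["Omega"] + m1["Omega"] * m2["O"]) / norm,
    }
    fused["Omega"] = 1.0 - fused["F"] - fused["O"]
    return fused

# Two consecutive (assumed) observations of the same cell:
m_t1 = {"F": 0.6, "O": 0.1, "Omega": 0.3}   # stereo/semantic evidence at time t1
m_t2 = {"F": 0.5, "O": 0.2, "Omega": 0.3}   # evidence at time t2
print(combine(m_t1, m_t2))                   # belief in Free strengthens, ignorance shrinks
```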

    Planning and control of autonomous mobile robots for intralogistics: Literature review and research agenda

    Autonomous mobile robots (AMRs) are currently being introduced in many intralogistics operations, such as manufacturing, warehousing, cross-docks, terminals, and hospitals. Their advanced hardware and control software allow autonomous operation in dynamic environments. Compared to an automated guided vehicle (AGV) system, in which a central unit takes control of scheduling, routing, and dispatching decisions for all AGVs, AMRs can communicate and negotiate independently with other resources such as machines and systems, and thus decentralize the decision-making process. Decentralized decision-making allows the system to react dynamically to changes in the system state and environment. These developments have influenced the traditional methods and decision-making processes for planning and control. This study identifies and classifies research related to the planning and control of AMRs in intralogistics. We provide an extended literature review that highlights how AMR technological advances affect planning and control decisions. We contribute to the literature by introducing an AMR planning and control framework.
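    To illustrate what decentralized decision-making can look like in practice, below is a toy auction-style task-allocation sketch in which AMRs bid on a transport task without a central dispatcher; the cost model and class names are assumptions for illustration, not the framework proposed in the paper.

```python
from dataclasses import dataclass

# Toy sketch of decentralized, auction-style task allocation, one common way AMRs
# negotiate transport jobs among themselves without a central dispatcher. The cost
# model (Manhattan travel distance plus current queue length) is an assumption for
# illustration; it is not the framework proposed in the reviewed paper.

@dataclass
class AMR:
    name: str
    position: tuple
    queue: list

    def bid(self, task):
        travel = abs(self.position[0] - task[0]) + abs(self.position[1] - task[1])
        return travel + 2 * len(self.queue)   # busier robots bid higher

def auction(robots, task):
    bids = {r.name: r.bid(task) for r in robots}
    winner = min(robots, key=lambda r: bids[r.name])
    winner.queue.append(task)                 # winner commits to the task
    return winner.name, bids

robots = [AMR("amr1", (0, 0), []), AMR("amr2", (5, 5), []), AMR("amr3", (2, 1), [(9, 9)])]
print(auction(robots, (3, 2)))                # each new pickup task is auctioned
```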

    Machine Learning Meets Advanced Robotic Manipulation

    Automated industries lead to higher-quality production, lower manufacturing cost and better utilization of human resources. Robotic manipulator arms play a major role in the automation process. However, for complex manipulation tasks, hard-coding efficient and safe trajectories is challenging and time-consuming. Machine learning methods have the potential to learn such controllers based on expert demonstrations. Despite promising advances, better approaches must be developed to improve the safety, reliability, and efficiency of ML methods in both the training and deployment phases. This survey reviews cutting-edge technologies and recent trends in ML methods applied to real-world manipulation tasks. After reviewing the relevant ML background, the rest of the paper is devoted to ML applications in different domains such as industry, healthcare, agriculture, space, military, and search and rescue. The paper closes with important research directions for future work.

    Human-Robot Collaboration in Automotive Assembly

    In the past decades, automation in the automobile production line has significantly increased the efficiency and quality of automotive manufacturing. In the automotive assembly stage, however, most tasks are still accomplished manually by human workers because of the complexity and flexibility of the tasks and the highly dynamic, unstructured workspace. This dissertation aims to improve the level of automation in automotive assembly through human-robot collaboration (HRC). The challenges that have eluded automation in automotive assembly include: the lack of collaborative robotic systems suitable for HRC, especially compact-size, high-payload mobile manipulators; teaching and learning frameworks that enable robots to learn assembly tasks, and to assist humans in accomplishing them, from human demonstration; and a task-driven, high-level robot motion-planning framework that makes the trained robot assist humans intelligently and adaptively in automotive assembly tasks. The technical research toward this goal has resulted in several peer-reviewed publications. Achievements include: 1) a novel collaborative lift-assist robot for automotive assembly; 2) approaches for vision-based robot learning of placing tasks from human demonstrations in assembly; 3) robot learning of assembly tasks and assistance from human demonstrations using a Convolutional Neural Network (CNN); 4) robot learning of assembly tasks and assistance from human demonstrations using Task Constraint-Guided Inverse Reinforcement Learning (TC-IRL); 5) robot learning of assembly tasks from non-expert demonstrations via a Functional Object-Oriented Network (FOON); 6) multi-model sampling-based motion planning for trajectory optimization with execution consistency in manufacturing contexts. The research demonstrates the feasibility of a parallel mobile manipulator, which introduces novel concepts for industrial mobile manipulators in smart manufacturing. By exploring Robot Learning from Demonstration (RLfD) with both AI-based and model-based approaches, the research also improves robots' learning capabilities for collaborative assembly tasks with both expert and non-expert users. The research on robot motion planning and control in the dissertation promotes safety and human trust in industrial robots in HRC.