164 research outputs found

    XTENTH-CAR: A Proportionally Scaled Experimental Vehicle Platform for Connected Autonomy and All-Terrain Research

    Connected Autonomous Vehicles (CAVs) are key components of the Intelligent Transportation System (ITS), and all-terrain Autonomous Ground Vehicles (AGVs) are indispensable tools for applications such as disaster response, automated mining, agriculture, military operations, search and rescue missions, and planetary exploration. Experimental validation is a requisite for CAV and AGV research, but with full-size vehicles it requires a large, safe experimental environment, which is time-consuming and expensive. To address these challenges, we developed XTENTH-CAR (eXperimental one-TENTH scaled vehicle platform for Connected autonomy and All-terrain Research), an open-source, cost-effective, proportionally one-tenth-scaled experimental vehicle platform governed by the same physics as a full-size on-road vehicle. XTENTH-CAR is equipped with the best-in-class NVIDIA Jetson AGX Orin System on Module (SOM), a stereo camera, a 2D LiDAR, and an open-source Electronic Speed Controller (ESC), with drivers written for both versions of the Robot Operating System (ROS 1 and ROS 2) to facilitate experimental CAV and AGV perception, motion planning, and control research incorporating state-of-the-art, computationally expensive algorithms such as Deep Reinforcement Learning (DRL). XTENTH-CAR is designed for compact experimental environments and aims to increase the accessibility of experimental CAV and AGV research through low upfront costs and complete Autonomous Vehicle (AV) hardware and software architectures similar to the full-size X-CAR experimental vehicle platform, enabling efficient cross-platform development between small-scale and full-scale vehicles. Comment: © 2023 ASME. This work has been accepted to ASME for publication.

    Implementing and Tuning an Autonomous Racing Car Testbed

    Achieving safe autonomous driving is no longer a distant vision, with companies such as Uber, Google, and most famously Tesla having successfully deployed self-driving cars around the world. Researchers and engineers have been putting tremendous effort, and will continue to do so in the coming years, into developing the safe and precise control algorithms and technologies that will be included in future self-driving cars. Besides these well-known autonomous car deployments, some focus has also been put on autonomous racing competitions, for example Roborace. The fact is that, despite the significant progress that has been made, testing on full-size cars in real environments requires immense financial support, making it impossible for many research groups to enter the game. Consequently, interesting alternatives have appeared, such as F1 Tenth, which challenges students, researchers, and engineers to take part in a low-cost autonomous racing competition while developing control algorithms that rely on sensors and strategies used in real-life applications. This thesis focuses on comparing different control algorithms, and their effectiveness, in the racing setting of the F1 Tenth competition. In this thesis, effort was put into developing a robotic autonomous car based on the Robot Operating System (ROS) that not only meets the specifications of the F1 Tenth rules, but also establishes a testbed for future autonomous driving research.

    Imitation Accelerated Q-learning on a Simulated Formula Student Driverless Racecar

    Master's thesis in Information and Communication Technology (IKT590). In the international Formula Student competition, only a handful of teams compete in the driverless category, most of them using expensive hardware such as LiDARs. By leveraging reinforcement learning, a cheaper camera-based system can be created. To train this system, a simulator based on Formula Technion's fork of Microsoft's AirSim was used. A virtual replica of a Formula Student car designed for 2020 by Align Racing UiA functioned as the test vehicle. To decrease the required training time, a pre-trained imitation learning network was used; this was integrated into a Deep Q-Learning network in four different ways. The most successful method was able to accelerate the learning process by 36%.
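One common way to integrate a pre-trained imitation network into Deep Q-Learning, and a plausible reading of the integration described above, is to warm-start the Q-network from the imitation network's weights. The sketch below is illustrative only (the weight layout and function names are hypothetical, not taken from the thesis): shared feature-extractor weights are copied over, and only the final head is re-initialised so DQN can learn action values on top of imitated features.

```python
import numpy as np

def warm_start_q_network(imitation_weights, n_actions, seed=0):
    """Copy feature-extractor weights from a pre-trained imitation network
    and randomly initialise only the final Q-value head."""
    rng = np.random.default_rng(seed)
    q_weights = {name: w.copy() for name, w in imitation_weights.items()}
    feat_dim = imitation_weights["head"].shape[0]
    # Replace the imitation head (action probabilities) with a fresh,
    # small-variance Q-value head; DQN training then refines all layers.
    q_weights["head"] = rng.normal(0.0, 0.01, size=(feat_dim, n_actions))
    return q_weights

# Toy imitation network: one hidden layer plus an output head.
imitation = {"layer1": np.ones((8, 16)), "head": np.ones((16, 3))}
q_net = warm_start_q_network(imitation, n_actions=3)
```

Because the feature layers start from imitated driving behaviour rather than random weights, early exploration is already biased toward sensible actions, which is one way such a warm start can shorten training.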

    Survey of Agile navigation algorithms for robot ground vehicles

    In this work, several state-of-the-art methods for agile navigation of robot ground vehicles are compared. First, a survey of the literature is performed to identify the state-of-the-art methods most appropriate for comparison with an agile navigation method ("CarPlanner") developed in the Autonomous Robotics and Perception Group (ARPG). Several methods are reviewed and implemented in a dynamic vehicle simulation environment and evaluated on their efficacy in navigating a robot ground vehicle around a race track featuring jumps, bumps, and berms. The simulation environment features a four-wheeled, Ackermann-style ground vehicle with suspension, driving on austere terrain with nonlinear friction dynamics. The criteria for evaluating the methods include the ability to exploit the vehicle dynamics to traverse the track quickly and safely. Finally, the most appropriate and best-performing method is implemented on ARPG's 1/8th-scale NinjaCar vehicle platform and compared in physical experiments to ARPG's CarPlanner algorithm.
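For readers unfamiliar with the "Ackermann-style" vehicle mentioned above, the geometry constrains the two front wheels to steer at different angles so that all wheels share a single turn centre. A minimal sketch of that relationship (parameter values are illustrative, not from the survey):

```python
import math

def ackermann_angles(wheelbase, track, turn_radius):
    """Front-wheel steering angles (radians) for a commanded turn radius.

    The inner wheel traces a tighter circle than the outer wheel, so it
    must steer more sharply; both angles follow from simple trigonometry
    about the shared turn centre on the rear-axle line.
    """
    inner = math.atan2(wheelbase, turn_radius - track / 2.0)
    outer = math.atan2(wheelbase, turn_radius + track / 2.0)
    return inner, outer

# Example dimensions loosely matching a 1/8th-scale car (assumed values).
inner, outer = ackermann_angles(wheelbase=0.33, track=0.20, turn_radius=1.0)
```

Planners such as the ones surveyed typically command a single steering angle for a bicycle-model abstraction; the geometry above is how that command maps onto the physical front wheels.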

    Remote Monitoring and Teleoperation of Autonomous Vehicles - Is Virtual Reality an Option?

    While the promise of autonomous vehicles has led to significant scientific and industrial progress, fully automated cars conforming to SAE Level 5 will likely not see mass adoption anytime soon. Instead, in many applications, human supervision, such as remote monitoring and teleoperation, will be required for the foreseeable future. While Virtual Reality (VR) has been proposed as one potential interface for teleoperation, its benefits and drawbacks over physical monitoring and teleoperation solutions have not been thoroughly investigated. To this end, we contribute three user studies comparing and quantifying the performance of, and subjective feedback for, a VR-based system and an existing monitoring and teleoperation system that is in industrial use today. Through these three user studies, we contribute to a better understanding of future virtual monitoring and teleoperation solutions for autonomous vehicles. The results of our first user study (n=16) indicate that a VR interface replicating the physical interface does not outperform the physical interface; they also quantify the negative effects that combined monitoring and teleoperation tasks have on users irrespective of the interface used. The results of the second user study (n=24) indicate that the perceptual and ergonomic issues caused by VR outweigh its benefits, such as better concentration through isolation. The third, follow-up user study (n=24) specifically targeted the perceptual and ergonomic issues of VR; its subjective feedback indicates that newer-generation VR headsets have the potential to catch up with current physical displays.

    Autonomous Obstacle Collision Avoidance System for UAVs in rescue operations

    Unmanned Aerial Vehicles (UAVs) and their applications are growing for both civilian and military purposes. The operability of a UAV has shown that some tasks and operations can be done easily and with a good cost-efficiency ratio. Nowadays, a UAV can perform autonomous tasks, also called missions, using waypoint navigation with a GPS sensor. This is very useful in certain UAV applications, such as meteorology, surveillance systems, agriculture, environment mapping, and search and rescue operations. One of the biggest problems a UAV faces is the possibility of collision with other objects in the flight area, which can cause damage to surrounding structures, humans, or the UAV itself. To prevent this, the "Sense and Avoid" algorithm was developed and implemented as a system for UAVs to avoid objects on a collision course. The algorithm uses a laser distance sensor, a LiDAR (Light Detection and Ranging), to detect objects in front of the UAV in mid-flight. The sensor is connected to on-board hardware, a Pixhawk flight controller, which communicates with a companion Raspberry Pi. Communication with the Ground Control Station or the RC controller is carried out via Wi-Fi or radio telemetry. The "Sense and Avoid" algorithm has two modes, "Brake" and "Avoid and Continue", which operate under different control methods. "Brake" mode prevents collisions with objects when the UAV is flown by a human operator using an RC controller. "Avoid and Continue" mode works in the UAV's autonomous modes, avoiding collisions with objects in sight and proceeding with the ongoing mission. In this dissertation, tests were performed to evaluate the overall performance of the "Sense and Avoid" algorithm in two different environments: a simulated 3D environment and a real outdoor environment. Both modes worked successfully in the simulated 3D environment, and "Brake" mode also worked in the real outdoor environment, proving the concept.
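The core of the "Brake" mode described above can be sketched as a simple guard between the pilot's command and the flight controller. The threshold and function names below are hypothetical illustrations, not values from the dissertation:

```python
BRAKE_DISTANCE_M = 2.0  # assumed safety threshold, not from the source

def brake_mode(lidar_range_m, pilot_velocity):
    """Velocity command forwarded to the flight controller.

    If the forward-facing LiDAR reports an object closer than the safety
    distance, the UAV brakes (zero velocity); otherwise the RC operator's
    command passes through unchanged.
    """
    if lidar_range_m < BRAKE_DISTANCE_M:
        return 0.0          # object on a collision course: hold position
    return pilot_velocity   # path clear: obey the RC operator

stop_cmd = brake_mode(1.5, 3.0)   # obstacle at 1.5 m: brake
pass_cmd = brake_mode(5.0, 3.0)   # obstacle at 5.0 m: pass command through
```

"Avoid and Continue" would extend this guard with a lateral avoidance manoeuvre before resuming the waypoint mission, rather than simply stopping.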