
    Autonomous RC Car Platform

    This project explores building an autonomous research robot on a 1/10-scale RC car platform. The goal was to build an easy-to-use system that allows for the exploration of techniques such as localization, object detection, and mapping. The completed robot consists of a self-contained, battery-powered RC car that uses a camera, lidar, an inertial measurement unit, and other sensors to observe its environment. Completed research explored pose estimation by combining dead reckoning, inertial measurement unit readings, and visual odometry in an Extended Kalman Filter. The results of this project include the RC car and a build guide for replicating the process for future students.
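    As an illustration of the fusion approach the abstract describes, the following is a minimal Extended Kalman Filter sketch for a planar pose [x, y, theta], with a dead-reckoning prediction step and correction steps for IMU yaw and visual-odometry pose. The state layout, noise values, and names (PoseEKF, update_imu_yaw, update_visual_odom) are illustrative assumptions, not the project's own implementation.

    # Minimal EKF pose-estimation sketch (assumed state [x, y, theta]);
    # noise values are placeholders, not tuned values from the project.
    import numpy as np

    def wrap(a):
        """Wrap an angle to [-pi, pi)."""
        return (a + np.pi) % (2 * np.pi) - np.pi

    class PoseEKF:
        def __init__(self):
            self.x = np.zeros(3)                  # pose [x, y, theta]
            self.P = np.eye(3) * 0.1              # state covariance
            self.Q = np.diag([0.02, 0.02, 0.01])  # process noise (assumed)

        def predict(self, v, omega, dt):
            """Dead-reckoning prediction from speed v and yaw rate omega."""
            th = self.x[2]
            self.x += np.array([v * np.cos(th) * dt,
                                v * np.sin(th) * dt,
                                omega * dt])
            self.x[2] = wrap(self.x[2])
            F = np.array([[1, 0, -v * np.sin(th) * dt],
                          [0, 1,  v * np.cos(th) * dt],
                          [0, 0,  1]])
            self.P = F @ self.P @ F.T + self.Q

        def update_imu_yaw(self, yaw_meas, sigma=0.05):
            """Correct the heading with an IMU yaw reading."""
            H = np.array([[0.0, 0.0, 1.0]])
            y = wrap(yaw_meas - self.x[2])
            S = H @ self.P @ H.T + sigma ** 2
            K = self.P @ H.T / S
            self.x += (K * y).ravel()
            self.x[2] = wrap(self.x[2])
            self.P = (np.eye(3) - K @ H) @ self.P

        def update_visual_odom(self, pose_meas, R=None):
            """Correct the full pose with a visual-odometry estimate."""
            R = np.diag([0.1, 0.1, 0.05]) if R is None else R
            y = pose_meas - self.x
            y[2] = wrap(y[2])
            S = self.P + R                        # measurement matrix is identity here
            K = self.P @ np.linalg.inv(S)
            self.x = self.x + K @ y
            self.x[2] = wrap(self.x[2])
            self.P = (np.eye(3) - K) @ self.P

    # Example: one prediction step followed by the two corrections.
    ekf = PoseEKF()
    ekf.predict(v=1.0, omega=0.1, dt=0.05)
    ekf.update_imu_yaw(0.004)
    ekf.update_visual_odom(np.array([0.05, 0.0, 0.005]))
    print(ekf.x)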

    Indoor localization using visual information and passive landmarks

    This thesis proposes a localization algorithm for Automated Guided Vehicles (AGVs) based on a vision system and simple passive markers. The pose is estimated using trilateration and triangulation techniques, and the results are then combined with heterogeneous odometry data using an Extended Kalman Filter. The tests have shown that even with a not fully optimized algorithm, a precision of 0.2 m can be reached, confirming the validity of this technology.
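    As a sketch of the pose-estimation step described above, the snippet below performs planar trilateration from known passive-landmark positions by linearising the range equations and solving a small least-squares problem. The marker coordinates and range values are illustrative placeholders, not data from the thesis, and the triangulation and EKF fusion stages are omitted.

    # Hedged trilateration sketch: position fix from >= 3 landmarks (x_i, y_i)
    # with measured ranges r_i; the values below are made-up examples.
    import numpy as np

    def trilaterate(landmarks, ranges):
        L = np.asarray(landmarks, dtype=float)
        r = np.asarray(ranges, dtype=float)
        x0, y0, r0 = L[0, 0], L[0, 1], r[0]
        # Subtracting the first range equation from the others removes the
        # quadratic terms, leaving a linear system A @ [x, y] = b.
        A = 2.0 * (L[1:] - L[0])
        b = (r0**2 - r[1:]**2
             + L[1:, 0]**2 - x0**2
             + L[1:, 1]**2 - y0**2)
        pos, *_ = np.linalg.lstsq(A, b, rcond=None)
        return pos  # estimated [x, y]

    # Three wall-mounted markers and ranges taken from a vehicle at (2, 2).
    markers = [(0.0, 0.0), (5.0, 0.0), (0.0, 4.0)]
    print(trilaterate(markers, [2.83, 3.61, 2.83]))  # ~[2.0, 2.0]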

    Study and adaptation of the CARLA autonomous driving simulator for the ATLASCAR2

    Within the scope of the ATLASCAR2 project, this dissertation studies and integrates the existing autonomous driving simulator CARLA, using its ROS-based interface to replicate the ATLASCAR2 setup in simulation. Using an autonomous driving simulator was proposed as a way to simplify data acquisition for the ATLASCAR2, a task that keeps getting more difficult due to factors such as the complexity of the setup and the calibration of the sensors installed on the vehicle, as well as the hardware interface and the time required to perform a single data acquisition with the ATLASCAR2. The simulator can produce realistic scenarios and can be used to test the algorithms that will be implemented on the ATLASCAR2 in controlled environments, offering a degree of ground truth that can be used to evaluate their performance before deploying them on the real platform. The replication of the ATLASCAR2 setup and the algorithms involved in CARLA are discussed in detail in the dissertation, which includes sections describing the replication process and the algorithms, the results of the ATLASCAR2 setup implementation in CARLA, and further results produced from experiments with CARLA-simulated data, including computer vision algorithms and other algorithms currently used on the ATLASCAR2. Master's in Computer and Telematics Engineering.
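    As a hedged illustration of how simulated sensor data can reach the same ROS pipeline used on the real car, the snippet below subscribes to a camera topic published by a CARLA-to-ROS bridge and converts each frame with cv_bridge. The node name, topic name, and vehicle role name are assumptions that would have to match the actual ATLASCAR2 replica configuration.

    # Minimal ROS listener sketch for CARLA-simulated camera frames;
    # the topic "/carla/ego_vehicle/rgb_front/image" is an assumed name.
    import rospy
    from sensor_msgs.msg import Image
    from cv_bridge import CvBridge

    bridge = CvBridge()

    def on_image(msg):
        # Convert the simulated frame to an OpenCV image so the vision
        # algorithms meant for the real vehicle can run on it unchanged.
        frame = bridge.imgmsg_to_cv2(msg, desired_encoding="bgr8")
        rospy.loginfo("frame %dx%d at t=%.3f",
                      frame.shape[1], frame.shape[0],
                      msg.header.stamp.to_sec())

    rospy.init_node("carla_camera_listener")
    rospy.Subscriber("/carla/ego_vehicle/rgb_front/image", Image, on_image)
    rospy.spin()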

    Portable and Scalable In-vehicle Laboratory Instrumentation for the Design of i-ADAS

    According to the WHO (World Health Organization), worldwide deaths from injuries are projected to rise from 5.1 million in 1990 to 8.4 million in 2020, with traffic-related incidents as the major cause of this increase. Intelligent Advanced Driving Assistance Systems (i-ADAS) provide a number of solutions to these safety challenges. We developed a scalable in-vehicle mobile i-ADAS research platform for traffic context analysis and behavioral prediction, designed for understanding fundamental issues in intelligent vehicles. We outline our approach and describe the in-vehicle instrumentation.

    Long Length Contaminated Equipment Retrieval System Receiver Trailer and Transport Trailer Operations and Maintenance Manual


    Advances in Human Robot Interaction for Cloud Robotics applications

    This thesis analyzes different and innovative techniques for Human Robot Interaction, with a focus on interaction with flying robots. The first part is a preliminary description of state-of-the-art interaction techniques. The first project is Fly4SmartCity, which analyzes the interaction between humans (the citizen and the operator) and drones mediated by a cloud robotics platform. It is followed by an application of the sliding autonomy paradigm and an analysis of the different degrees of autonomy supported by a cloud robotics platform. The last part is dedicated to the most innovative technique for human-drone interaction, the User's Flying Organizer project (UFO project). This project aims to develop a flying robot able to project information into the environment, exploiting concepts of Spatial Augmented Reality.

    Mechatronic Systems

    Mechatronics, the synergistic blend of mechanics, electronics, and computer science, has evolved over the past twenty-five years, leading to a novel stage of engineering design. By integrating the best design practices with the most advanced technologies, mechatronics aims at realizing high-quality products while guaranteeing a substantial reduction in the time and cost of manufacturing. Mechatronic systems are manifold and range from machine components, motion generators, and power-producing machines to more complex devices such as robotic systems and transportation vehicles. With its twenty chapters, which collect contributions from many researchers worldwide, this book provides an excellent survey of recent work in the field of mechatronics, with applications in various fields such as robotics, medical and assistive technology, human-machine interaction, unmanned vehicles, manufacturing, and education. We would like to thank all the authors who have invested a great deal of time to write such interesting chapters, which we are sure will be valuable to the readers. Chapters 1 to 6 deal with applications of mechatronics for the development of robotic systems. Medical and assistive technologies and human-machine interaction systems are the topic of chapters 7 to 13. Chapters 14 and 15 concern mechatronic systems for autonomous vehicles. Chapters 16 to 19 deal with mechatronics in manufacturing contexts. Chapter 20 concludes the book, describing a method for introducing mechatronics education in schools.

    Perception and intelligent localization for autonomous driving

    Computer vision and sensor fusion are relatively recent subjects, yet widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis approaches both in order to achieve perception in the context of autonomous driving. Using cameras for this purpose is a rather complex task: unlike classic sensing devices, which always provide the same type of precise information obtained in a deterministic way, the successive images acquired by a camera are full of the most varied information, all of it ambiguous and extremely difficult to extract. Using cameras as a sensing device in robotics is the closest we get to the system of greatest importance in human perception, the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology, and physics. The platform supporting the study developed in this thesis is ROTA (RObô Triciclo Autónomo) and all the elements that make up its environment. In this context, the thesis describes the approaches introduced to solve the challenges the robot faces in its environment: detection of road lines and the resulting road perception, and detection of obstacles, traffic lights, the crosswalk zone, and the roadworks zone. It also describes a calibration system and the removal of the image perspective, developed to map the perceived elements into real-world distances. Building on the perception system, the thesis further addresses self-localization integrated into a distributed architecture that includes navigation with intelligent planning. All the work developed in the course of this thesis is essentially focused on robotic perception in the context of autonomous driving. Master's in Computer and Telematics Engineering.
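    The perspective-removal step mentioned above can be sketched with a ground-plane homography: four calibrated correspondences between image pixels and road-plane coordinates give a warp into a metric bird's-eye view, so detected elements map directly to real-world distances. The correspondences, resolution, and scale below are placeholder values for an assumed calibration, not ROTA's actual parameters.

    # Hedged inverse-perspective-mapping sketch with OpenCV; all numbers
    # are illustrative, not the calibration used on the real platform.
    import cv2
    import numpy as np

    # Image points of ground markers (pixels) and their known road-plane
    # positions (metres), measured once during calibration.
    img_pts   = np.float32([[312, 480], [648, 478], [430, 300], [540, 298]])
    world_pts = np.float32([[-0.5, 1.0], [0.5, 1.0], [-0.5, 4.0], [0.5, 4.0]])

    # Map metric coordinates onto an 800x600 bird's-eye grid at 100 px/m,
    # with the origin at the bottom centre of the grid.
    px_per_m = 100.0
    map_pts = np.float32([[x * px_per_m + 400, 600 - y * px_per_m]
                          for x, y in world_pts])

    H, _ = cv2.findHomography(img_pts, map_pts)

    frame = cv2.imread("road_frame.png")  # placeholder input image
    if frame is None:                     # keep the sketch runnable without it
        frame = np.zeros((600, 800, 3), np.uint8)
    birdseye = cv2.warpPerspective(frame, H, (800, 600))

    # A detection in the bird's-eye view converts back to metres directly.
    u, v = 450, 250                       # e.g. a detected lane point
    print("lane point at %.2f m lateral, %.2f m ahead"
          % ((u - 400) / px_per_m, (600 - v) / px_per_m))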

    Feasibility of developing a portable driver performance data acquisition system for human factors research: Design specifications. Volume 3
