    Perception Intelligence Integrated Vehicle-to-Vehicle Optical Camera Communication

    The ubiquitous use of cameras and LEDs in modern road and aerial vehicles opens up endless opportunities for novel applications in intelligent machine navigation, communication, and networking. To this end, in this thesis work, we hypothesize the benefit of dual-mode usage of vehicular built-in cameras through novel machine perception capabilities combined with optical camera communication (OCC). Current approaches to understanding a line-of-sight (LOS) scene focus on detecting objects, events, and road situations. However, the idea of blending non-line-of-sight (NLOS) information with LOS information to achieve a virtual see-through vision is new. This improves assistive driving performance by enabling a machine to see beyond occlusion. Another aspect of OCC in the vehicular setup is understanding the nature of mobility and its impact on the quality of the optical communication channel. The research questions arising from both car-to-car mobility modelling and the evaluation of a working OCC communication channel also carry over to aerial vehicular situations such as drone-to-drone OCC. The aim of this thesis is to answer the research questions along these new application domains, particularly: (i) how to enable a virtual see-through perception in a car assistance system that alerts the human driver about visible and invisible critical driving events to help drive more safely; (ii) how transmitter and receiver cars behave while in motion, and how mobility affects the overall OCC channel performance; (iii) how to help rescue lost Unmanned Aerial Vehicles (UAVs) through coordinated localization based on a fusion of OCC and WiFi; and (iv) how to model and simulate an in-field drone swarm operation in order to design and validate coordinated localization for a group of distressed drones. In this regard, this thesis presents the end-to-end system design, proposes novel algorithms to solve the challenges in applying such a system, and reports evaluation results from experimentation and/or simulation.
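    To make the see-through idea concrete, the sketch below merges the ego vehicle's own line-of-sight detections with hazard reports relayed by a leading vehicle over OCC, so that an occluded hazard still appears in the alert list. It is a minimal illustration under assumed message fields (label, relay distance, hazard distance); the thesis's actual algorithms and data formats are not specified here.

```python
from dataclasses import dataclass

@dataclass
class Detection:
    """A detected road hazard; names and fields are illustrative."""
    label: str
    distance_m: float  # estimated distance from the ego vehicle
    source: str        # "LOS" (own camera) or "NLOS" (received via OCC)

def merge_perception(los_detections, occ_messages, horizon_m=150.0):
    """Blend the ego vehicle's own (LOS) detections with hazards relayed
    by a leading vehicle over optical camera communication (NLOS),
    producing one alert list sorted by urgency (nearest first).
    A minimal sketch of the see-through idea, not the thesis algorithm."""
    merged = [d for d in los_detections if d.distance_m <= horizon_m]
    for msg in occ_messages:
        # NLOS distances are relative to the relaying car, so offset them
        # by that car's own distance before comparing against the horizon.
        total = msg["relay_distance_m"] + msg["hazard_distance_m"]
        if total <= horizon_m:
            merged.append(Detection(msg["label"], total, "NLOS"))
    return sorted(merged, key=lambda d: d.distance_m)

# Example: the ego camera sees the relay car; the relay car reports a
# pedestrian hidden behind it.
alerts = merge_perception(
    [Detection("car", 20.0, "LOS")],
    [{"label": "pedestrian", "relay_distance_m": 20.0, "hazard_distance_m": 15.0}],
)
print(alerts)  # the pedestrian at 35 m surfaces despite being occluded
```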

    A Testing and Experimenting Environment for Microscopic Traffic Simulation Utilizing Virtual Reality and Augmented Reality

    Microscopic traffic simulation (MTS) is the emulation of real-world traffic movements in a virtual environment with various traffic entities. Typically, the movements of the vehicles in MTS follow predefined algorithms, e.g., car-following models, lane-changing models, etc. Moreover, existing MTS models only provide a limited capability of two- and/or three-dimensional displays that often restrict the user's viewpoint to a flat screen. Their downscaled scenes neither provide a realistic representation of the environment nor allow different users to simultaneously experience or interact with the simulation model from different perspectives. These limitations allow neither the traffic engineers to effectively disseminate their ideas to stakeholders of different backgrounds nor the analysts to obtain realistic data about vehicle or pedestrian movements. This dissertation intends to alleviate those issues by creating a framework and a prototype for a testing environment where MTS can take inputs from user-controlled vehicles and pedestrians to improve its traffic entity movement algorithms (an example car-following model is sketched below), as well as offer an immersive M3 (multi-mode, multi-perspective, multi-user) visualization of the simulation using Virtual Reality (VR) and Augmented Reality (AR) technologies. VR environments are created using highly realistic 3D models and environments. With modern game engines and hardware available on the market, these VR applications can provide a highly realistic and immersive experience for a user. Experiments performed by real users in this study show that utilizing VR technology for traffic-related experiments generated much more favorable results than traditional displays. Moreover, using AR technologies for pedestrian studies is a novel approach that allows a user to walk in the real world and the simulation world at a one-to-one scale, which opens a whole new avenue of user experiment possibilities. On top of that, the in-environment communication chat system allows researchers to perform different Advanced Driver Assistance System (ADAS) studies without ever needing to leave the simulation environment. Last but not least, the distributed nature of the framework enables users to participate from different geographic locations with their choice of display device (desktop, smartphone, VR, or AR). The prototype developed for this dissertation is readily available on a test webpage, and a user can easily download the prototype application without needing to install anything. The user can also run the remote MTS server and then connect their client application to the server.
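    As a concrete example of the predefined car-following algorithms mentioned above, the sketch below implements the Intelligent Driver Model (IDM), a standard car-following model; the dissertation does not necessarily use IDM, and the parameter values are illustrative defaults.

```python
import math

def idm_acceleration(v, v_lead, gap, v0=30.0, T=1.5, a_max=1.0,
                     b=2.0, s0=2.0, delta=4):
    """Intelligent Driver Model (IDM) acceleration for one follower.

    v      : follower speed (m/s)
    v_lead : leader speed (m/s)
    gap    : bumper-to-bumper gap to the leader (m)
    The remaining arguments are typical IDM parameters: desired speed
    v0, time headway T, maximum acceleration a_max, comfortable
    braking b, and minimum jam distance s0.
    """
    dv = v - v_lead  # closing speed
    s_star = s0 + v * T + (v * dv) / (2 * math.sqrt(a_max * b))
    return a_max * (1 - (v / v0) ** delta - (s_star / gap) ** 2)

# One simulation step at 0.1 s resolution: the follower, closing fast
# on a slower leader, brakes.
dt, v, x = 0.1, 25.0, 0.0
a = idm_acceleration(v, v_lead=20.0, gap=30.0)
v, x = v + a * dt, x + v * dt
print(a, v, x)
```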

    How Much Space Is Required? Effect of Distance, Content, and Color on External Human–Machine Interface Size

    The communication of an automated vehicle (AV) with human road users can be realized by means of an external human–machine interface (eHMI), such as displays mounted on the AV's surface. For this purpose, the amount of time needed for a human interaction partner to perceive the AV's message and to act accordingly has to be taken into account. Any message displayed by an AV must satisfy minimum size requirements based on the dynamics of the road traffic and the time required by the human. This paper examines the size requirements of displayed text or symbols for ensuring the legibility of a message. Based on the limitations of available package space in current vehicle models and the ergonomic requirements of the interface design, an eHMI prototype was developed. A study involving 30 participants varied the content type (text and symbols) and content color (white, red, green) in a repeated-measures design. We investigated the influence of content type on the content size required to ensure legibility from a constant distance. We also analyzed the influence of content type and content color on the human detection range. The results show that, at a fixed distance, text has to be larger than symbols in order to remain legible, and symbols can be discerned from a greater distance than text. Color had no consistent effect on the human detection range across content types. In order to ensure the maximum possible detection range among human road users, an AV should display symbols rather than text. Additionally, symbols could be color-coded for better message comprehension without affecting the human detection range.
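    The size requirement studied here follows from visual-angle geometry: content of height h viewed from distance d subtends an angle theta, so h = 2 d tan(theta/2). The sketch below computes minimum content heights for a few distances; the 20-arcminute legibility threshold is an illustrative assumption, not a value reported in the paper.

```python
import math

def min_content_height(distance_m, visual_angle_arcmin=20.0):
    """Minimum displayed-content height (m) so that it subtends a given
    visual angle at the observer's eye: h = 2 * d * tan(theta / 2).

    The 20-arcminute default is an illustrative legibility threshold,
    not a value from the paper.
    """
    theta = math.radians(visual_angle_arcmin / 60.0)
    return 2.0 * distance_m * math.tan(theta / 2.0)

for d in (10, 25, 50):
    print(f"{d:>3} m -> {min_content_height(d) * 100:.1f} cm")
```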

    Design and Evaluation of Data Dissemination Algorithms to Improve Object Detection in Autonomous Driving Networks

    In the last few years, the amount of information produced by an autonomous vehicle has been increasing in proportion to the number and resolution of the sensors cars are equipped with. Cars can be provided with cameras and Light Detection and Ranging (LiDAR) sensors, needed to obtain two-dimensional (2D) and three-dimensional (3D) representations of the environment, respectively. Due to the huge amount of data that multiple self-driving vehicles can push over a communication network, how these data are selected, stored, and sent is crucial. Various techniques have been developed to manage vehicular data; for example, compression can be used to alleviate the burden of data transmission over bandwidth-constrained channels and facilitate real-time communication. However, aggressive levels of compression may corrupt automotive data and prevent proper detection of critical road objects in the scene. Along these lines, in this thesis, we studied the trade-off between compression efficiency and detection accuracy. To do so, we considered synthetic automotive data generated from the SELMA dataset. Then, we compared the performance of several state-of-the-art machine-learning algorithms for compressing LiDAR point clouds and detecting objects in them. We were able to reduce the point cloud size by a factor of tens to hundreds without any significant loss in the final detection accuracy. In a second phase, we focused our attention on optimizing the number and type of sensors that are most meaningful for object detection. Notably, we tested our dataset on a sensor fusion algorithm that combines 2D and 3D data to obtain a better understanding of the environment. The results show that, although sensor fusion always achieves more accurate detections, using 3D inputs alone can obtain similar results for large objects while mitigating the burden on the channel.
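    A simple way to see the compression/accuracy trade-off is voxel-grid downsampling, which keeps one representative point per occupied voxel, with the voxel size playing the role of the compression level. The sketch below is a minimal stand-in for the learned compressors compared in the thesis, using only NumPy; the data are random, not SELMA samples.

```python
import numpy as np

def voxel_downsample(points, voxel=0.5):
    """Compress a LiDAR point cloud (N x 3 array, metres) by keeping one
    representative point (the centroid) per occupied voxel.

    A minimal stand-in for learned point-cloud compressors; the voxel
    size controls the rate/accuracy trade-off.
    """
    keys = np.floor(points / voxel).astype(np.int64)
    # Group points that fall in the same voxel and average them.
    _, inverse, counts = np.unique(keys, axis=0, return_inverse=True,
                                   return_counts=True)
    inverse = inverse.ravel()
    sums = np.zeros((counts.size, 3))
    np.add.at(sums, inverse, points)
    return sums / counts[:, None]

cloud = np.random.uniform(-50, 50, size=(120_000, 3))
small = voxel_downsample(cloud, voxel=5.0)
print(f"compression ratio ~ {len(cloud) / len(small):.1f}x")
```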

    Software architecture for self-driving navigation

    This dissertation covers the development of a navigation software architecture for self-driving vehicles. The goal spans a wide range of disciplines across the different solutions provided; nevertheless, functional implementations following the software architecture have been proved and tested on the real research platform iCab (Intelligent Campus Automobile). The problems that autonomous vehicles face correspond to the three questions of navigation that every vehicle has to answer: Where am I? Where should I go? How can I get there? These questions are addressed by corresponding modules, divided into localization, planning, mapping, perception, and control, in addition to multi-task allocation, communication, and Human-Machine Interaction. A further module is self-awareness, which detects problems at the earliest possible stage. Throughout this document, the solution provided in the form of a complete navigation architecture describes the modules involved and the importance of the software connections between them, the generation of trajectories, mapping, localization, and low-level control. Finally, the results section describes scenarios and vehicle/software performance in terms of CPU load for each module involved and the generation of the trajectories, maps, and control commands needed to move the vehicle from one point to another. This document is the result of five years of work in the field of driverless vehicles, and it describes the development of a control software architecture for the navigation of this type of vehicle. The objective is ambitious, since its development required knowledge of multiple disciplines such as electronic engineering, computer engineering, control engineering, signal processing, mechanics, and computer vision. Despite the vast knowledge required to achieve a functional vehicle, solutions have been reached for each of the problems that autonomous navigation comprises, yielding a self-governed vehicle that makes decisions by itself to avoid obstacles and reach the desired destination points. The main problems that autonomous vehicles have to face rest on three main questions: where am I, where do I have to go, and how do I get there. To answer these three questions, the architecture has been divided into the following modules: localization, planning, environment mapping, and control, together with extra modules that give the system additional capabilities and better performance, for example communication between vehicles, pedestrians, and infrastructure, human-machine interaction, task management with multiple vehicles, and the vehicle's own awareness of its battery state, maintenance, and connected or disconnected sensors. Throughout this document, the solution provided for each of the modules involved reflects the importance of software connections and inter-process communication within the architecture, whether for trajectory generation, timely map creation, precise localization in the environment, or the commands generated to drive the vehicle. Likewise, the results section highlights the importance of meeting message-sharing deadlines and optimizing the system so as not to overload the CPU.
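    The module decomposition described above can be pictured as a pipeline of loosely coupled components exchanging messages. The sketch below illustrates that structure with three of the named modules; the class and message names are hypothetical and do not reflect the iCab codebase.

```python
from abc import ABC, abstractmethod

class NavigationModule(ABC):
    """Common interface for the navigation modules named in the abstract
    (localization, planning, control); a structural sketch only."""
    @abstractmethod
    def step(self, state: dict) -> dict: ...

class Localization(NavigationModule):
    def step(self, state):
        state["pose"] = state.get("odometry", (0.0, 0.0, 0.0))   # where am I?
        return state

class Planner(NavigationModule):
    def step(self, state):
        state["path"] = [state["pose"][:2], state["goal"]]       # where should I go?
        return state

class Controller(NavigationModule):
    def step(self, state):
        state["command"] = {"steer": 0.0, "throttle": 0.3}       # how can I get there?
        return state

# The architecture then reduces to a pipeline of loosely coupled modules
# exchanging messages (here, a shared dict standing in for IPC topics).
pipeline = [Localization(), Planner(), Controller()]
state = {"goal": (10.0, 5.0), "odometry": (0.0, 0.0, 0.0)}
for module in pipeline:
    state = module.step(state)
print(state["command"])
```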

    Development of a light-based driver assistance system

    [no abstract]

    TalkyCars: A Distributed Software Platform for Cooperative Perception among Connected Autonomous Vehicles based on Cellular-V2X Communication

    Autonomous vehicles are required to operate among highly mixed traffic during their early market-introduction phase, relying solely on local sensors with limited range. Exhaustively comprehending and navigating complex urban environments is potentially not feasible with sufficient reliability using this approach alone. Addressing this challenge, intelligent vehicles can virtually extend their perception range beyond their line of sight by using Vehicle-to-Everything (V2X) communication with surrounding traffic participants to perform cooperative perception. Since existing solutions face a variety of limitations, including lack of comprehensiveness, universality, and scalability, this thesis aims to conceptualize, implement, and evaluate an end-to-end cooperative perception system using novel techniques. A comprehensive yet extensible modeling approach for dynamic traffic scenes is proposed first; it is based on probabilistic entity-relationship models, accounts for uncertain environments, and combines low-level attributes with high-level relational and semantic knowledge in a generic way. Second, the design of a holistic, distributed software architecture based on edge computing principles is proposed as a foundation for multi-vehicle high-level sensor fusion. In contrast to most existing approaches, the presented solution is designed to rely on Cellular-V2X communication in 5G networks and employs geographically distributed fusion nodes as part of a client-server configuration. A modular proof-of-concept implementation is evaluated in different simulated scenarios to assess the system's performance both qualitatively and quantitatively. Experimental results show that the proposed system scales adequately to meet certain minimum requirements and yields an average improvement in overall perception quality of approximately 27%.
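    The high-level fusion performed at the distributed fusion nodes can be illustrated with a toy example: observations of the same object reported by different vehicles are grouped spatially and merged by confidence-weighted averaging. This is a simplified sketch, not the TalkyCars algorithm; the field names and the grid-based association are assumptions.

```python
from collections import defaultdict
from math import prod

def fuse_observations(observations, cell=2.0):
    """Edge-node fusion of object observations reported by several
    vehicles over Cellular-V2X: observations falling into the same
    grid cell are treated as the same object and merged by
    confidence-weighted averaging.

    observations: list of dicts with 'x', 'y' (metres, common frame)
    and 'conf' (reporter's confidence in [0, 1]).
    """
    buckets = defaultdict(list)
    for obs in observations:
        buckets[(int(obs["x"] // cell), int(obs["y"] // cell))].append(obs)
    fused = []
    for group in buckets.values():
        w = sum(o["conf"] for o in group)
        fused.append({
            "x": sum(o["x"] * o["conf"] for o in group) / w,
            "y": sum(o["y"] * o["conf"] for o in group) / w,
            # Agreement between independent reporters raises confidence.
            "conf": 1 - prod(1 - o["conf"] for o in group),
        })
    return fused

print(fuse_observations([
    {"x": 12.1, "y": 4.9, "conf": 0.6},  # vehicle A's detection
    {"x": 12.4, "y": 5.2, "conf": 0.7},  # vehicle B, same object
]))
```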