
    Embarking on the Autonomous Journey: A Strikingly Engineered Car Control System Design

    This thesis develops an autonomous car control system with Raspberry Pi. Two predictive models are implemented: a convolutional neural network (CNN) using machine learning and an input-based decision tree model using sensor data. The Raspberry Pi module controls the car hardware and acquires real-time camera data with OpenCV. A dedicated web server and event stream processor process data in real time using the trained neural network model, facilitating real-time decision-making. Unity and the Meta Quest 2 VR headset create the VR interface, while a generic DIY kit from Amazon and the Raspberry Pi provide the car hardware inputs. This research demonstrates the potential of VR in automotive communication, enhancing autonomous car testing and user experience.
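    To make the processing pipeline concrete, the following minimal Python sketch shows how a loop of this kind might look: OpenCV grabs camera frames on the Raspberry Pi and a trained CNN maps each frame to a steering command. The model file name, input size, and command set are illustrative assumptions rather than details taken from the thesis, and the web server and event stream processor are omitted.

        import cv2
        import numpy as np
        from tensorflow.keras.models import load_model

        model = load_model("steering_cnn.h5")      # hypothetical CNN trained offline
        cap = cv2.VideoCapture(0)                  # Pi camera exposed through OpenCV

        while cap.isOpened():
            ok, frame = cap.read()
            if not ok:
                break
            # Downscale and normalize the frame to the CNN's assumed input shape.
            x = cv2.resize(frame, (64, 64)).astype(np.float32) / 255.0
            probs = model.predict(x[np.newaxis], verbose=0)[0]
            command = ["left", "straight", "right"][int(np.argmax(probs))]
            print(command)                         # stand-in for the motor-control call

        cap.release()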

    Stereoscopic vision in vehicle navigation.

    Traffic sign (TS) detection and tracking is one of the main tasks of an autonomous vehicle that is addressed in the field of computer vision. An autonomous vehicle must have vision-based recognition of the road to follow the rules like every other vehicle on the road. Besides, TS detection and tracking can be used to give feedback to the driver, which can significantly increase safety in making driving decisions. For successful TS detection and tracking, changes in weather and lighting conditions should be considered. Also, the camera is in motion, which results in image distortion and motion blur. In this work, a fast and robust method is proposed for tracking stop signs in videos taken with stereoscopic cameras mounted on the car. Using the camera parameters and the detected sign, the distance between the stop sign and the vehicle is calculated. This calculated distance can be widely used in building visual driver-assistance systems.
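    For a rectified stereo pair, the range to a detected sign follows the standard pinhole relation Z = f·B/d, where f is the focal length in pixels, B the baseline between the cameras, and d the disparity of the sign between the left and right images. A minimal sketch with illustrative calibration values, not the paper's code:

        # Standard stereo range equation; f, baseline, and the sign's disparity
        # are assumed known from calibration and left/right detection.

        def stereo_distance(focal_px: float, baseline_m: float, disparity_px: float) -> float:
            """Depth Z = f * B / d for a rectified stereo pair."""
            if disparity_px <= 0:
                raise ValueError("disparity must be positive for a visible sign")
            return focal_px * baseline_m / disparity_px

        # Example: 700 px focal length, 12 cm baseline, 28 px disparity -> 3.0 m
        print(stereo_distance(700.0, 0.12, 28.0))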

    Uses and Challenges of Collecting LiDAR Data from a Growing Autonomous Vehicle Fleet: Implications for Infrastructure Planning and Inspection Practices

    Autonomous vehicles (AVs) that utilize LiDAR (Light Detection and Ranging) and other sensing technologies are becoming an inevitable part of the transportation industry. Concurrently, transportation agencies are increasingly challenged with the management and tracking of large-scale highway asset inventories. LiDAR has become popular among transportation agencies for highway asset management given its advantages over traditional surveying methods, and the technology is becoming more affordable every year. Given this, there will be substantial challenges and opportunities in utilizing the big data resulting from the growth of AVs equipped with LiDAR. A proper understanding of the data sizes generated by this technology will help agencies make decisions regarding storage, management, and transmission of the data. The original raw data generated by the sensor shrinks considerably after being filtered and processed following the Cache County Road Manual and stored in the ASPRS-recommended (.las) file format. This pilot study finds that when the road centerline is taken as the vehicle trajectory, a larger portion of the data falls into the right-of-way section than with the actual vehicle trajectory in Cache County, UT. There is also a positive relationship between data size and vehicle speed for the travel-lane sections, given the nature of the selected highway environment.
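    As a rough illustration of the storage question, the hedged Python sketch below estimates how much a raw capture shrinks when points are clipped to a right-of-way buffer. The file name, buffer half-width, and straight centerline are assumptions for illustration, not the study's Cache County processing steps; laspy is used as a stand-in .las reader.

        import laspy
        import numpy as np

        las = laspy.read("capture.las")                 # hypothetical raw capture
        xy = np.column_stack([las.x, las.y])

        # Treat the centerline as the local x-axis and keep points within
        # an assumed +/- 12 m right-of-way buffer around it.
        ROW_HALF_WIDTH_M = 12.0
        keep = np.abs(xy[:, 1]) <= ROW_HALF_WIDTH_M

        bytes_per_point = las.header.point_format.size  # bytes per stored point record
        total = las.header.point_count
        print(f"raw points:  {total:,}")
        print(f"kept in ROW: {keep.sum():,} ({100.0 * keep.mean():.1f}% of raw)")
        print(f"approx. savings: {(total - keep.sum()) * bytes_per_point / 1e6:.1f} MB")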

    Perception and intelligent localization for autonomous driving

    Master's in Computer and Telematics Engineering. Computer vision and sensor fusion are relatively recent subjects, yet widely adopted in the development of autonomous robots that require adaptability to their surrounding environment. This thesis approaches both in order to achieve perception in the scope of autonomous driving. The use of cameras to achieve this goal is a rather complex subject. Unlike classic sensing devices, which provide the same type of precise information in a deterministic way, the successive images acquired by a camera are replete with the most varied information, all of it ambiguous and extremely difficult to extract. The use of cameras for robotic sensing is the closest we have come to the component of greatest importance in human perception, the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology, and physics. The platform supporting the study developed within this thesis is ROTA (RObô Triciclo Autónomo) and all the elements comprising its environment. In this context, the thesis describes approaches introduced in the platform to solve all the challenges the robot faces in its environment: detection of lane markings and the consequent perception of the road, and detection of obstacles, traffic lights, the crosswalk zone, and the roadworks zone. A calibration system and an implementation of image perspective removal are also described, developed to map the perceived elements to real-world distances. Building on the perception system, the thesis also addresses self-localization, integrated in a distributed architecture that allows navigation with long-term planning. All the work developed in the course of this thesis is essentially centred on robotic perception in the context of autonomous driving.
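    A minimal sketch of the perspective-removal step with OpenCV, assuming four image points of a known ground rectangle were obtained during calibration; the pixel coordinates and metric scale are illustrative, not ROTA's actual calibration:

        import cv2
        import numpy as np

        # Image corners of an assumed 1 m x 2 m rectangle marked on the road...
        img_pts = np.float32([[210, 400], [430, 400], [520, 620], [120, 620]])
        # ...and their bird's-eye targets at an assumed 100 px per metre.
        PX_PER_M = 100
        world_pts = np.float32([[0, 0], [1, 0], [1, 2], [0, 2]]) * PX_PER_M

        H = cv2.getPerspectiveTransform(img_pts, world_pts)
        frame = cv2.imread("road.png")             # hypothetical camera frame
        birdseye = cv2.warpPerspective(frame, H, (PX_PER_M, 2 * PX_PER_M))
        # Pixel distances in `birdseye` now scale linearly with metres on the ground.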

    Exploring the challenges and opportunities of image processing and sensor fusion in autonomous vehicles: A comprehensive review

    Autonomous vehicles are at the forefront of future transportation solutions, but their success hinges on reliable perception. This review paper surveys image processing and sensor fusion techniques vital for ensuring vehicle safety and efficiency. The paper focuses on object detection, recognition, tracking, and scene comprehension via computer vision and machine learning methodologies. In addition, the paper explores challenges within the field, such as robustness in adverse weather conditions, the demand for real-time processing, and the integration of complex sensor data. Furthermore, we examine localization techniques specific to autonomous vehicles. The results show that while substantial progress has been made in each subfield, there are persistent limitations, including a shortage of comprehensive large-scale testing, the absence of diverse and robust datasets, and occasional inaccuracies in certain studies. These issues impede the seamless deployment of this technology in real-world scenarios. This comprehensive literature review contributes to a deeper understanding of the current state and future directions of image processing and sensor fusion in autonomous vehicles, aiding researchers and practitioners in advancing the development of reliable autonomous driving systems.
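    The core fusion idea such surveys cover can be illustrated by the single-step inverse-variance (Kalman) update that combines two noisy measurements of the same quantity. The sketch below uses made-up camera and radar range values and is not drawn from any particular surveyed paper.

        def fuse(z1: float, var1: float, z2: float, var2: float) -> tuple[float, float]:
            """Combine two independent estimates; the fused variance is always smaller."""
            w = var2 / (var1 + var2)           # weight toward the lower-variance sensor
            fused = w * z1 + (1.0 - w) * z2
            fused_var = (var1 * var2) / (var1 + var2)
            return fused, fused_var

        # Camera puts the obstacle at 14.2 m (variance 1.0), radar at 13.6 m (0.25).
        print(fuse(14.2, 1.0, 13.6, 0.25))     # -> (13.72, 0.2)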

    OCR-RTPS: An OCR-based real-time positioning system for the valet parking

    Obtaining the position of the ego-vehicle is a crucial prerequisite for automatic control and path planning in the field of autonomous driving. Most existing positioning systems rely on GPS, RTK, or wireless signals, which struggle to provide effective localization under weak-signal conditions. This paper proposes a real-time positioning system based on the detection of parking numbers, as they are unique positioning marks in the parking lot scene. It not only helps with positioning in open areas, but also runs independently in isolated environments. Results on both public datasets and a self-collected dataset show that the system outperforms others in performance and applies in practice. In addition, the code and dataset will be released later.
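    A deliberately simplified sketch of the idea: recognize the parking number in a cropped detection and look up its surveyed location in a prebuilt map. Here pytesseract stands in for the paper's recognizer, and the slot map coordinates are hypothetical.

        import pytesseract
        from PIL import Image

        # Surveyed (x, y) positions of parking numbers in the lot frame, in metres.
        SLOT_MAP = {"B217": (41.5, 8.0), "B218": (44.0, 8.0)}

        def locate(crop_path: str):
            # --psm 7 tells Tesseract to treat the crop as a single text line.
            text = pytesseract.image_to_string(Image.open(crop_path),
                                               config="--psm 7").strip()
            return SLOT_MAP.get(text)      # None if the number was misread or unknown

        print(locate("detected_sign.png"))  # e.g. (41.5, 8.0) if "B217" was read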

    Advances in Intelligent Vehicle Control

    This book is a printed edition of the Special Issue Advances in Intelligent Vehicle Control that was published in the journal Sensors. It presents a collection of eleven papers covering a range of topics, such as the development of intelligent control algorithms for active safety systems, smart sensors, and intelligent and efficient driving. The contributions presented in these papers can serve as useful tools for researchers interested in new vehicle technology and in the improvement of vehicle control systems.

    Computer Vision Algorithms for Mobile Camera Applications

    Wearable and mobile sensors have found widespread use in recent years due to their ever-decreasing cost, ease of deployment and use, and ability to provide continuous monitoring, as opposed to sensors installed at fixed locations. Since many smart phones are now equipped with a variety of sensors, including accelerometer, gyroscope, magnetometer, microphone, and camera, it has become more feasible to develop algorithms for activity monitoring, guidance and navigation of unmanned vehicles, autonomous driving, and driver assistance by using data from one or more of these sensors. In this thesis, we focus on multiple mobile camera applications and present lightweight algorithms suitable for embedded mobile platforms. The mobile camera scenarios presented in the thesis are: (i) activity detection and step counting from wearable cameras, (ii) door detection for indoor navigation of unmanned vehicles, and (iii) traffic sign detection from vehicle-mounted cameras. First, we present a fall detection and activity classification system developed for the embedded smart camera platform CITRIC. In our system, the camera platform is worn by the subject, as opposed to static sensors installed at fixed locations in certain rooms; therefore, monitoring is not limited to confined areas and extends to wherever the subject may travel, indoors and outdoors. Next, we present a real-time smart phone-based fall detection system, wherein we implement camera- and accelerometer-based fall detection on a Samsung Galaxy S™ 4. We fuse these two sensor modalities to obtain a more robust fall detection system. Then, we introduce a fall detection algorithm with autonomous thresholding using relative entropy within the class of Ali-Silvey distance measures. As another wearable camera application, we present a footstep counting algorithm using a smart phone camera. This algorithm provides a more accurate step count compared to using only accelerometer data in smart phones and smart watches at various body locations. As a second mobile camera scenario, we study autonomous indoor navigation of unmanned vehicles. A novel approach is proposed to autonomously detect and verify doorway openings by using the Google Project Tango™ platform. The third mobile camera scenario involves vehicle-mounted cameras. More specifically, we focus on traffic sign detection from lower-resolution and noisy videos captured by vehicle-mounted cameras. We present a new method for accurate traffic sign detection, incorporating Aggregate Channel Features and Chain Code Histograms, with the goal of providing much faster training and testing, and comparable or better performance, with respect to deep neural network approaches, without requiring specialized processors. The proposed computer vision algorithms provide promising results for various useful applications despite the limited energy and processing capabilities of mobile devices.
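    As a toy illustration of the accelerometer half of such a fall detector, the sketch below flags a fall when total acceleration spikes far above 1 g. The threshold is an assumed value; the thesis instead derives its threshold autonomously via a relative-entropy (Ali-Silvey) criterion and fuses camera evidence, neither of which is reproduced here.

        import numpy as np

        G = 9.81
        FALL_THRESHOLD = 2.5 * G           # assumed spike level, not the thesis value

        def detect_fall(accel_xyz: np.ndarray) -> bool:
            """accel_xyz: (N, 3) samples in m/s^2 from the phone's accelerometer."""
            magnitude = np.linalg.norm(accel_xyz, axis=1)
            return bool(np.any(magnitude > FALL_THRESHOLD))

        # Quiet standing stays near 1 g; a simulated impact spike triggers the alarm.
        quiet = np.tile([0.0, 0.0, G], (50, 1))
        impact = np.vstack([quiet, [[5.0, 30.0, 20.0]]])
        print(detect_fall(quiet), detect_fall(impact))   # False True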