
    Identification, Calculation and Warning of Horizontal Curves for Low-volume Two-lane Roadways Using Smartphone Sensors

    Smartphones and other portable personal devices that integrate global positioning systems (GPS), Bluetooth Low Energy (BLE), and advanced computing technologies have become more accessible due to affordable prices, product innovation, and people’s desire to be connected. As more people own these devices, there are greater opportunities for data acquisition in Intelligent Transportation Systems and for vehicle-to-infrastructure communication. Horizontal curves are a common factor in roadway crashes. Identifying the locations and geometric characteristics of horizontal curves plays a critical role in crash prediction and prevention, and timely curve warnings save lives. However, most states in the US struggle to maintain detailed, high-quality roadway inventory databases for low-volume rural roads because collecting and maintaining the data is labor-intensive and time-consuming. This thesis proposes two smartphone applications, C-Finder and C-Alert: the first collects two-lane road horizontal-curve data (including radius, superelevation, and length) for transportation agencies, providing a low-cost alternative to mobile asset data collection vehicles; the second warns drivers of sharp horizontal curves. C-Finder accurately detects horizontal curves by exploiting an unsupervised K-means machine learning technique. Butterworth low-pass filtering was applied to reduce sensor noise, and extended Kalman filtering was adopted to improve GPS accuracy. Chord-method-based radius computation and superelevation estimation were introduced to achieve accurate, robust results despite the low-frequency GPS and noisy sensor signals obtained from the smartphone. C-Alert applies BLE technology and a head-up display (HUD) to track driver speed and compare vehicle position with curve locations in real time. Messages are communicated wirelessly from the smartphone to a receiving unit through BLE and then displayed by the HUD on the vehicle’s front windshield. The field test demonstrated that C-Finder achieves high curve-identification accuracy and reasonable accuracy for curve radius and superelevation compared with previous road survey studies, and that C-Alert gives relatively accurate speeding warnings when approaching sharp curves.
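The chord method mentioned above can be sketched in a few lines: given three GPS fixes projected onto a local planar coordinate system (an assumption of this sketch; the thesis's exact projection and filtering pipeline is not reproduced here), the radius follows from the chord length and the middle ordinate.

```python
import math

def chord_radius(p1, p2, p3):
    """Estimate curve radius from three points along a curve using the
    chord method: R = c^2 / (8m) + m / 2, where c is the chord length
    between the end points and m is the middle ordinate (perpendicular
    offset of the middle point from the chord)."""
    (x1, y1), (x2, y2), (x3, y3) = p1, p2, p3
    c = math.hypot(x3 - x1, y3 - y1)  # chord length between end points
    # perpendicular distance of p2 from the line p1-p3 (middle ordinate)
    m = abs((x3 - x1) * (y1 - y2) - (x1 - x2) * (y3 - y1)) / c
    if m == 0:
        return float("inf")  # collinear points: straight segment
    return c * c / (8.0 * m) + m / 2.0
```

For points lying exactly on a circle the formula is exact; with noisy, low-frequency GPS fixes the thesis's filtering steps (Butterworth, extended Kalman) would precede this computation.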

    Perception and intelligent localization for autonomous driving

    Mestrado em Engenharia de Computadores e Telemática (Master's in Computer Engineering and Telematics). Computer vision and sensor fusion are relatively recent subjects, yet widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis approaches both in order to achieve perception in the context of autonomous driving. Using cameras for this purpose is a rather complex task. Unlike classic sensing devices, which deterministically provide the same type of precise information, the successive images acquired by a camera are replete with varied information that is ambiguous and extremely difficult to extract. Cameras are the robotic sensing modality closest to the system of greatest importance in human perception: vision. Computer vision is a scientific discipline encompassing areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology, and physics. The platform supporting the study developed in this thesis is ROTA (RObô Triciclo Autónomo), together with all the elements comprising its environment. In this context, the approaches introduced to solve the challenges the robot faces in its environment are described: detection of lane markings and consequent road perception, and detection of obstacles, traffic lights, the crosswalk zone, and the roadworks zone. A calibration system and an implementation of image perspective removal are also described, developed to map the perceived elements to real-world distances. Building on the perception system, self-localization is also addressed, integrated in a distributed architecture that includes navigation with intelligent planning.
    All the work developed in the course of this thesis is essentially centred on robotic perception in the context of autonomous driving.
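The perspective-removal calibration described above amounts to estimating a ground-plane homography from point correspondences. A minimal sketch using the standard direct linear transform (DLT); the calibration correspondences here are hypothetical, standing in for markers at known distances in front of the robot:

```python
import numpy as np

def homography_from_points(src, dst):
    """Estimate the 3x3 homography mapping image points (src) to ground
    points (dst) from four correspondences, via the DLT linear system
    solved with SVD (the null-space vector of the stacked constraints)."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([x, y, 1, 0, 0, 0, -u * x, -u * y, -u])
        A.append([0, 0, 0, x, y, 1, -v * x, -v * y, -v])
    _, _, vt = np.linalg.svd(np.array(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]  # normalize so the bottom-right entry is 1

def to_ground(H, x, y):
    """Map an image pixel to ground-plane coordinates (e.g. metres)."""
    u, v, w = H @ np.array([x, y, 1.0])
    return u / w, v / w
```

With such a mapping, detected lane markings, crosswalks, and obstacles can be reported as real-world distances rather than pixel positions.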

    EFFECT OF SENSOR ERRORS ON AUTONOMOUS STEERING CONTROL AND APPLICATION OF SENSOR FUSION FOR ROBUST NAVIGATION

    Autonomous steering control is one of the most important features in autonomous vehicle navigation. The nature and tuning of the controller decide how well the vehicle follows a defined trajectory: a poorly tuned controller can cause the vehicle to oversteer or understeer at turns, leading to deviation from the defined path. However, controller performance also depends on the state-feedback system. If the states used as controller input are noisy or carry bias or systematic error, the navigation performance of the vehicle is affected irrespective of the control law and controller tuning. In this report, the autonomous steering controller is analyzed under different kinds of sensor error, and the application of sensor fusion using Kalman filters is discussed. Model-in-the-loop (MIL) simulation provides an efficient way to develop and analyze controllers and to implement various fusion algorithms; Matlab/Simulink was used for this model-based development. First, the path-tracking performance of the controller was analyzed experimentally, followed by data collection for sensor, actuator, and vehicle modelling. The plant, actuator, and controllers were then modelled, and the results for ideal and non-ideal sensors were compared. After analyzing the effects of sensor error on controller and vehicle performance, a solution was proposed using a 1D Kalman filter (KF) based sensor fusion technique. Waypoint tracking under the 1D condition improves to centimeter level, and the steering response is also smoothed thanks to a less noisy vehicle-heading estimate.
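A 1D Kalman filter of the kind described above can be sketched as a scalar filter with a random-walk process model; the noise variances below are illustrative placeholders, not the report's tuned values.

```python
class Kalman1D:
    """Minimal scalar Kalman filter for smoothing a noisy state such as
    vehicle heading. q is the process-noise variance, r the measurement-
    noise variance; both would be tuned against real sensor data."""

    def __init__(self, x0=0.0, p0=1.0, q=1e-3, r=0.5):
        self.x, self.p, self.q, self.r = x0, p0, q, r

    def update(self, z):
        # predict: random-walk model, state assumed constant between steps
        self.p += self.q
        # correct with the new measurement z
        k = self.p / (self.p + self.r)  # Kalman gain
        self.x += k * (z - self.x)
        self.p *= (1.0 - k)
        return self.x
```

Fed a noisy heading signal, the estimate converges toward the underlying value while attenuating the measurement noise that would otherwise pass straight into the steering command.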

    Location tracking in indoor and outdoor environments based on the Viterbi principle


    Switching Trackers for Effective Sensor Fusion in Advanced Driver Assistance Systems

    Modern cars utilise Advanced Driver Assistance Systems (ADAS) in several ways. In ADAS, using multiple sensors to gauge the environment surrounding the ego-vehicle offers numerous advantages, as fusing information from more than one sensor helps to provide highly reliable and error-free data. The fused data is typically fed to a tracker algorithm, which helps to reduce noise, compensate for situations when received sensor data is temporarily absent or spurious, and counter occasional false positives and negatives. The performance of these constituent algorithms varies vastly under different scenarios. In this paper, we focus on the variation in the performance of tracker algorithms in sensor fusion due to changing external conditions across scenarios, and on methods for countering that variation. We introduce a sensor fusion architecture in which the tracking algorithm is switched dynamically to achieve the best performance under all scenarios. By employing a Real-time Traffic Density Estimation (RTDE) technique, we determine whether the ego-vehicle is currently in dense or sparse traffic: highly dense (congested) traffic implies that external circumstances are non-linear, while sparse traffic implies a higher probability of linear external conditions. We also employ a Traffic Sign Recognition (TSR) algorithm, which monitors for construction zones, junctions, schools, and pedestrian crossings, thereby identifying areas with a high probability of spontaneous on-road occurrences. Based on the results from the RTDE and TSR algorithms, we construct logic that switches the tracker of the fusion architecture between an Extended Kalman Filter (for linear external scenarios) and an Unscented Kalman Filter (for non-linear scenarios).
    This ensures that the fusion model always uses the tracker best suited to its current needs, yielding consistent accuracy across multiple external scenarios compared with fusion models that employ a fixed single tracker.
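The switching logic can be illustrated with a minimal sketch; the density threshold and input names are our assumptions, standing in for the outputs of the paper's RTDE and TSR modules.

```python
def select_tracker(traffic_density, high_risk_sign_nearby,
                   density_threshold=0.6):
    """Choose an EKF when the scene is likely linear (sparse traffic,
    no high-risk zone flagged by sign recognition) and a UKF otherwise.
    traffic_density: estimated density in [0, 1] (hypothetical RTDE output)
    high_risk_sign_nearby: True if TSR flagged a construction zone,
        junction, school, or pedestrian crossing."""
    nonlinear = traffic_density >= density_threshold or high_risk_sign_nearby
    return "UKF" if nonlinear else "EKF"
```

In the paper's architecture this decision would drive which filter instance consumes the fused sensor data at each cycle, rather than returning a label.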

    Intelligent automatic overtaking system using vision for vehicle detection

    There is clear evidence that investment in intelligent transportation system technologies brings major social and economic benefits. Technological advances in the area of automatic systems in particular are becoming vital for the reduction of road deaths. Here we describe our approach to automating one of the riskiest autonomous manoeuvres involving vehicles: overtaking. The approach is based on a stereo vision system responsible for detecting any preceding vehicle and triggering the autonomous overtaking manoeuvre. To this end, a fuzzy-logic based controller was developed to emulate how humans overtake. Its input is information from the vision system and from a positioning-based system consisting of a differential global positioning system (DGPS) and an inertial measurement unit (IMU); its output is action on the vehicle’s actuators, i.e., the steering wheel and the throttle and brake pedals. The system has been incorporated into a commercial Citroën car and tested on the private driving circuit at the facilities of our research center, CAR, with different preceding vehicles (a motorbike, a car, and a truck) with encouraging results.
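The flavour of such a fuzzy-logic controller can be shown with a toy single-input rule base; the membership sets, steering gains, and the reduction to one input are invented for illustration (the real controller fuses vision, DGPS, and IMU data).

```python
def tri(x, a, b, c):
    """Triangular membership function peaking at b, zero outside [a, c]."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def fuzzy_steer(lateral_error):
    """Toy fuzzy steering rule base: three fuzzy sets over the lateral
    error (metres left (-) or right (+) of the target lane), defuzzified
    as a weighted average of each rule's steering command in degrees."""
    left   = tri(lateral_error, -3.0, -1.5, 0.0)   # "vehicle is left"
    center = tri(lateral_error, -1.5,  0.0, 1.5)   # "vehicle is centred"
    right  = tri(lateral_error,  0.0,  1.5, 3.0)   # "vehicle is right"
    num = left * 10.0 + center * 0.0 + right * -10.0
    den = left + center + right
    return num / den if den else 0.0
```

The appeal of this style of controller, noted above, is that rules read like the heuristics a human driver would state, and they blend smoothly between cases instead of switching abruptly.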

    Uses and Challenges of Collecting LiDAR Data from a Growing Autonomous Vehicle Fleet: Implications for Infrastructure Planning and Inspection Practices

    Autonomous vehicles (AVs) that utilize LiDAR (Light Detection and Ranging) and other sensing technologies are becoming an inevitable part of the transportation industry. Concurrently, transportation agencies are increasingly challenged with the management and tracking of large-scale highway asset inventories. LiDAR has become popular among transportation agencies for highway asset management given its advantages over traditional surveying methods, and the technology is becoming more affordable every day. Consequently, there will be substantial challenges and opportunities in utilizing the big data generated by a growing fleet of LiDAR-equipped AVs. A proper understanding of the data volumes this technology generates will help agencies make decisions about data storage, management, and transmission. The original raw data generated by the sensor shrinks considerably after being filtered and processed following the Cache County Road Manual and stored in the ASPRS-recommended (.las) file format. In this pilot study in Cache County, UT, it is found that when the road centerline is used as the vehicle trajectory, a larger portion of the data falls into the right-of-way section than with the actual vehicle trajectory. There is also a positive relationship between data size and vehicle speed for the travel-lanes section, given the nature of the selected highway environment.

    Overview of Environment Perception for Intelligent Vehicles

    This paper presents a comprehensive literature review on environment perception for intelligent vehicles. The state-of-the-art algorithms and modeling methods for intelligent vehicles are given, with a summary of their pros and cons. Particular attention is paid to methods for lane and road detection, traffic sign recognition, vehicle tracking, behavior analysis, and scene understanding. In addition, we provide information about datasets, common performance analysis, and perspectives on future research directions in this area.

    Robust ego-localization using monocular visual odometry
