
    Real-Time Lane Region Detection Using a Combination of Geometrical and Image Features

    Over the past few decades, pavement markings have played a key role in intelligent vehicle applications such as guidance, navigation, and control. However, lane marking detection still faces serious issues, such as excessive processing time and false detections caused by the similarity in color and edges between lane markings and other road markings (channeling lines, stop lines, crosswalks, arrows, etc.). This paper proposes a strategy to extract lane marking information using its features, such as color, edge, and width, as well as the vehicle speed. Firstly, defining the region of interest is critical for real-time performance; here, the region of interest depends on the vehicle speed. Secondly, the lane markings are detected with a hybrid color-edge feature method combined with a probabilistic method based on distance-color dependence and a hierarchical fitting model. Thirdly, the following lane marking information is extracted: the number of lane markings on each side of the vehicle, the respective fitting model, and the centroid of the lane. Using these parameters, the lane region is computed with a road geometric model. To evaluate the proposed method, a set of consecutive frames was used to validate its performance.
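    As a rough illustration of the kind of pipeline this abstract describes, the sketch below builds a speed-dependent region of interest and a hybrid color-edge mask with OpenCV. The speed-to-ROI mapping, the HSV thresholds, and the function name lane_marking_mask are illustrative assumptions, not the paper's parameters.

        # Sketch: speed-dependent ROI plus a hybrid color-edge mask for lane markings.
        # Thresholds and the speed-to-ROI mapping are illustrative assumptions.
        import cv2
        import numpy as np

        def lane_marking_mask(bgr_frame, speed_kmh):
            h, w = bgr_frame.shape[:2]

            # Higher speed -> look farther ahead, i.e. the ROI top edge moves up.
            lookahead = np.clip(0.45 + 0.002 * speed_kmh, 0.45, 0.75)  # fraction of image height
            roi_top = int(h * (1.0 - lookahead))
            roi = bgr_frame[roi_top:, :]

            # Color cue: white/yellow paint in HSV space (assumed ranges).
            hsv = cv2.cvtColor(roi, cv2.COLOR_BGR2HSV)
            white = cv2.inRange(hsv, (0, 0, 180), (180, 40, 255))
            yellow = cv2.inRange(hsv, (15, 60, 120), (35, 255, 255))
            color_mask = cv2.bitwise_or(white, yellow)

            # Edge cue: gradients of the grayscale ROI.
            gray = cv2.cvtColor(roi, cv2.COLOR_BGR2GRAY)
            edges = cv2.Canny(gray, 50, 150)

            # Hybrid mask: keep pixels supported by both cues.
            hybrid = cv2.bitwise_and(color_mask, cv2.dilate(edges, np.ones((5, 5), np.uint8)))

            full = np.zeros((h, w), np.uint8)
            full[roi_top:, :] = hybrid
            return full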

    Characterizing zebra crossing zones using LiDAR data

    Light detection and ranging (LiDAR) scanning in urban environments produces accurate and dense three-dimensional point clouds in which the different elements of the scene can be precisely characterized. In this paper, two complementary LiDAR-based algorithms are proposed. The first is a novel profiling method robust to noise and obstacles. It accurately characterizes the curvature, the slope, the height of the sidewalks, obstacles, and defects such as potholes. It was effective for 48 of 49 detected zebra crossings, even in the presence of pedestrians or vehicles in the crossing zone. The second is a detailed quantitative summary of the state of the zebra crossing, containing information about its location, geometry, and road marking. Coarse-grained statistics are more prone to obstacle-related errors and are only fully reliable for the 18 zebra crossings free from significant obstacles; however, all the anomalous statistics can be analyzed by inspecting the associated profiles. The results can help in the maintenance of urban roads and, more specifically, can be used to improve the quality and safety of pedestrian routes. Funding: Consellería de Cultura, Educación e Ordenación Universitaria, Grant/Award Numbers: accreditation 2019-2022 ED431G-2019/04, 2022-2024 ED431C2022/16, ED481A-2020/231; European Regional Development Fund (ERDF); CiTIUS-Research Center in Intelligent Technologies of the University of Santiago de Compostela as a Research Center of the Galician University System; Ministry of Economy and Competitiveness, Government of Spain, Grant/Award Number: PID2019-104834GB-I00; National Department of Traffic (DGT) through the project Analysis of Indicators Big-Geodata on Urban Roads for the Dynamic Design of Safe School Roads, Grant/Award Number: SPIP2017-02340S.
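    A minimal sketch of the profiling idea follows, assuming the point cloud is already georeferenced and the profile line across the crossing is known. The corridor width, bin size, and the median-based robustness to obstacles are illustrative choices, not the authors' algorithm.

        # Sketch: a height profile across a zebra crossing from a LiDAR point cloud.
        import numpy as np

        def cross_profile(points_xyz, origin, direction, width=0.5, bin_size=0.05):
            """Project points onto a profile line and return (distance, median height) bins.

            points_xyz : (N, 3) array of LiDAR points (x, y, z in metres)
            origin     : (2,) start of the profile line on the ground plane
            direction  : (2,) unit vector along the profile line
            width      : half-width of the corridor of points kept around the line
            """
            d = np.asarray(direction, float)
            d = d / np.linalg.norm(d)
            normal = np.array([-d[1], d[0]])

            rel = points_xyz[:, :2] - np.asarray(origin, float)
            along = rel @ d                 # distance along the profile
            across = rel @ normal           # lateral offset from the profile line
            keep = np.abs(across) <= width

            along, z = along[keep], points_xyz[keep, 2]
            bins = np.floor(along / bin_size).astype(int)
            profile = []
            for b in np.unique(bins):
                zb = z[bins == b]
                profile.append((b * bin_size, np.median(zb)))  # median is robust to obstacles
            return np.array(profile)

        def sidewalk_height(profile, road_span, walk_span):
            """Rough curb height: median sidewalk height minus median road height."""
            dist, z = profile[:, 0], profile[:, 1]
            road = z[(dist >= road_span[0]) & (dist <= road_span[1])]
            walk = z[(dist >= walk_span[0]) & (dist <= walk_span[1])]
            return float(np.median(walk) - np.median(road))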

    Challenges in Partially-Automated Roadway Feature Mapping Using Mobile Laser Scanning and Vehicle Trajectory Data

    Connected vehicle and driver's assistance applications are greatly facilitated by Enhanced Digital Maps (EDMs) that represent roadway features (e.g., lane edges or centerlines, stop bars). Due to the large number of signalized intersections and miles of roadway, manual development of EDMs on a global basis is not feasible. Mobile Terrestrial Laser Scanning (MTLS) is the preferred data acquisition method to provide data for automated EDM development. Such systems provide an MTLS trajectory and a point cloud for the roadway environment. The challenge is to automatically convert these data into an EDM. This article presents a new processing and feature extraction method, an experimental demonstration providing SAE J2735 MAP messages for eleven example intersections, and a discussion of the results that points out remaining challenges and suggests directions for future research. Comment: 6 pages, 5 figures.
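    For a sense of the mapping target, the sketch below converts an extracted lane centerline (in local metres, relative to the MTLS trajectory) into a J2735-style list of node offsets in centimetres. It is a simplified stand-in, not a spec-conformant SAE J2735 MAP encoder; the node cap and field names are illustrative assumptions.

        # Sketch: lane centerline -> J2735-style node-offset list (simplified, illustrative).
        import numpy as np

        def centerline_to_nodes(centerline_xy_m, max_nodes=63):
            """Return per-node (dx_cm, dy_cm) offsets, each relative to the previous node."""
            pts = np.asarray(centerline_xy_m, float)
            if len(pts) > max_nodes:  # J2735 lane node lists carry a bounded node count (assumed cap)
                idx = np.linspace(0, len(pts) - 1, max_nodes).round().astype(int)
                pts = pts[idx]
            deltas = np.diff(pts, axis=0, prepend=pts[:1])  # first node is the reference point
            return [{"dx_cm": int(round(dx * 100)), "dy_cm": int(round(dy * 100))}
                    for dx, dy in deltas]

        # Example: a gently curving 30 m lane edge sampled every metre.
        s = np.linspace(0, 30, 31)
        lane = np.column_stack([s, 0.02 * s ** 2])
        print(centerline_to_nodes(lane)[:3])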

    Pedestrian Behavior Study to Advance Pedestrian Safety in Smart Transportation Systems Using Innovative LiDAR Sensors

    Pedestrian safety is critical to improving walkability in cities. Although walking trips have increased in the last decade, pedestrian safety remains a top concern. In 2020, 6,516 pedestrians were killed in traffic crashes, the most deaths since 1990 (NHTSA, 2020). Approximately 15% of these occurred at signalized intersections, where a variety of modes converge, leading to an increased propensity for conflicts. Current signal timing and detection technologies are heavily biased towards vehicular traffic, often leading to higher delays and insufficient walk times for pedestrians, which can result in risky behaviors such as noncompliance. Current detection systems for pedestrians at signalized intersections consist primarily of push buttons. Their limitations include the inability to give pedestrians feedback that they have been detected, especially with older devices, and the inability to dynamically extend walk times if pedestrians fail to clear the crosswalk. Smart transportation systems play a vital role in enhancing mobility and safety and provide innovative techniques to connect pedestrians, vehicles, and infrastructure. Most research on smart and connected technologies is focused on vehicles; however, there is a critical need to harness these technologies to study pedestrian behavior, as pedestrians are the most vulnerable users of the transportation system. While a few studies have used location technologies to detect pedestrians, their coverage is usually small and favors people with smartphones; the transportation system must instead consider the full spectrum of pedestrians and accommodate everyone. In this research, the investigators first review previous studies on pedestrian behavior data and sensing technologies. The research team then developed a pedestrian behavioral data collection system based on emerging LiDAR sensors and deployed it at two signalized intersections. Two studies were conducted: (a) a study of pedestrian behavior at signalized intersections, analyzing pedestrian waiting time before crossing, generalized perception-reaction time to the WALK sign, and crossing speed; and (b) a novel dynamic flashing yellow arrow (D-FYA) solution to separate permissive left-turn vehicles from concurrently crossing pedestrians. The results reveal that pedestrian behaviors may have evolved compared with the behaviors recommended in pedestrian facility design guidelines (e.g., AASHTO's "Green Book"). The D-FYA solution was also evaluated on a cabinet-in-the-loop simulation platform, and the improvements were promising. The findings of this study will advance the body of knowledge on equitable traffic safety, especially pedestrian safety, in the future.
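    As a concrete illustration of the three behavior measures named in this abstract, the sketch below computes them from a tracked trajectory plus signal-event timestamps. The function, its argument layout, and the event timestamps are hypothetical; the actual LiDAR processing pipeline is not reproduced here.

        # Sketch: waiting time, WALK reaction time, and crossing speed from a tracked trajectory.
        import numpy as np

        def behaviour_metrics(times, positions, arrival_t, walk_on_t, curb_enter_t, curb_exit_t):
            """
            times        : (N,) timestamps of the trajectory (s)
            positions    : (N, 2) planar positions from the LiDAR tracker (m)
            arrival_t    : time the pedestrian reaches the waiting area
            walk_on_t    : time the WALK indication turns on
            curb_enter_t : time the pedestrian steps off the curb
            curb_exit_t  : time the pedestrian reaches the far curb
            """
            waiting_time = curb_enter_t - arrival_t
            reaction_time = max(0.0, curb_enter_t - walk_on_t)  # meaningful for compliant crossings

            in_crossing = (times >= curb_enter_t) & (times <= curb_exit_t)
            path = positions[in_crossing]
            distance = np.sum(np.linalg.norm(np.diff(path, axis=0), axis=1))
            crossing_speed = distance / max(curb_exit_t - curb_enter_t, 1e-6)

            return {"waiting_time_s": waiting_time,
                    "reaction_time_s": reaction_time,
                    "crossing_speed_mps": crossing_speed}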

    Perception and intelligent localization for autonomous driving

    Master's dissertation in Computer and Telematics Engineering. Computer vision and sensor fusion are relatively recent subjects, yet they are widely adopted in the development of autonomous robots that must adapt to their surrounding environment. This thesis approaches both in order to achieve perception in the context of autonomous driving. Using cameras for this purpose is a rather complex task. Unlike classic sensing devices, which deterministically provide the same kind of precise information, the successive images acquired by a camera are filled with highly varied information, all of it ambiguous and extremely difficult to extract. Using cameras as a sensing modality in robotics is the closest we come to the system of greatest importance in human perception, the vision system. Computer vision is a scientific discipline that encompasses areas such as signal processing, artificial intelligence, mathematics, control theory, neurobiology, and physics. The platform supporting the study developed in this thesis is ROTA (RObô Triciclo Autónomo), together with all the elements that make up its environment. In this context, the thesis describes the approaches introduced to address the challenges the robot faces in its environment: detection of lane markings and the resulting lane perception, and detection of obstacles, traffic lights, the crosswalk zone, and the roadworks zone. It also describes a calibration system and the implementation of image perspective removal, developed to map the perceived elements to real-world distances. Building on the perception system, the thesis further addresses self-localization, integrated in a distributed architecture that includes navigation with intelligent planning. All the work developed in the course of this dissertation is essentially focused on robotic perception in the context of autonomous driving.
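    The perspective-removal step described above can be approximated with a ground-plane homography; the sketch below is a generic OpenCV version, with made-up calibration correspondences rather than ROTA's actual calibration.

        # Sketch: map image pixels to ground-plane (metric) coordinates via a homography.
        import cv2
        import numpy as np

        # Pixel corners of a known rectangle on the ground (assumed calibration shot)...
        img_pts = np.float32([[420, 710], [860, 710], [780, 520], [500, 520]])
        # ...and the same corners in robot/world coordinates, in metres.
        world_pts = np.float32([[0.0, 1.0], [0.6, 1.0], [0.6, 2.0], [0.0, 2.0]])

        H = cv2.getPerspectiveTransform(img_pts, world_pts)   # image -> ground plane

        def image_to_ground(pixel_xy):
            """Map one image pixel onto the ground plane (metres in the robot frame)."""
            p = np.float32([[pixel_xy]])                      # shape (1, 1, 2) for OpenCV
            return cv2.perspectiveTransform(p, H)[0, 0]

        # e.g. where does a detected crosswalk edge at pixel (640, 600) lie?
        print(image_to_ground((640, 600)))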

    Autonomous Pedestrian Detection in Transit Buses

    This project created a proof of concept for an automated pedestrian detection and avoidance system designed for transit buses. The system detects objects up to 12 meters away, measures their distance using a solid-state LIDAR, and determines whether an object is human using a passive infrared sensor; a positive detection triggers a visual and audible warning. A Xilinx Zynq SoC, combining programmable logic with an ARM-based processing system, drives the data fusion, and an external power unit makes the system configurable for transit buses.
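    A minimal sketch of the described fusion decision follows, assuming a simple polled interface; the threshold constant and the Detection data structure are illustrative, not the project's Zynq firmware.

        # Sketch: LIDAR range gate combined with a passive-infrared (PIR) cue to raise a warning.
        from dataclasses import dataclass

        @dataclass
        class Detection:
            distance_m: float      # nearest return from the LIDAR
            pir_active: bool       # PIR sensor sees a warm (likely human) target

        WARNING_RANGE_M = 12.0     # stated maximum detection range

        def should_warn(det: Detection) -> bool:
            """Warn only when an object is in range AND the PIR suggests it is a person."""
            return det.distance_m <= WARNING_RANGE_M and det.pir_active

        if __name__ == "__main__":
            print(should_warn(Detection(distance_m=7.5, pir_active=True)))    # True
            print(should_warn(Detection(distance_m=7.5, pir_active=False)))   # False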

    Detection and Tracking of Pedestrians Using Doppler LiDAR

    Pedestrian detection and tracking is necessary for autonomous vehicles and traffic management. This paper presents a novel solution to pedestrian detection and tracking for urban scenarios based on Doppler LiDAR that records both the position and velocity of the targets. The workflow consists of two stages. In the detection stage, the input point cloud is first segmented to form clusters, frame by frame. A subsequent multiple pedestrian separation process is introduced to further segment pedestrians close to each other. While a simple speed classifier is capable of extracting most of the moving pedestrians, a supervised machine learning-based classifier is adopted to detect pedestrians with insignificant radial velocity. In the tracking stage, the pedestrian's state is estimated by a Kalman filter, which uses the speed information to estimate the pedestrian's dynamics. Based on the similarity between the predicted and detected states of pedestrians, a greedy algorithm is adopted to associate the trajectories with the detection results. The presented detection and tracking methods are tested on two data sets collected in San Francisco, California by a mobile Doppler LiDAR system. The results of the pedestrian detection demonstrate that the proposed two-step classifier can improve the detection performance, particularly for detecting pedestrians far from the sensor. For both data sets, the use of Doppler speed information improves the F1-score and the recall by 15% to 20%. The subsequent tracking from the Kalman filter can achieve 83.9–55.3% for the multiple object tracking accuracy (MOTA), where the contribution of the speed measurements is secondary and insignificant.
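    As an illustration of the greedy association step mentioned in this abstract, the sketch below matches Kalman-predicted pedestrian states to current detections by nearest position within a gate. The gate radius and the [x, y, vx, vy] state layout are assumptions; the paper's similarity measure may differ.

        # Sketch: greedy track-to-detection association by smallest position distance first.
        import numpy as np

        def greedy_associate(predicted, detected, gate=1.5):
            """
            predicted : (M, 4) predicted states from the Kalman filter [x, y, vx, vy]
            detected  : (N, 4) detected states from the current frame
            Returns a list of (track_index, detection_index) pairs.
            """
            if len(predicted) == 0 or len(detected) == 0:
                return []
            # Similarity = Euclidean distance between predicted and detected positions.
            cost = np.linalg.norm(predicted[:, None, :2] - detected[None, :, :2], axis=2)
            pairs, used_t, used_d = [], set(), set()
            # Visit candidate pairs from most to least similar (smallest cost first).
            for t, d in zip(*np.unravel_index(np.argsort(cost, axis=None), cost.shape)):
                if t in used_t or d in used_d or cost[t, d] > gate:
                    continue
                pairs.append((int(t), int(d)))
                used_t.add(t)
                used_d.add(d)
            return pairs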