
    A Vision-Based Driver Nighttime Assistance and Surveillance System Based on Intelligent Image Sensing Techniques and a Heterogamous Dual-Core Embedded System Architecture

    This study proposes a vision-based intelligent nighttime driver assistance and surveillance system (VIDASS) implemented as a set of embedded software components and modules, which are integrated into a component-based system framework on an embedded heterogeneous dual-core platform. To this end, the study develops and implements computer vision and sensing techniques for nighttime vehicle detection, collision warning determination, and traffic event recording. The proposed system processes the road-scene frames ahead of the host car, captured by CCD sensors mounted on the host vehicle. These vision-based sensing and processing technologies are integrated and implemented on an ARM-DSP heterogeneous dual-core embedded platform. Peripheral devices, including image-grabbing devices, communication modules, and other in-vehicle control devices, are also integrated to form an in-vehicle embedded vision-based nighttime driver assistance and surveillance system.

    Provident vehicle detection at night for advanced driver assistance systems

    In recent years, computer vision algorithms have become more powerful, enabling technologies such as autonomous driving to evolve rapidly. However, current algorithms share one main limitation: they rely on directly visible objects. This is a significant drawback compared to human behavior, where visual cues caused by objects (e.g., shadows) are already used intuitively to retrieve information or anticipate appearing objects. While driving at night, this performance deficit becomes even more obvious: humans already process the light artifacts cast by the headlamps of oncoming vehicles to estimate where they will appear, whereas current object detection systems require the oncoming vehicle to be directly visible before it can be detected. Building on previous work on this subject, this paper presents a complete system that detects the light artifacts caused by the headlights of oncoming vehicles, so that an approaching vehicle is detected providently (denoted as provident vehicle detection). To that end, an entire algorithm architecture is investigated, including detection in image space, three-dimensional localization, and tracking of light artifacts. To demonstrate the usefulness of such an algorithm, it is deployed in a test vehicle, where the detected light artifacts proactively control the glare-free high-beam system (reacting before the oncoming vehicle is directly visible). Using this experimental setting, the time benefit of the provident vehicle detection system over an in-production computer vision system is quantified. Additionally, the glare-free high-beam use case provides a real-time, real-world visualization interface for the detection results by using the adaptive headlamps as projectors.
    With this investigation of provident vehicle detection, we want to raise awareness of the unconventional sensing task of detecting objects providently (detection based on the observable visual cues objects cause before they become visible) and to further close the performance gap between human behavior and computer vision algorithms, bringing autonomous and automated driving a step forward.
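    The image-space detection step described in this abstract can be illustrated with a minimal sketch: thresholding a grayscale night frame for bright pixels and grouping them into connected components ("light artifact" blobs). This is an assumption-laden simplification, not the authors' pipeline; the threshold value, 4-connectivity, and the omission of the 3-D localization and tracking stages are all illustrative choices.

    ```python
    # Hedged sketch of bright-blob (light artifact) detection only.
    # The paper's full pipeline (3-D localization, tracking, headlamp
    # control) is not reproduced; parameters here are illustrative.

    from collections import deque

    def detect_light_blobs(frame, threshold=200, min_area=2):
        """frame: 2-D list of grey values. Returns blob centroids (row, col)."""
        rows, cols = len(frame), len(frame[0])
        seen = [[False] * cols for _ in range(rows)]
        blobs = []
        for r in range(rows):
            for c in range(cols):
                if frame[r][c] >= threshold and not seen[r][c]:
                    # Flood-fill one 4-connected bright component.
                    queue, pixels = deque([(r, c)]), []
                    seen[r][c] = True
                    while queue:
                        y, x = queue.popleft()
                        pixels.append((y, x))
                        for dy, dx in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                            ny, nx = y + dy, x + dx
                            if (0 <= ny < rows and 0 <= nx < cols
                                    and frame[ny][nx] >= threshold
                                    and not seen[ny][nx]):
                                seen[ny][nx] = True
                                queue.append((ny, nx))
                    if len(pixels) >= min_area:
                        cy = sum(p[0] for p in pixels) / len(pixels)
                        cx = sum(p[1] for p in pixels) / len(pixels)
                        blobs.append((cy, cx))
        return blobs

    # Example: a 5x5 dark frame with one 2x2 bright patch.
    frame = [[0] * 5 for _ in range(5)]
    for y in (1, 2):
        for x in (1, 2):
            frame[y][x] = 255
    print(detect_light_blobs(frame))  # → [(1.5, 1.5)]
    ```

    In a real system this step would feed blob positions into the localization and tracking stages; a production implementation would more likely use an optimized library routine for connected components than a hand-rolled flood fill.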

    An algorithm for accurate taillight detection at night

    Vehicle detection is an important process in many advanced driver assistance systems (ADAS), such as forward collision avoidance, time-to-collision (TTC) estimation, and intelligent headlight control (IHC). This paper presents a new algorithm for detecting a vehicle ahead by its taillight pair. First, the proposed method extracts taillight candidate regions by filtering taillight-coloured regions and applying morphological operations. Second, candidate pairing and pair-symmetry analysis steps are applied to obtain taillight positions. The aim of this work is to improve the accuracy of taillight detection at night in complex scenes containing many bright-spot candidates from streetlamps and other sources. Experiments on a still-image dataset show that the proposed algorithm improves the taillight detection accuracy rate and is robust on low-light images.
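    The pairing and symmetry-analysis step described above can be sketched as follows. This is a minimal illustration under assumed constraints (similar height, similar size, plausible horizontal separation), not the authors' algorithm; the candidate extraction via colour filtering and morphology is assumed to have already produced bounding boxes, and all thresholds are hypothetical.

    ```python
    # Hedged sketch: pairing taillight candidate regions by symmetry.
    # Candidates are bounding boxes (x, y, w, h); thresholds are illustrative.

    def pair_taillights(candidates, max_y_diff=10, max_size_ratio=1.5,
                        min_sep=30, max_sep=400):
        """Return (left, right) box pairs that satisfy symmetry constraints."""
        pairs = []
        for i, a in enumerate(candidates):
            for b in candidates[i + 1:]:
                left, right = (a, b) if a[0] < b[0] else (b, a)
                # Taillights of one vehicle lie at roughly the same height...
                if abs(left[1] - right[1]) > max_y_diff:
                    continue
                # ...have similar area...
                area_l, area_r = left[2] * left[3], right[2] * right[3]
                if max(area_l, area_r) > max_size_ratio * min(area_l, area_r):
                    continue
                # ...and are separated by a plausible vehicle width.
                sep = right[0] - (left[0] + left[2])
                if min_sep <= sep <= max_sep:
                    pairs.append((left, right))
        return pairs

    # Example: two symmetric taillight blobs plus a stray streetlamp blob.
    boxes = [(100, 200, 20, 15), (260, 202, 22, 16), (180, 50, 10, 10)]
    print(pair_taillights(boxes))  # keeps only the symmetric pair
    ```

    Rejecting the streetlamp blob by its height mismatch is exactly the kind of filtering the abstract motivates: symmetry constraints discard bright spots that cannot belong to a taillight pair.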

    A system for traffic violation detection

    This paper describes the framework and components of an experimental platform for an advanced driver assistance system (ADAS) that gives drivers feedback about traffic violations committed during their driving. The system can detect some specific traffic violations, record data associated with these faults in a local database, and visualize the spatial and temporal information of these violations on a geographical map using the standard Google Earth tool. The test bed is mainly composed of two parts: a computer vision subsystem for traffic sign detection and recognition that operates during both day and night, and an event data recorder (EDR) for recording data related to specific traffic violations. The paper first describes the hardware architecture and then presents the policies used for handling traffic violations.
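    The logging-and-visualization flow described in this abstract can be sketched as recording violation events and exporting them as KML placemarks, the XML format Google Earth reads. This is not the authors' implementation; the event schema and field names are illustrative assumptions.

    ```python
    # Hedged sketch: logging violation events and exporting them as KML
    # placemarks for display in Google Earth. Schema is an assumption.

    from dataclasses import dataclass

    @dataclass
    class Violation:
        kind: str        # e.g. "stop_sign_ignored" (illustrative label)
        timestamp: str   # ISO 8601 time of the event
        lat: float
        lon: float

    def to_kml(events):
        """Serialise logged violations as a minimal KML document."""
        placemarks = "\n".join(
            "  <Placemark>\n"
            f"    <name>{e.kind}</name>\n"
            f"    <TimeStamp><when>{e.timestamp}</when></TimeStamp>\n"
            # KML coordinates are longitude,latitude.
            f"    <Point><coordinates>{e.lon},{e.lat}</coordinates></Point>\n"
            "  </Placemark>"
            for e in events
        )
        return ('<?xml version="1.0" encoding="UTF-8"?>\n'
                '<kml xmlns="http://www.opengis.net/kml/2.2">\n<Document>\n'
                f"{placemarks}\n</Document>\n</kml>")

    log = [Violation("stop_sign_ignored", "2024-05-01T10:15:00Z",
                     40.4168, -3.7038)]
    print(to_kml(log))
    ```

    Writing the returned string to a `.kml` file and opening it in Google Earth would plot each violation at its recorded position, matching the spatial/temporal visualization the paper describes.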

    Driving in the Rain: A Survey toward Visibility Estimation through Windshields

    Rain can significantly impair a driver’s sight and affect their performance when driving in wet conditions. Evaluation of driver visibility in harsh weather, such as rain, has garnered considerable research interest since the advent of autonomous vehicles and the emergence of intelligent transportation systems. In recent years, advances in computer vision and machine learning have led to a significant number of new approaches to this challenge. However, the literature is fragmented and should be reorganised and analysed to make progress in this field. There is still no comprehensive survey that summarises driver visibility methodologies, including classic and recent data-driven/model-driven approaches to the windshield view in rainy conditions, and that compares their generalisation performance fairly. Most ADAS and autonomous driving (AD) systems are based on object detection, so rain visibility plays a key role in the efficiency of ADAS/AD functions used in semi- or fully autonomous driving. This study fills this gap by reviewing current state-of-the-art solutions in rain visibility estimation used to reconstruct the driver’s view for object-detection-based autonomous driving. These solutions are classified as rain visibility estimation systems that work on (1) the perception components, (2) the control and other hardware components, and (3) the visualisation and other software components of the ADAS/AD function. Limitations and unsolved challenges are also highlighted for further research.