
    Saliency difference based objective evaluation method for a superimposed screen of the HUD with various background

    The head-up display (HUD) is an emerging device that projects information onto a transparent screen. HUDs are used in airplanes and vehicles and are usually placed in front of the operator's view. In a vehicle, the driver sees not only the information on the HUD but also the background (driving environment) through it. Because the HUD is transparent, however, the projected information may interfere with colors in the background: a red message on the HUD, for example, is less noticeable when it overlaps the red brake light of the vehicle ahead. As a first step toward solving this issue, a way to evaluate the mutual interference between the information on the HUD and the background is needed. This paper therefore proposes a saliency-based evaluation method: the interference is assessed by comparing the HUD region cropped from a saliency map of a measured image with the HUD image itself. Comment: 10 pages, 5 figures, 1 table, accepted by IFAC-HMS 201
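    The comparison described in the abstract can be sketched in a few lines of numpy. The abstract does not specify which saliency model is used, so this sketch assumes the classic spectral-residual saliency method; the function names (`spectral_residual_saliency`, `hud_interference_score`) and the mean-absolute-difference score are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def spectral_residual_saliency(img):
    # img: 2-D grayscale array in [0, 1]; spectral-residual saliency
    # (assumed model -- the paper does not name its saliency method)
    f = np.fft.fft2(img)
    log_amp = np.log(np.abs(f) + 1e-8)
    phase = np.angle(f)
    # smooth the log amplitude with a 3x3 box filter
    pad = np.pad(log_amp, 1, mode="edge")
    h, w = img.shape
    smooth = sum(pad[i:i + h, j:j + w]
                 for i in range(3) for j in range(3)) / 9.0
    residual = log_amp - smooth
    sal = np.abs(np.fft.ifft2(np.exp(residual + 1j * phase))) ** 2
    return sal / sal.max()

def hud_interference_score(scene, hud_img, hud_box):
    # Crop the HUD region from the scene's saliency map and compare it
    # with the (normalized) HUD source image; a large difference means
    # the background changes how salient the projected content appears.
    r, c, bh, bw = hud_box            # (row, col, height, width)
    sal = spectral_residual_saliency(scene)
    crop = sal[r:r + bh, c:c + bw]
    hud = hud_img / (hud_img.max() + 1e-8)
    return float(np.abs(crop - hud).mean())
```

Both inputs are normalized to [0, 1], so the score also lies in [0, 1]; under this sketch, a higher score indicates stronger mutual interference between the HUD content and the background.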

    Brake Light Detection Algorithm for Predictive Braking

    There has recently been a rapid increase in the number of partially automated systems in passenger vehicles. This has necessitated a greater focus on the effect these systems have on the comfort and trust of passengers. One significant issue is the delayed detection of stationary or harshly braking vehicles. This paper proposes a novel brake light detection algorithm to improve ride comfort. The system uses a camera and a YOLOv3 object detector to detect the bounding boxes of the vehicles ahead of the ego vehicle. The bounding boxes are preprocessed with L*a*b colorspace thresholding, then resized to a 30 × 30 pixel resolution and fed into a random forest algorithm. The detection system was evaluated on a dataset collected in the Helsinki metropolitan area under varying conditions. The experiments revealed that the new algorithm reaches a high accuracy of 81.8%. For comparison, using the random forest algorithm alone produced an accuracy of 73.4%, demonstrating the value of the preprocessing stage. Furthermore, a range test was conducted: with a suitable camera, the algorithm can reliably detect lit brake lights even up to a distance of 150 m.
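    The preprocessing stage described above (L*a*b thresholding followed by resizing to 30 × 30) can be sketched in numpy. The paper does not give its threshold value or resizing method, so the a* threshold of 20, the nearest-neighbour resize, and the function names here are illustrative assumptions; a real pipeline would likely use OpenCV for both steps and feed the result to the random forest.

```python
import numpy as np

def rgb_to_lab(rgb):
    # rgb: float array in [0, 1], shape (..., 3); standard sRGB (D65)
    # to CIE L*a*b conversion
    rgb = np.where(rgb > 0.04045,
                   ((rgb + 0.055) / 1.055) ** 2.4, rgb / 12.92)
    m = np.array([[0.4124, 0.3576, 0.1805],
                  [0.2126, 0.7152, 0.0722],
                  [0.0193, 0.1192, 0.9505]])
    xyz = rgb @ m.T / np.array([0.95047, 1.0, 1.08883])
    f = np.where(xyz > 0.008856, np.cbrt(xyz), 7.787 * xyz + 16 / 116)
    L = 116 * f[..., 1] - 16
    a = 500 * (f[..., 0] - f[..., 1])
    b = 200 * (f[..., 1] - f[..., 2])
    return np.stack([L, a, b], axis=-1)

def preprocess_box(crop, a_thresh=20.0):
    # Keep only strongly red pixels (high a* channel), mimicking the
    # paper's L*a*b thresholding stage, then resize the crop to 30x30
    # by nearest-neighbour sampling. a_thresh is an assumed value.
    lab = rgb_to_lab(crop)
    mask = lab[..., 1] > a_thresh
    masked = crop * mask[..., None]
    rows = np.linspace(0, crop.shape[0] - 1, 30).astype(int)
    cols = np.linspace(0, crop.shape[1] - 1, 30).astype(int)
    return masked[rows][:, cols]
```

The thresholding suppresses everything except red regions, so the 30 × 30 input handed to the classifier is dominated by the brake-light geometry rather than by vehicle color or background clutter, which is consistent with the reported accuracy gain over the raw random forest.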

    Nighttime Driver Behavior Prediction Using Taillight Signal Recognition via CNN-SVM Classifier

    This paper aims to enhance the ability to predict nighttime driving behavior by identifying the taillights of both human-driven and autonomous vehicles. The proposed model incorporates a customized detector designed to accurately detect front-vehicle taillights on the road. At the beginning of the detector, a learnable pre-processing block extracts deep features from input images and calculates the data rarity for each feature. In the next step, drawing inspiration from soft attention, a weighted binary mask is designed that guides the model to focus on predetermined regions. The method uses Convolutional Neural Networks (CNNs) to extract distinguishing characteristics from these areas, then reduces their dimensionality with Principal Component Analysis (PCA). Finally, a Support Vector Machine (SVM) predicts the behavior of the vehicles. To train and evaluate the model, a large-scale dataset was collected from two types of dash-cams and Insta360 cameras mounted on the rear of Ford Motor Company vehicles. This dataset includes over 12k frames captured during both daytime and nighttime hours. To address the limited nighttime data, a unique pixel-wise image processing technique converts daytime images into realistic night images. The experiments demonstrate that the proposed methodology can accurately categorize vehicle behavior with 92.14% accuracy, 97.38% specificity, 92.09% sensitivity, 92.10% F1-measure, and 0.895 Cohen's Kappa statistic. Further details are available at https://github.com/DeepCar/Taillight_Recognition. Comment: 12 pages, 10 figures
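    The dimensionality-reduction step in the pipeline above (CNN features → PCA → SVM) can be illustrated with a minimal numpy sketch of PCA via SVD. The number of retained components (`k=32`), the function name, and the return signature are assumptions for illustration; the abstract does not state how many components the authors keep or which PCA implementation they use.

```python
import numpy as np

def pca_reduce(features, k=32):
    # features: (n_samples, n_dims) deep features, e.g. flattened CNN
    # activations; project onto the top-k principal components before
    # handing the reduced vectors to the SVM classifier.
    mu = features.mean(axis=0)
    centered = features - mu
    # right-singular vectors of the centered data are the principal axes
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    components = vt[:k]
    reduced = centered @ components.T
    return reduced, components, mu
```

At inference time a new feature vector would be centered with the stored `mu` and projected with the stored `components` before classification, so the SVM always sees the same low-dimensional space it was trained in.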