
    The Alleviation of Perceptual Blindness During Driving in Urban Areas Guided by Saccades Recommendation

    In advanced industrial applications, computational visual attention models (CVAMs) can predict visual attention in a manner closely resembling actual human attention allocation, and they have become an important technology component of advanced driver assistance systems (ADAS). The biological inspiration for driving-related CVAMs can be drawn from skilled drivers in complex driving conditions, whose attention is constantly directed to salient and informative visual stimuli by shifting eye fixations via saccades to drive safely. This paper therefore proposes a saccade recommendation strategy to enhance driving safety in urban road environments, particularly when the driver's vision is impaired by visual crowding. The altered and directed saccades are collected and optimized by extracting four innate features of human dynamic vision, and a neural network is designed to classify preferable saccades that reduce perceptual blindness caused by visual crowding in urban scenes. A state-of-the-art CVAM is first adopted to localize the predicted eye fixation locations (EFLs) in driving video clips, and human subjects' gaze at the recommended EFLs is measured with an eye tracker. The time delays between the predicted EFLs and drivers' EFLs are analyzed under different driving conditions, followed by the time delays between the predicted EFLs and the driver's hand control. The visually safe margin is then estimated by relating the driving speed to the total delay. Experimental results demonstrate that the recommended saccades effectively reduce perceptual blindness, which is expected to further improve road driving safety.
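    The abstract relates the visually safe margin to driving speed and total delay but does not give the exact formula. The sketch below is a minimal illustration, assuming the margin is the sight distance left over after the vehicle travels for the combined fixation and hand-control delays; the function name, delay breakdown, and all numbers are hypothetical, not from the paper.

    ```python
    # Illustrative sketch (not the paper's implementation): estimate the distance a
    # vehicle covers during the total perception-to-action delay, and compare it
    # against an available sight distance to judge the remaining visually safe
    # margin. All names, values, and the margin definition are assumptions.

    def visually_safe_margin(speed_kmh: float,
                             fixation_delay_s: float,
                             hand_control_delay_s: float,
                             sight_distance_m: float) -> float:
        """Return the remaining safety margin (metres) after the total delay."""
        speed_ms = speed_kmh / 3.6                      # convert km/h to m/s
        total_delay = fixation_delay_s + hand_control_delay_s
        distance_during_delay = speed_ms * total_delay  # distance travelled "blind"
        return sight_distance_m - distance_during_delay

    # Example: 50 km/h, 0.4 s fixation delay, 0.7 s hand-control delay, 30 m sight distance
    margin = visually_safe_margin(50.0, 0.4, 0.7, 30.0)
    print(f"Remaining margin: {margin:.1f} m")   # ~14.7 m in this hypothetical case
    ```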

    Bio-inspired vision mimetics towards next generation collision avoidance automation

    The current “deep learning + large-scale data + strong supervised labeling” framework for collision avoidance in ground robots and aerial drones is becoming saturated, and its development increasingly faces challenges from real open-scene applications, including small data, weak annotation, and cross-scene generalization. Inspired by the neural structures and processes underlying human cognition (e.g., the human visual, auditory, and tactile systems) and by the knowledge learned from daily driving tasks, a high-level cognitive system is developed that integrates collision sensing and collision avoidance. This bio-inspired cognitive approach offers good robustness, high self-adaptability, and low computational cost in practical driving scenes.
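    The abstract does not detail the sensing pipeline; as a loose illustration of the kind of low-cost collision-sensing check such a system must perform, the sketch below computes a simple time-to-collision estimate from relative distance and closing speed. The function names and the 2-second threshold are hypothetical and are not taken from the paper.

    ```python
    # Hypothetical illustration of a minimal collision-sensing check: estimate
    # time-to-collision (TTC) from relative distance and closing speed, and flag
    # when it drops below a braking threshold. Names and threshold are assumptions.
    import math

    def time_to_collision(distance_m: float, closing_speed_ms: float) -> float:
        """TTC in seconds; infinite if the obstacle is not closing in."""
        if closing_speed_ms <= 0.0:
            return math.inf
        return distance_m / closing_speed_ms

    def needs_avoidance(distance_m: float, closing_speed_ms: float,
                        ttc_threshold_s: float = 2.0) -> bool:
        """Trigger avoidance when TTC falls below the (assumed) threshold."""
        return time_to_collision(distance_m, closing_speed_ms) < ttc_threshold_s

    # Example: obstacle 12 m ahead, closing at 8 m/s -> TTC = 1.5 s -> avoid
    print(needs_avoidance(12.0, 8.0))   # True
    ```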

    Human-Factors-in-Driving-Loop: Driver Identification and Verification via a Deep Learning Approach using Psychological Behavioral Data

    Driver identification has become popular in the field of driving behavior analysis, with a broad range of applications in anti-theft systems, driving style recognition, insurance strategy, and fleet management. However, most studies to date have addressed driver identification without a robust verification stage. This paper tackles both driver identification and verification through a deep learning (DL) approach using psychological behavioral data, i.e., vehicle control operation data and eye movement data collected from a driving simulator and an eye tracker, respectively. We design an architecture that analyzes segmented three-second windows of data to capture unique driving characteristics and differentiate drivers on that basis. The proposed model includes a fully convolutional network (FCN) and a squeeze-and-excitation (SE) block. Experimental results were obtained from 24 human participants driving in 12 different scenarios. The proposed driver identification system achieves an accuracy of 99.60% across 15 drivers. To tackle driver verification, we combine the proposed architecture with a Siamese neural network and map all behavioral data into two embedding layers for similarity computation. The verification system achieves strong performance, with an average precision of 96.91%, recall of 95.80%, F1 score of 96.29%, and accuracy of 96.39%. Importantly, we extend the verification system to impostor detection and achieve an average verification accuracy of 90.91%. These results suggest that human-factor data carry invariant characteristics that traditional resources lack, offering a superior solution for driving behavior authentication systems.
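    As a rough sketch of the kind of architecture described (a 1-D fully convolutional network with squeeze-and-excitation blocks applied to short multichannel behavioral windows), the PyTorch code below is illustrative only; the channel counts, kernel sizes, sampling rate, and 15-class output are assumptions, not the paper's exact configuration.

    ```python
    # Illustrative PyTorch sketch of an FCN + squeeze-and-excitation (SE) classifier
    # for windows of multichannel behavioral signals. Channel sizes, kernel sizes,
    # and the number of classes are assumptions for demonstration only.
    import torch
    import torch.nn as nn

    class SEBlock(nn.Module):
        """Squeeze-and-excitation: reweight channels by globally pooled statistics."""
        def __init__(self, channels: int, reduction: int = 16):
            super().__init__()
            self.fc = nn.Sequential(
                nn.Linear(channels, channels // reduction),
                nn.ReLU(inplace=True),
                nn.Linear(channels // reduction, channels),
                nn.Sigmoid(),
            )

        def forward(self, x):                     # x: (batch, channels, time)
            w = x.mean(dim=2)                     # squeeze: global average over time
            w = self.fc(w).unsqueeze(2)           # excitation: per-channel weights
            return x * w                          # rescale feature maps

    class FCNSE(nn.Module):
        """1-D fully convolutional network with SE blocks and a pooled linear head."""
        def __init__(self, in_channels: int, num_classes: int):
            super().__init__()
            self.features = nn.Sequential(
                nn.Conv1d(in_channels, 128, kernel_size=8, padding=4),
                nn.BatchNorm1d(128), nn.ReLU(inplace=True), SEBlock(128),
                nn.Conv1d(128, 256, kernel_size=5, padding=2),
                nn.BatchNorm1d(256), nn.ReLU(inplace=True), SEBlock(256),
                nn.Conv1d(256, 128, kernel_size=3, padding=1),
                nn.BatchNorm1d(128), nn.ReLU(inplace=True),
            )
            self.head = nn.Linear(128, num_classes)

        def forward(self, x):                     # x: (batch, in_channels, time)
            x = self.features(x)
            x = x.mean(dim=2)                     # global average pooling over time
            return self.head(x)

    # Example: 3-second windows at an assumed 60 Hz with 6 behavioral channels
    model = FCNSE(in_channels=6, num_classes=15)
    logits = model(torch.randn(4, 6, 180))        # -> shape (4, 15)
    ```

    For the verification stage, two such encoders sharing weights (a Siamese arrangement) would compare embeddings of two windows; that pairing logic is omitted here for brevity.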

    Eye Fixation Location Recommendation in Advanced Driver Assistance System

    Recent progress in visual attention modeling for mediated perception in advanced driver assistance systems (ADAS) has drawn the attention of computer and human vision researchers. However, it is still debatable whether the driver’s actual eye fixation locations (EFLs) or the EFLs predicted by computational visual attention models (CVAMs) are more reliable for safe driving under real-life driving conditions. We analyzed the suitability of both kinds of EFL using ten typical categories of natural driving video clips: the EFLs of human drivers and the EFLs predicted by CVAMs. In the suitability analysis, we used EFLs confirmed by two experienced drivers as the reference. We found that neither approach alone is suitable for safe driving and that the most suitable EFL depends on the driving conditions. Based on this finding, we propose a novel strategy for recommending one of the EFLs to the driver for ADAS under 10 predefined real-life driving conditions. One of three EFL modes is recommended depending on the driving conditions: the driver’s EFL only, the CVAM’s EFL only, or an interchangeable mode in which the driver’s EFL and the CVAM’s EFL can be used interchangeably. Selecting between the two EFLs is a typical binary classification problem, which we solve with support vector machines (SVMs). We also provide a quantitative evaluation of the classifiers. The performance evaluation of the proposed recommendation method indicates that it is potentially useful to ADAS for future safe driving.
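    Since EFL selection is cast as a binary classification problem solved with SVMs, a minimal scikit-learn sketch is shown below. The features (per-clip driving-condition descriptors) and labels are random placeholders standing in for whatever condition features the paper actually uses; only the overall SVM workflow is illustrated.

    ```python
    # Minimal scikit-learn sketch of the binary EFL-selection classifier: given
    # per-clip driving-condition features, predict whether the driver's EFL (0)
    # or the CVAM-predicted EFL (1) is more suitable. Features and labels here are
    # random placeholders; the paper's actual descriptors are not reproduced.
    import numpy as np
    from sklearn.model_selection import train_test_split
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import SVC

    rng = np.random.default_rng(0)
    X = rng.normal(size=(200, 8))          # 8 hypothetical condition features per clip
    y = rng.integers(0, 2, size=200)       # 0 = driver's EFL, 1 = CVAM's EFL

    X_train, X_test, y_train, y_test = train_test_split(
        X, y, test_size=0.25, random_state=0)

    clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0, gamma="scale"))
    clf.fit(X_train, y_train)
    print(f"held-out accuracy: {clf.score(X_test, y_test):.2f}")
    ```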

    Vibration displacement measurement technology for cylindrical structures using camera images

    Acceleration sensors are usually used to measure the vibration of a structure. Although this is the most accurate method, it cannot be used remotely because they are contact-type sensors. This makes measurement difficult for structures that surveyors cannot easily approach, such as those located in high or dangerous areas. A method that can measure structural vibration without installing sensors is therefore required for structures in such areas. Many conventional studies have examined non-contact vibration measurement methods using cameras. However, these have been applied to structures with relatively large vibration displacements, such as buildings or bridges, and since most of them rely on targets, people still have to approach the structure to install the targets. A new method is therefore required to address the weaknesses of these conventional approaches. In this paper, a method is proposed to measure vibration displacements remotely using a camera, without approaching the structure. Furthermore, a method is proposed for estimating the measurement resolution and measurement error of the vibration displacement of a cylindrical structure obtained with the proposed approach. Both methods are described, along with experimental results that verify their accuracy.
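    The abstract does not give the resolution-estimation formula; as a loose sketch of how camera-based displacement measurement typically converts pixel motion into physical motion, the code below uses the standard pinhole-camera relation (object-space size per pixel ≈ standoff distance × pixel pitch / focal length). All parameter values are hypothetical, not taken from the paper.

    ```python
    # Hypothetical sketch of pixel-to-physical conversion for camera-based vibration
    # measurement, using the pinhole-camera relation. None of the numbers come from
    # the paper; they only show how measurement resolution scales with distance,
    # focal length, and pixel pitch.

    def metres_per_pixel(distance_m: float, pixel_pitch_m: float, focal_length_m: float) -> float:
        """Object-space size represented by one pixel at the given standoff distance."""
        return distance_m * pixel_pitch_m / focal_length_m

    def displacement_m(pixel_shift: float, distance_m: float,
                       pixel_pitch_m: float, focal_length_m: float) -> float:
        """Convert a measured (possibly sub-pixel) image shift into a physical displacement."""
        return pixel_shift * metres_per_pixel(distance_m, pixel_pitch_m, focal_length_m)

    # Example: 30 m standoff, 3.45 µm pixels, 100 mm lens, 0.2-pixel measured shift
    scale = metres_per_pixel(30.0, 3.45e-6, 0.100)       # ~1.04 mm per pixel
    print(f"resolution: {scale * 1e3:.2f} mm/pixel")
    print(f"displacement: {displacement_m(0.2, 30.0, 3.45e-6, 0.100) * 1e3:.2f} mm")
    ```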

    Gaucher's Disease: A Case Report
