
    Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras

    Neuromorphic vision sensors, or event cameras, differ from conventional cameras in that they do not capture images at a specified rate. Instead, they asynchronously log local brightness changes at each pixel. As a result, event cameras only record changes in a given scene, and do so with very high temporal resolution, high dynamic range, and low power requirements. Recent research has demonstrated how these characteristics make event cameras extremely practical sensors in driver monitoring systems (DMS), enabling the tracking of high-speed eye motion and blinks. This research provides a proof of concept to expand event-based DMS techniques to include seatbelt state detection. Using an event simulator, a dataset of 108,691 synthetic neuromorphic frames of car occupants was generated from a near-infrared (NIR) dataset and split into training, validation, and test sets for a seatbelt state detection algorithm based on a recurrent convolutional neural network (CNN). In addition, a smaller set of real event data was collected and reserved for testing. In a binary classification task, the fastened/unfastened frames were identified with F1 scores of 0.989 and 0.944 on the simulated and real test sets, respectively. When the problem was extended to also classify the action of fastening/unfastening the seatbelt, respective F1 scores of 0.964 and 0.846 were achieved.
    Comment: 4 pages, 3 figures, IMVIP 202
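The per-pixel log-brightness-change behaviour described above is also how video-to-event simulators approximate an event camera from ordinary frames. A minimal contrast-threshold sketch follows; this is an illustration only, not the simulator used in the paper, and the function name, threshold value, and frame-index timestamps are assumptions:

```python
import numpy as np

def frames_to_events(frames, threshold=0.2, eps=1e-6):
    """Approximate event generation from a frame sequence.

    Emits an event (x, y, t, polarity) whenever the log intensity at a
    pixel changes by more than `threshold` since the last event there.
    Real event cameras do this asynchronously per pixel; here `t` is
    simply the frame index, not a microsecond timestamp.
    """
    log_ref = np.log(frames[0].astype(np.float64) + eps)
    events = []
    for t, frame in enumerate(frames[1:], start=1):
        log_cur = np.log(frame.astype(np.float64) + eps)
        diff = log_cur - log_ref
        # Positive polarity: brightness rose past the contrast threshold.
        ys, xs = np.where(diff >= threshold)
        events.extend((int(x), int(y), t, 1) for x, y in zip(xs, ys))
        # Negative polarity: brightness fell past the contrast threshold.
        ys, xs = np.where(diff <= -threshold)
        events.extend((int(x), int(y), t, -1) for x, y in zip(xs, ys))
        # Update the reference level only at pixels that fired an event.
        fired = np.abs(diff) >= threshold
        log_ref[fired] = log_cur[fired]
    return events
```

Because only changed pixels emit events, a static seatbelt produces almost no output until it is moved, which is why the fastening/unfastening action is a natural target for this sensor.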

    Sensing Without Seeing - Event Camera Privacy

    Event cameras: "privacy preserving"?
    - No. Although event cameras present sequential, variable data, images can still be reconstructed from the event stream.
    - Controlled reconstruction: system designers have the flexibility to determine the extent of reconstruction within the camera/machine vision pipeline.
    - Event cameras are LESS INVASIVE compared to traditional cameras.

    Optimization of Event Camera Bias Settings for a Neuromorphic Driver Monitoring System

    Event cameras provide a novel imaging technology for high-speed analysis of localized facial motions such as eye gaze, eye blinks, and micro-expressions by taking input at the level of the individual pixel. Due to this capability, and the lightweight nature of their output data, these cameras are being evaluated as a viable option for driver monitoring systems (DMS). This research is the first to investigate the impact of bias modifications on event-based DMS output and to propose an approach for evaluating and comparing DMS performance. The study investigates the impact of pixel-bias alteration on four DMS features: face tracking, blink counting, head pose estimation, and gaze estimation. To do this, new metrics are proposed to evaluate how effectively the DMS performs for each feature and overall. These metrics identify stability as the most important factor for face tracking, head pose estimation, and gaze estimation; for blink counting, accuracy is evaluated as the key criterion. Finally, all of these metrics are used to assess the system's overall performance. The effects of bias changes on each feature are explored on a number of human subjects with their consent. The newly proposed metrics are used to determine the ideal bias ranges for each DMS feature and for overall performance. The results indicate that the DMS's functioning is enhanced with proper bias tuning based on the proposed metrics.

    Neuromorphic Driver Monitoring Systems: A Proof-of-Concept for Yawn Detection and Seatbelt State Detection Using an Event Camera

    Driver monitoring systems (DMS) are a key component of vehicular safety and essential for the transition from semi-autonomous to fully autonomous driving. Neuromorphic vision systems, based on event camera technology, provide advanced sensing in motion analysis tasks. In particular, the behaviours of drivers' eyes have been studied for the detection of drowsiness and distraction. This research explores the potential to extend neuromorphic sensing techniques to analyse the entire facial region, detecting yawning behaviours that give a complementary indicator of drowsiness. A second proof of concept for the use of event cameras to detect the fastening or unfastening of a seatbelt is also developed. Synthetic training datasets are derived from RGB and near-infrared (NIR) video from both private and public datasets using a video-to-event converter and used to train, validate, and test a convolutional neural network (CNN) with a self-attention module and a recurrent head for both yawning and seatbelt tasks. For yawn detection, respective F1-scores of 95.3% and 90.4% were achieved on synthetic events from our test set and the "YawDD" dataset. For seatbelt fastness detection, 100% accuracy was achieved on unseen test sets of both synthetic and real events. These results demonstrate the feasibility of adding yawn detection and seatbelt fastness detection components to neuromorphic DMS.
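Several of the results above are reported as binary F1 scores, the harmonic mean of precision and recall on the positive class. A minimal sketch of the metric for reference (the function name and the label convention `positive=1` are assumptions, not taken from the papers):

```python
def f1_score(y_true, y_pred, positive=1):
    """Binary F1: harmonic mean of precision and recall on one class."""
    pairs = list(zip(y_true, y_pred))
    tp = sum(t == positive and p == positive for t, p in pairs)  # true positives
    fp = sum(t != positive and p == positive for t, p in pairs)  # false positives
    fn = sum(t == positive and p != positive for t, p in pairs)  # false negatives
    precision = tp / (tp + fp) if tp + fp else 0.0
    recall = tp / (tp + fn) if tp + fn else 0.0
    if precision + recall == 0:
        return 0.0
    return 2 * precision * recall / (precision + recall)
```

Unlike plain accuracy, F1 ignores true negatives, which makes it a more informative summary when one class (e.g. unfastened frames) is rare in the test set.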