Neuromorphic Seatbelt State Detection for In-Cabin Monitoring with Event Cameras
Neuromorphic vision sensors, or event cameras, differ from conventional
cameras in that they do not capture images at a specified rate. Instead, they
asynchronously log local brightness changes at each pixel. As a result, event
cameras only record changes in a given scene, and do so with very high temporal
resolution, high dynamic range, and low power requirements. Recent research has
demonstrated how these characteristics make event cameras extremely practical
sensors in driver monitoring systems (DMS), enabling the tracking of high-speed
eye motion and blinks. This research provides a proof of concept to expand
event-based DMS techniques to include seatbelt state detection. Using an event
simulator, a dataset of 108,691 synthetic neuromorphic frames of car occupants
was generated from a near-infrared (NIR) dataset, and split into training,
validation, and test sets for a seatbelt state detection algorithm based on a
recurrent convolutional neural network (CNN). In addition, a smaller set of
real event data was collected and reserved for testing. In a binary
classification task, the fastened/unfastened frames were identified with an F1
score of 0.989 and 0.944 on the simulated and real test sets respectively. When
the problem extended to also classify the action of fastening/unfastening the
seatbelt, respective F1 scores of 0.964 and 0.846 were achieved.
Comment: 4 pages, 3 figures, IMVIP 202
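The abstract above relies on converting conventional video into synthetic event streams. The exact simulator used is not specified, but the underlying principle — emitting an event whenever a pixel's log-intensity changes beyond a contrast threshold — can be sketched minimally. All names and the threshold value here are illustrative assumptions, not the authors' implementation:

```python
import numpy as np

def frames_to_events(frames, threshold=0.2):
    """Minimal frame-to-event conversion sketch (hypothetical).

    Emits an event (t, x, y, polarity) whenever the log-intensity at a
    pixel has changed by more than `threshold` since the last event
    fired there, mimicking how an event camera reports brightness changes.
    """
    log_frames = np.log1p(frames.astype(np.float64))
    reference = log_frames[0].copy()  # last log-intensity that fired, per pixel
    events = []
    for t in range(1, len(log_frames)):
        diff = log_frames[t] - reference
        fired = np.abs(diff) >= threshold
        ys, xs = np.nonzero(fired)
        for x, y in zip(xs, ys):
            events.append((t, x, y, 1 if diff[y, x] > 0 else -1))
        # only pixels that fired update their reference level
        reference[fired] = log_frames[t][fired]
    return events

# A single brightening pixel produces one positive-polarity event:
frames = np.zeros((2, 4, 4))
frames[1, 2, 3] = 10.0
print(frames_to_events(frames))  # → [(1, 3, 2, 1)]
```

A static scene produces no events at all, which is why such simulated data is so compact compared to full frames.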
Heart rate detection using an event camera
Event cameras, also known as neuromorphic cameras, are an emerging technology that offers advantages over traditional shutter and frame-based cameras, including high temporal resolution, low power consumption, and selective data acquisition. In this study we harness the capabilities of event-based cameras to capture subtle changes in the surface of the skin caused by the pulsatile flow of blood in the wrist region. We show how an event camera can be used for continuous non-invasive monitoring of heart rate (HR). Event camera video data from 25 participants of varying age groups and skin colours was collected and analysed. Ground-truth HR measurements were used to evaluate the accuracy of automatic detection of HR from event camera data. Our results demonstrate the feasibility of using event cameras for HR detection.
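The abstract does not detail the signal processing used, but a plausible minimal pipeline is to bin events over the wrist region into a count-per-time-bin signal and take the dominant frequency in the physiological band as the pulse. Everything below (function name, band limits, sampling rate) is an illustrative assumption:

```python
import numpy as np

def heart_rate_from_event_counts(event_counts, fs, lo=0.7, hi=3.0):
    """Hypothetical sketch: estimate heart rate (bpm) from a per-bin
    event-count signal sampled at fs bins per second.

    Pulsatile skin motion modulates the event rate, so the dominant
    frequency of the detrended count signal within [lo, hi] Hz
    (roughly 42-180 bpm) is taken as the pulse frequency.
    """
    x = np.asarray(event_counts, dtype=np.float64)
    x = x - x.mean()                      # remove the DC component
    spectrum = np.abs(np.fft.rfft(x))
    freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)  # physiological band only
    peak = freqs[band][np.argmax(spectrum[band])]
    return 60.0 * peak

# Synthetic 1.2 Hz (72 bpm) pulsation sampled at 30 bins/s for 20 s:
fs = 30.0
t = np.arange(0, 20, 1 / fs)
counts = 100 + 10 * np.sin(2 * np.pi * 1.2 * t)
print(round(heart_rate_from_event_counts(counts, fs)))  # → 72
```

In practice the real event-rate signal would be far noisier than this synthetic sine, and band-pass filtering before the FFT would likely be needed.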
Event Camera-Based Eye Motion Analysis: A Survey
Neuromorphic vision sensors, commonly referred to as Event Cameras (ECs), have gained prominence as a field of research in Computer Vision. This popularity stems from their numerous unique characteristics, including High Dynamic Range, High Temporal Resolution, and Low Latency. Of particular interest is their temporal resolution, which proves ideal for human monitoring applications. Capturing rapid facial movements and eye gaze can be effectively achieved with ECs. Recent studies involving the use of ECs for object detection and tracking have demonstrated success in Eye Motion Analysis tasks such as eye tracking, blink detection, gaze estimation, and pupil tracking. The objective of this study is to provide a comprehensive review of the current research in the aforementioned tasks, focusing on the potential utilization of ECs for future tasks involving rapid eye motion detection, such as the detection and classification of saccades. We highlight studies that may serve as a foundation for undertaking such a task, such as pupil tracking and gaze estimation. We also review some common challenges encountered, such as the availability of datasets, and some of the methods used to address them. Finally, we discuss some limitations of this field of research and conclude with future directions, including real-world applications and potential research directions.
Sensing Without Seeing - Event Camera Privacy
Event Cameras - "Privacy Preserving"?
• No; although event cameras present sequential variable data, they can still be reconstructed.
• Controlled Reconstruction: System designers have the flexibility to determine the extent of reconstruction within the camera/machine vision pipeline.
• Event cameras are LESS INVASIVE compared to traditional cameras.
On the role of thermal imaging in automotive applications: A critical review
For decades, the number of automobiles in urban areas around the world has been increasing.
This causes serious challenges such as traffic congestion, accidents, and pollution, which have social,
economic, and environmental impacts on cities worldwide. To overcome these challenges, we need to
explore smart AI-based perception systems for vehicular applications. Such types of systems can provide
improved situational awareness to the driver and generate early alarm about upcoming obstacles and road
incidents. In this study, we have presented the effective use of uncooled thermal IR sensors for designing
smart thermal perception systems as an alternative to CMOS visible imaging by presenting state-of-the-art
studies for in-cabin and out-cabin vehicular applications with potential long-term benefits for the automotive
industry. The key rationale for selecting thermal IR sensors over conventional image sensors is that visible
cameras are highly dependent on lighting conditions, and their performance degrades significantly in low-lighting scenarios and harsh weather conditions. Contrary to this, thermal sensors remain largely unaffected
by external lighting conditions or most environmental conditions, making them an ideal optical sensor choice
for all-weather and harsh environmental conditions. This study presents a review of the current state of the
art for automotive thermal imaging, with a focus on the contributions and advances achieved by the EU-funded project "HELIAUS" in the domain of AI-based thermal imaging pipelines for safer and more reliable road journeys.
This work was carried out under the EU-funded HELIAUS project under grant agreement no. 826131. Peer reviewed.
Optimization of Event Camera Bias Settings for a Neuromorphic Driver Monitoring System
Event cameras provide a novel imaging technology for high-speed analysis of localized facial motions such as eye gaze, eye blinks, and micro-expressions by taking input at the level of an individual pixel. Due to this capability, and the lightweight nature of their output data, these cameras are being evaluated as a viable option for driver monitoring systems (DMS). This research is the first to investigate the impact of bias modifications on event-based DMS output and to propose an approach for evaluating and comparing DMS performance. The study investigates the impact of pixel-bias alteration on DMS features, namely: face tracking, blink counting, head pose estimation, and gaze estimation. To do this, new metrics are proposed to evaluate how effectively the DMS performs for each feature and overall. These metrics identify stability as the most important factor for face tracking, head pose estimation, and gaze estimation. The accuracy of blink counting, the key component of that feature, is also evaluated. Finally, all of these metrics are used to assess the system's overall performance. The effects of bias changes on each feature are explored on a number of human subjects with their consent. The newly proposed metrics are used to determine the ideal bias ranges for each DMS feature and for overall performance. The results indicate that the DMS's functioning is enhanced with proper bias tuning based on the proposed metrics.
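The abstract names stability as the key metric for face tracking, head pose, and gaze, without defining it. One plausible formulation — purely an illustrative assumption, not the paper's actual metric — scores each bias setting by the inverse of the frame-to-frame jitter of the tracked face-box centre, then picks the bias with the best score:

```python
import numpy as np

def tracking_stability(centres):
    """Hypothetical stability score: inverse of face-centre jitter.

    `centres` is an (N, 2) array of tracked face-box centres recorded
    under one bias setting; lower mean frame-to-frame displacement
    yields a higher score (1.0 for a perfectly steady track).
    """
    jitter = np.linalg.norm(np.diff(centres, axis=0), axis=1).mean()
    return 1.0 / (1.0 + jitter)

def best_bias(runs):
    """Pick the bias value whose run has the highest stability score."""
    return max(runs, key=lambda bias: tracking_stability(runs[bias]))

# A steady track under bias 30 vs a jittery track under bias 80:
steady = np.tile([320.0, 240.0], (50, 1))
noisy = steady + np.random.default_rng(0).normal(0, 5, steady.shape)
print(best_bias({30: steady, 80: noisy}))  # → 30
```

A per-feature score like this could then be combined across features to rank overall DMS performance, which is the spirit of the evaluation the abstract describes.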
Neuromorphic Driver Monitoring Systems: A Proof-of-Concept for Yawn Detection and Seatbelt State Detection Using an Event Camera
Driver monitoring systems (DMS) are a key component of vehicular safety and essential for the transition from semi-autonomous to fully autonomous driving. Neuromorphic vision systems, based on event camera technology, provide advanced sensing in motion analysis tasks. In particular, the behaviours of drivers' eyes have been studied for the detection of drowsiness and distraction. This research explores the potential to extend neuromorphic sensing techniques to analyse the entire facial region, detecting yawning behaviours that give a complementary indicator of drowsiness. A second proof of concept for the use of event cameras to detect the fastening or unfastening of a seatbelt is also developed. Synthetic training datasets are derived from RGB and Near-Infrared (NIR) video from both private and public datasets using a video-to-event converter, and used to train, validate, and test a convolutional neural network (CNN) with a self-attention module and a recurrent head for both the yawning and seatbelt tasks. For yawn detection, respective F1-scores of 95.3% and 90.4% were achieved on synthetic events from our test set and the "YawDD" dataset. For seatbelt fastness detection, 100% accuracy was achieved on unseen test sets of both synthetic and real events. These results demonstrate the feasibility of adding yawn detection and seatbelt fastness detection components to neuromorphic DMS.
Real-Time Multi-Task Facial Analytics With Event Cameras
Event cameras, unlike traditional frame-based cameras, excel in detecting and reporting changes in light intensity on a per-pixel basis. This unique technology offers numerous advantages, including high temporal resolution, low latency, wide dynamic range, and reduced power consumption. These characteristics make event cameras particularly well-suited for sensing applications such as monitoring drivers or human behavior. This paper presents a feasibility study on using a multi-task neural network with event cameras for real-time facial analytics. Our proposed network simultaneously estimates head pose, eye gaze, and facial occlusions. Notably, the network is trained on synthetic event camera data, and its performance is demonstrated and validated on real event data in real-time driving scenarios. To compensate for global head motion, we introduce a novel event integration method capable of handling both short- and long-term temporal dependencies. The performance of our facial analytics method is quantitatively evaluated in both controlled lab environments and unconstrained driving scenarios. The results demonstrate that useful accuracy and computational speed are achieved by the proposed method in determining head pose and relative eye-gaze direction. This shows that neuromorphic facial analytics can be realized in real time and is well-suited for edge/embedded computing deployments. While the improvement ratio in comparison to existing literature may not be as favorable due to the unique event-based vision approach employed, it is crucial to note that our research focuses specifically on event-based vision, which offers distinct advantages over traditional RGB vision. Overall, this study contributes to the emerging field of event-based vision systems and highlights the potential of multi-task neural networks combined with event cameras for real-time sensing of human subjects.
These techniques can be applied in practical applications such as driver monitoring systems, interactive human-computer systems, and human behavior analysis.
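The abstract's event integration method is not specified, but one standard way to accumulate events with both short- and long-term memory is an exponentially decayed "time surface": each event contributes a weight that decays with its age, and the decay constant tau controls the memory horizon. The sketch below illustrates that principle only; the event format and function name are assumptions:

```python
import numpy as np

def decayed_event_frame(events, shape, t_now, tau):
    """Sketch of time-decayed event integration (a "time surface").

    Each event (t, x, y, polarity) contributes polarity * exp(-(t_now - t)/tau)
    at its pixel. A small tau keeps only very recent motion (short-term
    dependencies), while a large tau also retains slower motion such as
    gradual head movement (long-term dependencies).
    """
    frame = np.zeros(shape, dtype=np.float64)
    for t, x, y, p in events:
        frame[y, x] += p * np.exp(-(t_now - t) / tau)
    return frame

# Two positive events at the same pixel, 0.1 s and 0.01 s before t_now:
events = [(0.00, 1, 1, 1), (0.09, 1, 1, 1)]
short = decayed_event_frame(events, (3, 3), t_now=0.1, tau=0.01)
long_ = decayed_event_frame(events, (3, 3), t_now=0.1, tau=10.0)
print(short[1, 1] < long_[1, 1])  # the long-tau surface retains the older event
```

Running two such surfaces with different tau values in parallel is one way a network input could carry both temporal scales at once.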
Multisystem Inflammatory Syndrome in Children
Multisystem inflammatory syndrome in children (MIS-C) is an uncommon but emerging syndrome related to SARS-CoV-2 infection. While the presentation of MIS-C is generally delayed after exposure to the virus that causes coronavirus disease 2019, both MIS-C and Kawasaki disease (KD) share similar clinical features. Multisystem inflammatory syndrome in children poses a diagnostic and therapeutic challenge given the lack of definitive diagnostic tests and a paucity of evidence regarding treatment modalities. We review the clinical presentation, diagnostic evaluations, and management of MIS-C and compare its clinical features to those of KD.