5,290 research outputs found
An Improved Fatigue Detection System Based on Behavioral Characteristics of Driver
In recent years, road accidents have increased significantly. One of the
major reported causes of these accidents is driver fatigue. Due to
continuous, long-duration driving, the driver becomes exhausted and drowsy,
which may lead to an accident. Therefore, there is a need for a system that
measures the driver's fatigue level and issues an alert when he or she becomes
drowsy, to avoid accidents. Thus, we propose a system comprising a camera
installed on the car dashboard. The camera detects the driver's face, observes
alterations in its facial features, and uses these features to estimate the
fatigue level. The facial features include the eyes and mouth. Principal
Component Analysis is then applied to reduce the features while minimizing the
amount of information lost. The resulting parameters are processed by a
Support Vector Classifier to classify the fatigue level, and the classifier
output is then sent to the alert unit.
Comment: 4 pages, 2 figures, edited version of published paper in IEEE ICITE 201
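The pipeline described in the abstract (feature reduction with PCA, then classification with a Support Vector Classifier) can be sketched as follows. This is an illustrative sketch, not the authors' implementation: the feature vectors, labels, and dimensions here are synthetic assumptions.

```python
# Hedged sketch of the abstract's pipeline: PCA for feature reduction,
# then a Support Vector Classifier for fatigue classification.
# Eye/mouth feature vectors and labels are simulated, not real data.
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.svm import SVC

rng = np.random.default_rng(0)
X = rng.normal(size=(200, 64))    # 200 frames, 64 raw facial features (assumed)
y = rng.integers(0, 2, size=200)  # 0 = alert, 1 = drowsy (synthetic labels)

# PCA keeps a reduced set of components while minimizing information loss,
# and the SVC classifies the reduced parameters into fatigue levels.
model = make_pipeline(PCA(n_components=10), SVC(kernel="rbf"))
model.fit(X, y)
alerts = model.predict(X[:5])     # in a real system, fed to the alert unit
print(alerts.shape)               # (5,)
```

The number of components and the RBF kernel are illustrative choices; the abstract does not specify them.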
Event-based Face Detection and Tracking in the Blink of an Eye
We present the first purely event-based method for face detection, exploiting
the high temporal resolution of an event-based camera. We rely on a new
feature, never before used for this task, based on detecting eye blinks. Eye
blinks are a unique natural dynamic signature of human faces that is captured
well by event-based sensors, which respond to relative changes in luminance.
Although an eye blink can be captured with conventional cameras, we show that
the dynamics of eye blinks, combined with the fact that the two eyes act
simultaneously, allow us to derive a robust methodology for face detection at
low computational cost and high temporal resolution. We show that eye blinks
have a distinctive temporal signature that can be easily detected by
correlating the acquired local activity with a generic temporal model of eye
blinks generated from a wide population of users. We furthermore show that,
once the face is reliably detected, a probabilistic framework can be applied
to track the spatial position of the face for each incoming event while
updating the trackers' positions. Results are shown for several indoor and
outdoor experiments. We will also release an annotated data set that can be
used for future work on the topic.
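The core detection idea above (correlating local event activity with a generic temporal blink model) can be sketched as follows. The blink template shape, window length, and threshold below are assumptions for illustration, not the authors' learned model.

```python
# Hedged sketch of the blink-detection idea: correlate a local event
# activity trace with a generic temporal blink model and threshold the
# peak response. The template and signal are synthetic assumptions.
import numpy as np

def blink_template(length=20):
    """Generic temporal model of a blink (assumed shape): a burst of
    event activity for lid closing, then a second burst for reopening."""
    t = np.linspace(0, 1, length)
    return np.exp(-((t - 0.25) ** 2) / 0.005) + np.exp(-((t - 0.75) ** 2) / 0.005)

def detect_blink(activity, template, threshold=0.8):
    """Slide the zero-mean, unit-std template over the activity trace;
    report a blink if any normalized correlation peak exceeds threshold."""
    tpl = (template - template.mean()) / template.std()
    score = np.correlate(activity - activity.mean(), tpl, mode="valid")
    score /= (len(tpl) * activity.std() + 1e-9)
    return bool(score.max() > threshold)

tpl = blink_template()
signal = np.zeros(100)
signal[40:60] = tpl                  # embed a blink-like burst in the trace
print(detect_blink(signal, tpl))     # True
```

In the actual method, requiring both eyes to produce this signature simultaneously is what makes the detector robust; that coincidence check is omitted here for brevity.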
A Self-initializing Eyebrow Tracker for Binary Switch Emulation
We designed the Eyebrow-Clicker, a camera-based human-computer interface system that implements a new form of binary switch. When the user raises his or her eyebrows, the binary switch is activated and a selection command is issued. The Eyebrow-Clicker thus replaces the "click" functionality of a mouse. The system initializes itself by detecting the user's eyes and eyebrows, tracks these features at frame rate, and recovers in the event of errors. The initialization uses the natural blinking of the human eye to select suitable templates for tracking. Once execution has begun, a user therefore never has to restart the program or even touch the computer. In our experiments with human-computer interaction software, the system successfully determined 93% of the time when a user raised his or her eyebrows.
Office of Naval Research; National Science Foundation (IIS-0093367)
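The binary-switch logic can be reduced to a small sketch: once the tracker reports eyebrow and eye positions per frame, a raise is detected when the eyebrow-to-eye distance exceeds a calibrated baseline. The baseline, ratio, and pixel values below are hypothetical, not taken from the original system.

```python
# Minimal sketch (assumption, not the original Eyebrow-Clicker code):
# treat an eyebrow raise as a binary switch by thresholding the vertical
# eyebrow-to-eye distance against a calibrated neutral baseline.
def eyebrow_switch(distances, baseline, ratio=1.3):
    """Return frame indices where the tracked eyebrow-eye distance
    exceeds `ratio` times the baseline, i.e. where a 'click' fires."""
    return [i for i, d in enumerate(distances) if d > ratio * baseline]

# Usage: neutral baseline of 20 px; a raise occurs around frames 3-4.
print(eyebrow_switch([20, 21, 20, 28, 29, 20], baseline=20))  # [3, 4]
```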
Less is More: Micro-expression Recognition from Video using Apex Frame
Despite recent interest and advances in facial micro-expression research,
there is still plenty of room for improvement in micro-expression
recognition. Conventional feature extraction approaches for micro-expression
video consider either the whole video sequence or a part of it for
representation. However, with the high-speed video capture of micro-expressions
(100-200 fps), are all frames necessary to provide a sufficiently meaningful
representation? Is the luxury of data a bane to accurate recognition? A novel
proposition is presented in this paper, whereby we utilize only two images per
video: the apex frame and the onset frame. The apex frame of a video contains
the highest intensity of expression changes among all frames, while the onset
is the perfect choice of a reference frame with neutral expression. A new
feature extractor, Bi-Weighted Oriented Optical Flow (Bi-WOOF), is proposed to
encode essential expressiveness of the apex frame. We evaluated the proposed
method on five micro-expression databases: CAS(ME)^2, CASME II, SMIC-HS,
SMIC-NIR and SMIC-VIS. Our experiments lend credence to our hypothesis, with
our proposed technique achieving state-of-the-art F1-score recognition
performance of 61% and 62% on the high frame rate CASME II and SMIC-HS
databases, respectively.
Comment: 14 pages double-column, author affiliations updated, acknowledgment of grant support added
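The onset-versus-apex idea can be illustrated with a simplified descriptor: given an optical-flow field between the neutral onset frame and the apex frame, build a magnitude-weighted histogram of flow orientations. This is a hedged stand-in for the general flavor of Bi-WOOF, not the authors' exact bi-weighting scheme, and the flow field here is synthetic.

```python
# Simplified, hedged illustration of encoding apex-frame expressiveness:
# a magnitude-weighted histogram of optical-flow orientations between
# onset and apex frames. A stand-in, not the exact Bi-WOOF descriptor.
import numpy as np

def weighted_orientation_histogram(flow, bins=8):
    """flow: (H, W, 2) array of per-pixel (dx, dy) displacement vectors
    from the onset frame to the apex frame."""
    dx, dy = flow[..., 0].ravel(), flow[..., 1].ravel()
    mag = np.hypot(dx, dy)                        # flow magnitudes as weights
    ang = np.arctan2(dy, dx)                      # orientations in [-pi, pi]
    hist, _ = np.histogram(ang, bins=bins, range=(-np.pi, np.pi), weights=mag)
    return hist / (hist.sum() + 1e-9)             # normalized descriptor

rng = np.random.default_rng(1)
flow = rng.normal(size=(32, 32, 2))               # synthetic flow field (assumed)
desc = weighted_orientation_histogram(flow)
print(desc.shape)                                 # (8,)
```

In the actual method this descriptor would be computed per spatial block and fed to a classifier; only the histogram step is shown here.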