Spartan Daily, October 21, 1983
Volume 81, Issue 38
Spartan Daily, September 15, 1982
Volume 79, Issue 12
GaVe: A webcam-based gaze vending interface using one-point calibration
Gaze input, i.e., information input via the eyes of users, is a promising method for contact-free interaction in human-machine systems. In this paper, we present the GazeVending interface (GaVe), which lets users control actions on a display with their eyes. The interface works with a regular webcam, available on most of today's laptops, and requires only a short one-point calibration before use. GaVe is designed in a hierarchical structure, first presenting broad item clusters to users and then guiding them through a second selection round, which allows a large number of items to be presented. Cluster/item selection in GaVe is based on dwell time, i.e., the duration for which users look at a given cluster/item. A user study (N=22) was conducted to test optimal dwell-time thresholds and comfortable human-to-display distances. Users' perception of the system, as well as error rates and task completion times, were recorded. We found that all participants quickly understood how to interact with the interface and showed good performance, selecting a target item within a group of 12 items in 6.76 seconds on average. We provide design guidelines for GaVe and discuss the potential of the system.
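The dwell-time selection described in the abstract can be sketched as a small state machine: a cluster or item is selected once the gaze has rested on it continuously for longer than a threshold. This is a minimal illustrative sketch, not the paper's implementation; the class name, threshold value, and reset-after-selection behavior are assumptions.

```python
import time

# Hypothetical sketch of GaVe-style dwell-time selection: a target is
# selected once gaze has rested on it longer than a fixed threshold.
DWELL_THRESHOLD_S = 1.0  # illustrative; the paper tunes this via a user study

class DwellSelector:
    def __init__(self, threshold=DWELL_THRESHOLD_S):
        self.threshold = threshold
        self.current_target = None
        self.dwell_start = None

    def update(self, gazed_target, now=None):
        """Feed the cluster/item currently under the gaze point (or None).

        Returns the selected target once dwell time exceeds the
        threshold, otherwise None.
        """
        now = time.monotonic() if now is None else now
        if gazed_target != self.current_target:
            # Gaze moved to a new target: restart the dwell timer.
            self.current_target = gazed_target
            self.dwell_start = now
            return None
        if gazed_target is not None and now - self.dwell_start >= self.threshold:
            selected = gazed_target
            # Require a fresh fixation before the next selection.
            self.current_target = None
            self.dwell_start = None
            return selected
        return None
```

The hierarchical interaction would then run this selector twice: once over the broad clusters, and again over the items inside the chosen cluster.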
Converting Your Thoughts to Texts: Enabling Brain Typing via Deep Feature Learning of EEG Signals
An electroencephalography (EEG) based Brain-Computer Interface (BCI) enables people to communicate with the outside world by interpreting the EEG signals of their brains to interact with devices such as wheelchairs and intelligent robots. More specifically, motor imagery EEG (MI-EEG), which reflects a subject's active intent, is attracting increasing attention for a variety of BCI applications. Accurate classification of MI-EEG signals, while essential for effective operation of BCI systems, is challenging due to the significant noise inherent in the signals and the lack of informative correlation between the signals and brain activities. In this paper, we propose a novel deep neural network based learning framework that affords perceptive insights into the relationship between the MI-EEG data and brain activities. We design a joint convolutional recurrent neural network that simultaneously learns robust high-level feature representations through low-dimensional dense embeddings from raw MI-EEG signals. We also employ an autoencoder layer to eliminate various artifacts such as background activities. The proposed approach has been evaluated extensively on a large-scale public MI-EEG dataset and a limited but easy-to-deploy dataset collected in our lab. The results show that our approach outperforms a series of baselines and competitive state-of-the-art methods, yielding a classification accuracy of 95.53%. The applicability of our proposed approach is further demonstrated with a practical BCI system for typing.
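The joint convolutional-recurrent idea can be illustrated in miniature: a convolution extracts local features from a raw 1-D signal, and a recurrent cell then summarizes them into a low-dimensional embedding. This toy sketch is not the paper's architecture; the kernel, the one-unit RNN cell, and all weights are illustrative stand-ins.

```python
import math
import random

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution: slide the kernel over the signal."""
    k = len(kernel)
    return [sum(signal[i + j] * kernel[j] for j in range(k))
            for i in range(len(signal) - k + 1)]

def rnn_embed(features, w_in=0.5, w_rec=0.9):
    """One-unit recurrent cell with tanh nonlinearity; the final hidden
    state serves as a (here, 1-dimensional) embedding of the sequence."""
    h = 0.0
    for x in features:
        h = math.tanh(w_in * x + w_rec * h)
    return h

random.seed(0)
signal = [random.gauss(0, 1) for _ in range(128)]  # stand-in for one raw MI-EEG channel
kernel = [0.25, 0.5, 0.25]                         # illustrative smoothing-style filter
features = conv1d(signal, kernel)                  # local (convolutional) features
embedding = rnn_embed(features)                    # recurrent summary of the sequence
```

In the paper's setting this embedding would be high-dimensional and learned jointly with the classifier; here it only demonstrates the conv-then-recur data flow.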
Eye Tracking Impact on Quality-of-Life of ALS Patients
Chronic neurological disorders in their advanced phases are characterized by a progressive loss of mobility (use of the upper and lower limbs), speech, and social life. Some of these pathologies, such as amyotrophic lateral sclerosis and multiple sclerosis, are paradigmatic of these deficits. High-technology communication instruments, such as eye tracking, can offer an extremely important opportunity to reintegrate these patients into their family and social lives, particularly when they suffer severe disability. This paper reports and describes the results of an ongoing experiment on the impact of eye tracking on the quality of life of amyotrophic lateral sclerosis patients. The aim of the experiment is to evaluate if and when eye tracking technologies have a positive impact on patients' lives.
Assentication: User Deauthentication and Lunchtime Attack Mitigation with Seated Posture Biometric
Biometric techniques are often used as an extra security factor in authenticating human users. Numerous biometrics have been proposed and evaluated, each with its own set of benefits and pitfalls. Static biometrics (such as fingerprints) are geared for discrete operation, to identify users, which typically involves some user burden. Meanwhile, behavioral biometrics (such as keystroke dynamics) are well suited for continuous, and sometimes more unobtrusive, operation. One important application domain for biometrics is deauthentication: a means of quickly detecting the absence of a previously authenticated user and immediately terminating that user's active secure sessions. Deauthentication is crucial for mitigating so-called Lunchtime Attacks, whereby an insider adversary takes over (before any inactivity timeout kicks in) the authenticated state of a careless user who walks away from her computer. Motivated primarily by the need for an unobtrusive and continuous biometric to support effective deauthentication, we introduce PoPa, a new hybrid biometric based on a human user's seated posture pattern. PoPa captures a unique combination of physiological and behavioral traits. We describe a low-cost, fully functioning prototype that involves an office chair instrumented with 16 tiny pressure sensors. We also explore (via user experiments) how PoPa can be used in a typical workplace to provide continuous authentication (and deauthentication) of users. We experimentally assess the viability of PoPa in terms of uniqueness by collecting and evaluating the posture patterns of a cohort of users. Results show that PoPa exhibits very low false positive, and even lower false negative, rates. In particular, users can be identified with, on average, 91.0% accuracy. Finally, we compare the pros and cons of PoPa with those of several prominent biometric-based deauthentication techniques.
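The posture-based deauthentication idea can be sketched as template matching over the 16-sensor pressure vector: enroll a per-user template, then flag deauthentication when live readings drift too far from it. The sensor count matches the described prototype, but the template scheme, distance metric, and threshold below are illustrative assumptions, not the paper's method.

```python
import math

# Hypothetical sketch in the spirit of PoPa: match live seat-pressure
# readings against an enrolled posture template.
NUM_SENSORS = 16       # matches the instrumented-chair prototype
DIST_THRESHOLD = 50.0  # illustrative; would be tuned experimentally

def enroll(samples):
    """Average several enrollment readings into a posture template."""
    n = len(samples)
    return [sum(s[i] for s in samples) / n for i in range(NUM_SENSORS)]

def euclidean(a, b):
    return math.sqrt(sum((x - y) ** 2 for x, y in zip(a, b)))

def still_authenticated(template, reading, threshold=DIST_THRESHOLD):
    """True while the live reading still matches the enrolled posture;
    False (deauthenticate) when the user shifts away or leaves."""
    return euclidean(template, reading) <= threshold

# Enroll from two readings; a user standing up drives all sensors to ~0,
# which should trigger deauthentication.
enrolled = enroll([[100.0] * NUM_SENSORS, [110.0] * NUM_SENSORS])
```

A continuous scheme would evaluate `still_authenticated` on every sensor sample, terminating the session as soon as it returns False.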