14 research outputs found

    Utilizing Gaze Detection to Enhance Voice-Based Accessibility Services

    For users who rely on voice as their primary mode of input, operating a computing device can be difficult due to potential false positives (e.g., unintended voice commands by the user, or background noise such as a radio). Voice commands can also be difficult to decipher, requiring the voice-based accessibility service to request additional, clarifying user input to disambiguate the auditory commands. This publication describes techniques and procedures for utilizing gaze detection to enhance voice-based accessibility services on a computing device, such as a smartphone or computer. The computing device utilizes camera image input and a machine-learned model to produce an estimated x-y coordinate of where the user is gazing on the device's display. If the computing device determines, via the machine-learned model, that the user is looking at the display (i.e., giving the device attention), auditory commands are accepted; otherwise, auditory commands can be ignored. Additionally, the techniques and procedures can assist in disambiguation (e.g., between similar-sounding commands or identically titled functions). Finally, the techniques and procedures can be used as an alternative means of controlling the scrolling of the device's display.
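    The gaze-gating flow described above can be sketched in a few lines. This is a minimal illustration, assuming a hypothetical gaze model callable and a simple list of on-screen targets; none of the names below come from the publication itself.

```python
# Minimal sketch of gaze-gated voice input. The gaze model, target layout,
# and screen size are illustrative assumptions, not the publication's API.

SCREEN_W, SCREEN_H = 1080, 1920  # example display size in pixels

def estimate_gaze(frame, model):
    """Map a camera frame to an estimated on-screen (x, y) gaze point.

    Returns None when the prediction falls outside the display, i.e. the
    user is not giving the device attention.
    """
    x, y = model(frame)  # hypothetical machine-learned gaze estimator
    if 0 <= x < SCREEN_W and 0 <= y < SCREEN_H:
        return x, y
    return None

def on_voice_command(command, frame, model, targets):
    """Accept audio only while the user looks at the display, and use
    the gaze point to choose among identically labeled targets."""
    gaze = estimate_gaze(frame, model)
    if gaze is None:
        return None  # treat as a false positive (e.g., background noise)
    matches = [t for t in targets if t["label"] == command]
    if not matches:
        return None
    gx, gy = gaze
    # Disambiguation: pick the matching target nearest the gaze point.
    return min(matches, key=lambda t: (t["x"] - gx) ** 2 + (t["y"] - gy) ** 2)
```

    The same gate extends naturally to the scrolling use case: for example, scrolling only while the gaze point lingers near the top or bottom edge of the display.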

    Learning to Find Eye Region Landmarks for Remote Gaze Estimation in Unconstrained Settings

    Conventional feature-based and model-based gaze estimation methods have proven to perform well in settings with controlled illumination and specialized cameras. In unconstrained real-world settings, however, such methods are surpassed by recent appearance-based methods due to difficulties in modeling factors such as illumination changes and other visual artifacts. We present a novel learning-based method for eye region landmark localization that enables conventional methods to be competitive with the latest appearance-based methods. Despite having been trained exclusively on synthetic data, our method exceeds the state of the art for iris localization and eye shape registration on real-world imagery. We then use the detected landmarks as input to iterative model-fitting and lightweight learning-based gaze estimation methods. Our approach outperforms existing model-fitting and appearance-based methods in the context of person-independent and personalized gaze estimation.
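    The abstract describes a two-stage pipeline: localize eye region landmarks, then feed them to a lightweight gaze estimator. The sketch below illustrates that second stage under stated assumptions; the feature normalization and the use of scikit-learn's Ridge are stand-ins chosen for illustration, not the paper's actual method.

```python
import numpy as np
from sklearn.linear_model import Ridge

def landmarks_to_features(landmarks):
    """Flatten eye region landmarks (an N x 2 array of image coordinates)
    into a feature vector normalized for translation and scale."""
    pts = np.asarray(landmarks, dtype=float)
    center = pts.mean(axis=0)
    scale = np.linalg.norm(pts - center, axis=1).max() or 1.0
    return ((pts - center) / scale).ravel()

def fit_gaze_regressor(landmark_sets, gaze_angles):
    """Fit a lightweight regressor from landmark features to gaze
    direction (pitch, yaw). In the personalized setting, a handful of
    calibration samples per user would be used here."""
    X = np.stack([landmarks_to_features(l) for l in landmark_sets])
    return Ridge(alpha=1.0).fit(X, np.asarray(gaze_angles))

def estimate_gaze(landmarks, regressor):
    """Predict a (pitch, yaw) gaze direction from one set of landmarks."""
    return regressor.predict(landmarks_to_features(landmarks)[None, :])[0]
```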

    Understanding and Predicting Human Visual Attention

    An understanding of how the human visual system works is essential for many applications in computer vision, computer graphics, computational photography, psychology, sociology, and human-computer interaction. To give the research community easier, cheaper access to eye tracking data for developing and evaluating computational models of human visual attention, this thesis introduces a webcam-based gaze tracking system that supports large-scale, crowdsourced eye tracking deployed on a crowdsourcing platform. Using this tool, we also provide a benchmark data set for quantitatively comparing existing and future models for saliency prediction. To explore where people look while performing complicated tasks in an interactive environment, we introduce a method to synthesize user interface layouts, present a computational model to predict users' spatio-temporal visual attention for graphical user interfaces, and show that our model outperforms existing methods. In addition, we explore how visual stimuli affect brain signals recorded with fMRI. Our tool for crowdsourced eye tracking, large data set for scene image saliency, models for user interface layout synthesis and visual attention prediction, and study of visual-stimulus-driven changes in brain connectivity should be useful resources for future researchers creating more powerful computational models of human visual attention.
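    As one example of the quantitative comparison such a benchmark supports, the sketch below scores a predicted saliency map against recorded fixations using Normalized Scanpath Saliency (NSS), a standard saliency metric; the data layout is an assumption made for illustration.

```python
import numpy as np

def nss(saliency_map, fixations):
    """Normalized Scanpath Saliency: the mean of the z-scored saliency
    map at human fixation locations; higher means better prediction."""
    s = np.asarray(saliency_map, dtype=float)
    s = (s - s.mean()) / (s.std() + 1e-8)
    rows, cols = zip(*fixations)  # fixations given as (row, col) pixels
    return float(s[list(rows), list(cols)].mean())
```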

    The critical interaction between valproate sodium and warfarin: case report and review

    Background: Valproic acid (VPA) and warfarin are commonly prescribed for patients with epilepsy and concomitant atrial fibrillation (AF). When VPA and warfarin are prescribed together, clinically important interactions may occur: VPA may displace warfarin from its protein binding sites, resulting in an abnormally increased anticoagulation effect. This interaction is commonly underrecognized.

    Case presentation: We report a 78-year-old woman with a glioma who presented with status epilepticus. The patient was on warfarin to prevent cardiogenic embolism secondary to AF. An intravenous loading dose of VPA was administered, but the international normalized ratio (INR) increased markedly to 8.26. Intravenous vitamin K1 was then given, and the patient developed no overt bleeding during the hospitalization.

    Conclusion: Reviewing the literature and the critical interaction between valproate sodium and warfarin, we conclude that intravenous VPA co-administered with warfarin may cause critical but underrecognized complications through effects on hepatic enzyme function and displacement from protein binding sites.