
    Understanding face and eye visibility in front-facing cameras of smartphones used in the wild

    Commodity mobile devices are now equipped with high-resolution front-facing cameras, enabling applications in biometrics (e.g., FaceID in the iPhone X), facial expression analysis, and gaze interaction. However, it is unknown how often users hold devices in a way that allows capturing their face or eyes, and how this affects detection accuracy. We collected 25,726 in-the-wild photos taken with the front-facing cameras of smartphones, along with associated application usage logs. We found that the full face is visible about 29% of the time, and that in most cases the face is only partially visible. Furthermore, we identified an influence of users' current activity; for example, when watching videos, the eyes but not the entire face are visible 75% of the time in our dataset. We also found that a state-of-the-art face detection algorithm performs poorly on photos taken with front-facing cameras. We discuss how these findings affect mobile applications that leverage face and eye detection, and derive practical implications for addressing the limitations of the state of the art.
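
    The abstract does not name the detector it evaluates; as a rough illustration of the kind of check involved, here is a minimal sketch (assuming OpenCV and its bundled Haar frontal-face cascade, which is not the paper's detector) that tests whether any frontal face is detectable in a front-camera photo. On partially visible faces such a detector will often miss, which is consistent with the abstract's finding.

    import cv2

    # OpenCV's pretrained frontal-face cascade, shipped with opencv-python.
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")

    def face_visible(image_path):
        """Return True if at least one frontal face is detected in the photo."""
        img = cv2.imread(image_path)
        if img is None:
            raise FileNotFoundError(image_path)
        gray = cv2.cvtColor(img, cv2.COLOR_BGR2GRAY)
        faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)
        return len(faces) > 0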

    EyePACT: eye-based parallax correction on touch-enabled interactive displays

    The parallax effect describes the displacement between the perceived and detected touch locations on a touch-enabled surface. Parallax is a key usability challenge for interactive displays, particularly for those that require thick layers of glass between the screen and the touch surface to protect them from vandalism. To address this challenge, we present EyePACT, a method that compensates for input error caused by parallax on public displays. Our method uses a display-mounted depth camera to detect the user's 3D eye position in front of the display and, together with the detected touch location, predicts the perceived touch location on the surface. We evaluate our method in two user studies in terms of parallax correction performance as well as multi-user support. Our evaluations demonstrate that EyePACT (1) significantly improves accuracy even with varying gap distances between the touch surface and the display, (2) adapts to different levels of parallax by producing significantly larger corrections with larger gap distances, and (3) maintains a significantly large distance between two users' fingers when they interact with the same object. These findings are promising for the development of future parallax-free interactive displays.
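
    The abstract only outlines the correction at a high level; the sketch below shows one plausible reading of the underlying geometry (the planar-screen model, variable names, and units are assumptions, not taken from the paper): extend the ray from the tracked eye through the detected touch on the glass until it meets the display plane.

    import numpy as np

    def corrected_touch(eye_xyz, touch_xy, gap):
        """Project the detected touch on the glass (z = gap) onto the display
        plane (z = 0) along the line of sight from the user's eye."""
        eye = np.asarray(eye_xyz, dtype=float)              # 3D eye position, e.g. from a depth camera
        touch = np.array([touch_xy[0], touch_xy[1], gap])   # detected touch on the glass
        t = eye[2] / (eye[2] - gap)                         # ray parameter where z reaches 0
        hit = eye + t * (touch - eye)
        return hit[:2]                                      # corrected on-screen coordinates

    # Example (centimetres): eye 60 cm in front of the display, 2 cm glass gap.
    print(corrected_touch([-5.0, 10.0, 60.0], [20.0, 15.0], gap=2.0))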

    "Mobiles for museum visit should be abolished" : a comparison of smart replicas, smart cards, and phones

    A comparative evaluation of smart replicas, a phone app, and smart cards examined visitors' personal preferences and the appeal of mobiles in museum exhibitions. As part of an exhibition evaluation, 76 participants used all three interaction modes and gave their opinions in a questionnaire. The results show that the phone and the replica are equally liked, yet the phone is also the most disliked interaction mode. Preference for the phone stems from its mobility, as opposed to a listen-in-place interaction; however, the phone takes attention away from the exhibition and isolates the visitor from the group. Visitors expect museums to provide the phones rather than offer apps on a "bring your own" basis.

    Public HMDs: Modeling and Understanding User Behavior Around Public Head-Mounted Displays

    Head-Mounted Displays (HMDs) are becoming ubiquitous, and we are starting to see them deployed in public for different purposes. Museums, car companies and travel agencies use HMDs to promote their products. As a result, situations arise where users wear them in public without expert supervision. This leads to challenges and opportunities, many of which are also experienced in public display installations. For example, similar to public displays, public HMDs struggle to attract passers-by's attention, but benefit from the honeypot effect that draws attention to them. Passers-by might also be hesitant to wear a public HMD, due to the fear that its owner might not approve or due to the perceived need for prior permission. In this work, we discuss how public HMDs can benefit from research on public displays. In particular, based on the results of an in-the-wild deployment of a public HMD, we propose an adaptation of the audience funnel flow model of public display users to fit the context of public HMD usage. We discuss how public HMDs bring challenges and opportunities, and create novel research directions that are relevant to researchers in both HMDs and public displays.
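
    The abstract does not spell out the adapted funnel phases; as a hypothetical illustration of how such a flow model is typically analysed, the sketch below uses the phase names of the original audience funnel for public displays as placeholders and computes conversion rates between consecutive phases from observation logs.

    from collections import Counter

    # Placeholder phases borrowed from the original audience funnel for public
    # displays; the paper's HMD-specific adaptation is not given in the abstract.
    PHASES = ["passing_by", "viewing_reacting", "subtle_interaction",
              "direct_interaction", "follow_up"]

    def conversion_rates(deepest_phase_per_person):
        """Share of people who reached a phase and also went on to the next one."""
        counts = Counter(deepest_phase_per_person)
        reached = {p: sum(counts[q] for q in PHASES[i:]) for i, p in enumerate(PHASES)}
        return {f"{a} -> {b}": reached[b] / reached[a]
                for a, b in zip(PHASES, PHASES[1:]) if reached[a]}

    # Example log: the deepest phase each observed passer-by reached.
    log = ["passing_by"] * 50 + ["viewing_reacting"] * 20 + ["direct_interaction"] * 5
    print(conversion_rates(log))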

    EyeScout: Active Eye Tracking for Position and Movement Independent Gaze Interaction with Large Public Displays

    While gaze holds a lot of promise for hands-free interaction with public displays, remote eye trackers with their confined tracking box restrict users to a single stationary position in front of the display. We present EyeScout, an active eye tracking system that combines an eye tracker mounted on a rail system with a computational method to automatically detect the user's lateral movement and align the tracker with it. EyeScout addresses key limitations of current gaze-enabled large public displays by offering two novel gaze-interaction modes for a single user: in "Walk then Interact" the user can walk up to an arbitrary position in front of the display and interact, while in "Walk and Interact" the user can interact even while on the move. We report on a user study showing that EyeScout is well perceived by users, extends a public display's sweet spot into a sweet line, and reduces gaze interaction kick-off time to 3.5 seconds -- a 62% improvement over state-of-the-art solutions. We discuss sample applications that demonstrate how EyeScout can enable position- and movement-independent gaze interaction with large public displays.
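
    The abstract does not describe how the rail carriage is kept aligned with the user; below is a minimal sketch of one possible approach (a proportional controller with a per-cycle travel limit, with all parameter values assumed rather than taken from the paper).

    def follow_user(user_x, carriage_x, gain=0.5, max_step=0.05):
        """Return the next carriage position (metres) along the rail so the
        eye tracker drifts toward the user's lateral position."""
        error = user_x - carriage_x
        step = max(-max_step, min(max_step, gain * error))  # limit travel per control cycle
        return carriage_x + step

    # Example: the user stands 0.4 m to the right of the carriage.
    pos = 0.0
    for _ in range(10):
        pos = follow_user(user_x=0.4, carriage_x=pos)
    print(round(pos, 3))  # the carriage converges toward the user's position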

    Assessment of activity trackers: toward an acceptance model

    In this project we seek to understand the factors that influence user acceptance of Activity Trackers, through a model that quantifies how users come to adhere to the use of Activity Trackers. The proposed research model and hypotheses were validated and tested with data collected from a cross-sectional survey conducted using a self-selected convenience sample. Constructs from half a dozen established models were gathered into a suppositional model, based on their hypothetical relevance to Activity Tracker use. The results were analyzed using a variety of statistical techniques, including Structural Equation analysis. The final result can serve as a first step for researchers aiming to complement their own processes of study, ideation or design of Activity Trackers by taking into account factors such as Usefulness, Ease of Use, Health Consciousness, Hedonic Motivation, Image, Habit, etc.