7 research outputs found

    Which One is Me?: Identifying Oneself on Public Displays

    While user representations are extensively used on public displays, it remains unclear how well users can recognize their own representation among those of surrounding users. We study the most widely used representations: abstract objects, skeletons, silhouettes, and mirrors. In a prestudy (N=12), we identify five strategies that users follow to recognize themselves on public displays. In a second study (N=19), we quantify users' recognition time and accuracy for each representation type. Our findings suggest a significant effect of (1) the representation type, (2) the strategies performed by users, and (3) the combination of both on recognition time and accuracy. We discuss the suitability of each representation for different settings and provide specific recommendations as to how user representations should be applied in multi-user scenarios. These recommendations guide practitioners and researchers in selecting the representation that best fits the deployment's requirements and the user strategies that are feasible in that environment.

    Public HMDs: Modeling and Understanding User Behavior Around Public Head-Mounted Displays

    Head-Mounted Displays (HMDs) are becoming ubiquitous; we are starting to see them deployed in public for different purposes. Museums, car companies, and travel agencies use HMDs to promote their products. As a result, situations arise in which people use them in public without expert supervision. This leads to challenges and opportunities, many of which are also experienced in public display installations. For example, similar to public displays, public HMDs struggle to attract the passer-by's attention, but benefit from the honeypot effect that draws attention to them. Passersby might also hesitate to wear a public HMD, fearing that its owner might not approve or that prior permission is needed. In this work, we discuss how public HMDs can benefit from research on public displays. In particular, based on the results of an in-the-wild deployment of a public HMD, we propose an adaptation of the audience funnel flow model of public display users to fit the context of public HMD usage. We discuss how public HMDs introduce challenges and opportunities, and open novel research directions relevant to researchers in both HMDs and public displays.
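    To make the audience funnel flow model mentioned above more concrete, the sketch below counts how many observed passers-by reach each funnel phase and derives phase-to-phase conversion rates. The phase names are those of the classic public-display audience funnel attributed to Michelis and Müller; the HMD-specific adaptation proposed in this paper is not reproduced here, and the log values are purely illustrative.

        # Audience-funnel bookkeeping: count how many passers-by reach each phase,
        # then compute phase-to-phase conversion rates. Phases follow the classic
        # public-display audience funnel; the HMD adaptation is not modeled here.
        from collections import Counter

        PHASES = ["passing_by", "viewing_reacting", "subtle_interaction",
                  "direct_interaction", "multiple_interaction", "follow_up"]

        def conversion_rates(observations):
            """observations: the deepest phase each observed passer-by reached."""
            reached = Counter()
            for deepest in observations:
                # Reaching phase k implies having passed through phases 0..k as well.
                for phase in PHASES[:PHASES.index(deepest) + 1]:
                    reached[phase] += 1
            return {f"{a} -> {b}": reached[b] / reached[a]
                    for a, b in zip(PHASES, PHASES[1:]) if reached[a]}

        # Toy log of deepest phases reached (illustrative values only).
        log = ["passing_by"] * 40 + ["viewing_reacting"] * 25 + ["direct_interaction"] * 10
        print(conversion_rates(log))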

    Touch or Touchless? Evaluating Usability of Interactive Displays for Persons with Autistic Spectrum Disorders

    Interactive public displays have been explored in several previous studies as a means of engaging interaction. In this context, applications have focused on supporting learning or entertainment activities specifically designed for people with special needs, including, for example, those with Autism Spectrum Disorders (ASD). In this paper, we present a comparison study aimed at understanding the difference, in terms of usability, effectiveness, and enjoyment perceived by users with ASD, between two interaction modalities usually supported by interactive displays: touch-based and touchless gestural interaction. We present the outcomes of a within-subject study involving 8 users with ASD (aged 18-25, IQ 40-60), based on two similar user interfaces differing only in the interaction modality. We show that touch interaction provides a higher level of usability and results in more effective actions, although touchless interaction performs better in terms of enjoyment and engagement.

    Virtual Field Studies: Conducting Studies on Public Displays in Virtual Reality

    Field studies on public displays can be difficult, expensive, and time-consuming. We investigate the feasibility of using virtual reality (VR) as a test-bed to evaluate deployments of public displays. Specifically, we investigate whether results from virtual field studies, conducted in a virtual public space, match the results from a corresponding real-world setting. We report on two empirical user studies in which we compared audience behavior around a virtual public display in the virtual world to audience behavior around a real public display. We found that virtual field studies can be a powerful research tool, as in both studies we observed largely similar behavior between the settings. We discuss the opportunities, challenges, and limitations of using virtual reality to conduct field studies, and provide lessons learned from our work that can help researchers decide whether to employ VR in their research and what factors to account for when doing so.

    Predicting mid-air gestural interaction with public displays based on audience behaviour

    Knowledge about the expected interaction duration and the expected distance from which users will interact with public displays can be useful in many ways. For example, knowing upfront that a certain setup will lead to shorter interactions can nudge space owners to alter the setup. If a system can predict that incoming users will interact at a long distance for a short amount of time, it can accordingly show shorter versions of content (e.g., videos/advertisements) and employ at-a-distance interaction modalities (e.g., mid-air gestures). In this work, we propose a method to build models for predicting users' interaction duration and distance in public display environments, focusing on mid-air gestural interactive displays. First, we report our findings from a field study showing that multiple variables, such as audience size and behaviour, significantly influence interaction duration and distance. We then train predictor models using contextual data based on the same variables. By applying our method to a mid-air gestural interactive public display deployment, we build a model that predicts interaction duration with an average error of about 8 s, and interaction distance with an average error of about 35 cm. We discuss how researchers and practitioners can use our work to build their own predictor models, and how they can use them to optimise their deployment.
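    The abstract does not spell out the exact feature set or model family; as a rough illustration of the general approach, the sketch below trains two regressors, one for interaction duration and one for interaction distance, from contextual audience features. The feature names, the choice of scikit-learn random forests, and all numeric values are illustrative assumptions rather than the authors' setup or data.

        # Train duration/distance predictors from contextual audience features.
        # Features, model choice, and values are illustrative assumptions only.
        import numpy as np
        from sklearn.ensemble import RandomForestRegressor

        # One row per observed interaction:
        # [audience_size, people_already_interacting, hour_of_day]
        X = np.array([
            [1, 0, 10],
            [3, 1, 14],
            [5, 2, 17],
            [2, 0, 12],
            [4, 1, 16],
        ])
        y_duration = np.array([25.0, 60.0, 90.0, 30.0, 75.0])  # seconds
        y_distance = np.array([1.2, 2.0, 2.8, 1.5, 2.4])       # metres

        duration_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_duration)
        distance_model = RandomForestRegressor(n_estimators=100, random_state=0).fit(X, y_distance)

        # Predict for an incoming user observed with an audience of 3 at 15:00.
        new_context = [[3, 1, 15]]
        print(duration_model.predict(new_context))  # expected duration in seconds
        print(distance_model.predict(new_context))  # expected distance in metres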

    CueAuth: Comparing Touch, Mid-Air Gestures, and Gaze for Cue-based Authentication on Situated Displays

    Secure authentication on situated displays (e.g., to access sensitive information or to make purchases) is becoming increasingly important. A promising approach to resist shoulder-surfing attacks is to employ cues that users respond to while authenticating; this overwhelms observers by requiring them to observe both the cue itself and the user's response to it. Although previous work proposed a variety of modalities, such as gaze and mid-air gestures, to further improve security, an understanding of how they compare with regard to usability and security is still missing. In this paper, we rigorously compare modalities for cue-based authentication on situated displays. In particular, we provide the first comparison between touch, mid-air gestures, and calibration-free gaze using a state-of-the-art authentication concept. In two in-depth user studies (N=37), we found that the choice of touch or gaze presents a clear trade-off between usability and security. For example, while gaze input is more secure, it is also more demanding and requires longer authentication times. Mid-air gestures are slightly slower and more secure than touch, but users hesitate to use them in public. We conclude with three significant design implications for authentication using touch, mid-air gestures, and gaze, and discuss how the choice of modality creates opportunities and challenges for improved authentication in public.
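    For readers unfamiliar with cue-based schemes, the sketch below shows one generic way such a check can work: in each round the display randomly assigns digits to a small set of response directions (the cue), and the user answers with the direction that currently holds their next PIN digit, whether by touch, mid-air gesture, or gaze. An observer must capture both the cue and the response to learn anything. The layout, PIN length, and response set here are illustrative assumptions, not necessarily CueAuth's exact design.

        # Generic cue-based PIN check: a random digit-to-direction assignment is
        # shown per round; the user responds with the direction containing the
        # next PIN digit. Details are illustrative, not CueAuth's exact scheme.
        import random

        DIRECTIONS = ["up", "down", "left", "right"]

        def make_cue():
            """Randomly assign the digits 0-9 to the four response directions."""
            digits = list("0123456789")
            random.shuffle(digits)
            cue = {d: [] for d in DIRECTIONS}
            for i, digit in enumerate(digits):
                cue[DIRECTIONS[i % len(DIRECTIONS)]].append(digit)
            return cue

        def verify(pin, cues, responses):
            """Accept only if every response names the direction holding that PIN digit."""
            return all(pin[i] in cues[i][responses[i]] for i in range(len(pin)))

        # Example round: draw one cue per PIN digit and simulate correct responses.
        pin = "2580"
        cues = [make_cue() for _ in pin]
        responses = [next(d for d, ds in cue.items() if digit in ds)
                     for cue, digit in zip(cues, pin)]
        print(verify(pin, cues, responses))  # True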

    Designing gaze-based interaction for pervasive public displays

    The last decade witnessed an increasing adoption of public interactive displays. Displays can now be seen in many public areas, such as shopping malls and train stations. There is also a growing trend towards using large public displays, especially in airports, urban areas, universities, and libraries. Meanwhile, advances in eye tracking and visual computing promise straightforward integration of eye tracking on these displays for two purposes: 1) monitoring the user's visual behavior to evaluate different aspects of the display, such as measuring the visual attention of passersby, and 2) interaction, such as allowing users to provide input, retrieve content, or transfer data using their eye movements. Gaze is particularly useful for pervasive public displays. In addition to being natural and intuitive, eye gaze can be detected from a distance, bringing interactivity to displays that are physically unreachable. Gaze reflects the user's intention and visual interests, and its subtle nature makes it well suited for public interactions where social embarrassment and privacy concerns might hinder the experience. On the downside, eye tracking technologies have traditionally been developed for desktop settings, where a user interacts from a stationary position and for a relatively long period of time. Interaction with public displays is fundamentally different and hence poses unique challenges when employing eye tracking. First, users of public displays are dynamic; they could approach the display from different directions and interact from different positions or even while moving. This means that gaze-enabled displays should not expect users to be stationary at a specific position, but should instead adapt to the users' ever-changing position in front of the display. Second, users of public displays typically interact for short durations, often for a few seconds only. This means that, contrary to desktop settings, public displays cannot afford to require users to perform time-consuming calibration prior to interaction. In this publication-based dissertation, we first report on a review of the challenges of interactive public displays and discuss the potential of gaze in addressing these challenges. We then showcase the implementation and in-depth evaluation of two applications where gaze is leveraged to address core problems in today's public displays. The first is an eye-based solution, EyePACT, that tackles the parallax effect often experienced on today's touch-based public displays. We found that EyePACT significantly improves accuracy even with varying degrees of parallax. The second is a novel multimodal system, GTmoPass, that combines gaze and touch input for secure user authentication on public displays. GTmoPass was found to be highly resilient to shoulder surfing, thermal attacks, and smudge attacks, thereby offering a secure solution to an important problem on public displays. The second part of the dissertation explores specific challenges of gaze-based interaction with public displays. First, we address the user positioning problem by means of active eye tracking. More specifically, we built a novel prototype, EyeScout, that dynamically moves the eye tracker based on the user's position without augmenting the user. This, in turn, allowed us to study and understand gaze-based interaction with public displays while walking and when approaching the display from different positions.
    An evaluation revealed that EyeScout is well perceived by users and improves the time needed to initiate gaze interaction by 62% compared to the state of the art. Second, we propose a system, Read2Calibrate, for calibrating eye trackers implicitly while users read text on displays. We found that although text-based calibration is less accurate than traditional methods, it integrates smoothly into reading and is thereby more suitable for public displays. Finally, through our prototype system, EyeVote, we show how to allow users to select textual options on public displays via gaze without calibration. In a field deployment of EyeVote, we studied the trade-off between accuracy and selection speed when using calibration-free selection techniques. We found that users of public displays value faster interactions over accurate ones and are willing to correct system errors in case of inaccuracies. We conclude by discussing the implications of our findings for the design of gaze-based interaction for public displays, and how our work can be adapted for other domains apart from public displays, such as handheld mobile devices.
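    As a side note on the parallax problem that EyePACT targets: when the touch surface sits a few centimetres in front of the actual display, the touched point and the intended target only line up along the user's line of sight, so knowing the eye position allows the touch to be re-projected onto the display plane. The sketch below is a minimal geometric illustration of that idea, assuming the eye position is known in the coordinate system of the touch surface; it is not the published EyePACT algorithm.

        # Re-project a touch on the touch surface (z = 0) onto the display plane
        # (z = -display_depth) along the ray from the user's eye, compensating
        # for parallax. Geometry and values are illustrative assumptions.
        import numpy as np

        def correct_parallax(eye_pos, touch_point, display_depth):
            eye = np.asarray(eye_pos, dtype=float)        # (x, y, z), z > 0 in front of the glass
            touch = np.array([touch_point[0], touch_point[1], 0.0])
            direction = touch - eye                       # ray from eye through the touched point
            t = (-display_depth - eye[2]) / direction[2]  # parameter where the ray hits the display plane
            target = eye + t * direction
            return target[:2]                             # corrected (x, y) on the display

        # Eye 60 cm in front of the surface and 20 cm above the touch point,
        # display glass 2 cm behind the touch surface (all units in cm).
        print(correct_parallax(eye_pos=(0.0, 20.0, 60.0),
                               touch_point=(10.0, 0.0),
                               display_depth=2.0))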