6 research outputs found

    Analysing Pedestrian Traffic Around Public Displays

    This paper presents a powerful approach to evaluating public technologies by capturing and analysing pedestrian traffic using computer vision. This approach is highly flexible and scales better than traditional ethnographic techniques often used to evaluate technology in public spaces. The technique can be used to evaluate a wide variety of public installations, and the data collected complements existing approaches. Our technique allows behavioural analysis of both interacting users and non-interacting passers-by. This gives us the tools to understand how technology changes public spaces, how passers-by approach or avoid public technologies, and how different interaction styles work in public spaces. In the paper, we apply this technique to two large public displays and a street performance. The results demonstrate how metrics such as walking speed and proximity can be used for analysis, and how they can capture disruption to pedestrian traffic and passer-by approach patterns.
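
    As a rough illustration of how such metrics could be computed (a minimal sketch, not the authors' implementation), the Python snippet below derives walking speed and closest approach to a display from a tracked pedestrian trajectory; the frame rate, coordinate units and display position are assumed values.

        import math

        # Hypothetical input: positions (in metres) of one tracked passer-by,
        # sampled by a computer-vision tracker at an assumed 10 frames/second.
        FPS = 10.0
        DISPLAY_POS = (0.0, 0.0)  # assumed location of the public display
        trajectory = [(5.0, 4.0), (4.6, 3.8), (4.2, 3.6), (3.9, 3.5)]

        def walking_speed(points, fps):
            """Average walking speed in m/s over consecutive tracked positions."""
            path_length = sum(math.dist(a, b) for a, b in zip(points, points[1:]))
            return path_length * fps / (len(points) - 1)

        def closest_approach(points, target):
            """Minimum distance (m) between the passer-by and the display."""
            return min(math.dist(p, target) for p in points)

        print(f"walking speed: {walking_speed(trajectory, FPS):.2f} m/s")
        print(f"closest approach: {closest_approach(trajectory, DISPLAY_POS):.2f} m")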

    Understanding Public Evaluation: Quantifying Experimenter Intervention

    Public evaluations are popular because some research questions can only be answered by turning “to the wild.” Different approaches place experimenters in different roles during deployment, which has implications for the kinds of data that can be collected and for the potential bias introduced by the experimenter. This paper expands our understanding of how experimenter roles impact public evaluations and provides an empirical basis for considering different evaluation approaches. We completed an evaluation of a playful gesture-controlled display, not to understand interaction at the display but to compare different evaluation approaches. The conditions placed the experimenter in three roles (steward observer, overt observer, and covert observer) to measure the effect of experimenter presence and to analyse the strengths and weaknesses of each approach.

    Creating Your Bubble: Personal Space On and Around Large Public Displays

    We describe an empirical study that explores how users establish and use personal space around large public displays (LPDs). Our study complements field studies in this space by more fully characterizing interpersonal distances based on coupling, and confirms the use of on-screen territories on vertical displays. Finally, we discuss implications for future research: the limitations of proxemics and territoriality, how user range can augment existing theory, and the influence of display size on personal space.
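
    To make the distance-based framing concrete, here is a minimal Python sketch that classifies a measured interpersonal distance into Hall's classic proxemic zones; the thresholds come from Hall's proxemics theory, not from the distances reported in this study.

        # Illustrative only: map an interpersonal distance to a proxemic zone.
        # Zone boundaries follow Hall's theory (approximate, in metres).
        PROXEMIC_ZONES = [
            (0.46, "intimate"),
            (1.22, "personal"),
            (3.66, "social"),
            (float("inf"), "public"),
        ]

        def proxemic_zone(distance_m: float) -> str:
            for upper_bound, zone in PROXEMIC_ZONES:
                if distance_m <= upper_bound:
                    return zone

        print(proxemic_zone(0.9))  # -> "personal"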

    People Watcher: an app to record and analyze spatial behavior of ubiquitous interaction technologies

    In this paper we argue that interfaces embedded in the world, one of the core objectives of ubiquitous computing, require interaction designers and researchers to have a stronger understanding of the environment as an aspect of the interaction process. We suggest that the interaction community needs new tools to accurately record and, as importantly, analyze interaction in space. We present one solution: People Watcher, a freely downloadable iPad application specifically designed to address ‘usability in space’ issues. The paper reports a case study of the software’s use. We go on to encourage researchers to adopt this tool as part of the wider process of understanding the effect of spatial context in interaction design.
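
    As an illustration of the kind of data such a tool records, the Python sketch below logs timestamped observations of where people are on a floor plan and runs a simple analysis; the record fields are hypothetical and are not People Watcher's actual data model.

        from dataclasses import dataclass
        from collections import Counter

        # Hypothetical log format for spatial observations (illustration only).
        @dataclass
        class Observation:
            timestamp: float   # seconds since the start of the session
            x: float           # position on the floor plan, in metres
            y: float
            activity: str      # e.g. "interacting", "watching", "passing"

        log = [
            Observation(12.0, 1.5, 2.0, "watching"),
            Observation(34.5, 0.8, 1.1, "interacting"),
            Observation(51.2, 3.9, 4.0, "passing"),
        ]

        # A simple spatial-behaviour summary: how often each activity occurred.
        print(Counter(obs.activity for obs in log))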

    Design Considerations for Multi-Stakeholder Display Analytics

    Measuring viewer interactions through detailed analytics will be crucial to improving the overall performance of future open display networks. However, in contrast to traditional signage and web analytics systems, such display networks are likely to feature multiple stakeholders, each with the ability to collect a subset of the required analytics information. Combining analytics data from multiple stakeholders could lead to new insights, but stakeholders may have limited willingness to share information due to privacy concerns or commercial sensitivities. In this paper, we provide a comprehensive overview of the analytics data that might be captured by different stakeholders in a display network, make the case for the synthesis of analytics data in such networks, present design considerations for future architectures designed to enable the sharing of display analytics information, and offer an example of how such systems might be implemented.
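
    One way to picture the sharing problem (a hedged sketch, not the architecture proposed in the paper) is to tag each stakeholder's analytics records with a sharing policy and combine only the fields each party is willing to release; the stakeholder names, fields and policies below are illustrative.

        # Illustrative sketch: merge analytics data across stakeholders while
        # respecting each stakeholder's (assumed) sharing policy.
        display_owner_record = {
            "stakeholder": "display_owner",
            "data": {"viewer_count": 14, "dwell_time_s": 9.2},
            "shareable": {"viewer_count"},  # dwell time kept private
        }
        content_provider_record = {
            "stakeholder": "content_provider",
            "data": {"content_id": "ad-042", "plays": 3},
            "shareable": {"content_id", "plays"},
        }

        def merge_shared(records):
            """Combine only the fields each stakeholder marked as shareable."""
            merged = {}
            for rec in records:
                for field in rec["shareable"]:
                    merged[rec["stakeholder"] + "." + field] = rec["data"][field]
            return merged

        print(merge_shared([display_owner_record, content_provider_record]))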

    Predicting mid-air gestural interaction with public displays based on audience behaviour

    Knowledge about the expected interaction duration and expected distance from which users will interact with public displays can be useful in many ways. For example, knowing upfront that a certain setup will lead to shorter interactions can nudge space owners to alter the setup. If a system can predict that incoming users will interact at a long distance for a short amount of time, it can accordingly show shorter versions of content (e.g., videos/advertisements) and employ at-a-distance interaction modalities (e.g., mid-air gestures). In this work, we propose a method to build models for predicting users’ interaction duration and distance in public display environments, focusing on mid-air gestural interactive displays. First, we report our findings from a field study showing that multiple variables, such as audience size and behaviour, significantly influence interaction duration and distance. We then train predictor models using contextual data, based on the same variables. By applying our method to a mid-air gestural interactive public display deployment, we build a model that predicts interaction duration with an average error of about 8 s, and interaction distance with an average error of about 35 cm. We discuss how researchers and practitioners can use our work to build their own predictor models, and how they can use them to optimise their deployment.
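
    The general recipe of training predictors on contextual variables can be sketched as follows (assumed feature set and synthetic placeholder data, not the authors' dataset, features or model choice), using scikit-learn regressors and mean absolute error as the evaluation metric.

        import numpy as np
        from sklearn.ensemble import RandomForestRegressor
        from sklearn.model_selection import train_test_split
        from sklearn.metrics import mean_absolute_error

        rng = np.random.default_rng(0)
        n = 200
        # Assumed contextual features: audience size, whether someone is already
        # interacting, and hour of day (all synthetic, for illustration only).
        X = np.column_stack([
            rng.integers(0, 10, n),
            rng.integers(0, 2, n),
            rng.uniform(0, 24, n),
        ])
        duration_s = rng.uniform(5, 60, n)      # placeholder target values
        distance_cm = rng.uniform(50, 300, n)   # placeholder target values

        for label, y in [("duration (s)", duration_s), ("distance (cm)", distance_cm)]:
            X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
            model = RandomForestRegressor(random_state=0).fit(X_train, y_train)
            mae = mean_absolute_error(y_test, model.predict(X_test))
            print(f"{label}: mean absolute error = {mae:.1f}")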