104 research outputs found

    Measuring interaction proxemics with wearable light tags

    Get PDF
    The proxemics of social interactions (e.g., body distance, relative orientation) influences many aspects of our everyday life: from patients' reactions to interaction with physicians, to success in job interviews, to effective teamwork. Traditionally, interaction proxemics has been studied via questionnaires and participant observations, imposing a high burden on users, offering low scalability and precision, and often introducing biases. In this paper we present Protractor, a novel wearable technology for measuring interaction proxemics as part of non-verbal behavior cues with fine granularity. Protractor employs near-infrared light to monitor both the distance and relative body orientation of interacting users. We leverage the characteristics of near-infrared light (i.e., line-of-sight propagation) to accurately and reliably identify interactions; a pair of collocated photodiodes aids the inference of relative interaction angle and distance. We achieve robustness against temporary blockage of the light channel (e.g., by the user's hand or clothes) by designing sensor fusion algorithms that exploit inertial sensors to compensate for the absence of light-tracking results. We fabricated Protractor tags and conducted real-world experiments. Results show its accuracy in tracking body distances and relative angles: the framework achieves less than 6° error 95% of the time when measuring relative body orientation, and 2.3-cm to 4.9-cm mean error in estimating interaction distance. We deployed Protractor tags to track users' non-verbal behaviors during collaborative group tasks. Results with 64 participants show that distance and angle data from Protractor tags can help assess an individual's task role with 84.9% accuracy and identify the task timeline with 93.2% accuracy.
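    As an illustration of the two-photodiode idea, here is a minimal sketch (not the paper's implementation) of how a pair of collocated photodiodes whose optical axes are tilted apart could yield both relative angle and distance, assuming an idealized cosine (Lambertian) angular response with inverse-square falloff; the constants DELTA and C are hypothetical calibration values.

```python
import math

# Hypothetical constants -- real values would come from calibration.
DELTA = math.radians(15.0)   # half-angle between the two photodiode axes
C = 1.0                      # emitter intensity * receiver gain (calibrated)

def estimate_angle_distance(i1, i2):
    """Infer incidence angle and distance from two collocated photodiodes.

    Assumes each diode has a cosine (Lambertian) angular response and
    inverse-square falloff:  I = C * cos(theta -/+ DELTA) / d**2.
    The ratio i1/i2 cancels the unknown distance, leaving the angle.
    """
    r = i1 / i2
    # r = cos(theta - DELTA) / cos(theta + DELTA)
    # => (r - 1) / (r + 1) = tan(theta) * tan(DELTA)
    theta = math.atan(((r - 1.0) / (r + 1.0)) / math.tan(DELTA))
    # Plug theta back into either diode's model to recover distance.
    d = math.sqrt(C * math.cos(theta - DELTA) / i1)
    return math.degrees(theta), d

# Example: readings consistent with a tag ~1 m away, ~10 degrees off-axis
print(estimate_angle_distance(0.996, 0.906))
```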

    Social motion : mobile networking through sensing human behavior

    Get PDF
    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2006. "September 2006." Includes bibliographical references (leaves 62-65). Low-level sensors can provide surprisingly high-level information about social interactions. The goal of this thesis is to define the components of a framework for sensing social context with mobile devices. We describe several sensing technologies - including infrared transceivers, radio frequency scanners, and accelerometers - that both capture social signals and meet the design constraints of mobile devices. Through the analysis of several large datasets, we identify features from these sensors that correlate well with the underlying social structure of interacting groups of people. We then detail the work that we have done creating infrastructure that integrates social sensors into social applications that run on mobile devices. By Jonathan Peter Gips, S.M.
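    To make the idea of sensor-derived social features concrete, here is a minimal sketch, under assumed data formats, of one of the simplest such features: pairwise co-presence counts built from a proximity log (e.g., infrared transceiver or radio sightings). The log structure and names are illustrative, not from the thesis.

```python
from collections import defaultdict

# Hypothetical proximity log: (timestamp, device_a, device_b) tuples,
# e.g. from infrared transceiver or Bluetooth scans.
log = [
    (0, "alice", "bob"),
    (60, "alice", "bob"),
    (60, "bob", "carol"),
    (120, "alice", "carol"),
]

def copresence_counts(log):
    """Count how often each pair of devices detected one another.

    Pairwise co-presence counts are one of the simplest sensor-derived
    features that tend to track underlying social structure.
    """
    counts = defaultdict(int)
    for _, a, b in log:
        counts[tuple(sorted((a, b)))] += 1
    return dict(counts)

print(copresence_counts(log))
```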

    The Case for Public Interventions during a Pandemic

    Get PDF
    Funding Information: This work has been supported by Marie Skłodowska-Curie Actions ITN AffecTech (ERC H2020 Project ID: 722022). Publisher Copyright: © 2022 by the authors. Within the field of movement sensing and sound interaction research, multi-user systems have gradually gained interest as a means to facilitate an expressive non-verbal dialogue. When tied with studies grounded in psychology and choreographic theory, we consider the qualities of interaction that foster an elevated sense of social connectedness without being contingent on occupying one another's personal space. Reflecting on the newly adopted concept of social distancing, we orchestrate a technological intervention with interpersonal distance and sound at the core of the interaction. Materialised as a set of sensory face-masks, a novel wearable system was developed and tested in the context of a live public performance, from which we obtain users' individual perspectives and correlate them with patterns identified in the recorded data. We identify and discuss traits of users' behaviour that were attributed to the system's influence, and construct four fundamental design considerations for physically distanced sound interaction. The study concludes with essential technical reflections, accompanied by an adaptation for a pervasive sensory intervention that is finally deployed in an open public space.
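    As a hedged illustration of distance-driven sonification (the paper does not specify its mapping), the sketch below maps an estimated interpersonal distance onto a volume and a pitch; the distance range and the mapping itself are assumptions, not the system's actual design.

```python
def distance_to_sound(d_m, d_min=0.5, d_max=3.0):
    """Map an estimated interpersonal distance to sound parameters.

    A common sonification choice (not necessarily the paper's): volume
    rises as the other person approaches the personal-space boundary,
    and pitch shifts across the social-distancing range.
    """
    # Clamp to the sensed range, then normalise to 0..1
    d = max(d_min, min(d_max, d_m))
    t = (d - d_min) / (d_max - d_min)
    volume = 1.0 - t                 # louder when closer
    pitch_hz = 220.0 + 440.0 * t     # A3 up to roughly E5
    return volume, pitch_hz

for d in (0.5, 1.5, 3.0):
    print(d, distance_to_sound(d))
```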

    EMI Spy: Harnessing electromagnetic interference for low-cost, rapid prototyping of proxemic interaction

    Get PDF
    We present a wearable system that uses ambient electromagnetic interference (EMI) as a signature to identify electronic devices and support proxemic interaction. We designed a low-cost tool, called EMI Spy, and a software environment for rapid deployment and evaluation of ambient EMI-based interactive infrastructure. EMI Spy captures electromagnetic interference and delivers the signal to a user's mobile device or PC through either the device's wired audio input or wirelessly over Bluetooth. The wireless version can be worn on the wrist, communicating with the user's mobile device in their pocket. Users can train the system in less than 1 second to uniquely identify displays in a 2-m radius around them, as well as to detect pointing-at-a-distance and touching gestures on the displays in real time. The combination of a low-cost EMI logger and an open-source machine learning toolkit allows developers to quickly prototype proxemic, touch-to-connect, and gestural interaction. We demonstrate the feasibility of mobile, EMI-based device and gesture recognition with preliminary user studies in 3 scenarios, achieving 96% classification accuracy at close range for 6 digital signage displays distributed throughout a building, and 90% accuracy in classifying pointing gestures at neighboring desktop LCD displays. We were able to distinguish 1- and 2-finger touching with perfect accuracy, and we show indications of a way to determine the power consumption of a device via touch. Our system is particularly well suited to temporary use in a public space, where the sensors could be distributed to support a pop-up interactive environment anywhere with electronic devices. By designing for low-cost, mobile, flexible, and infrastructure-free deployment, we aim to enable a host of new proxemic interfaces to existing appliances and displays.
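    A classification pipeline over audio-rate EMI could look something like the sketch below: log-magnitude FFT features plus a nearest-centroid classifier. This is an illustrative stand-in rather than EMI Spy's actual toolkit; the function names and synthetic demo signals are hypothetical.

```python
import numpy as np

def emi_features(signal, n_fft=1024):
    """Reduce an audio-rate EMI capture to a log-magnitude spectrum.

    Switching power supplies and display drivers leave characteristic
    spectral peaks, so a coarse spectrum is a workable fingerprint.
    """
    spectrum = np.abs(np.fft.rfft(signal[:n_fft]))
    return np.log1p(spectrum)

def train_centroids(labelled_captures):
    """labelled_captures: dict mapping label -> list of raw signal arrays."""
    return {label: np.mean([emi_features(s) for s in sigs], axis=0)
            for label, sigs in labelled_captures.items()}

def classify(signal, centroids):
    """Return the label whose spectral centroid is nearest to the capture."""
    feats = emi_features(signal)
    return min(centroids, key=lambda l: np.linalg.norm(feats - centroids[l]))

# Synthetic demo: two 'displays' with different dominant EMI tones.
rng = np.random.default_rng(0)
t = np.arange(1024) / 44100.0
mk = lambda f: np.sin(2 * np.pi * f * t) + 0.1 * rng.standard_normal(t.size)
centroids = train_centroids({"display_a": [mk(5000)], "display_b": [mk(9000)]})
print(classify(mk(5000), centroids))   # -> display_a
```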

    A Modular Approach for Synchronized Wireless Multimodal Multisensor Data Acquisition in Highly Dynamic Social Settings

    Full text link
    Existing data acquisition literature for human behavior research provides wired solutions, mainly for controlled laboratory setups. In uncontrolled free-standing conversation settings, where participants are free to walk around, these solutions are unsuitable. While wireless solutions are employed in the broadcasting industry, they can be prohibitively expensive. In this work, we propose a modular and cost-effective wireless approach for synchronized multisensor data acquisition of social human behavior. Our core idea involves a cost-accuracy trade-off by using the Network Time Protocol (NTP) as a source reference for all sensors. While commonly used as a reference in ubiquitous computing, NTP is widely considered insufficiently accurate as a reference for video applications, where Precision Time Protocol (PTP) or Global Positioning System (GPS) based references are preferred. We argue and show, however, that the latency introduced by using NTP as a source reference is adequate for human behavior research, and that the subsequent cost and modularity benefits are a desirable trade-off for applications in this domain. We also describe one instantiation of the approach deployed in a real-world experiment to demonstrate the practicality of our setup in the wild. Comment: 9 pages, 8 figures, Proceedings of the 28th ACM International Conference on Multimedia (MM '20), October 12--16, 2020, Seattle, WA, USA. First two authors contributed equally.
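    A minimal sketch of the NTP-as-source-reference idea: each device periodically estimates its clock offset from an NTP server and stamps locally captured sensor samples with the corrected time. This assumes the third-party ntplib package and is not the authors' code; offset-refresh policy and accuracy depend on the network.

```python
import time
import ntplib  # third-party package: pip install ntplib

def ntp_offset(server="pool.ntp.org"):
    """Estimate the local clock's offset from an NTP server, in seconds."""
    response = ntplib.NTPClient().request(server, version=3)
    return response.offset

def stamped(sample, offset):
    """Attach an NTP-corrected timestamp to a locally captured sample.

    Each device stamps with its own clock plus a periodically refreshed
    NTP offset; tens-of-milliseconds agreement is typically attainable,
    which the paper argues is adequate for human-behavior research.
    """
    return (time.time() + offset, sample)

offset = ntp_offset()                  # refresh this periodically in practice
print(stamped({"accel_x": 0.12}, offset))
```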

    Measuring spatial and temporal features of physical interaction dynamics in the workplace

    Get PDF
    Human behavior unfolding through organisational life is a topic tackled from different disciplines, with emphasis on different aspects and with an overwhelming reliance on humans as observation instruments. Advances in pervasive technologies allow, for the first time, the capture and recording of location and time information about behavior in real time: accurately, continuously, and for multiparty events. This thesis examines the question: can these technologies provide insights into human behavior that current methods cannot? The way people use the buildings they work in, and relate and physically interact with others through time, is information that designers and managers use to create better buildings and better organisations. Current methods' depiction of these issues - fairly static, discrete and short term, mostly dyadic - pales in comparison with the potential offered by location and time technologies. Or does it? Having found an organisation where fifty-one workers each carried a tag sending location and time information to one such system for six weeks, two parallel studies were conducted: one using current manual and other methods, and the other using the automated method developed in this thesis, both aiming to understand spatial and temporal characteristics of interpersonal behavior in the workplace. The new method is based on the concepts and measures of personal space and interaction distance, which are used to define the mathematical boundaries of the behaviors under study: interaction and solo events. Outcome information from both methods is used to test hypotheses on aspects of the spatial and temporal nature of knowledge work affected by interpersonal dynamics. This thesis proves that the data obtained through the technology can be converted into rich information on aspects of workplace interaction dynamics, offering unprecedented insights for designers and managers to produce better buildings and better organisations.
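    A minimal sketch of how interaction events might be extracted from tag tracks using an interaction-distance threshold and a minimum duration; the threshold, duration, and track format below are assumptions for illustration, not the thesis's actual parameters.

```python
import math

# Hypothetical tag tracks: list of (t_seconds, x_m, y_m) per person.
INTERACTION_DISTANCE = 1.5   # personal-space boundary, assumed value
MIN_DURATION = 30.0          # seconds below threshold to count as an event

def interaction_events(track_a, track_b):
    """Find spans where two tags stay within interaction distance.

    Tracks are assumed time-aligned sample-for-sample; a real pipeline
    would resample and smooth first.
    """
    events, start = [], None
    for (t, xa, ya), (_, xb, yb) in zip(track_a, track_b):
        close = math.hypot(xa - xb, ya - yb) <= INTERACTION_DISTANCE
        if close and start is None:
            start = t                       # interaction may be beginning
        elif not close and start is not None:
            if t - start >= MIN_DURATION:   # long enough to count
                events.append((start, t))
            start = None
    if start is not None and track_a[-1][0] - start >= MIN_DURATION:
        events.append((start, track_a[-1][0]))
    return events
```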

    Developing a person guidance module for hospital robots

    Get PDF
    This dissertation describes the design and implementation of the Person Guidance Module (PGM), which enables the IWARD (Intelligent Robot Swarm for Attendance, Recognition, Cleaning and Delivery) base robot to offer a route guidance service to patients or visitors inside a hospital. One common problem in large hospital buildings today is that people unfamiliar with the building cannot find their way around. Although a variety of guide robots currently exist on the market, offering a wide range of guidance and related activities, they do not fit the modular concept of the IWARD project. The PGM features a robust, non-hierarchical sensor-fusion approach combining active RFID, stereo vision, and Cricket mote sensors for guiding a patient to the X-ray room, or a visitor to a patient's ward, in every possible scenario in a complex, dynamic and crowded hospital environment. Moreover, the speed of the robot can be adjusted automatically according to the pace of the follower for physical comfort. Furthermore, the module performs these tasks in unstructured environments solely from the robot's onboard perceptual resources, in order to limit hardware installation costs and the dependence on indoor infrastructure. A similarly comprehensive solution on a single platform has remained elusive in the existing literature. The finished module can be connected to any IWARD base robot using quick-change mechanical connections and standard electrical connections. The PGM module box is equipped with a Gumstix embedded computer for all module computing, which is powered up automatically once the module box is inserted into the robot. In line with the general software architecture of the IWARD project, all software modules are developed as Orca2 components and cross-compiled for the Gumstix's XScale processor. To support standardized communication between different software components, the Internet Communications Engine (Ice) is used as middleware. Additionally, plug-and-play capabilities have been developed and incorporated so that the swarm system is aware at all times of which robot is equipped with a PGM. Finally, in several field trials in hospital environments, the person guidance module has shown its suitability for a challenging real-world application, as well as the necessary user acceptance.
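    The follower-pacing behavior can be illustrated with a simple proportional rule; this is a sketch of the general idea, not the PGM's actual controller, and the gap, gain, and speed limits are assumed values. The follower distance itself would come from the fused RFID, stereo vision, and Cricket ranging.

```python
TARGET_GAP = 1.2   # desired robot-to-follower distance in metres (assumed)
MAX_SPEED = 0.8    # robot speed ceiling in m/s (assumed)
GAIN = 0.5         # proportional gain (assumed)

def guidance_speed(follower_distance):
    """Adapt guidance speed to the follower's pace.

    Proportional rule: slow down (even stop) when the follower falls
    behind, and speed back up as they close the gap.
    """
    error = TARGET_GAP - follower_distance   # negative when lagging behind
    speed = MAX_SPEED + GAIN * error
    return max(0.0, min(MAX_SPEED, speed))   # clamp to [0, MAX_SPEED]

for d in (1.0, 1.2, 2.0, 3.0):
    print(d, guidance_speed(d))
```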

    ConfLab: A Rich Multimodal Multisensor Dataset of Free-Standing Social Interactions in the Wild

    Full text link
    Recording the dynamics of unscripted human interactions in the wild is challenging due to the delicate trade-offs between several factors: participant privacy, ecological validity, data fidelity, and logistical overheads. To address these, following a 'datasets for the community by the community' ethos, we propose the Conference Living Lab (ConfLab): a new concept for multimodal multisensor data collection of in-the-wild free-standing social conversations. For the first instantiation of ConfLab described here, we organized a real-life professional networking event at a major international conference. Involving 48 conference attendees, the dataset captures a diverse mix of status, acquaintance, and networking motivations. Our capture setup improves upon the data fidelity of prior in-the-wild datasets while retaining privacy sensitivity: 8 videos (1920x1080, 60 fps) from a non-invasive overhead view, and custom wearable sensors with onboard recording of body motion (full 9-axis IMU), privacy-preserving low-frequency audio (1250 Hz), and Bluetooth-based proximity. Additionally, we developed custom solutions for distributed hardware synchronization at acquisition, and time-efficient continuous annotation of body keypoints and actions at high sampling rates. Our benchmarks showcase some of the open research tasks related to in-the-wild privacy-preserving social data analysis: keypoint detection from overhead camera views, skeleton-based no-audio speaker detection, and F-formation detection. Comment: v2 is the version submitted to the NeurIPS 2022 Datasets and Benchmarks Track.
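    As a pointer to what F-formation detection involves, here is a crude pairwise sketch: two people form a candidate conversational pair when they stand close together and mutually face each other. Real detectors (including the ConfLab benchmarks) reason jointly over whole groups and a shared o-space; the thresholds and data layout here are assumptions.

```python
import math

# Hypothetical per-person state from the overhead view:
# (x, y) position in metres and body orientation in radians.
people = {
    "p1": (0.0, 0.0, 0.0),
    "p2": (1.0, 0.0, math.pi),
    "p3": (5.0, 5.0, 0.0),
}

def facing(a, b, max_dist=1.5, max_angle=math.radians(45)):
    """Crude pairwise F-formation cue: close together and mutually facing.

    This captures only the pairwise intuition behind full group-level
    F-formation detectors.
    """
    (xa, ya, ha), (xb, yb, hb) = a, b
    if math.hypot(xb - xa, yb - ya) > max_dist:
        return False
    bearing_ab = math.atan2(yb - ya, xb - xa)   # direction from a to b
    bearing_ba = math.atan2(ya - yb, xa - xb)   # direction from b to a
    # Wrapped absolute angle between a heading and a bearing.
    off = lambda h, bearing: abs(math.atan2(math.sin(h - bearing),
                                            math.cos(h - bearing)))
    return off(ha, bearing_ab) < max_angle and off(hb, bearing_ba) < max_angle

print(facing(people["p1"], people["p2"]))  # True: 1 m apart, facing each other
print(facing(people["p1"], people["p3"]))  # False: too far apart
```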