14 research outputs found

    BodySpace: inferring body pose for natural control of a music player

    We describe the BodySpace system, which uses inertial sensing and pattern recognition to allow gestural control of a music player by placing the device at different parts of the body. We demonstrate a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based techniques can shape gestural interaction.
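
    The abstract gives no implementation detail, but the segmentation-then-recognition pipeline it mentions can be illustrated roughly as below: a minimal sketch assuming accelerometer input, a threshold on the smoothed acceleration magnitude for segmentation, and a nearest-template classifier for recognition (all illustrative choices, not the authors' actual method).

```python
# Illustrative sketch only: threshold-based segmentation of an accelerometer
# stream followed by nearest-template classification. The window size,
# threshold and distance measure are assumed values, not taken from the paper.
import numpy as np

def segment(accel, threshold=1.5, window=20):
    """Return (start, end) index pairs where the smoothed acceleration magnitude exceeds a threshold."""
    magnitude = np.linalg.norm(accel, axis=1)
    smoothed = np.convolve(magnitude, np.ones(window) / window, mode="same")
    active = smoothed > threshold
    segments, start = [], None
    for i, flag in enumerate(active):
        if flag and start is None:
            start = i
        elif not flag and start is not None:
            segments.append((start, i))
            start = None
    if start is not None:
        segments.append((start, len(active)))
    return segments

def classify(gesture, templates):
    """Label a segmented gesture with the nearest stored template (mean squared distance)."""
    def dist(a, b):
        n = min(len(a), len(b))
        return float(np.mean((a[:n] - b[:n]) ** 2))
    return min(templates, key=lambda label: dist(gesture, templates[label]))
```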

    Context-aware mobile applications design: implications and challenges for a new industry

    Context-aware computing is slowly becoming the new mobile paradigm in which applications can discover and use information “out and about”. Typical sources of knowledge about context are the device’s location, data about the environment at large, the mobile device’s prior activity log and even the user’s biometrics. The mobile industry agrees that this paradigm improves the appeal and value of applications by personalising and adapting them to the context in which they run. However, capturing contextual information and processing it to enhance or create a new application is a daunting task: it involves scattered systems and infrastructures and an increasingly wide array of heterogeneous data, architectures and technological tools. In this paper, we explore and analyse existing mobile context-aware applications and the proposed frameworks that enable them. The paper aims to clarify the technological choices behind context-aware mobile applications and the challenges that still remain ahead for this area to fulfil the promises it offers.
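
    As a rough illustration of the integration problem the paper analyses, the sketch below shows one way an application could gather heterogeneous context sources (location, environment data, activity, biometrics) behind a single interface. The ContextBroker class and its methods are hypothetical names invented for this example and do not correspond to any framework surveyed in the paper.

```python
# Illustrative sketch only: a hypothetical broker that unifies heterogeneous
# context sources behind one interface; none of these names come from the paper.
from dataclasses import dataclass, field
from typing import Any, Callable, Dict

@dataclass
class ContextBroker:
    providers: Dict[str, Callable[[], Any]] = field(default_factory=dict)

    def register(self, name: str, provider: Callable[[], Any]) -> None:
        """Register a named context source, e.g. 'location' or 'heart_rate'."""
        self.providers[name] = provider

    def snapshot(self) -> Dict[str, Any]:
        """Collect current values from every registered source, skipping ones that fail."""
        context = {}
        for name, provider in self.providers.items():
            try:
                context[name] = provider()
            except Exception:
                pass  # a missing or broken sensor should not break the application
        return context

broker = ContextBroker()
broker.register("location", lambda: (41.15, -8.61))    # stubbed GPS fix
broker.register("ambient_noise_db", lambda: 42.0)      # stubbed microphone level
print(broker.snapshot())
```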

    A Gestalt Theoretic Perspective on the User Experience of Location-Based Services


    Mobile Computing


    ReachMedia: on-the-move interaction with everyday objects

    Thesis (S.M.)--Massachusetts Institute of Technology, School of Architecture and Planning, Program in Media Arts and Sciences, 2005. Includes bibliographical references (p. 66-67). Mobile and wearable interfaces try to integrate digital information into our everyday experiences but usually require more attention than is appropriate and often fail to do so in a natural and socially acceptable way. In this thesis we present "ReachMedia," a system for seamlessly providing just-in-time information for everyday objects. The system is built around a wireless wristband that detects objects the user is interacting with and allows the use of gestures for interaction. This enables hands- and eyes-free interfacing with relevant information using a unique combination of audio output and gestural input, allowing for socially acceptable, on-the-move interaction. We demonstrate that such interaction is more natural to the user than existing mobile interfaces. By Assaf Feldman, S.M.
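
    A minimal sketch of the interaction style the thesis describes, assuming the wristband reports which tagged object the user is touching and which wrist gesture was made. The tag identifiers, gesture names and the speak() stub are invented for illustration and are not taken from the thesis.

```python
# Illustrative sketch only: object identifiers, gesture names and the speech
# stub are assumptions made for this example, not the thesis' actual design.
OBJECT_INFO = {
    "tag:book-0x1f": ["Title: Mobile Interaction", "Reader reviews", "Related titles"],
    "tag:cd-0x2a": ["Artist biography", "Track list", "Similar albums"],
}

def speak(text: str) -> None:
    """Stand-in for text-to-speech audio output."""
    print(f"[audio] {text}")

def interaction_loop(events):
    """Consume (kind, value) events: a 'touch' selects an object, gestures browse its audio menu."""
    menu, index = [], 0
    for kind, value in events:
        if kind == "touch" and value in OBJECT_INFO:
            menu, index = OBJECT_INFO[value], 0
            speak(menu[index])
        elif kind == "gesture" and menu:
            if value == "flick_forward":
                index = (index + 1) % len(menu)
            elif value == "flick_back":
                index = (index - 1) % len(menu)
            speak(menu[index])

interaction_loop([("touch", "tag:book-0x1f"), ("gesture", "flick_forward")])
```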

    Glaze visualization framework for mobile devices

    Supervisor: Ian Oakley

    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric 'world-based' case. BodySpace is a gesture-based application, which utilises multiple sensors and pattern recognition enabling the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites of interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate the possibility to create a simulated model of user behaviour, which may be used to gain an insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
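
    The Monte Carlo idea named in the abstract can be sketched as follows: sample many plausible future positions from noisy position and heading estimates, then report the fraction of samples that end up near a site of interest. The straight-line motion model, noise levels and arrival radius below are illustrative assumptions, not the values used in this work.

```python
# Illustrative sketch only: Monte Carlo propagation of position uncertainty
# towards a site of interest. Noise levels, motion model and radius are assumed.
import numpy as np

rng = np.random.default_rng(0)

def reach_probability(position, heading_deg, speed, horizon_s, site,
                      gps_sigma=5.0, heading_sigma_deg=10.0,
                      radius=30.0, n_samples=2000):
    """Estimate the probability of being within `radius` metres of `site` after `horizon_s` seconds."""
    start = np.asarray(position) + rng.normal(0.0, gps_sigma, size=(n_samples, 2))
    headings = np.deg2rad(rng.normal(heading_deg, heading_sigma_deg, size=n_samples))
    step = speed * horizon_s
    future = start + step * np.stack([np.cos(headings), np.sin(headings)], axis=1)
    return float(np.mean(np.linalg.norm(future - np.asarray(site), axis=1) < radius))

# e.g. walking at 1.4 m/s for 60 s towards a site roughly 80 m away
print(reach_probability((0.0, 0.0), heading_deg=0.0, speed=1.4,
                        horizon_s=60.0, site=(84.0, 0.0)))
```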
