12 research outputs found

    Sélection et Contrôle à Distance d'Objets Physiques Augmentés (Selection and Remote Control of Augmented Physical Objects)

    No full text
    Our doctoral research addresses interaction in smart environments. More specifically, we consider the selection and remote control of augmented physical objects. Our objectives are both conceptual, through the construction of a design space, and practical, through the design, development and evaluation of interaction techniques. Our results highlight where the user's attention should lie for efficient and pleasant selection of augmented objects, through the experimental comparison of two new techniques for selecting physical objects: P2Roll and P2Slide. The remaining work towards completing this research mainly concerns object control and includes (1) the evaluation of guidance techniques for the gestural control of augmented objects by novice users, and (2) the in situ evaluation of the designed techniques.

    EgoViz – a Mobile Based Spatial Interaction System

    Get PDF
    This paper describes research carried out in the area of mobile spatial interaction and the development of a mobile (i.e. on-device) version of a simulated web-based 2D directional query processor. The TellMe application integrates location (from GPS, GSM, WiFi) and orientation (from digital compass/tilt sensors) sensing technologies into an enhanced spatial query processing module capable of exploiting a mobile device's position and orientation for querying real-world 3D spatial datasets. This paper outlines the technique used to combine these technologies and the architecture needed to deploy them on a sensor-enabled smartphone (i.e. the Nokia 6210 Navigator). With all these sensor technologies now available on one device, it is possible to employ a personal query system that can work effectively in any environment, using location and orientation as the primary parameters for directional queries. In doing so, novel approaches for determining a user's query space in three dimensions based on line-of-sight and 3D visibility (ego-visibility) are also investigated. The result is a mobile application that is location-, direction- and orientation-aware and, using these data, is able to identify objects (e.g. buildings, points of interest) by pointing at them or when they are in a specified field-of-view.
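    The abstract does not spell out the query algorithm itself; the sketch below is a minimal illustration of how a 2D directional (field-of-view) query of the kind described could filter points of interest by combining a sensed position with a compass bearing. The function and data names are placeholders, not the EgoViz/TellMe code, and the field-of-view and range values are illustrative.

```python
import math

def bearing_deg(lat1, lon1, lat2, lon2):
    """Initial compass bearing from (lat1, lon1) to (lat2, lon2), in degrees."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dlon = math.radians(lon2 - lon1)
    y = math.sin(dlon) * math.cos(phi2)
    x = math.cos(phi1) * math.sin(phi2) - math.sin(phi1) * math.cos(phi2) * math.cos(dlon)
    return (math.degrees(math.atan2(y, x)) + 360.0) % 360.0

def distance_m(lat1, lon1, lat2, lon2, radius=6371000.0):
    """Great-circle (haversine) distance in metres."""
    phi1, phi2 = math.radians(lat1), math.radians(lat2)
    dphi = math.radians(lat2 - lat1)
    dlon = math.radians(lon2 - lon1)
    a = math.sin(dphi / 2) ** 2 + math.cos(phi1) * math.cos(phi2) * math.sin(dlon / 2) ** 2
    return 2 * radius * math.asin(math.sqrt(a))

def poi_in_fov(user_lat, user_lon, heading_deg, pois, fov_deg=30.0, max_range_m=500.0):
    """Return the POIs inside the user's field-of-view cone.

    pois: iterable of (name, lat, lon); heading_deg: bearing from the digital compass.
    A POI is kept if its bearing from the user lies within +/- fov_deg/2 of the
    heading and it is closer than max_range_m.
    """
    hits = []
    for name, lat, lon in pois:
        if distance_m(user_lat, user_lon, lat, lon) > max_range_m:
            continue
        delta = (bearing_deg(user_lat, user_lon, lat, lon) - heading_deg + 180.0) % 360.0 - 180.0
        if abs(delta) <= fov_deg / 2.0:
            hits.append(name)
    return hits
```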

    Designing Disambiguation Techniques for Pointing in the Physical World

    Get PDF
    Several ways of selecting physical objects exist, including touching them and pointing at them. Allowing the user to interact at a distance by pointing at physical objects can be challenging when the environment contains a large number of interactive physical objects, possibly occluded by other everyday items. Previous pointing techniques highlighted the need for disambiguation techniques. Addressing this challenge, this paper contributes a design space that organizes, along groups and axes, a set of options that designers can use to (1) describe, (2) classify, and (3) design disambiguation techniques. First, we have not yet found techniques in the literature that our design space cannot describe. Second, all the techniques follow a different path along the axes of our design space. Third, the design space allows the definition of several new paths/solutions that have not yet been explored. We illustrate this generative power with the example of one such designed technique, Physical Pointing Roll (P2Roll).
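    The abstract names the design space but does not enumerate its groups and axes; purely to illustrate the idea that each technique traces a distinct path along such axes, the sketch below encodes a few techniques as one value per axis, using two hypothetical placeholder axes (user focus and disambiguation gesture) suggested by the comparison in the following paper. These axis names and values are illustrative, not the paper's actual design space.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TechniqueProfile:
    """A disambiguation technique described as one value per design-space axis.

    The two axes below are hypothetical placeholders; the paper defines its own
    groups and axes, which the abstract does not list.
    """
    name: str
    user_focus: str              # where the user looks while disambiguating
    disambiguation_gesture: str  # gesture used to cycle among candidates

techniques = [
    TechniqueProfile("On-screen list", user_focus="handheld screen", disambiguation_gesture="tap in list"),
    TechniqueProfile("P2Roll", user_focus="physical world", disambiguation_gesture="wrist roll"),
    TechniqueProfile("P2Slide", user_focus="physical world", disambiguation_gesture="finger slide"),
]

# Each technique follows a distinct path along the axes (the paper's second point),
# and unused combinations of axis values point to techniques not yet explored.
paths = {(t.user_focus, t.disambiguation_gesture) for t in techniques}
assert len(paths) == len(techniques)
```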

    Mobile Pointing Task in the Physical World: Balancing Focus and Performance while Disambiguating

    Get PDF
    We address the problem of mobile distal selection of physical objects when pointing at them in augmented environments. We focus on the disambiguation step needed when several objects are selected with a rough pointing gesture. A usual disambiguation technique forces the users to switch their focus from the physical world to a list displayed on a handheld device's screen. In this paper, we explore the balance between the change of users' focus and performance. We present two novel interaction techniques that allow the users to maintain their focus in the physical world. Both use a cycling mechanism, performed respectively with a wrist-rolling gesture for P2Roll or a finger-sliding gesture for P2Slide. A user experiment showed that keeping users' focus in the physical world outperforms techniques that require the users to switch their focus to a digital representation distant from the physical objects, when disambiguating up to 8 objects.
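    As a rough illustration of the cycling mechanism described above (not the authors' implementation), the sketch below maps a continuous gesture value, such as a wrist-roll angle for P2Roll or a finger-slide offset for P2Slide, onto the index of the currently highlighted candidate, so the user can cycle through ambiguous targets without turning to a screen list. The function name and step values are assumptions.

```python
def cycled_candidate(candidates, control_value, step):
    """Map a continuous gesture value to one candidate of a rough pointing volume.

    candidates: list of object identifiers captured by the rough pointing gesture.
    control_value: signed gesture amplitude (e.g. roll angle in degrees, or slide
                   distance in millimetres) measured since disambiguation started.
    step: gesture amplitude needed to advance by one candidate.

    Returns the candidate currently highlighted; a confirmation action
    (e.g. releasing the finger) would then select it.
    """
    if not candidates:
        return None
    index = int(control_value // step) % len(candidates)
    return candidates[index]

# Example: 8 ambiguous lamps, 15 degrees of wrist roll per step (illustrative values).
lamps = [f"lamp-{i}" for i in range(8)]
print(cycled_candidate(lamps, control_value=48.0, step=15.0))  # -> "lamp-3"
```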

    Mobile capture of remote points of interest using line of sight modelling

    Get PDF
    Recording points of interest using GPS whilst working in the field is an established technique in geographical fieldwork, where the user's current position is used as the spatial reference to be captured; this is known as geo-tagging. We outline the development and evaluation of a smartphone application called Zapp that enables geo-tagging of any distant point on the visible landscape. The ability of users to log or retrieve information relating to what they can see, rather than where they are standing, allows them to record observations of points in the broader landscape scene, or to access descriptions of landscape features from any viewpoint. The application uses the compass orientation and tilt of the phone to provide data for a line-of-sight algorithm that intersects with a Digital Surface Model stored on the mobile device. We describe the development process and design decisions for Zapp, present the results of a controlled study of the accuracy of the application, and report on the use of Zapp in a student field exercise. The studies indicate the feasibility of the approach, but also how the appropriate use of such techniques will be constrained by current levels of precision in mobile sensor technology. The broader implications for interactive querying of the distant landscape and for remote data logging are discussed.
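    The abstract does not give Zapp's line-of-sight algorithm; the sketch below shows one common way such an intersection can be computed, by marching a ray defined by the phone's position, compass bearing and tilt across a Digital Surface Model stored as a regular height grid. The grid layout, step size and coordinate handling are assumptions made for illustration only.

```python
import math

def line_of_sight_hit(dsm, origin_x, origin_y, origin_z,
                      bearing_deg, tilt_deg,
                      cell_size=10.0, step=5.0, max_dist=2000.0):
    """March a ray through a Digital Surface Model until it dips below the surface.

    dsm: 2D array of surface heights in metres, indexed as dsm[row][col] with the
         row index along y and the grid origin at (0, 0) in projected coordinates.
    origin_x, origin_y: observer position in the same projected coordinates (metres).
    origin_z: observer height (e.g. ground elevation plus eye height).
    bearing_deg: compass bearing of the phone (0 = grid north, clockwise).
    tilt_deg: elevation angle of the phone (negative = pointing downhill).
    Returns (x, y) of the first sample where the ray falls below the surface,
    or None if nothing is hit within max_dist or the ray leaves the DSM extent.
    """
    az = math.radians(bearing_deg)
    el = math.radians(tilt_deg)
    dx = math.sin(az) * math.cos(el)
    dy = math.cos(az) * math.cos(el)
    dz = math.sin(el)

    dist = step
    while dist <= max_dist:
        x, y, z = origin_x + dx * dist, origin_y + dy * dist, origin_z + dz * dist
        col, row = math.floor(x / cell_size), math.floor(y / cell_size)
        if not (0 <= row < len(dsm) and 0 <= col < len(dsm[0])):
            return None  # ray left the DSM extent
        if z <= dsm[row][col]:
            return (x, y)  # ray has dropped below the surface: hit
        dist += step
    return None
```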

    Mobile Visibility Querying for LBS

    Full text link

    Extending Input Range through Clutching: Analysis, Design, Evaluation and Case Study

    Get PDF
    Master's thesis (Master of Science)

    Multimodal Content Delivery for Geo-services

    Get PDF
    This thesis describes a body of work carried out over several research projects in the area of multimodal interaction for location-based services. Research in this area has progressed from using simulated mobile environments to demonstrate the visual modality, to the ubiquitous delivery of rich media using multimodal interfaces (geo-services). To effectively deliver these services, research focused on innovative solutions to real-world problems in a number of disciplines including geo-location, mobile spatial interaction, location-based services, rich media interfaces and auditory user interfaces. My original contributions to knowledge are made in the areas of multimodal interaction, underpinned by advances in geo-location technology and supported by the proliferation of mobile device technology into modern life. Accurate positioning is a known problem for location-based services; contributions in the area of mobile positioning demonstrate a hybrid positioning technology for mobile devices that uses terrestrial beacons to trilaterate position. Information overload is an active concern for location-based applications that struggle to manage large amounts of data; contributions in the area of egocentric visibility, which filter data based on field-of-view, demonstrate novel forms of multimodal input. One of the more pertinent characteristics of these applications is the delivery or output modality employed (auditory, visual or tactile). Further contributions are made in the area of multimodal content delivery, where multiple modalities are used to deliver information using graphical user interfaces, tactile interfaces and, more notably, auditory user interfaces. It is demonstrated how a combination of these interfaces can be used to synergistically deliver context-sensitive rich media to users, in a responsive way, based on usage scenarios that consider the affordance of the device, the geographical position and bearing of the device, and also the location of the device.
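    The abstract mentions hybrid positioning that trilaterates from terrestrial beacons but gives no formulas; the sketch below shows a standard planar trilateration from three beacons with known positions and range estimates, which is one plausible reading of that step. The beacon coordinates and ranges are purely illustrative, and real range measurements would need a least-squares fit to absorb noise.

```python
def trilaterate(b1, b2, b3):
    """Planar trilateration from three beacons.

    Each beacon is (x, y, r): a known position and a measured range, in metres.
    Returns the (x, y) position satisfying all three range circles (assuming
    consistent, noise-free ranges), or None if the beacons are collinear.
    """
    (x1, y1, r1), (x2, y2, r2), (x3, y3, r3) = b1, b2, b3
    # Subtracting the circle equations pairwise yields two linear equations A.p = c.
    a11, a12 = 2 * (x2 - x1), 2 * (y2 - y1)
    a21, a22 = 2 * (x3 - x1), 2 * (y3 - y1)
    c1 = r1**2 - r2**2 + x2**2 - x1**2 + y2**2 - y1**2
    c2 = r1**2 - r3**2 + x3**2 - x1**2 + y3**2 - y1**2
    det = a11 * a22 - a12 * a21
    if abs(det) < 1e-9:
        return None  # beacons are (nearly) collinear: no unique fix
    x = (c1 * a22 - c2 * a12) / det
    y = (a11 * c2 - a21 * c1) / det
    return (x, y)

# Illustrative beacons placed around a true position of roughly (30, 40).
print(trilaterate((0, 0, 50.0), (100, 0, 80.6), (0, 100, 67.1)))
```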

    Facilitating Programming of Vision-Equipped Robots through Robotic Skills and Projection Mapping

    Get PDF