
    Ambient Gestures

    We present Ambient Gestures, a novel gesture-based system designed to support ubiquitous ‘in the environment’ interactions with everyday computing technology. Hand gestures and audio feedback allow users to control computer applications without reliance on a graphical user interface, and without having to switch from the context of a non-computer task to the context of the computer. The Ambient Gestures system is composed of a vision recognition software application, a set of gestures to be processed by a scripting application, and a navigation and selection application that is controlled by the gestures. This system allows us to explore gestures as the primary means of interaction within a multimodal, multimedia environment. In this paper we describe the Ambient Gestures system, define the gestures and the interactions that can be achieved in this environment, and present a formative study of the system. We conclude with a discussion of our findings and future applications of Ambient Gestures in ubiquitous computing.
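
    As a rough illustration of the pipeline the abstract outlines (a vision recognizer emits gesture events, a scripting layer maps them to navigation and selection commands, and audio feedback confirms each action), a minimal Python sketch follows. The binding table, event names, and play_audio_cue function are illustrative assumptions, not the authors' implementation.

        # Minimal sketch of a gesture-event dispatcher in the spirit of the
        # abstract. All names here are assumed for illustration only.

        GESTURE_BINDINGS = {
            "swipe_left":  "navigate_previous",
            "swipe_right": "navigate_next",
            "hold":        "select_item",
        }

        def play_audio_cue(command):
            # Stand-in for non-visual feedback; a real system might play a tone.
            print(f"[audio] {command}")

        def dispatch(gesture_event):
            """Map a recognized gesture to an application command, if one is bound."""
            command = GESTURE_BINDINGS.get(gesture_event)
            if command is None:
                return None          # unrecognized gestures are ignored
            play_audio_cue(command)  # confirm the action without a GUI
            return command

        # Example: a stream of events from a (hypothetical) vision recognizer.
        for event in ["swipe_right", "wave", "hold"]:
            dispatch(event)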

    User-centered design of a dynamic-autonomy remote interaction concept for manipulation-capable robots to assist elderly people in the home

    In this article, we describe the development of a human-robot interaction concept for service robots to assist elderly people in the home with physical tasks. Our approach is based on the insight that robots are not yet able to handle all tasks autonomously with sufficient reliability in the complex and heterogeneous environments of private homes. We therefore employ remote human operators to assist with tasks a robot cannot handle completely autonomously. Our development methodology was user-centered and iterative, with six user studies carried out at various stages involving a total of 241 participants. The concept is under implementation on the Care-O-bot 3 robotic platform. The main contributions of this article are (1) the results of a survey in the form of a ranking of the demands of elderly people and informal caregivers for a range of 25 robot services, (2) the results of an ethnography investigating the suitability of emergency teleassistance and telemedical centers for incorporating robotic teleassistance, and (3) a user-validated human-robot interaction concept with three user roles and three corresponding user interfaces, designed as a solution to the problem of engineering reliable service robots for home environments.
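
    The dynamic-autonomy idea (the robot acts on its own when it can, and hands a task off to a remote operator when it cannot) can be sketched as follows. This is a minimal Python illustration under assumed names and a made-up confidence threshold; it is not drawn from the Care-O-bot 3 implementation.

        # Hedged sketch of a dynamic-autonomy hand-off. The threshold, function
        # names, and confidence model are illustrative assumptions only.

        AUTONOMY_CONFIDENCE_THRESHOLD = 0.8  # assumed cut-off for autonomous execution

        def execute_task(task, perception_confidence, operator_available=True):
            """Return which control mode handled the task."""
            if perception_confidence >= AUTONOMY_CONFIDENCE_THRESHOLD:
                return f"autonomous: robot performs '{task}'"
            if operator_available:
                # The remote operator sees sensor data and completes or corrects the task.
                return f"teleassistance: operator guides '{task}' remotely"
            return f"deferred: '{task}' postponed until an operator is available"

        print(execute_task("fetch cup", 0.92))
        print(execute_task("open cluttered drawer", 0.55))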

    Assessing the effectiveness of direct gesture interaction for a safety critical maritime application

    Multi-touch interaction, in particular multi-touch gesture interaction, is widely believed to provide a more natural interaction style. We investigated the utility of multi-touch interaction in the safety-critical domain of maritime dynamic positioning (DP) vessels. We conducted initial paper prototyping with domain experts to gain an insight into natural gestures; we then conducted observational studies aboard a DP vessel during operational duties and two rounds of formal prototype evaluation, the second on a motion-platform ship simulator. Despite following a careful user-centred design process, the final results show that traditional touch-screen button and menu interaction was quicker and less error-prone than gestures. Furthermore, the moving environment accentuated this difference, and we observed initial-use problems and handedness asymmetries with some multi-touch gestures. On the positive side, our results showed that users were able to suspend gestural interaction more naturally, thus improving situational awareness.

    Viewpoint manipulations for 3D visualizations of smart buildings

    Abstract. This thesis covers the design and implementation of a new single-input viewpoint manipulation technique aimed at a specific use case. The design of the technique is based on previous literature. The objective of the research is to assess whether a single-input viewpoint manipulation technique can be as efficient as a multi-input viewpoint manipulation technique when used for observing a three-dimensional (3D) model of a smart building. After reviewing the existing literature on the basics of viewpoint manipulation, it was decided to design a single-input viewpoint manipulation technique that can be used on a wide range of hardware, including touch screen devices not capable of multi-touch input and personal computers with a regular mouse. A 3D visualization of a nursing home was implemented to be viewed with the new technique. The nursing home in question is a smart house with sensors deployed in it, and the sensor data is visualized in the 3D model. Aside from the new single-touch technique, a commonly used multi-touch technique was also implemented in order to compare the single-touch technique against it. Participants were recruited and user tests were conducted to find issues with the system. The results indicate clear aspects of the new technique that can be improved in future research.

    Tiivistelmä (translated from Finnish). This work describes the design and implementation process of a new single-touch input technique for viewpoint manipulation. The design is based on earlier research. The purpose of the study is to assess whether a single-touch technique is as efficient as a multi-touch technique when the target is a three-dimensional (3D) model of a smart building. Based on earlier research, a single-touch technique was chosen because it would support a wider range of hardware than multi-touch techniques. A 3D model of a smart building, representing a nursing home, was developed and used to examine the technique. The nursing home in question is a sensor-equipped smart building; the 3D model was used to visualize the sensor data. In the experiments, a conventional multi-touch technique was used as a point of comparison for the developed technique. The comparison was carried out with recruited participants and user tests. The results revealed characteristics that should be improved in future versions of the system.
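
    A single-input orbit-style camera is one common way to realize the kind of technique the thesis describes: one pointer drag (mouse or single touch) rotates the viewpoint around the building model. The Python sketch below is a generic illustration of that idea; the sensitivity constants and class layout are assumptions, not the thesis implementation.

        # Minimal sketch of single-input viewpoint manipulation: one drag
        # orbits the camera around the model, so no multi-touch is required.

        import math

        class OrbitCamera:
            def __init__(self, distance=10.0):
                self.yaw = 0.0      # rotation around the vertical axis (radians)
                self.pitch = 0.3    # elevation angle (radians)
                self.distance = distance

            def drag(self, dx, dy, sensitivity=0.01):
                """Single-pointer drag: horizontal motion orbits, vertical motion tilts."""
                self.yaw += dx * sensitivity
                self.pitch = max(-1.5, min(1.5, self.pitch + dy * sensitivity))

            def position(self):
                """Camera position on a sphere around the building model's origin."""
                x = self.distance * math.cos(self.pitch) * math.sin(self.yaw)
                y = self.distance * math.sin(self.pitch)
                z = self.distance * math.cos(self.pitch) * math.cos(self.yaw)
                return (x, y, z)

        cam = OrbitCamera()
        cam.drag(dx=120, dy=-40)   # e.g. one mouse drag or one single-touch drag
        print(cam.position())

    The appeal of such a scheme for this use case is that the same one-pointer drag behaves identically with a regular mouse and on a touch screen without multi-touch support.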

    Animated virtual agents to cue user attention: comparison of static and dynamic deictic cues on gaze and touch responses

    This paper describes an experiment developed to study the performance of animated virtual agent cues within digital interfaces. Increasingly, agents are used in virtual environments as part of the branding process and to guide user interaction. However, the level of agent detail required to establish and enhance efficient allocation of attention remains unclear. Although complex agent motion is now possible, it is costly to implement and so should only be routinely implemented if a clear benefit can be shown. Previous methods of assessing the effect of gaze-cueing as a solution to scene complexity have relied principally on two-dimensional static scenes and manual peripheral inputs. Two experiments were run to address the question of agent cues on human-computer interfaces. Both experiments measured the efficiency of agent cues, analyzing participant responses by gaze and by touch respectively. In the first experiment, an eye-movement recorder was used to directly assess the immediate overt allocation of attention by capturing the participant’s eye fixations following presentation of a cueing stimulus. We found that a fully animated agent could speed up user interaction with the interface. When user attention was directed using a fully animated agent cue, users responded 35% faster than with stepped 2-image agent cues, and 42% faster than with a static 1-image cue. The second experiment recorded participant responses on a touch screen using the same agent cues. Analysis of the touch inputs confirmed the results of the gaze experiment: the fully animated agent again produced the shortest response times, although the differences between conditions were smaller. Responses to the fully animated agent were 17% and 20% faster than to the 2-image and 1-image cues, respectively. These results inform techniques aimed at engaging users’ attention in complex scenes such as computer games and digital transactions within public or social interaction contexts by demonstrating the benefits of dynamic gaze and head cueing directly on users’ eye movements and touch responses.
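
    For clarity on how the reported speedups can be read, the short Python example below computes "percent faster" as the relative reduction in mean response time against a comparison cue. The millisecond values are invented purely to show the arithmetic and are not data from the study.

        def percent_faster(baseline_ms, condition_ms):
            """Relative reduction in mean response time versus a baseline cue."""
            return (baseline_ms - condition_ms) / baseline_ms * 100

        animated = 650.0      # hypothetical mean response time, fully animated cue
        two_image = 1000.0    # hypothetical mean response time, stepped 2-image cue
        print(f"{percent_faster(two_image, animated):.0f}% faster")  # -> 35% faster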

    Tacsel: Shape-Changing Tactile Screen applied for Eyes-Free Interaction in Cockpit

    Touch screens have become widely used in recent years. They are now integrated into numerous everyday electronic devices, since they allow the user to interact directly with what is displayed on the screen. However, these technologies cannot be used in complex systems in which visual attention is very limited (cockpit manipulation, driving tasks, etc.). This paper introduces the concept of the Tacsel, the smallest dynamic element of a tactile screen. Tacsels add shape-changing and flexible properties to touch screen devices, providing eyes-free interaction. We developed a high-resolution prototype of the Tacsel to demonstrate its technical feasibility and its potential within a cockpit context. Three interaction scenarios are described, and a workshop with brainstorming and video-prototyping was conducted to evaluate the use of the proposed Tacsel in several cockpit tasks. Results showed that interactive Tacsels have real potential for future cockpits. Several other possible applications are also described, and several advantages and limitations are discussed.
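
    To make the Tacsel concept concrete, the Python sketch below models a touch screen as a grid of small actuated elements whose height can be raised so that a control can be located by touch alone. The grid size, height values, and method names are assumptions for illustration only; they do not describe the authors' prototype.

        # Illustrative model of a grid of shape-changing tactile elements.

        class TacselGrid:
            def __init__(self, rows, cols):
                # 0.0 = flat, 1.0 = fully raised
                self.heights = [[0.0] * cols for _ in range(rows)]

            def raise_control(self, row, col, height=1.0):
                """Raise one element so a control can be found eyes-free."""
                self.heights[row][col] = height

            def flatten(self):
                """Return the surface to a flat touch screen."""
                for line in self.heights:
                    for c in range(len(line)):
                        line[c] = 0.0

        grid = TacselGrid(rows=4, cols=6)
        grid.raise_control(1, 2)        # e.g. mark a primary control by shape
        grid.raise_control(3, 5, 0.5)   # a half-raised element for a secondary control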