    Variability in wrist-tilt accelerometer based gesture interfaces

    In this paper we describe a study that examines human performance in a tilt-control targeting task on a PDA. A three-degree-of-freedom accelerometer attached to the base of the PDA allows users to navigate to targets by tilting their wrist in different directions. Post hoc analysis of the performance data was used to classify the ease of targeting and the variability of movement in the different directions. The results show increased variability for motions upwards from the centre compared with downwards motions, and greater variability in the x-axis component of the motion than in the y-axis component. This information can guide designers as to the ease of various relative motions, and can be used to reshape the dynamics of the interaction so that each direction is equally easy to achieve.
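
    As a rough illustration of the sensing involved, the sketch below derives static tilt angles from a 3-axis accelerometer and summarises per-direction endpoint variability. The function names and example data are illustrative assumptions, not the paper's code.

```python
# Sketch: estimating wrist tilt and per-direction endpoint variability
# from 3-axis accelerometer samples. Names and data are illustrative.
import math
from statistics import stdev

def tilt_angles(ax, ay, az):
    """Static pitch/roll (radians) from gravity on a 3-axis accelerometer."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

def endpoint_variability(trials):
    """Spread of final cursor positions for repeated trials toward one
    target direction, as in the paper's post hoc analysis.
    trials: list of (x, y) final positions."""
    xs, ys = zip(*trials)
    return stdev(xs), stdev(ys)  # compare x- vs y-axis spread

# Hypothetical endpoints for repeated 'upwards' trials
up_trials = [(0.02, 0.98), (-0.05, 1.01), (0.07, 0.95)]
print(endpoint_variability(up_trials))
```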

    Interaction With Tilting Gestures In Ubiquitous Environments

    In this paper, we introduce a tilting interface that controls direction-based applications in ubiquitous environments. A tilt interface is useful in situations that require remote and quick interactions or that take place in public spaces. We explored the proposed tilting interface with different application types and classified the tilting interaction techniques. Augmenting objects with sensors can potentially address the lack of intuitive and natural input devices in ubiquitous environments. We conducted an experiment to test the usability of the proposed tilting interface against conventional input devices and hand gestures. The results showed that tilt gestures outperformed hand gestures in terms of speed, accuracy, and user satisfaction.
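
    A minimal sketch of how tilt angles might be quantised into the discrete directional commands such an interface needs; the dead-zone threshold and naming are assumptions, not taken from the paper.

```python
# Sketch: mapping tilt angles to discrete directional commands for a
# remote "point by tilting" interface. Threshold is an assumption.
def tilt_to_command(pitch_deg, roll_deg, dead_zone=10.0):
    """Return 'up'/'down'/'left'/'right', or None inside the dead zone."""
    if max(abs(pitch_deg), abs(roll_deg)) < dead_zone:
        return None  # device held roughly flat: no command
    if abs(pitch_deg) >= abs(roll_deg):
        return "up" if pitch_deg > 0 else "down"
    return "right" if roll_deg > 0 else "left"

assert tilt_to_command(25.0, 5.0) == "up"
assert tilt_to_command(3.0, -2.0) is None
```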

    Multimodal, Embodied and Location-Aware Interaction

    This work demonstrates the development of mobile, location-aware, eyes-free applications which utilise multiple sensors to provide a continuous, rich and embodied interaction. We bring together ideas from the fields of gesture recognition, continuous multimodal interaction, probability theory and audio interfaces to design and develop location-aware applications and embodied interaction in both a small-scale, egocentric body-based case and a large-scale, exocentric 'world-based' case. BodySpace is a gesture-based application, which utilises multiple sensors and pattern recognition enabling the human body to be used as the interface for an application. As an example, we describe the development of a gesture-controlled music player, which functions by placing the device at different parts of the body. We describe a new approach to the segmentation and recognition of gestures for this kind of application and show how simulated physical model-based interaction techniques and the use of real-world constraints can shape the gestural interaction. GpsTunes is a mobile, multimodal navigation system equipped with inertial control that enables users to actively explore and navigate through an area in an augmented physical space, incorporating and displaying uncertainty resulting from inaccurate sensing and unknown user intention. The system propagates uncertainty appropriately via Monte Carlo sampling and output is displayed both visually and in audio, with audio rendered via granular synthesis. We demonstrate the use of uncertain prediction in the real world and show that appropriate display of the full distribution of potential future user positions with respect to sites-of-interest can improve the quality of interaction over a simplistic interpretation of the sensed data. We show that this system enables eyes-free navigation around set trajectories or paths unfamiliar to the user for varying trajectory width and context. We demonstrate that it is possible to create a simulated model of user behaviour, which may be used to gain insight into the user behaviour observed in our field trials. The extension of this application to provide a general mechanism for highly interactive context-aware applications via density exploration is also presented. AirMessages is an example application enabling users to take an embodied approach to scanning a local area to find messages left in their virtual environment.
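
    The Monte Carlo propagation of positional uncertainty described above can be sketched as follows; the motion model, noise levels, and site-of-interest test are simplifying assumptions, not the GpsTunes implementation.

```python
# Sketch: Monte Carlo propagation of uncertain future user positions.
# Noise levels and the site-of-interest test are illustrative.
import math
import random

def propagate(pos, heading, speed, steps=20, n=500,
              heading_sd=0.2, speed_sd=0.3):
    """Sample n possible trajectories; return the final positions."""
    finals = []
    for _ in range(n):
        x, y = pos
        h = heading
        for _ in range(steps):
            h += random.gauss(0.0, heading_sd)            # heading drift
            s = max(0.0, speed + random.gauss(0.0, speed_sd))
            x += s * math.cos(h)
            y += s * math.sin(h)
        finals.append((x, y))
    return finals

def prob_reaching(finals, site, radius):
    """Fraction of sampled futures ending within `radius` of a site."""
    hits = sum(1 for x, y in finals
               if math.hypot(x - site[0], y - site[1]) <= radius)
    return hits / len(finals)

samples = propagate(pos=(0.0, 0.0), heading=0.0, speed=1.4)
print(prob_reaching(samples, site=(25.0, 3.0), radius=8.0))
```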

    Mobile phone interaction techniques for rural economy development - a review

    Rural communities, especially in developing countries, are often neglected in terms of the facilities and services that aid their social and economic development. This is evident even in software development processes, in that these groups of users or potential users are often not taken into consideration. The resultant effect is that they may not use the software, or may use it only sparingly. The objective of this study is to identify existing research on interaction techniques and user interface design as a first step towards designing suitable mobile interactions and user interfaces for rural users. The research project is also aimed at socio-economic development and at adding value for mobile phone users in Dwesa, a rural community in South Africa. This paper presents a literature survey of interaction techniques and user interfaces. An analysis of the interaction techniques with respect to their suitability, the availability of technologies, and user capabilities for implementation in a rural context is discussed. Descriptive statistics on the interaction facilities of users' current phones in the rural community, briefly illustrating users' experiences and capabilities in different interaction modes, are also presented.
    KEY WORDS: Interaction Techniques, Mobile Phone, User Interface, ICT, Rural Development

    An Arm-Mounted Accelerometer and Gyro-Based 3D Control System

    This thesis examines the performance of a wearable accelerometer/gyroscope-based system for capturing arm motions in 3D. Two experiments conforming to ISO 9241-9 specifications for non-keyboard input devices were performed. The first, modeled after the Fitts' law paradigm described in ISO 9241-9, compared the wearable system against joystick control and the user's arm for controlling a telemanipulator; the throughputs were 5.54 bits/s, 0.74 bits/s, and 0.80 bits/s, respectively. The second experiment utilized the wearable system to control a cursor in a 3D fish-tank virtual reality setup. Participants performed a 3D Fitts' law task with three selection methods: button clicks, dwell, and a twist gesture. Error rates were 6.82%, 0.00%, and 3.59%, respectively, and throughput ranged from 0.8 to 1.0 bits/s. The thesis includes detailed analyses of lag and other issues that present user interface challenges for systems that employ human-mounted sensor inputs to control a telemanipulator apparatus.
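
    For reference, ISO 9241-9-style throughput is computed from the Shannon index of difficulty; a minimal sketch follows, with the common effective-width correction included for completeness.

```python
# Sketch: ISO 9241-9-style throughput from movement time and task geometry,
# using the Shannon formulation of the index of difficulty.
import math

def index_of_difficulty(distance, width):
    """ID in bits (Shannon formulation)."""
    return math.log2(distance / width + 1.0)

def throughput(distance, width, movement_time_s):
    """Throughput in bits/s for one condition."""
    return index_of_difficulty(distance, width) / movement_time_s

def effective_width(endpoint_sd):
    """Effective width from the spread of selection endpoints (We = 4.133 * SDx)."""
    return 4.133 * endpoint_sd

# Example: a 20 cm movement to a 2 cm target completed in 1.25 s
print(throughput(20.0, 2.0, 1.25))  # ~2.77 bits/s
```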

    Sensor-based user interface concepts for continuous, around-device and gestural interaction on mobile devices

    A generally observable trend of the past 10 years is that the number of sensors embedded in mobile devices such as smartphones and tablets is rising steadily. Arguably, the available sensors are mostly underutilized by existing mobile user interfaces. In this dissertation, we explore sensor-based user interface concepts for mobile devices with the goal of making better use of the available sensing capabilities of mobile devices, as well as gaining insights into the types of sensor technologies that could be added to future mobile devices. We are particularly interested in how novel sensor technologies could be used to implement novel and engaging mobile user interface concepts. We explore three areas of interest for research into sensor-based user interface concepts for mobile devices: continuous interaction, around-device interaction, and motion gestures.
For continuous interaction, we explore the use of dynamic state-space systems to implement user interfaces based on a constant sensor data stream. In particular, we examine zoom automation in tilt-based map scrolling interfaces. We show that although fully automatic zooming is desirable in certain situations, adding a manual override of the zoom level (Semi-Automatic Zooming) increases the usability of such a system, as shown by a decrease in task completion times and improved user ratings in a user study. The presented work on continuous interaction also highlights how the sensors embedded in current mobile devices can be used to support complex interaction tasks.
We go on to introduce the concept of Around-Device Interaction (ADI). By extending the interactive area of the mobile device to its entire surface and the physical volume surrounding it, we aim to show how the expressivity and possibilities of mobile input can be improved. We derive a design space for ADI and evaluate three prototypes in this context. HoverFlow is a prototype allowing coarse hand gesture recognition around a mobile device using only a simple set of sensors. PalmSpace is a prototype exploring the use of depth cameras on mobile devices to track the user's hands in direct manipulation interfaces through spatial gestures. Lastly, the iPhone Sandwich is a prototype supporting dual-sided pressure-sensitive multi-touch interaction. Through the results of user studies, we show that ADI can lead to improved usability for mobile user interfaces. Furthermore, the work on ADI contributes suggestions for the types of sensors that could be incorporated into future mobile devices to expand their input capabilities.
In order to broaden the scope of uses for mobile accelerometer and gyroscope data, we conducted research on motion gesture recognition. With the aim of supporting practitioners and researchers in integrating motion gestures into their user interfaces at early development stages, we developed two motion gesture recognition algorithms, the $3 Gesture Recognizer and Protractor 3D, which are easy to incorporate into existing projects, have good recognition rates, and require little training data. To exemplify an application area for motion gestures, we present the results of a study on the feasibility and usability of gesture-based authentication. With the goal of making it easier to connect meaningful functionality with gesture-based input, we developed Mayhem, a graphical end-user programming tool for users without prior programming skills. Mayhem can be used for rapid prototyping of mobile gestural user interfaces.
The main contribution of this dissertation is the development of a number of novel user interface concepts for sensor-based interaction. They will help developers of mobile user interfaces make better use of the existing sensory capabilities of mobile devices. Furthermore, manufacturers of mobile device hardware obtain suggestions for the types of novel sensor technologies that are needed to expand the input capabilities of mobile devices, allowing the implementation of future mobile user interfaces with increased input capabilities, more expressiveness, and improved usability.
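
    To give a flavour of the recogniser family named above ($3 Gesture Recognizer, Protractor 3D), here is a heavily simplified template matcher for 3D motion traces: resample, normalise, and pick the nearest template. It omits the rotation search of the published algorithms and is a sketch, not their implementation.

```python
# Sketch: a minimal $3-style template matcher for 3D motion gestures.
# Simplified: no rotation search, plain nearest-template matching.
import math

N = 32  # resampled trace length

def resample(points, n=N):
    """Resample a 3D trace to n equidistant points along its path."""
    pts = list(points)
    total = sum(math.dist(pts[i], pts[i + 1]) for i in range(len(pts) - 1))
    if total == 0.0:
        return [pts[0]] * n
    step, acc, out = total / (n - 1), 0.0, [pts[0]]
    i = 1
    while i < len(pts):
        d = math.dist(pts[i - 1], pts[i])
        if d > 0 and acc + d >= step:
            t = (step - acc) / d
            q = tuple(a + t * (b - a) for a, b in zip(pts[i - 1], pts[i]))
            out.append(q)
            pts.insert(i, q)  # continue from the interpolated point
            acc = 0.0
        else:
            acc += d
        i += 1
    while len(out) < n:       # guard against floating-point shortfall
        out.append(pts[-1])
    return out[:n]

def normalise(points):
    """Translate centroid to origin and scale to a unit bounding box."""
    cx = sum(p[0] for p in points) / len(points)
    cy = sum(p[1] for p in points) / len(points)
    cz = sum(p[2] for p in points) / len(points)
    shifted = [(x - cx, y - cy, z - cz) for x, y, z in points]
    size = max(max(abs(c) for c in p) for p in shifted) or 1.0
    return [(x / size, y / size, z / size) for x, y, z in shifted]

def distance(a, b):
    """Average point-to-point distance between two equal-length traces."""
    return sum(math.dist(p, q) for p, q in zip(a, b)) / len(a)

def recognise(trace, templates):
    """templates: dict name -> raw 3D trace; returns best-matching name."""
    probe = normalise(resample(trace))
    return min(templates,
               key=lambda k: distance(probe, normalise(resample(templates[k]))))
```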

    Exploring the Potential of Wrist-Worn Gesture Sensing

    This thesis explores the potential of wrist-worn gesture sensing. There has been a large amount of prior work on gesture recognition using different kinds of sensors. However, the gesture sets tested across different projects all differ, making them hard to compare, and there has not been enough work on understanding what types of gestures are suitable for wrist-worn devices. Our work addresses these two problems and makes two main contributions: the specification of a larger gesture set, generated by combining previous work and verified through an elicitation study; and an evaluation of the potential of gesture sensing with wrist-worn sensors. We developed a gesture recognition system, WristRec, a low-cost wrist-worn device that uses bend sensors for gesture recognition. The design of WristRec aims to measure tendon movement at the wrist while people perform gestures. We conducted a four-part study to verify the validity of the approach and the range of gestures that can be detected using a wrist-worn system. In the initial stage, we verified the feasibility of WristRec using the Dynamic Time Warping (DTW) algorithm to classify a group of 5 gestures, the gesture set of the MYO armband. Next, we conducted an elicitation study to understand the trade-offs between hand, wrist, and arm gestures. The study helped us understand the types of gestures a wrist-worn system should be able to recognize; it also served as the basis for our gesture set and for our evaluation of the gesture sets used in previous research. To evaluate the overall potential of wrist-worn recognition, we explored the design of recognition hardware by contrasting an inertial measurement unit (IMU)-only recognizer (the Serendipity system of Wen et al.) with our system, assessing accuracies on a consensus gesture set and on a 27-gesture referent set, both extracted from the results of our elicitation study. Finally, we discuss the implications of our work both for the comparative evaluation of systems and for the design of enhanced hardware sensing.
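
    The DTW-based template classification the thesis describes can be sketched as a plain dynamic program; the frame features and the nearest-template rule here are illustrative assumptions.

```python
# Sketch: Dynamic Time Warping distance between two gesture signals,
# with nearest-template classification. O(len(a) * len(b)) DP.
import math

def dtw(a, b):
    """a, b: sequences of equal-length feature tuples (e.g. bend-sensor frames)."""
    INF = float("inf")
    n, m = len(a), len(b)
    D = [[INF] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = math.dist(a[i - 1], b[j - 1])   # per-frame distance
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify(trace, templates):
    """templates: dict label -> reference trace; nearest label under DTW."""
    return min(templates, key=lambda label: dtw(trace, templates[label]))
```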

    Robust and Deployable Gesture Recognition for Smartwatches

    Gesture recognition on smartwatches is challenging not only due to resource constraints but also due to the dynamically changing conditions of users. How to engineer gesture recognisers that are robust and yet deployable on smartwatches is currently an open problem. Recent research has found that common everyday events, such as a user removing and re-wearing their smartwatch, can deteriorate recognition accuracy significantly. In this paper, we suggest that prior understanding of the causes behind everyday variability and false positives should be exploited in the development of recognisers. To this end, we first present a data collection method that aims at diversifying gesture data in a representative way: users are taken through experimental conditions that resemble known causes of variability (e.g., walking while gesturing) and are asked to produce deliberately varied, but realistic, gestures. Secondly, we review known machine learning approaches for recogniser design on constrained hardware. We propose convolution-based network variations for classifying raw sensor data, reliably achieving greater than 98% accuracy under both individual and situational variations where previous approaches have reported significant performance deterioration. This performance is achieved with a model that is two orders of magnitude less complex than previous state-of-the-art models. Our work suggests that deployable and robust recognition is feasible but requires systematic efforts in data collection and network design to address known causes of gesture variability.
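
    A minimal sketch of a convolution-based classifier over raw IMU windows, in the spirit of (but not identical to) the networks the paper proposes; the layer sizes, window length, and class count are assumptions.

```python
# Sketch: a small 1D-convolutional classifier for raw accel+gyro windows.
# Layer sizes, window length, and class count are illustrative assumptions.
import torch
import torch.nn as nn

class GestureNet(nn.Module):
    def __init__(self, channels=6, n_classes=10):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(channels, 32, kernel_size=7, padding=3),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # global pooling keeps the model tiny
        )
        self.head = nn.Linear(64, n_classes)

    def forward(self, x):             # x: (batch, channels, window)
        return self.head(self.features(x).squeeze(-1))

# One forward pass on a dummy batch of 128-sample IMU windows
model = GestureNet()
logits = model(torch.randn(4, 6, 128))  # -> shape (4, 10)
```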

    The Speckled Cellist: Classification of Cello Bowing Styles using the Orient Specks

    Cello bowing techniques are classified by applying supervised machine learning methods to sensor data from two inertial sensors called the Orient specks – one worn on the playing wrist and the other attached to the frog of the bow. Twelve different bowing techniques were considered, including variants on a single string and across multiple strings. Results are presented for the classification of these twelve techniques when played singly and in combination during improvisational play. The results demonstrate that even when limited to two sensors, classification accuracy in excess of 95% was obtained for the individual bowing styles, with the added advantage of a minimalist approach.
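
    A sketch of the kind of supervised pipeline the abstract describes: windowed inertial features fed to an off-the-shelf classifier. The feature set and the random-forest choice are assumptions, not the paper's method.

```python
# Sketch: supervised classification of bowing styles from windowed
# inertial features. Feature choice and classifier are assumptions.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

def window_features(window):
    """window: (samples, axes) array from the two sensors combined.
    Simple per-axis statistics: mean, spread, and mean absolute jerk."""
    return np.concatenate([window.mean(axis=0),
                           window.std(axis=0),
                           np.abs(np.diff(window, axis=0)).mean(axis=0)])

def train(X_windows, y):
    """X_windows: list of (samples, axes) arrays; y: bowing-style labels."""
    X = np.stack([window_features(w) for w in X_windows])
    clf = RandomForestClassifier(n_estimators=200, random_state=0)
    return clf.fit(X, y)
```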