
    Selection of touch gestures for children’s applications: repeated experiment to increase reliability

    This paper discusses the selection of touch gestures for children's applications. The research investigates the gestures that children aged between 2 and 4 years can manage on the iPad. Two experiments were conducted: the first was carried out in the United Kingdom and the second in Malaysia. The two similar experiments were carried out to increase reliability and refine the results. The study shows that children aged 4 years have no problem using the 7 common gestures found in iPad applications. Some children aged 3 years have problems with two of the gestures. A high percentage of children aged 2 years struggled with the free rotate, drag & drop, pinch and spread gestures. The paper also discusses additional criteria for the use of gestures, interface design components, and related research on children using the iPad and its applications.

    Building smart cameras on mobile tablets for hand gesture recognition

    Mobile tablets have become very popular due to their portability and the wide diversity of applications available. Touch screens and built-in accelerometers facilitate forms of input other than keyboards and mice. Nowadays, high-resolution cameras have become a standard feature of mobile devices. Nevertheless, the camera is seldom considered as a form of user input to applications, although similar technology is realized in some home entertainment systems. This paper describes our experience of building a smart camera on an iPad that can recognize pre-defined hand gestures. We study the time performance of image processing on an iPad. We find that, due to the limited computational power of the mobile device, recognition results may not be available fast enough for a real-time application. We explore applying cloud computing to solve this problem. To the best of our knowledge, this is the first study on recognizing hand gestures on an iPad. Our results facilitate the development of a brand new type of application that requires smart cameras. © 2012 ACM.
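
    The abstract does not include the recognition pipeline itself, but a minimal sketch of the kind of on-device processing described (segmenting a hand and counting extended fingers from convexity defects) could look like the following. The OpenCV-based approach, colour thresholds and function names here are illustrative assumptions, not the authors' implementation; the paper's point is that, when such processing is too slow on the tablet, the frame can instead be sent to a cloud service for recognition.

```python
import cv2
import numpy as np

def count_extended_fingers(frame_bgr):
    """Rough finger count for one frame: naive skin segmentation,
    largest contour, then convexity defects between fingers.
    All thresholds are illustrative guesses, not tuned values."""
    hsv = cv2.cvtColor(frame_bgr, cv2.COLOR_BGR2HSV)
    # Very naive skin mask; a real system needs calibration or a learned model.
    mask = cv2.inRange(hsv, (0, 30, 60), (20, 150, 255))
    mask = cv2.morphologyEx(mask, cv2.MORPH_OPEN, np.ones((5, 5), np.uint8))

    contours, _ = cv2.findContours(mask, cv2.RETR_EXTERNAL, cv2.CHAIN_APPROX_SIMPLE)
    if not contours:
        return 0
    hand = max(contours, key=cv2.contourArea)

    hull = cv2.convexHull(hand, returnPoints=False)
    defects = cv2.convexityDefects(hand, hull)
    if defects is None:
        return 0

    # Each sufficiently deep defect roughly corresponds to a gap between two fingers.
    gaps = sum(1 for i in range(defects.shape[0]) if defects[i, 0, 3] / 256.0 > 20)
    return min(gaps + 1, 5)
```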

    Meaningful Hand Gestures for Learning with Touch-based I.C.T.

    The role of technology in educational contexts is becoming increasingly ubiquitous, with very few students and teachers able to engage in classroom learning activities without using some sort of Information Communication Technology (ICT). Touch-based computing devices in particular, such as tablets and smartphones, provide an intuitive interface where control and manipulation of content is possible using hand and finger gestures such as taps, swipes and pinches. Whilst these touch-based technologies are being increasingly adopted for classroom use, little is known about how the use of such gestures can support learning. The purpose of this study was to investigate how finger gestures used on a touch-based device could support learning.

    A Software Development Kit for Camera-Based Gesture Interaction

    Human-Computer Interaction is a rapidly expanding field, in which new implementations of ideas are consistently being released. In recent years, much of the concentration in this field has been on gesture-based control, either touch-based or camera-based. Even though camera-based gesture recognition was previously seen more in science fiction than in reality, this method of interaction is rising in popularity. There are a number of devices readily available to the average consumer that are designed to support this type of input, including the popular Microsoft Kinect and Leap Motion devices. Despite this rise in availability and popularity, development for these devices is currently an arduous task, unless only the simplest of gestures are required. The goal of this thesis is to develop a Software Development Kit (SDK) with which developers can more easily develop interfaces that utilize gesture-based control. If successful, this SDK could significantly reduce the amount of work (both in effort and in lines of code) necessary for a programmer to implement gesture control in an application. This, in turn, could help reduce the intellectual barrier which many face when attempting to implement a new interface. The developed SDK has three main goals. The SDK will place an emphasis on simplicity of code for developers using it; will allow for a variety of gestures, including gestures made by single or multiple trackable objects (e.g., hands and fingers), gestures performed in stages, and continuously-updating gestures; and will be device-agnostic, in that it will not be written exclusively for a single device. The thesis presents the results of a system validation study that suggests that all of these goals have been met.
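
    The thesis summary above states the SDK's goals rather than its API. As a rough illustration only, a device-agnostic, staged-gesture interface along those lines could be organised as in the Python sketch below; the class and method names are invented for illustration and do not come from the actual SDK, which targets devices such as the Kinect and Leap Motion.

```python
from abc import ABC, abstractmethod
from typing import Callable, Dict, List, Tuple

Point3D = Tuple[float, float, float]

class GestureSource(ABC):
    """Device-agnostic input: a Kinect or Leap Motion backend would
    implement this and report tracked object positions each frame."""
    @abstractmethod
    def poll(self) -> Dict[str, Point3D]:
        """Return current positions keyed by trackable id (e.g. 'hand_right')."""

class Gesture(ABC):
    """A gesture consumes frames and reports when it has completed;
    multi-stage gestures keep internal state across calls."""
    @abstractmethod
    def update(self, frame: Dict[str, Point3D]) -> bool:
        """Feed one frame; return True once the gesture is recognised."""

class SwipeRight(Gesture):
    def __init__(self, trackable: str = "hand_right", distance: float = 0.3):
        self.trackable, self.distance, self.start_x = trackable, distance, None

    def update(self, frame):
        pos = frame.get(self.trackable)
        if pos is None:            # lost tracking: reset the stage
            self.start_x = None
            return False
        if self.start_x is None:   # stage 1: remember where the hand entered
            self.start_x = pos[0]
        return pos[0] - self.start_x >= self.distance   # stage 2: moved far enough

class GestureManager:
    """Developers register gestures with callbacks; the manager drives them."""
    def __init__(self, source: GestureSource):
        self.source = source
        self.bindings: List[Tuple[Gesture, Callable[[], None]]] = []

    def on(self, gesture: Gesture, callback: Callable[[], None]) -> None:
        self.bindings.append((gesture, callback))

    def tick(self) -> None:
        frame = self.source.poll()
        for gesture, callback in self.bindings:
            if gesture.update(frame):
                callback()
```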

    Application of support vector machines to detect hand and wrist gestures using a myoelectric armband

    Farshid Amirabdollahian, Michael Walters, 'Application of support vector machines to detect hand and wrist gestures using a myoelectric armband', paper presented at the International Conference on Rehabilitation Robotics (ICORR 2017), London, UK, 17-21 July 2017.
    The purpose of this study was to assess the feasibility of using support vector machines to analyse myoelectric signals acquired with an off-the-shelf device, the Myo armband from Thalmic Labs. Background: With the technological advances in sensing human motion, and its potential to drive and control mechanical interfaces remotely or to serve as an input interface, a multitude of input mechanisms are used to link actions between the human and the robot. In this study we explored the feasibility of using the human arm's myoelectric signals with the aim of identifying a number of gestures automatically. Material and methods: Participants (n = 26) took part in a study aimed at assessing gesture detection accuracy using myoelectric signals. The Myo armband was worn on the forearm. Each session was divided into three phases: familiarisation, where participants learned how to use the armband; training, when participants reproduced a number of random gestures presented on screen to train our machine learning algorithm; and recognition, when gestures presented on screen were reproduced by participants and simultaneously recognised using the machine learning routines. Support vector machines were used to train a model using the participants' training values and to recognise gestures produced by the same participants. Different kernel functions and electrode combinations were studied. We also contrasted different lengths of training data against different lengths of recognition samples. Results: One participant did not complete the study due to technical errors during the session. The remaining participants (n = 25) completed the study, allowing us to calculate individual accuracy for grasp detection. The overall accuracy was 94.9% with data from 8 electrodes, and 72% when only four of the electrodes were used. The linear kernel outperformed the polynomial and radial basis function kernels. Exploring the number of training samples versus the achieved recognition accuracy, the results identified acceptable accuracies (> 90%) when training on around 3.5 s of data and recognising grasp episodes around 0.2 s long. The best-recognised grasp was the closed hand (97.6%), followed by the cylindrical grasp (96.8%), the lateral grasp (94%) and the tripod grasp (92%). Discussion: The recognition accuracy for the grasps performed is similar to our earlier work, where a mechatronic device was used to perform, record and recognise these grasps. This is an interesting observation, as our previous effort in aligning the kinematic and biological signals had not found statistically significant links between the two. However, when the outcome of both is used as a label for identification, in this case gesture, it appears that machine learning is able to identify both kinematic and electrophysiological events with similar accuracy. Future work: The current study considers the use of support vector machines for identifying human grasps based on myoelectric signals acquired from an off-the-shelf device. Due to the length of the experimental sessions, we were only able to gather 5 seconds of training data at a 50 Hz sampling frequency. This provided us with a limited amount of training data, so we were not able to test shorter training times (< 2.5 s). The device is capable of faster sampling, up to 200 Hz, and our future studies will benefit from this sampling rate and longer training sessions to explore whether we can identify gestures using a smaller amount of training data. These results allow us to progress to the next stage of work, where the Myo armband is used in the context of robot-mediated stroke rehabilitation.
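
    The abstract gives the main ingredients (8-channel EMG at 50 Hz, roughly 3.5 s of training data per gesture, roughly 0.2 s recognition windows, and a linear-kernel SVM) but not the feature pipeline. A minimal sketch of that kind of setup with scikit-learn could look like the following; the mean-absolute-value feature and the window-slicing details are assumptions for illustration, not the authors' exact method.

```python
import numpy as np
from sklearn.svm import SVC

FS = 50                 # Myo sampling rate used in the study (Hz)
WIN = int(0.2 * FS)     # ~0.2 s recognition windows, as reported

def mav_features(window):
    """Mean absolute value per channel for one window of shape (samples, 8).
    A common, simple EMG feature; the paper does not state its features."""
    return np.mean(np.abs(window), axis=0)

def make_windows(emg, labels, win=WIN):
    """Slice a continuous recording (samples, 8) and integer per-sample
    gesture labels into non-overlapping windows with a majority label."""
    X, y = [], []
    for start in range(0, len(emg) - win + 1, win):
        X.append(mav_features(emg[start:start + win]))
        y.append(np.bincount(labels[start:start + win]).argmax())
    return np.array(X), np.array(y)

# Hypothetical usage: emg_train/labels_train come from the training phase,
# emg_test/labels_test from the recognition phase of the same participant.
# clf = SVC(kernel="linear")   # the linear kernel performed best in the study
# X, y = make_windows(emg_train, labels_train)
# clf.fit(X, y)
# X_test, y_test = make_windows(emg_test, labels_test)
# accuracy = clf.score(X_test, y_test)
```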

    EMG-based eye gestures recognition for hands free interfacing

    This study investigates the use of an Electromyography (EMG) based device to recognize and classify five eye gestures, enabling hands-free interaction with different applications. The eye gestures proposed in this work include long blinks, rapid blinks, right winks, left winks and finally squints or frowns. The MUSE headband, originally a Brain-Computer Interface (BCI) device that measures Electroencephalography (EEG) signals, is used in our study to record EMG signals from behind the earlobes via two smart rubber sensors and at the forehead via two other electrodes. The signals are treated as EMG once they involve physical muscular activity, which other studies consider artifacts of the EEG brain signals. The experiment was conducted on 15 participants (12 males and 3 females) selected randomly, as no specific groups were targeted, and each session was videotaped for re-evaluation. The experiment starts with a calibration phase that records each gesture three times per participant, guided by a voice narration program developed to unify the test conditions and time intervals among all subjects. In this study, a dynamic sliding window with segmented packets is designed to process and analyze the data faster, as well as to provide more flexibility in classifying the gestures regardless of how their duration varies from one user to another. Additionally, we use a thresholding algorithm to extract the features from all the gestures. The rapid blinks and the squints had high F1 scores of 80.77% and 85.71% for the trained thresholds, and 87.18% and 82.12% for the default or manually adjusted thresholds. The accuracies of the long blinks, rapid blinks and left winks were relatively higher with the manually adjusted thresholds, while the squints and the right winks were better with the trained thresholds. Further improvements were proposed and some were tested, especially after monitoring the participants' actions in the video recordings, to enhance the classifier. Most of the common irregularities encountered are discussed within this study to pave the way for similar future studies to tackle them before conducting their experiments. Several applications need minimal physical or hand interaction; this study was originally part of a project at the HCI Lab, University of Stuttgart, to enable hands-free switching between RGB, thermal and depth cameras integrated into an Augmented Reality device designed for firefighters to increase their visual capabilities in the field.
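
    The abstract mentions a dynamic sliding window over segmented packets and a thresholding algorithm for feature extraction, but not the exact rules. A hedged sketch of that general idea, flagging a blink-like event whenever the rectified EMG within a window exceeds a per-user threshold, might look like this; the window size, the calibration rule and the gesture logic are illustrative assumptions rather than the study's implementation.

```python
from collections import deque
import numpy as np

class ThresholdDetector:
    """Sliding-window threshold detector over a single EMG channel.
    The calibration rule (mean + k * std of resting data) is an assumption."""
    def __init__(self, window_samples=64, k=4.0):
        self.window = deque(maxlen=window_samples)
        self.threshold = None
        self.k = k

    def calibrate(self, rest_signal):
        """Estimate a per-user threshold from a resting (no gesture) recording."""
        rest = np.abs(np.asarray(rest_signal, dtype=float))
        self.threshold = rest.mean() + self.k * rest.std()

    def feed(self, sample):
        """Feed one new sample; return True while window energy is above threshold."""
        self.window.append(abs(sample))
        if self.threshold is None or len(self.window) < self.window.maxlen:
            return False
        return float(np.mean(self.window)) > self.threshold

# A 'rapid blinks' gesture could then be defined as two or more above-threshold
# episodes within a short interval, while a 'long blink' is a single long episode.
```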

    Tap 'N' Shake: Gesture-based Smartwatch-Smartphone Communications System

    Smartwatches have recently seen a surge in popularity, and the new technology presents a number of interesting opportunities and challenges, many of which have not been adequately dealt with by existing applications. Current smartwatch messaging systems fail to adequately address the problem of smartwatches requiring two-handed interactions. This paper presents Tap 'n' Shake, a novel gesture-based messaging system for Android smartwatches and smartphones that addresses the problem of two-handed interactions by utilising various motion gestures within the applications. The results of a user evaluation carried out with sixteen subjects demonstrated the usefulness and usability of gestures over two-handed interactions for smartwatches. Additionally, the study provides insight into the types of gestures that subjects preferred to use for various actions in a smartwatch-smartphone messaging system.
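
    The summary above does not describe how the motion gestures are detected; the actual system targets Android smartwatches and smartphones, but the core idea of recognising a shake from accelerometer magnitude peaks can be sketched in a platform-neutral way. The thresholds and counts below are illustrative assumptions, not values from the paper.

```python
import math
import time

class ShakeDetector:
    """Counts accelerometer peaks above a g-force threshold; several peaks
    within a short span are treated as one 'shake' gesture."""
    def __init__(self, g_threshold=2.5, peaks_needed=3, window_s=1.0):
        self.g_threshold = g_threshold
        self.peaks_needed = peaks_needed
        self.window_s = window_s
        self.peak_times = []

    def on_accelerometer(self, ax, ay, az, timestamp=None):
        """Feed one reading in m/s^2; return True when a shake is recognised."""
        now = timestamp if timestamp is not None else time.monotonic()
        g = math.sqrt(ax * ax + ay * ay + az * az) / 9.81
        if g > self.g_threshold:
            self.peak_times.append(now)
        # Keep only peaks inside the sliding time window.
        self.peak_times = [t for t in self.peak_times if now - t <= self.window_s]
        if len(self.peak_times) >= self.peaks_needed:
            self.peak_times.clear()
            return True
        return False
```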