
    Intimate interfaces in action: assessing the usability and subtlety of emg-based motionless gestures

    Mobile communication devices, such as mobile phones and networked personal digital assistants (PDAs), allow users to be constantly connected and to communicate anywhere and at any time, often resulting in personal and private communication taking place in public spaces. This private–public contrast can be problematic. As a remedy, we promote intimate interfaces: interfaces that allow subtle and minimal mobile interaction without disrupting the surrounding environment. In particular, motionless gestures sensed through the electromyographic (EMG) signal have been proposed as a solution for subtle input in a mobile context. In this paper we present an expansion of the work on EMG-based motionless gestures, including (1) a novel study of their usability in a mobile context for controlling a realistic, multimodal interface and (2) a formal assessment of how noticeable they are to informed observers. Experimental results confirm that subtle gestures can be profitably used within a multimodal interface and that it is difficult for observers to guess when someone is performing a gesture, confirming the hypothesis of subtlety.

    A Piezoresistive Array Armband With Reduced Number of Sensors for Hand Gesture Recognition

    Human machine interfaces (HMIs) are employed in a broad range of applications, spanning from assistive devices for disability to remote manipulation and gaming controllers. In this study, a new piezoresistive sensor array armband is proposed for hand gesture recognition. The armband encloses only three sensors targeting specific forearm muscles, with the aim of discriminating eight hand movements. Each sensor consists of a force-sensitive resistor (FSR) with a dedicated mechanical coupler and is designed to sense muscle swelling during contraction. The armband is designed to be easily wearable and adjustable for any user and was tested on 10 volunteers. Hand gestures are classified by means of different machine learning algorithms, and classification performance is assessed with both 10-fold and leave-one-out cross-validation. A linear support vector machine provided 96% mean accuracy across all participants. Ultimately, this classifier was implemented on an Arduino platform and allowed successful real-time control of video games. The low power consumption together with the high level of accuracy suggests the potential of this device for exergames commonly employed in neuromotor rehabilitation. The reduced number of sensors also makes this HMI suitable for hand-prosthesis control.
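
    A minimal sketch of how such a classification pipeline could look (not the authors' implementation; the per-window features, array shapes, and hyper-parameters are assumptions for illustration):

```python
# Sketch: linear SVM on features from a 3-channel FSR armband, evaluated
# with both 10-fold and leave-one-out cross-validation as in the abstract.
# The data below is a random placeholder, not real armband recordings.
import numpy as np
from sklearn.svm import SVC
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.model_selection import cross_val_score, KFold, LeaveOneOut

rng = np.random.default_rng(0)
X = rng.normal(size=(400, 3))      # one feature per FSR channel per window (assumed)
y = rng.integers(0, 8, size=400)   # eight gesture labels

clf = make_pipeline(StandardScaler(), SVC(kernel="linear", C=1.0))
acc_10fold = cross_val_score(clf, X, y, cv=KFold(10, shuffle=True, random_state=0)).mean()
acc_loo = cross_val_score(clf, X, y, cv=LeaveOneOut()).mean()
print(f"10-fold: {acc_10fold:.3f}, leave-one-out: {acc_loo:.3f}")
```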

    Hand Gestures Recognition for Human-Machine Interfaces: A Low-Power Bio-Inspired Armband

    Hand gesture recognition has recently increased in popularity as a Human-Machine Interface (HMI) in the biomedical field. Indeed, it can be performed with many different non-invasive techniques, e.g., surface ElectroMyoGraphy (sEMG) or PhotoPlethysmoGraphy (PPG). In the last few years, the interest shown by both academia and industry has led to a continuous stream of commercial and custom wearable devices addressing different challenges in many application fields, from tele-rehabilitation to sign language recognition. In this work, we propose a novel 7-channel sEMG armband, which can be employed as an HMI for both serious gaming control and rehabilitation support. In particular, we designed the prototype around the capability of our device to compute the Average Threshold Crossing (ATC) parameter, which is evaluated by counting how many times the sEMG signal crosses a threshold during a fixed time duration (i.e., 130 ms), directly on the wearable device. Exploiting the event-driven nature of the ATC, our armband is able to perform on-board prediction of common hand gestures while requiring less power than state-of-the-art devices. At the end of an acquisition campaign involving 26 participants, we obtained an average classifier accuracy of 91.9% when recognizing 8 active hand gestures plus the idle state in real time. Furthermore, with 2.92 mA of current absorption during active functioning and a 1.34 ms prediction latency, this prototype confirmed our expectations and can be an appealing solution for long-term (up to 60 h) medical and consumer applications.
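
    The ATC parameter described above reduces each channel to a count of threshold crossings per 130 ms window. A minimal sketch of that computation (an assumption on my part, not the device firmware; sampling rate and threshold are placeholders):

```python
# Sketch: Average Threshold Crossing (ATC) feature, counting how many times
# one sEMG channel crosses a fixed threshold within 130 ms windows.
import numpy as np

def atc(emg, fs, threshold, window_s=0.130):
    """Return threshold-crossing counts per 130 ms window for one channel."""
    win = int(round(window_s * fs))
    counts = []
    for start in range(0, len(emg) - win + 1, win):
        seg = emg[start:start + win]
        above = seg > threshold
        # A "crossing" here is a transition from below to above the threshold.
        counts.append(np.count_nonzero(np.diff(above.astype(int)) == 1))
    return np.array(counts)

# Example on 1 s of synthetic noise sampled at 1 kHz (placeholder values).
fs = 1000
emg = np.random.default_rng(1).normal(scale=0.1, size=fs)
print(atc(emg, fs, threshold=0.15))
```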

    MirrorGen Wearable Gesture Recognition using Synthetic Videos

    In recent years, deep learning systems have outperformed traditional machine learning systems in most domains. There has recently been a lot of research in the field of hand gesture recognition using wearable sensors, due to the numerous advantages these systems have over vision-based ones. However, the lack of extensive datasets and the nature of Inertial Measurement Unit (IMU) data make it difficult to apply deep learning techniques to them. Although many machine learning models achieve good accuracy, most of them assume that training data is available for every user, while other works that do not require user data have lower accuracies. MirrorGen is a technique that uses wearable sensor data to generate synthetic videos of hand movements, mitigating the traditional challenges of vision-based recognition such as occlusion, lighting restrictions, lack of viewpoint variation, and environmental noise. In addition, MirrorGen allows for user-independent recognition with minimal human effort during data collection. It also helps leverage advances in vision-based recognition through techniques such as optical flow extraction and 3D convolution. Projecting the orientation (IMU) information to a video helps recover position information of the hands. To validate these claims, we perform entropy analysis on various configurations: raw data, stick model, hand model, and real video. The human hand model is found to have an optimal entropy that helps achieve user-independent recognition, and it serves as a pervasive option compared with video-based recognition. An average user-independent recognition accuracy of 99.03% was achieved on a sign language dataset with 59 different users and 20 different signs, each repeated 20 times, for a total of 23k training instances. Moreover, synthetic videos can be used to augment real videos to improve recognition accuracy.
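
    A rough sketch of the core idea of projecting IMU orientation into a synthetic "stick model" video (my own reconstruction for illustration, not MirrorGen itself; the segment length, image size, and use of SciPy rotations with OpenCV drawing are all assumptions):

```python
# Sketch: render stick-model frames by rotating a fixed forearm segment with
# IMU orientation quaternions and projecting it orthographically onto images.
import numpy as np
import cv2
from scipy.spatial.transform import Rotation

def render_stick_frames(quaternions, size=224, arm_len=80):
    """quaternions: (N, 4) array of orientations in (x, y, z, w) order."""
    frames = []
    origin = np.array([size // 2, size - 20])          # elbow position in pixels
    for q in quaternions:
        tip3d = Rotation.from_quat(q).apply([0.0, -arm_len, 0.0])
        tip = origin + tip3d[:2]                        # drop depth (orthographic)
        frame = np.zeros((size, size, 3), dtype=np.uint8)
        cv2.line(frame, tuple(map(int, origin)), tuple(map(int, tip)), (255, 255, 255), 4)
        frames.append(frame)
    return frames

# Example: a slow rotation about the z-axis as a stand-in for IMU samples.
angles = np.linspace(0, np.pi / 2, 30)
frames = render_stick_frames(Rotation.from_euler("z", angles).as_quat())
```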

    A low-cost, wireless, 3-D-printed custom armband for sEMG hand gesture recognition

    Wearable technology can be employed to elevate the abilities of humans to perform demanding and complex tasks more efficiently. Armbands capable of surface electromyography (sEMG) are attractive and noninvasive devices from which human intent can be derived by leveraging machine learning. However, the sEMG acquisition systems currently available tend to be prohibitively costly for personal use or sacrifice wearability or signal quality to be more affordable. This work introduces the 3DC Armband designed by the Biomedical Microsystems Laboratory at Laval University: a wireless, 10-channel, 1000 sps, dry-electrode, low-cost (~150 USD) myoelectric armband that also includes a 9-axis inertial measurement unit. The proposed system is compared with the Myo Armband by Thalmic Labs, one of the most popular sEMG acquisition systems. The comparison is made using a new offline dataset featuring 22 able-bodied participants performing eleven hand/wrist gestures while wearing the two armbands simultaneously. The 3DC Armband systematically and significantly (p < 0.05) outperforms the Myo Armband with three different classifiers employing three different input modalities when using ten seconds or more of training data per gesture. This new dataset, alongside the source code, Altium project, and 3-D models, is made readily available for download within a GitHub repository.
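
    A minimal sketch of one plausible preprocessing step for such a comparison (not the released 3DC code): segmenting a 10-channel, 1000 sps recording into overlapping windows and computing simple time-domain features as one possible classifier input modality. The window and step lengths are assumptions.

```python
# Sketch: sliding-window segmentation and time-domain features (mean absolute
# value and waveform length) for 10-channel sEMG sampled at 1000 sps.
import numpy as np

FS = 1000          # 3DC Armband sampling rate (samples per second)
N_CHANNELS = 10

def windows(emg, win_ms=250, step_ms=100):
    """Yield (win_samples, N_CHANNELS) slices from an (n, N_CHANNELS) array."""
    win, step = int(FS * win_ms / 1000), int(FS * step_ms / 1000)
    for start in range(0, emg.shape[0] - win + 1, step):
        yield emg[start:start + win]

def features(window):
    mav = np.mean(np.abs(window), axis=0)                   # mean absolute value
    wl = np.sum(np.abs(np.diff(window, axis=0)), axis=0)    # waveform length
    return np.concatenate([mav, wl])                        # 20-dimensional vector

# Example on 5 s of synthetic data standing in for a recording.
emg = np.random.default_rng(2).normal(size=(5 * FS, N_CHANNELS))
X = np.array([features(w) for w in windows(emg)])
print(X.shape)
```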

    Multi-Operator Gesture Control of Robotic Swarms Using Wearable Devices

    The theory and design of effective interfaces for human interaction with multi-robot systems has recently gained significant interest. Robotic swarms are multi-robot systems in which local interactions between robots and the neighbors within their spatial neighborhood generate emergent collective behaviors. Most prior work has studied interfaces for human interaction with remote swarms, but swarms also have great potential in applications working alongside humans, motivating the need for interfaces supporting local interaction. Given the collective nature of swarms, human interaction may occur at many levels of abstraction, ranging from swarm behavior selection to teleoperation. Wearable gesture control is an intuitive interaction modality that can meet this requirement while usually keeping the operator's hands unencumbered. In this paper, we present an interaction method that uses a gesture-based wearable device with a limited number of gestures for robust control of a complex system: a robotic swarm. Experiments conducted with a real robot swarm compare performance in single-operator and two-operator conditions. Human operators using our interaction method completed the task successfully in all trials, illustrating the effectiveness of the method, with better performance in the two-operator condition, indicating that separation of function is beneficial for our method. The primary contribution of our work is the development and demonstration of interaction methods that allow robust control of a difficult-to-understand multi-robot system using only the noisy inputs typical of smartphones and other on-body sensor-driven devices.
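
    A toy sketch of mapping a limited gesture vocabulary to swarm-level commands with per-operator state (not the authors' system; the gesture names and command semantics are hypothetical):

```python
# Sketch: dispatch recognized wearable gestures to swarm behaviors, keeping
# separate state per operator so two operators can issue commands independently.
from dataclasses import dataclass
from typing import Optional

SWARM_BEHAVIORS = {"fist": "stop", "wave_in": "flock_left",
                   "wave_out": "flock_right", "fingers_spread": "disperse"}

@dataclass
class OperatorSession:
    operator_id: str
    last_command: Optional[str] = None

    def handle_gesture(self, gesture: str) -> Optional[str]:
        """Map a recognized gesture to a swarm behavior; ignore unknown labels."""
        command = SWARM_BEHAVIORS.get(gesture)
        if command is not None:
            self.last_command = command
        return command

# Two operators issuing commands from their own wearables.
alice, bob = OperatorSession("alice"), OperatorSession("bob")
print(alice.handle_gesture("fist"))       # -> "stop"
print(bob.handle_gesture("wave_out"))     # -> "flock_right"
```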