285 research outputs found

    Compact, configurable inertial gesture recognition


    GyGSLA: A portable glove system for learning sign language alphabet

    Communication between people with normal hearing and those with hearing or speech impairments is difficult. Learning a new alphabet is not always easy, especially a sign language alphabet, which requires both hand skills and practice. This paper presents the GyGSLA system, a completely portable setup created to help inexperienced people learn a new sign language alphabet. To achieve this, a computer/mobile game interface and a hardware device, a wearable glove, were developed. When interacting with the computer or mobile device through the wearable glove, the user is asked to represent alphabet letters and digits by replicating the hand and finger positions shown on the screen. The glove sends the hand and finger positions to the computer/mobile device over a wireless interface, which interprets the letter or digit being formed by the user and assigns it a corresponding score. The system was tested with three subjects with no prior sign language experience, achieving a 76% average recognition rate for the Portuguese sign language alphabet.
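The abstract does not detail how the glove's readings are matched to a letter; a minimal sketch of one plausible approach is nearest-template matching over per-finger flex values with a distance-based score. All templates, sensor values, and the scoring rule below are illustrative assumptions, not the authors' calibration:

```python
import math

# Hypothetical flex-sensor templates, one value per finger
# (0.0 = fully open, 1.0 = fully bent); not the real GyGSLA data.
TEMPLATES = {
    "A": (1.0, 1.0, 1.0, 1.0, 0.2),
    "B": (0.0, 0.0, 0.0, 0.0, 1.0),
    "C": (0.5, 0.5, 0.5, 0.5, 0.5),
}

def recognize(reading):
    """Nearest-template classification with a 0-100 score that
    decays with Euclidean distance from the best-matching letter."""
    letter = min(TEMPLATES, key=lambda k: math.dist(reading, TEMPLATES[k]))
    score = max(0.0, 100.0 * (1.0 - math.dist(reading, TEMPLATES[letter])))
    return letter, round(score)

# A noisy reading close to the "A" hand shape:
print(recognize((0.9, 1.0, 0.95, 1.0, 0.3)))  # ('A', 85)
```

A real system would calibrate templates per user and add a rejection threshold for hand shapes far from every template.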

    MOCA: A Low-Power, Low-Cost Motion Capture System Based on Integrated Accelerometers

    Human-computer interaction (HCI) and virtual reality applications pose the challenge of enabling real-time interfaces for natural interaction. Gesture recognition based on body-mounted accelerometers has been proposed as a viable solution to translate patterns of movement that are associated with user commands, thus substituting point-and-click methods or other cumbersome input devices. On the other hand, cost and power constraints make the implementation of a natural and efficient interface suitable for consumer applications a critical task. Even though several gesture recognition solutions exist, their use in the HCI context has been poorly characterized. For this reason, in this paper, we consider a low-cost/low-power wearable motion tracking system based on integrated accelerometers called motion capture with accelerometers (MOCA), which we evaluated for navigation in virtual spaces. Recognition is based on a geometric algorithm that enables efficient and robust detection of rotational movements. Our objective is to demonstrate that such a low-cost, low-power implementation is suitable for HCI applications. To this purpose, we characterized the system from both quantitative and qualitative points of view. First, we performed static and dynamic assessment of movement recognition accuracy. Second, we evaluated the effectiveness of the user experience using a 3D game application as a test bed.
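The paper's geometric algorithm is not spelled out in the abstract; one common geometric approach to detecting rotational movements from a body-mounted accelerometer is to estimate tilt from the gravity vector and threshold the change in angle over a window. The threshold and sample values below are illustrative assumptions, not MOCA's actual parameters:

```python
import math

def tilt_angles(ax, ay, az):
    """Estimate pitch and roll (radians) from a quasi-static 3-axis
    accelerometer sample, using gravity as the reference vector."""
    pitch = math.atan2(-ax, math.hypot(ay, az))
    roll = math.atan2(ay, az)
    return pitch, roll

def detect_rotation(samples, threshold=math.radians(30)):
    """Flag a rotational gesture when the net change in roll across
    a window of samples exceeds a fixed angular threshold."""
    rolls = [tilt_angles(*s)[1] for s in samples]
    return abs(rolls[-1] - rolls[0]) > threshold

# A wrist roll from level (0, 0, 1 g) to roughly 45 degrees about the x-axis:
window = [(0.0, 0.0, 1.0), (0.0, 0.35, 0.94), (0.0, 0.71, 0.71)]
print(detect_rotation(window))  # True
```

This only holds while linear acceleration is small compared to gravity; a production implementation would filter out dynamic motion first.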

    Reducing the number of EMG electrodes during online hand gesture classification with changing wrist positions

    Background: Myoelectric control based on hand gesture classification can be used for effective, contactless human–machine interfacing in general applications (e.g., the consumer market) as well as in the clinical context. However, the accuracy of hand gesture classification can be impacted by several factors, including changing wrist position. The present study aimed at investigating how channel configuration (number and placement of electrode pads) affects performance in hand gesture recognition across wrist positions, with the overall goal of reducing the number of channels without loss of performance with respect to the benchmark (all channels).

    Methods: Matrix electrodes (256 channels) were used to record high-density EMG from the forearm of 13 healthy subjects performing a set of 8 gestures in 3 wrist positions and 2 force levels (low and moderate). A reduced set of channels was chosen by applying sequential forward selection (SFS) and simple circumferential placement (CIRC) and used for gesture classification with linear discriminant analysis. The classification success rate and task completion rate were the main outcome measures for offline analysis across different numbers of channels and online control using 8 selected channels, respectively.

    Results: The offline analysis demonstrated that good accuracy (> 90%) can be achieved with only a few channels. However, using data from all wrist positions required more channels to reach the same performance. Despite the targeted placement (SFS) performing similarly to CIRC in the offline analysis, the task completion rate [median (lower–upper quartile)] in the online control was significantly higher for SFS [71.4% (64.8–76.2%)] compared to CIRC [57.1% (51.8–64.8%), p < 0.01], especially for low contraction levels [76.2% (66.7–84.5%) for SFS vs. 57.1% (47.6–60.7%) for CIRC, p < 0.01]. For the reduced number of electrodes, the performance with SFS was comparable to that obtained with the full matrix, while the selected electrodes were highly subject-specific.

    Conclusions: The present study demonstrated that the number of channels required for gesture classification with changing wrist positions can be decreased substantially without loss of performance, if those channels are placed strategically along the forearm and individually for each subject. The results also emphasize the importance of online assessment and motivate the development of configurable matrix electrodes with integrated channel selection.
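The SFS-plus-LDA channel-selection pipeline described above can be sketched with off-the-shelf scikit-learn components. This is an illustrative toy on synthetic data (32 stand-in channels instead of the study's 256, random labels with a few artificially informative channels), not the authors' pipeline or data:

```python
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.feature_selection import SequentialFeatureSelector
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_channels, n_samples = 32, 300            # stand-in for the 256-channel matrix
X = rng.normal(size=(n_samples, n_channels))
y = rng.integers(0, 8, size=n_samples)     # 8 gestures
# Make the first four channels informative so SFS has something to find.
X[:, :4] += y[:, None] * 1.0

# Sequential forward selection of 8 channels, scored by LDA accuracy.
lda = LinearDiscriminantAnalysis()
sfs = SequentialFeatureSelector(lda, n_features_to_select=8, direction="forward")
sfs.fit(X, y)
selected = np.flatnonzero(sfs.get_support())

# Cross-validated accuracy using only the selected channels.
score = cross_val_score(lda, X[:, selected], y, cv=5).mean()
print(selected, round(score, 2))
```

Mirroring the study's finding, the selected channel subset is data-dependent, so on real EMG it would come out subject-specific.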

    Establishing a Framework for the development of Multimodal Virtual Reality Interfaces with Applicability in Education and Clinical Practice

    The development of Virtual Reality (VR) and Augmented Reality (AR) content with multiple sources of both input and output has led to countless contributions in a great many fields, among them medicine and education. Nevertheless, the actual process of integrating the existing VR/AR media and subsequently setting it to purpose is still a highly scattered and esoteric undertaking. Moreover, seldom do the architectures that derive from such ventures comprise haptic feedback in their implementation, which in turn deprives users of one of the paramount aspects of human interaction, their sense of touch. Determined to circumvent these issues, the present dissertation proposes a centralized albeit modularized framework that enables the conception of multimodal VR/AR applications in a novel and straightforward manner. To accomplish this, the framework makes use of a stereoscopic VR Head Mounted Display (HMD) from Oculus Rift©, a hand tracking controller from Leap Motion©, a custom-made VR mount that allows for the assemblage of the two preceding peripherals, and a wearable device of our own design. The latter is a glove that encompasses two core modules: one that conveys haptic feedback to its wearer, and another that deals with the non-intrusive acquisition, processing, and registering of the wearer's Electrocardiogram (ECG), Electromyogram (EMG), and Electrodermal Activity (EDA). The software elements of the aforementioned features were all interfaced through Unity3D©, a powerful game engine whose popularity in academic and scientific endeavors is ever increasing. Upon completion of our system, we set out to substantiate our initial claim with thoroughly developed experiences that would attest to its worth. With this premise in mind, we devised a comprehensive repository of interfaces, amid which three merit special consideration: Brain Connectivity Leap (BCL), Ode to Passive Haptic Learning (PHL), and a Surgical Simulator.

    Contrôle gestuel d'un robot en mobilité

    Current robotics advances offer considerable potential for assistance, not only for individuals but also for professionals operating on the ground, such as firefighters or military personnel. However, such operators must stay focused on their mission; hence they expect a non-invasive, intuitive control system, disrupting them as little as possible. Furthermore, the inherent mobility arising from such missions (walking, running, jumping) must not be an obstacle to gesture recognition. Here, we present a glove-based gesture detection algorithm whose learning phase is instantaneous across a large span of possible gestures. It has been evaluated on a seven-word dictionary dedicated to robot control, with satisfying recognition results.
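The abstract does not say how the "instantaneous" learning phase works; one standard way to get it is template matching, where learning is simply recording one example per command word and classification uses a warping-tolerant distance such as dynamic time warping (DTW). This is a plainly swapped-in technique for illustration, not necessarily the authors' algorithm, and the 1-D toy signals stand in for real glove sensor streams:

```python
def dtw(a, b):
    """Dynamic time warping distance between two 1-D sequences."""
    inf = float("inf")
    n, m = len(a), len(b)
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # insertion
                                 cost[i][j - 1],      # deletion
                                 cost[i - 1][j - 1])  # match
    return cost[n][m]

# "Instantaneous" learning: store one recorded example per command word.
templates = {"stop": [0, 1, 1, 0], "go": [0, 0, 1, 1, 1]}

def classify(signal):
    """Nearest template under DTW, tolerant to speed variations."""
    return min(templates, key=lambda w: dtw(signal, templates[w]))

print(classify([0, 1, 1, 1, 0]))  # stop
```

DTW's tolerance to time warping is what lets a template recorded once still match gestures performed faster or slower, e.g. while walking or running.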

    Control System in Open-Source FPGA for a Self-Balancing Robot

    Computing in technological applications is typically performed with software running on general-purpose microprocessors, such as the Central Processing Unit (CPU), or specific ones, like the Graphical Processing Unit (GPU). Application-Specific Integrated Circuits (ASICs) are an interesting option when speed and reliability are required, but development costs are usually high. Field-Programmable Gate Arrays (FPGAs) combine the flexibility of software with the high-speed operation of hardware, and can keep costs low. The dominant FPGA infrastructure is proprietary, but open tools have greatly improved and are a growing trend, from which robotics can benefit. This paper presents a robotics application that was fully developed using open FPGA tools. An inverted pendulum robot was designed, built, and programmed using open FPGA tools, such as IceStudio and the IceZum Alhambra board, which integrates the iCE40HX4K-TQ144 from Lattice. The perception from an inertial sensor is used in a PD control algorithm that commands two DC motors. All the modules were synthesized in an FPGA as a proof of concept. The experimental validation shows good behavior and performance. This work was partially funded by the Community of Madrid through the RoboCity2030-III project (S2013/MIT-2748) and by the Spanish Ministry of Economy and Competitiveness through the RETOGAR project (TIN2016-76515-R).
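The paper names a PD control algorithm but the abstract gives neither gains nor loop structure; one iteration of a discrete PD loop for a self-balancing robot can be sketched as follows. The gains and the 10 ms period are made-up values for illustration (the actual controller is synthesized as FPGA hardware, not Python):

```python
def pd_step(angle, prev_angle, dt, kp=25.0, kd=1.5):
    """One PD iteration: motor command from the tilt angle reported
    by the inertial sensor and its finite-difference rate of change."""
    derivative = (angle - prev_angle) / dt
    return kp * angle + kd * derivative

# Robot tilted 0.105 rad and falling further at 0.5 rad/s (dt = 10 ms):
u = pd_step(0.105, 0.100, 0.01)
print(round(u, 3))  # 3.375: drives the wheels under the direction of the fall
```

The proportional term pushes back against the current tilt while the derivative term damps the fall rate; an inverted pendulum needs no integral term for basic balancing, which keeps the hardware datapath small.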