
    Piano Pedaller: A Measurement System for Classification and Visualisation of Piano Pedalling Techniques

    This paper presents the results of a study of piano pedalling techniques on the sustain pedal using a newly designed measurement system named Piano Pedaller. The system comprises an optical sensor mounted in the piano pedal bearing block and an embedded platform for recording audio and sensor data. This enables recording the pedalling gestures of real players and the piano sound under normal playing conditions. Using the gesture data collected from the system, the task of classifying these data by pedalling technique was undertaken using a Support Vector Machine (SVM). Results can be visualised in an audio-based score-following application to show pedalling together with the player's position in the score.
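    For illustration, a minimal sketch of the classification step described above: an SVM trained on pedalling gesture features, using scikit-learn. The feature extraction, technique labels, and parameter values here are assumptions made for the example, not the paper's actual pipeline.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVC

        # Hypothetical training data: one row of features per recorded pedal gesture
        # (e.g. depth and timing statistics from the optical sensor), with technique labels.
        X_train = np.random.rand(120, 8)
        y_train = np.random.choice(["quarter", "half", "three-quarter", "full"], size=120)

        clf = make_pipeline(StandardScaler(), SVC(kernel="rbf", C=1.0))
        clf.fit(X_train, y_train)

        # Classify a newly recorded pedalling gesture.
        new_gesture = np.random.rand(1, 8)
        print(clf.predict(new_gesture))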

    End-to-End Multiview Gesture Recognition for Autonomous Car Parking System

    The use of hand gestures can be the most intuitive human-machine interaction medium. The early approaches for hand gesture recognition used device-based methods. These methods use mechanical or optical sensors attached to a glove or markers, which hinders natural human-machine communication. On the other hand, vision-based methods are not restrictive and allow for a more spontaneous communication without the need of an intermediary between human and machine. Therefore, vision-based gesture recognition has been a popular area of research for the past thirty years. Hand gesture recognition finds its application in many areas, particularly the automotive industry, where advanced automotive human-machine interface (HMI) designers are using gesture recognition to improve driver and vehicle safety. However, technology advances go beyond active/passive safety and into convenience and comfort. In this context, one of America’s big three automakers has partnered with the Centre of Pattern Analysis and Machine Intelligence (CPAMI) at the University of Waterloo to investigate expanding their product segment through machine learning to provide increased driver convenience and comfort, with the particular application of hand gesture recognition for autonomous car parking. In this thesis, we leverage state-of-the-art deep learning and optimization techniques to develop a vision-based multiview dynamic hand gesture recognizer for a self-parking system. We propose a 3D CNN gesture model architecture that we train on a publicly available hand gesture database. We apply transfer learning methods to fine-tune the pre-trained gesture model on custom-made data, which significantly improved the proposed system's performance in real-world environments. We adapt the architecture of the end-to-end solution to expand the state-of-the-art video classifier from a single image as input (fed by a monocular camera) to a multiview 360-degree feed offered by a six-camera module. Finally, we optimize the proposed solution to work on a resource-constrained embedded platform (Nvidia Jetson TX2) that is used by automakers for vehicle-based features, without sacrificing the accuracy, robustness, and real-time functionality of the system.
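    As a rough sketch of the kind of 3D CNN video classifier and fine-tuning step described above, the Keras snippet below pre-trains a small 3D convolutional model and then freezes its convolutional layers for transfer learning. The clip dimensions, layer sizes, gesture vocabulary, and freezing scheme are illustrative assumptions, not the thesis architecture.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        FRAMES, HEIGHT, WIDTH, CHANNELS = 16, 112, 112, 3   # assumed clip dimensions
        NUM_GESTURES = 5                                    # assumed gesture vocabulary size

        model = models.Sequential([
            layers.Input(shape=(FRAMES, HEIGHT, WIDTH, CHANNELS)),
            layers.Conv3D(16, 3, activation="relu"),
            layers.MaxPooling3D(pool_size=(1, 2, 2)),
            layers.Conv3D(32, 3, activation="relu"),
            layers.MaxPooling3D(pool_size=(2, 2, 2)),
            layers.GlobalAveragePooling3D(),
            layers.Dense(NUM_GESTURES, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        # model.fit(public_clips, public_labels)            # pre-train on a public gesture database

        # Transfer learning: freeze the convolutional layers and fine-tune the classifier head
        # on clips recorded in the target environment.
        for layer in model.layers[:-1]:
            layer.trainable = False
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        # model.fit(custom_clips, custom_labels)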

    Towards a high accuracy wearable hand gesture recognition system using EIT

    This paper presents a high-accuracy hand gesture recognition system based on electrical impedance tomography (EIT). The system interfaces with the forearm through a wrist wrap with embedded electrodes. It measures the inner conductivity distributions caused by bone and muscle movement of the forearm in real-time and passes the data to a deep learning neural network for gesture recognition. The system has an EIT bandwidth of 500 kHz and a measured sensitivity in excess of 6.4 Ω per frame. Nineteen hand gestures are designed for recognition, and with the proposed round-robin sub-grouping method, an accuracy of over 98% is achieved.
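    A minimal sketch of feeding EIT measurement frames to a small neural-network classifier for the nineteen gestures, in Keras. The number of measurements per frame, the network depth, and the omission of the round-robin sub-grouping step are assumptions for illustration only.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        MEASUREMENTS_PER_FRAME = 208   # assumed number of impedance measurements per EIT frame
        NUM_GESTURES = 19

        model = models.Sequential([
            layers.Input(shape=(MEASUREMENTS_PER_FRAME,)),
            layers.Dense(128, activation="relu"),
            layers.Dropout(0.3),
            layers.Dense(64, activation="relu"),
            layers.Dense(NUM_GESTURES, activation="softmax"),
        ])
        model.compile(optimizer="adam", loss="sparse_categorical_crossentropy", metrics=["accuracy"])
        # model.fit(eit_frames, gesture_labels, epochs=30)  # frames streamed from the wrist wrap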

    Towards Full-Body Gesture Analysis and Recognition

    With computers being embedded in every walk of our life, there is an increasing demand for intuitive devices for human-computer interaction. As human beings use gestures as an important means of communication, devices based on gesture recognition systems will be effective for human interaction with computers. However, it is very important to keep such a system as non-intrusive as possible, to reduce the limitations of interaction. Designing such a non-intrusive, intuitive, camera-based real-time gesture recognition system has been an active area of research in the field of computer vision. Gesture recognition invariably involves tracking body parts. We find many research works on tracking body parts like eyes, lips and the face; however, there is relatively little work being done on full-body tracking. Full-body tracking is difficult because it is expensive to model the full body as either a 2D or 3D model and to track its movements. In this work, we propose a monocular gesture recognition system that focuses on recognizing a set of arm movements commonly used to direct traffic, guide aircraft during landing, and communicate over long distances. This is an attempt towards implementing gesture recognition systems that require full-body tracking, e.g. an automated semaphore flag-signaling recognition system. We have implemented a robust full-body tracking system, which forms the backbone of our gesture analyzer. The tracker makes use of a two-dimensional link-joint (LJ) model, which represents the human body, for tracking. Currently, we track the movements of the arms in a video sequence; however, we have future plans to make the system real-time. We use distance transform techniques to track the movements by fitting the parameters of the LJ model in every frame of the video captured. The tracker's output is fed to a state machine which identifies the gestures made. We have implemented this system using four sub-systems, namely:
    1. a background subtraction sub-system, using Gaussian models and median filters;
    2. a full-body tracker, using the L-J model APIs;
    3. a quantizer, which converts the tracker's output into defined alphabets;
    4. a gesture analyzer, which reads the alphabets into the actions performed.
    Currently, our gesture vocabulary contains gestures involving arms moving up and down, which can be used for detecting semaphore, a flag signaling system. We can also detect gestures like clapping and waving of the arms.
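    A minimal sketch of the per-frame front end described above, i.e. Gaussian-mixture background subtraction with median filtering followed by a distance transform, using OpenCV in Python. The file name, thresholds, and parameter values are illustrative assumptions; the actual link-joint model fitting is not shown.

        import cv2
        import numpy as np

        # Gaussian-mixture background model, standing in for the background subtraction sub-system.
        bg_model = cv2.createBackgroundSubtractorMOG2(history=200, varThreshold=16)

        def silhouette_and_distance(frame):
            """Return the foreground silhouette and its distance transform for one frame."""
            fg = bg_model.apply(frame)                        # raw foreground mask
            fg = cv2.medianBlur(fg, 5)                        # median filter removes speckle noise
            _, fg = cv2.threshold(fg, 200, 255, cv2.THRESH_BINARY)
            dist = cv2.distanceTransform(fg, cv2.DIST_L2, 5)  # distance to nearest background pixel
            return fg, dist

        cap = cv2.VideoCapture("gesture_sequence.avi")        # hypothetical input video
        while True:
            ok, frame = cap.read()
            if not ok:
                break
            mask, dist = silhouette_and_distance(frame)
            # Fitting the link-joint model parameters against `dist` would happen here.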

    Agile gesture recognition for capacitive sensing devices: adapting on-the-job

    Automated hand gesture recognition has been a focus of the AI community for decades. Traditionally, work in this domain revolved largely around scenarios assuming the availability of a flow of images of the user's hands. This has partly been due to the prevalence of camera-based devices and the wide availability of image data. However, there is growing demand for gesture recognition technology that can be implemented on low-power devices using limited sensor data instead of high-dimensional inputs like hand images. In this work, we demonstrate a hand gesture recognition system and method that uses signals from capacitive sensors embedded into the etee hand controller. The controller generates real-time signals from each of the wearer's five fingers. We use a machine learning technique to analyse the time-series signals and identify three features that can represent the five fingers within 500 ms. The analysis is composed of a two-stage training strategy, including dimension reduction through principal component analysis and classification with k-nearest neighbours. Remarkably, we found that this combination showed a level of performance comparable to more advanced methods such as a supervised variational autoencoder. The base system can also be equipped with the capability to learn from occasional errors by providing it with an additional adaptive error correction mechanism. The results showed that the error corrector improves the classification performance of the base system without otherwise compromising it. The system requires no more than 1 ms of computing time per input sample, and is smaller than deep neural networks, demonstrating the feasibility of agile gesture recognition systems based on this technology.
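    A minimal sketch of the two-stage pipeline described above (PCA for dimension reduction down to three features, followed by k-nearest-neighbour classification), using scikit-learn. The data shapes, number of neighbours, and gesture labels are illustrative assumptions.

        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.decomposition import PCA
        from sklearn.neighbors import KNeighborsClassifier

        # Hypothetical training set: one row per 500 ms window of capacitive readings,
        # flattened across the five finger channels; y holds the gesture labels.
        X_train = np.random.rand(200, 5 * 50)
        y_train = np.random.randint(0, 3, size=200)

        clf = make_pipeline(
            StandardScaler(),                    # normalise the sensor channels
            PCA(n_components=3),                 # reduce each window to three features
            KNeighborsClassifier(n_neighbors=5),
        )
        clf.fit(X_train, y_train)

        # Classify a new 500 ms window of signals from the controller.
        X_new = np.random.rand(1, 5 * 50)
        print(clf.predict(X_new))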

    Gesture recognition using mobile phone's inertial sensors

    The availability of inertial sensors embedded in mobile devices has enabled a new type of interaction based on the movements or “gestures” made by the users when holding the device. In this paper we propose a gesture recognition system for mobile devices based on accelerometer and gyroscope measurements. The system is capable of recognizing a set of predefined gestures in a user-independent way, without the need for a training phase. Furthermore, it was designed to be executed in real-time on resource-constrained devices, and therefore has a low computational complexity. The performance of the system is evaluated offline using a dataset of gestures, and also online, through user tests with the system running on a smartphone.
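    The abstract does not specify the recognition algorithm; as one illustration of training-free matching against predefined gesture templates, the sketch below correlates a window of accelerometer samples with stored reference traces. The template files, threshold, and matching scheme are assumptions made for the example, not the authors' method.

        import numpy as np

        # Hypothetical predefined templates: gesture name -> reference accelerometer
        # trace of shape (n_samples, 3) for the x/y/z axes.
        TEMPLATES = {
            "shake":  np.loadtxt("shake_template.csv", delimiter=","),
            "circle": np.loadtxt("circle_template.csv", delimiter=","),
        }

        def recognise(window, threshold=0.8):
            """Match a window of accelerometer samples against the stored templates."""
            best_name, best_score = None, -1.0
            for name, template in TEMPLATES.items():
                n = min(len(window), len(template))
                a = (window[:n] - window[:n].mean(axis=0)).ravel()
                b = (template[:n] - template[:n].mean(axis=0)).ravel()
                score = float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b) + 1e-9))
                if score > best_score:
                    best_name, best_score = name, score
            return best_name if best_score >= threshold else None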

    Advanced Interfaces for HMI in Hand Gesture Recognition

    The present thesis investigates techniques and technologies for high-quality Human Machine Interfaces (HMI) in biomedical applications. Starting from a literature review and considering the market state of the art in this field, the thesis explores advanced sensor interfaces, wearable computing, and machine learning techniques for embedded resource-constrained systems. The research starts from the design and implementation of a real-time control system for a multifinger hand prosthesis based on pattern recognition algorithms. This system is capable of controlling an artificial hand using a natural gesture interface, considering the challenges related to the trade-off between responsiveness, accuracy, and light computation. Furthermore, the thesis addresses the challenges related to the design of a scalable and versatile system for gesture recognition, with the integration of a novel sensor interface for wearable medical and consumer applications.

    Convolutional Neural Networks for Speech Controlled Prosthetic Hands

    Speech recognition is one of the key topics in artificial intelligence, as it is one of the most common forms of communication in humans. Researchers have developed many speech-controlled prosthetic hands in the past decades, utilizing conventional speech recognition systems that use a combination of neural networks and hidden Markov models. Recent advancements in general-purpose graphics processing units (GPGPUs) enable intelligent devices to run deep neural networks in real-time. Thus, state-of-the-art speech recognition systems have rapidly shifted from the paradigm of composite subsystem optimization to the paradigm of end-to-end optimization. However, a low-power embedded GPGPU cannot run these speech recognition systems in real-time. In this paper, we show the development of deep convolutional neural networks (CNNs) for speech control of prosthetic hands that run in real-time on an NVIDIA Jetson TX2 developer kit. First, the device captures and converts speech into 2D features (such as a spectrogram). The CNN receives the 2D features and classifies the hand gestures. Finally, the hand gesture classes are sent to the prosthetic hand motion control system. The whole system is written in Python with Keras, a deep learning library that has a TensorFlow backend. Our experiments on the CNN demonstrate 91% accuracy and a 2 ms running time for producing hand gesture classes (text output) from speech commands, which can be used to control the prosthetic hands in real-time.
    Comment: 2019 First International Conference on Transdisciplinary AI (TransAI), Laguna Hills, California, USA, 2019, pp. 35-4
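    A minimal sketch of the pipeline stage described above: a small Keras CNN that maps 2D spectrogram features to hand gesture classes. The input shape, layer sizes, and number of classes are illustrative assumptions rather than the authors' architecture.

        import tensorflow as tf
        from tensorflow.keras import layers, models

        NUM_GESTURES = 6           # assumed number of hand gesture classes
        INPUT_SHAPE = (64, 64, 1)  # assumed spectrogram size (time x frequency x 1 channel)

        model = models.Sequential([
            layers.Input(shape=INPUT_SHAPE),
            layers.Conv2D(16, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Conv2D(32, 3, activation="relu"),
            layers.MaxPooling2D(),
            layers.Flatten(),
            layers.Dense(64, activation="relu"),
            layers.Dense(NUM_GESTURES, activation="softmax"),
        ])
        model.compile(optimizer="adam",
                      loss="sparse_categorical_crossentropy",
                      metrics=["accuracy"])
        # model.fit(spectrograms, gesture_labels, epochs=10)  # trained on labelled speech commands
        # The predicted class index would then be forwarded to the prosthetic hand's
        # motion control system.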