14 research outputs found

    Recognition of gestures through artificial intelligence techniques

    Gesture recognition is the interpretation of sequences of human actions captured by any type of sensor, whether touch-based or contactless, such as a camera. It has advanced considerably in recent decades thanks to the rise of Artificial Intelligence and the development of ever more complex and precise sensors. One concrete example was the publication and maintenance of an official SDK for the Microsoft Kinect, which gave developers access to the camera's capabilities so they could create more natural and intuitive user interfaces. This has also encouraged applications beyond the entertainment industry, such as those that assist in healthcare or automate routine tasks. For that reason, this project develops a set of tools for generating learning models able to recognize personalized gestures for the Kinect v2. The toolset is designed to ease the complete recognition task for any gesture: capturing the training examples, pre-processing and treating the data, and generating recognition models through machine learning techniques. Finally, to evaluate the platform, an experiment with a simple gesture was proposed and executed. The positive results motivate using the developed tools to incorporate gesture recognizers into any application that uses the Kinect v2 sensor. Ingeniería Informática (Plan 2011)
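The pre-processing step mentioned in the pipeline above typically makes skeleton frames comparable across users before training. A minimal sketch of that idea follows; the joint names and torso-length scaling are illustrative assumptions, not the project's actual code or the Kinect v2 SDK's structures.

```python
# Sketch of a skeleton pre-processing step: translate joints so the
# spine base is the origin, then scale by torso length so training
# examples from users of different heights are comparable.
# Joint names here are illustrative assumptions.

def normalize_skeleton(joints):
    """joints: dict of joint name -> (x, y, z) position in metres."""
    ox, oy, oz = joints["spine_base"]
    nx, ny, nz = joints["neck"]
    # Torso length (spine base to neck) used as the scale factor.
    scale = ((nx - ox) ** 2 + (ny - oy) ** 2 + (nz - oz) ** 2) ** 0.5 or 1.0
    return {
        name: ((x - ox) / scale, (y - oy) / scale, (z - oz) / scale)
        for name, (x, y, z) in joints.items()
    }
```

A sequence of such normalized frames would then be flattened into a feature vector for whatever learning algorithm the toolset generates.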

    STUDY OF HAND GESTURE RECOGNITION AND CLASSIFICATION

    The aim is to recognize different hand gestures and achieve efficient classification of the static and dynamic hand movements used for communication. Static and dynamic hand movements are first captured using gesture recognition devices including the Kinect, hand movement sensors, connected electrodes, and accelerometers. These gestures are processed using hand gesture recognition algorithms such as multivariate fuzzy decision trees, hidden Markov models (HMM), dynamic time warping, latent regression forests, support vector machines, and surface electromyography. Movements made by one or both hands are captured under proper illumination conditions. The captured gestures are then processed for occlusions and close finger interactions in order to identify the right gesture, classify it, and ignore intermittent gestures. Real-time hand gesture recognition needs robust algorithms such as HMM to detect only the intended gesture. The classified gestures are evaluated for effectiveness against standard training and test datasets such as sign language alphabets and the KTH dataset. Hand gesture recognition plays an important role in applications such as sign language recognition, robotics, television control, rehabilitation, and music orchestration
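Dynamic time warping, one of the algorithms listed above, aligns two gesture sequences that differ in speed. A minimal pure-Python sketch of the classic DTW distance follows; real systems would compare multi-dimensional joint vectors rather than scalars.

```python
# Minimal dynamic time warping (DTW) distance between two sequences.
# cost[i][j] holds the best alignment cost of a[:i] with b[:j].

def dtw_distance(a, b):
    n, m = len(a), len(b)
    inf = float("inf")
    cost = [[inf] * (m + 1) for _ in range(n + 1)]
    cost[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i][j] = d + min(cost[i - 1][j],      # skip a step in b
                                 cost[i][j - 1],      # skip a step in a
                                 cost[i - 1][j - 1])  # match both steps
    return cost[n][m]
```

Identical sequences score 0; time-stretched variants of the same gesture score low, which is what makes DTW robust to execution speed.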

    A Fuzzy Logic Architecture for Rehabilitation Robotic Systems

    Robots have been widely incorporated into rehabilitation over the last decade to compensate for lost functions in disabled individuals. Controlling rehabilitation robots remotely brings many benefits, including but not restricted to shorter hospital stays, lower cost, and a higher level of care. The main goal of this work is an effective solution for taking care of patients remotely; remote control of rehabilitation robots remains an open and highly challenging problem. In this paper, a remote wrist rehabilitation system is presented. The developed system is a robot ensuring the two wrist movements (flexion/extension and abduction/adduction). Additionally, the proposed system provides a software interface enabling physiotherapists to control the rehabilitation process remotely. Patient safety during therapy is achieved through the integration of a fuzzy controller in the system control architecture: the controller adjusts the robot action according to the pain felt by the patient, so the system adapts effectively to the patient's condition. The Message Queuing Telemetry Transport (MQTT) protocol is used to overcome latency during the human-robot interaction. The control technique is gestural, based on a Kinect camera: the physiotherapist's gestures are detected and transmitted to the software interface, where they are processed and sent to the robot. The acquired measurements are recorded in a database that can later be used to monitor patient progress during the treatment protocol. The experimental results show the effectiveness of the developed remote rehabilitation system
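The core fuzzy-control idea above, modulating robot action by the patient's reported pain, can be sketched with triangular membership functions and weighted-average defuzzification. The membership ranges and output levels below are invented for illustration, not the paper's actual controller.

```python
# Illustrative fuzzy mapping from a pain score in [0, 10] to a robot
# speed fraction in [0, 1]. Membership functions are assumptions.

def tri(x, a, b, c):
    """Triangular membership function rising from a, peaking at b, falling to c."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def robot_speed(pain):
    low    = tri(pain, -5.0,  0.0,  5.0)  # low pain    -> full speed
    medium = tri(pain,  0.0,  5.0, 10.0)  # medium pain -> half speed
    high   = tri(pain,  5.0, 10.0, 15.0)  # high pain   -> stop
    # Weighted-average (centroid-style) defuzzification.
    return (low * 1.0 + medium * 0.5 + high * 0.0) / (low + medium + high)
```

Because neighbouring memberships overlap, the output varies smoothly between the rule outputs instead of switching abruptly, which is the safety property a rehabilitation controller wants.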

    Attention Based Residual Network for Micro-Gesture Recognition

    Finger micro-gesture recognition is increasingly becoming an important part of human-computer interaction (HCI) in augmented reality (AR) and virtual reality (VR) applications. To push the boundary of micro-gesture recognition, a novel Holoscopic 3D Micro-Gesture Database (HoMG) was established for research purposes. HoMG has an image subset and a video subset. This paper demonstrates the results achieved on the image subset for the Holoscopic Micro-Gesture Recognition Challenge 2018 (HoMGR 2018). The proposed method uses a state-of-the-art residual network with an attention-involved design: in every block of the network, an attention branch is added to the output of the last convolution layer. The attention branch is designed to spotlight the finger micro-gesture and reduce the noise introduced by the wrist and background. With an extensive analysis on HoMG, the proposed model achieved a recognition accuracy of 80.5% on the validation set and 82.1% on the testing set
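The attention branch described above can be sketched as a spatial mask that re-weights the block's feature map, emphasising finger pixels and suppressing wrist and background activations. The shapes and the 1x1-projection weights below are illustrative assumptions, not the paper's architecture.

```python
import numpy as np

# Sketch of an attention branch over a convolutional feature map:
# a 1x1 projection produces a per-pixel score, a sigmoid squashes it
# into (0, 1), and the mask re-weights every channel.

def attention_branch(features, w):
    """features: (H, W, C) feature map; w: (C,) 1x1-projection weights."""
    scores = features @ w                 # (H, W) per-pixel score
    mask = 1.0 / (1.0 + np.exp(-scores))  # sigmoid attention mask
    return features * mask[..., None]     # broadcast mask over channels
```

In a trained network the projection weights would be learned jointly with the rest of the block, so low-mask regions (background, wrist) contribute little to the final classification.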

    Eldo-care: EEG with Kinect sensor based telehealthcare for the disabled and the elderly

    Telehealthcare systems are becoming a massive daily aid for elderly and disabled people. Kinect sensors make remote monitoring easy, and the sensors' data are useful for further improvement of the device. This paper presents the newly developed "Eldo-care" system, designed for the assessment and management of diverse neurological illnesses. The telemedical system monitors the user's psycho-neurological condition. People with disabilities and the elderly frequently experience problems accessing essential services, and researchers today are concentrating on rehabilitative technologies based on human-computer interfaces that are closer to social-emotional intelligence. The goal of the study is to support cognitive rehabilitation of old and disabled persons using machine learning techniques. Human brain activity is observed using electroencephalograms, while user movement is tracked using Kinect sensors. A Chebyshev filter is used for noise reduction and feature extraction. Using an autoencoder, classification is carried out by a convolutional neural network based on transfer learning, with an accuracy of 95% and higher. Applying the suggested system in real time would give older and disabled persons a better quality of life. The proposed device is attached to the subject under monitoring

    Development of tools for the acquisition and logical analysis of 3D video data based on a time-of-flight camera and Actor Prolog

    An approach to intelligent 3D video surveillance based on object-oriented logic programming is proposed. In contrast to conventional 2D video surveillance, 3D vision methods provide reliable recognition of body parts, which enables new problem formulations and practical applications of human behaviour analysis in video surveillance systems. The logical approach to intelligent video surveillance makes it possible to describe complex human behaviour through definitions of simple actions and poses. The goal of this work is to realise these advantages of the logical approach in the field of intelligent 3D video surveillance. This work was supported by RFBR, grant No. 16-29-09626-офи_м
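The approach above composes complex behaviour descriptions from definitions of simple actions and poses. A rough Python analogue of that layering is sketched below; Actor Prolog itself is a logic language, and the predicate names and thresholds here are invented for illustration.

```python
# Illustrative layering: a simple pose predicate on one skeleton frame,
# then a behaviour defined over a window of frames in terms of it.

def hands_above_head(frame):
    """Pose predicate on one frame of 3D joint heights."""
    return (frame["hand_l_y"] > frame["head_y"]
            and frame["hand_r_y"] > frame["head_y"])

def is_signalling(frames, window=5, needed=3):
    """Behaviour: pose holds in most of the last `window` frames."""
    recent = frames[-window:]
    return sum(hands_above_head(f) for f in recent) >= needed
```

In a logic-programming setting these would be clauses rather than functions, and the inference engine would evaluate them over the stream of 3D observations.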

    Gesture imitation and recognition using Kinect sensor and extreme learning machines

    This study presents a framework that recognizes and imitates human upper-body motions in real time. The framework consists of two parts. In the first part, a transformation algorithm is applied to 3D human motion data captured by a Kinect; the algorithm converts the data into the robot's joint angles, and the human upper-body motions are successfully imitated by the NAO humanoid robot in real time. In the second part, a human action recognition algorithm is implemented for upper-body gestures, and a human action dataset is created for the upper-body movements. Each action is performed ten times by twenty-four users, and the collected joint angles are divided into six action classes. Extreme Learning Machines (ELMs) are used to classify the human actions; Feed-Forward Neural Networks (FNNs) and K-Nearest Neighbor (K-NN) classifiers are used for comparison. According to the comparative results, ELMs achieve good human action recognition performance
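An extreme learning machine, as used above, is a single-hidden-layer network whose hidden weights are random and fixed; only the output weights are solved for with a least-squares fit, which makes training very fast. A minimal sketch with illustrative sizes:

```python
import numpy as np

# Minimal extreme learning machine (ELM): random fixed hidden layer,
# closed-form least-squares output weights.

def elm_train(X, y, hidden=50, seed=0):
    rng = np.random.default_rng(seed)
    W = rng.standard_normal((X.shape[1], hidden))  # fixed random input weights
    b = rng.standard_normal(hidden)                # fixed random biases
    H = np.tanh(X @ W + b)                         # random hidden features
    beta, *_ = np.linalg.lstsq(H, y, rcond=None)   # only beta is learned
    return W, b, beta

def elm_predict(X, model):
    W, b, beta = model
    return np.tanh(X @ W + b) @ beta
```

For classification, `y` would be one-hot class targets and the predicted class the argmax of the output; here a scalar regression target keeps the sketch short.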

    Rehabilitation Exergames: use of motion sensing and machine learning to quantify exercise performance in healthy volunteers

    Background: Performing physiotherapy exercises in front of a physiotherapist yields qualitative assessment notes and immediate feedback. However, practicing the exercises at home lacks feedback on how well or poorly patients are performing the prescribed tasks. The absence of proper feedback might lead patients to do the exercises incorrectly, which could worsen their condition. Objective: We propose the use of two machine learning algorithms, namely Dynamic Time Warping (DTW) and Hidden Markov Models (HMM), to quantitatively assess a patient's performance with respect to a reference. Methods: Movement data were recorded using a Kinect depth sensor, capable of detecting 25 joints in the human skeleton model, and were compared to those of a reference. Sixteen participants were recruited to perform four different exercises: shoulder abduction, hip abduction, lunge, and sit-to-stand. Their performance was compared to that of a physiotherapist as a reference. Results: Both algorithms show a similar trend in assessing participants' performance, but their sensitivity differs: DTW was more sensitive to small changes, whereas HMM captured a general view of the performance and was less sensitive to details. Conclusions: The chosen algorithms demonstrated their capacity to objectively assess physical therapy performance. HMM may be more suitable in the early stages of a physiotherapy program to capture and report general performance, whilst DTW could be used later to focus on the detail
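The HMM side of the comparison above scores how well an observed movement sequence fits a model of the reference exercise, typically via the forward algorithm. A small sketch follows; the two-state model and its probabilities are invented for illustration, not the study's trained models.

```python
import math

# Forward algorithm: log-likelihood of an observation sequence under
# a small discrete HMM. Model probabilities here are illustrative.

def hmm_log_likelihood(obs, start, trans, emit):
    """obs: symbol indices; start[i], trans[i][j], emit[i][k]: probabilities."""
    n = len(start)
    # alpha[i] = probability of the prefix so far, ending in state i.
    alpha = [start[i] * emit[i][obs[0]] for i in range(n)]
    for o in obs[1:]:
        alpha = [
            sum(alpha[i] * trans[i][j] for i in range(n)) * emit[j][o]
            for j in range(n)
        ]
    return math.log(sum(alpha))
```

A correctly performed exercise yields a higher log-likelihood under the reference model than a deviating one, which is the quantitative score the paper compares against DTW.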

    Gesture interface for controlling a handheld 3D sensor

    This thesis presents the development of a gesture-based user interface for the operation of handheld 3D scanning devices. The interface allows the operator to engage with the software remotely while walking around the object being scanned, without having to return to the workstation. To this end, we develop a prototype using an Azure Kinect camera pointed at the user. We propose a set of hand gestures, recognised with machine learning algorithms, for triggering momentary actions in the software, and we define interaction metaphors for applying 3D rigid transformations to a virtual object on screen. These components are implemented in a proof-of-concept prototype compatible with Creaform's VXelements software
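Triggering momentary software actions from per-frame gesture classifications, as above, usually requires debouncing so a gesture fires once per hold rather than on every frame. The hold threshold and class labels in this sketch are invented for illustration, not the thesis's implementation.

```python
# Debounced gesture trigger: fire an action only when the classifier
# reports the same gesture for `hold_frames` consecutive frames, and
# fire it exactly once per hold.

class GestureTrigger:
    def __init__(self, hold_frames=5):
        self.hold_frames = hold_frames
        self.current = None   # label seen on the previous frame
        self.count = 0        # consecutive frames with that label

    def update(self, label):
        """Feed one per-frame classification (or None); return the
        gesture to fire, or None."""
        if label == self.current:
            self.count += 1
        else:
            self.current, self.count = label, 1
        if label is not None and self.count == self.hold_frames:
            return label
        return None
```

Releasing the gesture (a None or different label) resets the counter, so holding the hand pose again fires the action again.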

    Portfolio of Electroacoustic Compositions with Commentaries

    This portfolio consists of electroacoustic compositions which were primarily realised through the use of corporeally informed compositional practices. The manner in which a composer interacts with the compositional tools and musical materials at their disposal is a defining factor in the creation of musical works. Although the use of computers in the practice of electroacoustic composition has extended the range of sonic possibilities afforded to composers, it has also had a negative impact on the level of physical interaction that composers have with these musical materials. This thesis is an investigation into the use of mediation technologies with the aim of circumventing issues relating to the physical performance of electroacoustic music. This line of inquiry has led me to experiment with embedded computers, wearable technologies, and a range of various sensors. The specific tools that were used in the creation of the pieces within this portfolio are examined in detail within this thesis. I also provide commentaries and analysis of the eleven electroacoustic works which comprise this portfolio, describing the thought processes that led to their inception, the materials used in their creation, and the tools and techniques that I employed throughout the compositional process