Novel Muscle Monitoring by Radiomyography (RMG) and Application to Hand Gesture Recognition
Conventional electromyography (EMG) measures the continuous neural activity
during muscle contraction, but lacks explicit quantification of the actual
contraction. Mechanomyography (MMG) and accelerometers only measure body
surface motion, while ultrasound, CT-scan and MRI are restricted to in-clinic
snapshots. Here we propose radiomyography (RMG), a novel wearable, touchless modality for continuous muscle actuation sensing that captures both superficial and deep muscle groups. We verified RMG experimentally with a forearm
wearable sensor for detailed hand gesture recognition. We first converted the
radio sensing outputs to the time-frequency spectrogram, and then employed the
vision transformer (ViT) deep learning network as the classification model,
which recognizes 23 gestures with an average accuracy of up to 99% on 8 subjects. Through transfer learning, high adaptivity to user differences and sensor variation was achieved, with an average accuracy of up to 97%. We further demonstrated RMG on eye and leg muscles, achieving high accuracy for eye movement and body posture tracking. RMG can be used with synchronous EMG
to derive stimulation-actuation waveforms for many future applications in
kinesiology, physiotherapy, rehabilitation, and human-machine interfaces.
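The first processing step described above, converting the 1-D radio sensing output into a time-frequency spectrogram before feeding it to the ViT classifier, can be sketched as a short-time Fourier transform. This is a minimal illustration, not the authors' pipeline: the window length, hop size, and sampling rate below are made-up values, and a real RMG system would tune them to the radar frame rate.

```python
import cmath
import math

def stft_spectrogram(signal, win=64, hop=32):
    """Magnitude spectrogram of a 1-D signal via a short-time DFT.

    Window length and hop size are illustrative; a real RMG pipeline
    would tune them to the radar frame rate.
    """
    frames = []
    for start in range(0, len(signal) - win + 1, hop):
        seg = signal[start:start + win]
        # Hann window to reduce spectral leakage
        seg = [s * 0.5 * (1 - math.cos(2 * math.pi * n / (win - 1)))
               for n, s in enumerate(seg)]
        # One-sided DFT magnitudes (bins 0 .. win//2)
        mags = [abs(sum(s * cmath.exp(-2j * math.pi * k * n / win)
                        for n, s in enumerate(seg)))
                for k in range(win // 2 + 1)]
        frames.append(mags)
    return frames  # spectrogram: [num_frames][win//2 + 1]

# A 16 Hz tone sampled at 256 Hz should peak at DFT bin 16 * 64 / 256 = 4
fs = 256
sig = [math.sin(2 * math.pi * 16 * t / fs) for t in range(512)]
spec = stft_spectrogram(sig)
peak_bin = max(range(len(spec[0])), key=lambda k: spec[0][k])
```

The resulting 2-D array of magnitudes can then be treated as an image patch sequence for a vision transformer.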
Collaborative robot control with hand gestures
Double-degree Master's programme with the Université Libre de Tunis. This thesis focuses on hand gesture recognition, proposing an architecture to control a collaborative robot in real time through vision-based hand detection, tracking, and gesture recognition, allowing interaction with an application via hand gestures. The first stage of our system detects and tracks a bare hand against a cluttered background using skin detection and contour comparison. The second stage recognizes hand gestures using a machine learning algorithm. Finally, an interface was developed to control the robot.
Our hand gesture recognition system consists of two parts. In the first part, for every frame captured from the camera, we extract the keypoints of each training image using a machine learning algorithm and assemble the keypoints from every image into a keypoint map. This map serves as the input to our processing algorithm, which uses several methods to recognize the fingers of each hand.
In the second part, we use a 3D camera with infrared capabilities to build a 3D model of the hand for use in our system. We then track and recognize the fingers of each hand, which makes it possible to count the extended fingers and to distinguish each finger's pattern.
An interface to control the robot was built on top of the previous steps, providing real-time processing and a dynamic 3D representation.
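The first-stage skin detection mentioned above is commonly implemented as a per-pixel colour rule. The thesis does not specify its exact rule, so the sketch below uses a widely cited RGB heuristic with illustrative thresholds; contour comparison is not shown.

```python
def is_skin_rgb(r, g, b):
    """Classic RGB skin heuristic. Thresholds are illustrative;
    the thesis's exact rule is not specified."""
    return (r > 95 and g > 40 and b > 20
            and max(r, g, b) - min(r, g, b) > 15
            and abs(r - g) > 15 and r > g and r > b)

def skin_mask(image):
    """image: list of rows of (r, g, b) tuples -> binary mask
    marking candidate hand pixels."""
    return [[1 if is_skin_rgb(*px) else 0 for px in row] for row in image]

# Tiny 2x2 synthetic frame: skin-like vs. background pixels
frame = [
    [(220, 170, 140), (10, 10, 10)],   # skin tone, dark background
    [(80, 90, 100), (230, 180, 150)],  # bluish background, skin tone
]
mask = skin_mask(frame)
```

In a full pipeline, the binary mask would be cleaned with morphological operations and its largest connected contour compared against stored hand contours.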
A Two-Stage Image Frame Extraction Model - ISLKE for Live Gesture Analysis on Indian Sign Language
The new industrial revolution focuses on smart, interconnected technologies, applying robotics, artificial intelligence, machine learning, data analytics, etc. to real-time data to produce value-added products. The way goods are produced now aligns with people's lifestyles, as witnessed in wearable smart devices, digital assistants, self-driving cars, etc. Over the last few years, the true potential of Industry 4.0 has also become evident in the health service domain. In this context, sign language recognition, a breakthrough in the live video processing domain that helps the deaf and mute communities, has grabbed the attention of many researchers. Research insights make it clear that precise extraction and interpretation of gesture data, while addressing the prevailing limitations, is a crucial task. This motivated the development of a unique keyframe extraction model focused on preciseness of interpretation. The proposed model, ISLKE, uses a clustering-based two-stage keyframe extraction process. It was evaluated on daily-usage vocabulary of Indian Sign Language (ISL) and attained an average accuracy of 96% against ground-truth annotations. It was also observed that, with the two-stage approach, filtering of uninformative frames reduced complexity and computational effort. These findings support the further development of commercial communication applications to reach the speech- and hearing-impaired communities.
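A clustering-based two-stage keyframe extraction of the kind described can be sketched as follows. This is a hypothetical reconstruction, not the ISLKE model itself: the histogram features, the duplicate-frame threshold, and the k-means stage are all illustrative choices.

```python
def hist(frame, bins=4):
    """Coarse intensity histogram of a frame (flat list of 0-255 values)."""
    h = [0] * bins
    for v in frame:
        h[min(v * bins // 256, bins - 1)] += 1
    return [c / len(frame) for c in h]

def l1(a, b):
    return sum(abs(x - y) for x, y in zip(a, b))

def extract_keyframes(frames, diff_thresh=0.2, k=2, iters=10):
    # Stage 1: drop uninformative frames whose histogram barely
    # differs from the previously kept frame (near-duplicates).
    kept, last = [], None
    for i, f in enumerate(frames):
        h = hist(f)
        if last is None or l1(h, last) > diff_thresh:
            kept.append((i, h))
            last = h
    # Stage 2: k-means over surviving histograms; the frame closest
    # to each centroid becomes a keyframe.
    cents = [h for _, h in kept[:k]]
    for _ in range(iters):
        groups = [[] for _ in cents]
        for idx, h in kept:
            j = min(range(k), key=lambda c: l1(h, cents[c]))
            groups[j].append((idx, h))
        cents = [[sum(h[d] for _, h in g) / len(g) for d in range(len(cents[0]))]
                 if g else cents[j] for j, g in enumerate(groups)]
    keyframes = [min(g, key=lambda ih: l1(ih[1], cents[j]))[0]
                 for j, g in enumerate(groups) if g]
    return sorted(keyframes)
```

The stage-1 filter is what reduces the computational effort the abstract mentions: clustering only runs on the frames that survive the duplicate check.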
A Multicamera System for Gesture Tracking With Three Dimensional Hand Pose Estimation
The goal of any visual tracking system is to successfully detect and then follow an object of interest through a sequence of images. The difficulty of tracking an object depends on the dynamics, the motion, and the characteristics of the object, as well as on the environment. For example, tracking an articulated, self-occluding object such as a signing hand has proven to be a very difficult problem. The focus of this work is on tracking and pose estimation with applications to hand gesture interpretation. An approach that integrates the simplicity of a region tracker with single-hand 3D pose estimation methods is presented. Additionally, this work delves into the pose estimation problem. This is accomplished both by analyzing hand templates composed of their morphological skeleton and by addressing the skeleton's inherent instability. Ligature points along the skeleton are flagged in order to determine their effect on skeletal instabilities. Tested on real data, the analysis finds that flagging ligature points proportionally increases the match strength of high-similarity image-template pairs by about 6%. The effectiveness of this approach is further demonstrated in a real-time multicamera hand tracking system that tracks hand gestures through three-dimensional space as well as estimates the three-dimensional pose of the hand.
Advances in Integrated Circuits and Systems for Wearable Biomedical Electrical Impedance Tomography
Electrical impedance tomography (EIT) is an impedance mapping technique that can be used to image the inner impedance distribution of the subject under test. It is non-invasive, inexpensive, and radiation-free, while at the same time facilitating long-term, real-time dynamic monitoring. Thus, EIT lends itself particularly well to the development of a bio-signal monitoring/imaging system in the form of wearable technology. This work focuses on EIT system hardware advancement using complementary metal oxide semiconductor (CMOS) technology. It presents the design and testing of application-specific integrated circuits (ASICs) and their successful use in two biomedical applications, namely neonatal lung function monitoring and a human-machine interface (HMI) for prosthetic hand control. Each year fifteen million babies are born prematurely, and up to 30% suffer from lung disease. Although respiratory support, especially mechanical ventilation, can improve their survival, it can also injure their vulnerable lungs, resulting in severe and chronic pulmonary morbidity lasting into adulthood; thus, an integrated wearable EIT system for neonatal lung function monitoring is urgently needed. In this work, two wearable belt systems are presented. The first belt features a miniaturized active electrode module built around an analog front-end ASIC fabricated in a 0.35-µm high-voltage process technology with ±9 V power supplies, occupying a total die area of 3.9 mm². The ASIC offers a high-power active current driver capable of up to 6 mA peak-to-peak output, and a wideband active buffer for EIT recording as well as contact impedance monitoring. The belt has a bandwidth of 500 kHz and an image frame rate of 107 frames/s. To further improve the system, the active electrode module is integrated into one ASIC.
It contains a fully differential current driver, a current-feedback instrumentation amplifier (IA), a digital controller, and multiplexers, with a total die area of 9.6 mm². Compared to the conventional active electrode architecture employed in the first EIT belt, the second belt features a new architecture. It allows programmable, flexible electrode current drive and voltage sense patterns under simple digital control. It has intimate connections to the electrodes for the current drive and to the IA for direct differential voltage measurement, providing a superior common-mode rejection ratio (CMRR) of up to 74 dB; with active gain, the noise level can be reduced by a factor of √3 using the adjacent scan. The second belt has a wider operating bandwidth of 1 MHz and multi-frequency operation. The image frame rate is 122 frames/s, the fastest wearable EIT reported to date. It measures impedance with 98% accuracy and has less than 0.5 Ω and 1° variation across all channels. In addition, the ASIC facilitates several other functionalities to provide supplementary clinical information at the bedside. With the advancement of technology and the ever-increasing fusion of computers and machines into daily life, a seamless HMI system that can recognize hand gestures and motions, allowing the control of robotic machines or prostheses to perform dexterous tasks, is a target of research. Originally developed as an imaging technique, EIT can be combined with machine learning to track bone and muscle movement, towards understanding the human user's intentions and ultimately controlling prosthetic hand applications. For this application, an analog front-end ASIC is designed in a 0.35-µm standard process technology with ±1.65 V power supplies. It comprises a current driver capable of differential drive and a low-noise (9 µVrms) IA with a CMRR of 80 dB. The function modules occupy an area of 0.07 mm².
Using the ASIC, a complete HMI system based on the EIT principle for hand prosthesis control has been presented, and the user's forearm inner bio-impedance redistribution is assessed. Using artificial neural networks, the bio-impedance redistribution can be learned so as to recognize the user's intention in real time for prosthesis operation. In this work, eleven hand motions are designed for prosthesis operation. Experiments with five subjects show that the system can achieve an overall recognition accuracy of 95.8%.
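The adjacent scan referenced above is the classic EIT measurement protocol: current is driven between neighbouring electrodes while voltage is sensed across every other neighbouring pair that does not touch the driven electrodes. A minimal sketch of the pattern enumeration (function name and electrode count are illustrative):

```python
def adjacent_scan(n_electrodes=16):
    """Enumerate (drive_pair, sense_pair) combinations for the classic
    adjacent EIT scan on a ring of electrodes. Each drive pair yields
    n_electrodes - 3 valid sense pairs, for n * (n - 3) measurements total.
    """
    meas = []
    for d in range(n_electrodes):
        drive = (d, (d + 1) % n_electrodes)
        for s in range(n_electrodes):
            sense = (s, (s + 1) % n_electrodes)
            if set(drive) & set(sense):
                continue  # skip sense pairs touching a driven electrode
            meas.append((drive, sense))
    return meas
```

For a 16-electrode belt this yields 16 × 13 = 208 measurements per frame; a programmable pattern controller like the one in the second belt would step through such a list (or an arbitrary user-defined one) in hardware.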
Generating 3D product design models in real-time using hand motion and gesture
This thesis was submitted for the degree of Master of Philosophy and awarded by Brunel University. Three-dimensional product design models are widely used in conceptual design and in the early stage of prototyping during the design process. A product design specification often demands that a substantial number of 3D models be constructed within a short period of time. Current methods begin with designers sketching product concepts in 2D using pencil and paper, which are then translated into 3D models by a designer with CAD expertise using a 3D modelling software package such as Pro Engineer, SolidWorks, AutoCAD, etc. Several novel methods have been used to incorporate hand motion as a way of interacting with computers. There are three main types of technology available to capture motion data, capable of translating human motion into numeric data readable by a computer system: first, glove-based hand gesture systems such as the "Cyberglove", generally used to capture hand gesture and joint angle information; second, full-body motion capture systems, both optical and non-optical; and finally, vision-based gesture recognition systems that capture full degree-of-freedom (DOF) hand motion estimation. There has yet to be a method using any of these input devices to rapidly produce 3D product design models in real time using hand motion and gestures. In this research, a novel method is presented that uses a motion capture system to capture hand gestures and motion in real time and recreate 3D curves and surfaces, which can be translated into 3D product design models. The main aim of this research is to develop a hand motion and gesture-based rapid 3D product modelling method, allowing designers to interactively sketch out 3D concepts in real time using a virtual workspace.
A database of hand signs was built for both architectural hand signs (a preliminary study) and product design hand signs. A marker set model with a total of eight markers (five on the left hand and three on the right hand/marker pen) was designed and used to capture hand gestures with an optical motion capture system. A preliminary testing session was successfully completed to determine whether the motion capture system would be suitable for a real-time application, by effectively modelling a train station in an offline state using hand motion and gesture. An OpenGL software application was programmed using C++ and the Microsoft Foundation Classes, and was used to communicate and pass captured motion information from the EVaRT system to the user.
Exploring the Potential of Wrist-Worn Gesture Sensing
This thesis aims to explore the potential of wrist-worn gesture sensing. There has been a large amount of work on gesture recognition in the past utilizing different kinds of sensors. However, the gesture sets tested across different works were all different, making them hard to compare. There has also not been enough work on understanding what types of gestures are suitable for wrist-worn devices. Our work addresses these two problems and makes two main contributions compared to previous work: the specification of a larger gesture set, generated by combining previous work and verified through an elicitation study; and an evaluation of the potential of gesture sensing with wrist-worn sensors.
We developed a gesture recognition system, WristRec, a low-cost wrist-worn device utilizing bend sensors for gesture recognition. The design of WristRec aims to measure tendon movement at the wrist while people perform gestures. We conducted a four-part study to verify the validity of the approach and the range of gestures that can be detected using a wrist-worn system.
During the initial stage, we verified the feasibility of WristRec using the Dynamic Time Warping (DTW) algorithm to perform gesture classification on a group of 5 gestures, the gesture set of the MYO armband. Next, we conducted an elicitation study to understand the trade-offs between hand, wrist, and arm gestures. The study helped us understand the types of gestures a wrist-worn system should be able to recognize. It also served as the basis of our gesture set and of our evaluation of the gesture sets used in previous research. To evaluate the overall potential of wrist-worn recognition, we explored the design of hardware to recognize gestures by contrasting an inertial measurement unit (IMU)-only recognizer (the Serendipity system of Wen et al.) with our system. We assessed accuracies on a consensus gesture set and on a 27-gesture referent set, both extracted from the results of our elicitation study.
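DTW-based gesture classification of the kind used in the initial WristRec stage can be sketched as a distance computation plus a nearest-neighbour lookup. The sensor traces and gesture labels below are invented for illustration; they are not WristRec data.

```python
def dtw(a, b):
    """Dynamic Time Warping distance between two 1-D sequences:
    the minimum cumulative |a[i] - b[j]| cost over monotone alignments."""
    inf = float("inf")
    n, m = len(a), len(b)
    D = [[inf] * (m + 1) for _ in range(n + 1)]
    D[0][0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            cost = abs(a[i - 1] - b[j - 1])
            D[i][j] = cost + min(D[i - 1][j],      # insertion
                                 D[i][j - 1],      # deletion
                                 D[i - 1][j - 1])  # match
    return D[n][m]

def classify(sample, templates):
    """Nearest-neighbour gesture label under DTW distance."""
    return min(templates, key=lambda lbl: dtw(sample, templates[lbl]))

# Hypothetical bend-sensor traces, one template per gesture
templates = {
    "fist":   [0, 3, 6, 6, 6, 3, 0],
    "spread": [6, 3, 0, 0, 0, 3, 6],
}
label = classify([0, 2, 5, 6, 5, 2, 0], templates)  # a time-warped "fist"
```

Because DTW tolerates local time warping, a template-per-gesture classifier like this needs very little training data, which suits an early feasibility stage.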
Finally, we discuss the implications of our work both for the comparative evaluation of systems and for the design of enhanced hardware sensing.