Real-Time Hand Gesture Recognition Using an EMG Sensor and Amplitude Analysis
Design of a low-cost sensor matrix for use in human-machine interactions on the basis of myographic information
Myographic sensor matrices for human-machine interfaces are often poorly developed and do not push the limits of spatial resolution. Many studies use sensor matrices merely as a tool to access myographic data for intention-prediction algorithms, regardless of human anatomy and the sensor principles used. More sophisticated sensor matrices for myographic human-machine interfaces are needed, and the community has already called for new sensor solutions. This work follows the neuromechanics of the
human and designs customized sensor principles to acquire the
occurring phenomena. Three low-cost sensor modalities
(Electromyography, Mechanomyography, and Force Myography) were
developed in a miniaturized size and tested in a pre-evaluation study. Each of the three sensors captures the characteristic myographic information of its modality. Based on the pre-evaluated sensors, a sensor matrix with 32 exchangeable, high-density sensor modules was designed. The sensor matrix can be applied around the human limbs and takes human anatomy into account. A data transmission protocol was customized for interfacing the sensor matrix to the periphery with
reduced wiring. The designed sensor matrix offers high-density,
multimodal myographic information for human-machine interfaces. The fields of prosthetics and telepresence especially can benefit from the higher spatial resolution of the sensor matrix.
Body-Area Capacitive or Electric Field Sensing for Human Activity Recognition and Human-Computer Interaction: A Comprehensive Survey
Because roughly sixty percent of the human body is water, the body is
inherently conductive: it forms an intrinsic electric field from the body to
its surroundings and deforms the distribution of any existing electric field
nearby. Body-area capacitive sensing, also called body-area electric field
sensing, is becoming a promising alternative for wearable devices to accomplish
certain tasks in human activity recognition and human-computer interaction.
Over the last decade, researchers have explored a wealth of novel sensing
systems backed by the body-area electric field. Despite this pervasive
exploration, however, no comprehensive survey exists to provide an
enlightening guideline, and the variety of hardware implementations, applied
algorithms, and targeted applications makes a systematic overview of the
subject challenging. This paper aims to fill that gap by comprehensively
summarizing the existing work on body-area capacitive sensing so that
researchers can gain a better view of the current state of exploration. To
this end, we first sorted the explorations into three domains according to
the body forms involved: body-part electric field, whole-body electric field,
and body-to-body electric field, and enumerated the state-of-the-art works in
each domain with a detailed survey of the underlying sensing techniques and
targeted applications. We then summarized the three types of sensing
frontends in circuit design, the most critical part of body-area capacitive
sensing, and analyzed the data processing pipeline, categorized into three
kinds of approaches. Finally, we described the challenges and outlooks of
body-area electric field sensing.
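A common readout strategy for loading-mode body-area capacitive sensors, not tied to any specific system in the survey, is to flag deviations of the measured capacitance from a slowly adapting baseline. The sketch below illustrates the idea with made-up numbers; the function name, threshold, and smoothing factor are illustrative assumptions, not part of the survey.

```python
def detect_touch(samples, alpha=0.01, threshold=5.0):
    """Flag samples whose capacitance reading exceeds an adaptive baseline.

    samples   -- raw capacitance readings (e.g. in pF; values are illustrative)
    alpha     -- baseline smoothing factor for tracking slow environmental drift
    threshold -- minimum deviation (same units as samples) counted as a body event
    """
    baseline = samples[0]
    events = []
    for i, c in enumerate(samples):
        if c - baseline > threshold:
            events.append(i)                       # body near the electrode
        else:
            baseline += alpha * (c - baseline)     # adapt only while idle
    return events

# Synthetic trace: idle, then a hand approaches, then idle again.
idle, touch = [10.0] * 50, [25.0] * 10
events = detect_touch(idle + touch + idle)         # flags indices 50-59
```

Freezing the baseline during an event (the `else` branch) is the key design choice: it prevents the baseline from "learning" the touch itself, while still tracking slow drift from temperature and humidity when idle.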
Pervasive Quantified-Self Using Multiple Sensors
The advent of inexpensive commercial sensors and advances in information and communication technology (ICT) have brought forth the era of pervasive Quantified-Self. Automatic diet monitoring is one of the most important aspects of Quantified-Self because it is vital for ensuring the well-being of patients suffering from chronic diseases, as well as for providing a low-cost means of maintaining health for everyone else. Automatic dietary monitoring consists of: a) determining the type and amount of food intake, and b) monitoring eating behavior, i.e., the time, frequency, and speed of eating. Although some techniques exist towards these ends, they suffer from low accuracy and low adherence. To overcome these issues, multiple sensors were utilized, because affordable sensors that capture different aspects of the information have the potential to increase the knowledge available for Quantified-Self. For a), I envision an intelligent dietary monitoring system that automatically identifies food items using knowledge obtained from a visible-spectrum camera and an infrared-spectrum camera. This system outperforms the state-of-the-art systems for cooked food recognition by 25% while also minimizing user intervention. For b), I propose a novel methodology, IDEA, that performs accurate eating-action identification within eating episodes with an average F1-score of 0.92. This is an improvement of 0.11 in precision and 0.15 in recall for the worst-case users as compared to the state-of-the-art. IDEA uses only a single wrist band, which includes four sensors, and provides feedback on eating speed every 2 minutes without obtaining any manual input from the user. (Doctoral Dissertation, Computer Engineering)
Thumbs up, thumbs down: non-verbal human-robot interaction through real-time EMG classification via inductive and supervised transductive transfer learning
In this study, we present a transfer learning method for gesture classification via an inductive and supervised transductive approach, using an electromyographic dataset gathered with the Myo armband. A ternary gesture classification problem is posed over the states 'thumbs up', 'thumbs down', and 'relax', in order to communicate in the affirmative or negative to a machine in a non-verbal fashion. Of the nine statistical learning paradigms benchmarked over 10-fold cross-validation (with three methods of feature selection), an ensemble of Random Forest and Support Vector Machine through voting achieves the best score of 91.74% with a rule-based feature selection method. When new subjects are considered, this machine learning approach fails to generalise to new data, and thus the processes of inductive and supervised transductive transfer learning are introduced with a short calibration exercise (15 s). Without calibration, 5 s of data per class proves strongest for classification (versus one through seven seconds) yet achieves only 55% accuracy; when a short 5 s per-class calibration task is introduced via the suggested transfer method, a Random Forest can then classify unseen data from the calibrated subject at around 97% accuracy, outperforming the 83% accuracy of the proprietary Myo system. Finally, a preliminary application is presented through social interaction with a humanoid Pepper robot, where our approach together with a most-common-class metaclassifier achieves 100% accuracy for all trials of a '20 Questions' game.
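The voting ensemble of Random Forest and SVM evaluated above can be sketched with scikit-learn. The features below are synthetic stand-ins for real Myo EMG statistics (the study's actual features and data are not reproduced here), so the resulting score illustrates the pipeline, not the reported 91.74%.

```python
# Sketch of a soft-voting ensemble of Random Forest and SVM for
# ternary gesture classification, benchmarked with 10-fold CV.
import numpy as np
from sklearn.ensemble import RandomForestClassifier, VotingClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(0)
n_per_class, n_features = 100, 8      # e.g. one feature per Myo EMG channel

# Synthetic features: each gesture class gets a different mean offset.
X = np.vstack([rng.normal(loc=c, scale=1.0, size=(n_per_class, n_features))
               for c in range(3)])
y = np.repeat(["thumbs_up", "thumbs_down", "relax"], n_per_class)

ensemble = VotingClassifier(
    estimators=[("rf", RandomForestClassifier(n_estimators=100, random_state=0)),
                ("svm", SVC(kernel="rbf", probability=True, random_state=0))],
    voting="soft")                    # average the two classifiers' probabilities

scores = cross_val_score(ensemble, X, y, cv=10)   # 10-fold CV, as in the study
print(f"mean 10-fold accuracy: {scores.mean():.3f}")
```

Soft voting requires both estimators to expose `predict_proba`, hence `probability=True` on the SVC; hard voting would instead take a majority over the two predicted labels.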
Exploring the Potential of Wrist-Worn Gesture Sensing
This thesis explores the potential of wrist-worn gesture sensing. There has been a large amount of past work on gesture recognition using many kinds of sensors; however, each study tested a different gesture set, making the results hard to compare. There has also not been enough work on understanding which types of gestures are suitable for
wrist-worn devices. Our work addresses these two problems and makes two main contributions over previous work: the specification of larger gesture sets, generated by combining previous work and verified through an elicitation study; and an evaluation of the potential of gesture sensing with wrist-worn sensors.
We developed a gesture recognition system, WristRec, which is a low-cost wrist-worn device utilizing bend sensors for gesture recognition. The design of WristRec aims to measure the tendon movement at the wrist while people perform gestures. We conducted a four-part study to verify the validity of the approach and the extent of gestures which can be detected using a wrist-worn system.
During the initial stage, we verified the feasibility of WristRec by using the Dynamic Time Warping (DTW) algorithm to classify a group of 5 gestures, the gesture set of the MYO armband. Next, we conducted an elicitation study to understand the trade-offs between hand, wrist, and arm gestures. The study helped us understand the types of gestures a wrist-worn system should be able to recognize; it also served as the basis of our gesture set and of our evaluation of the gesture sets used in previous research. To evaluate the overall potential of wrist-worn recognition, we explored the design of hardware for gesture recognition by contrasting an inertial measurement unit (IMU)-only recognizer (the Serendipity system of Wen et al.) with our system. We assessed accuracies on a consensus gesture set and on a 27-gesture referent set, both extracted from the results of our elicitation study.
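DTW-based gesture classification of the kind used in the feasibility stage is typically done by nearest-neighbour matching against one template per gesture. The sketch below uses synthetic 1-D traces standing in for bend-sensor signals; the gesture names and signals are invented for illustration, not taken from WristRec.

```python
# Minimal nearest-neighbour gesture classification with Dynamic Time Warping.
import numpy as np

def dtw_distance(a, b):
    """Classic O(len(a) * len(b)) DTW distance between two 1-D sequences."""
    n, m = len(a), len(b)
    cost = np.full((n + 1, m + 1), np.inf)
    cost[0, 0] = 0.0
    for i in range(1, n + 1):
        for j in range(1, m + 1):
            d = abs(a[i - 1] - b[j - 1])
            cost[i, j] = d + min(cost[i - 1, j],      # insertion
                                 cost[i, j - 1],      # deletion
                                 cost[i - 1, j - 1])  # match
    return cost[n, m]

def classify(query, templates):
    """Return the gesture whose template is DTW-closest to the query."""
    return min(templates, key=lambda g: dtw_distance(query, templates[g]))

t = np.linspace(0, 2 * np.pi, 60)
templates = {"wave": np.sin(t), "fist": np.clip(np.cos(t), 0, 1)}
query = np.sin(t * 1.1)               # a slightly time-warped "wave"
print(classify(query, templates))     # prints "wave"
```

DTW's appeal for this task is that it tolerates gestures performed at different speeds, since the warping path absorbs local time shifts that would defeat a plain Euclidean comparison.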
Finally, we discuss the implications of our work both for the comparative evaluation of systems and for the design of enhanced hardware sensing.
Recent Advances in Motion Analysis
The advances in the technology and methodology for human movement capture and analysis over the last decade have been remarkable. Besides established approaches for kinematic, dynamic, and electromyographic (EMG) analysis carried out in the laboratory, more recently developed devices, such as wearables, inertial measurement units, ambient sensors, and cameras or depth sensors, have been adopted on a wide scale. Furthermore, computational intelligence (CI) methods, such as artificial neural networks, have recently emerged as promising tools for the development and application of intelligent systems in motion analysis. Thus, the synergy of classic instrumentation with novel smart devices and techniques has created unique capabilities for the continuous monitoring of motor behaviors in different fields, such as clinics, sports, and ergonomics. However, real-time sensing, signal processing, human activity recognition, and the characterization and interpretation of motion metrics and behaviors from sensor data still represent challenging problems, not only in laboratories but also at home and in the community. This book addresses open research issues related to the improvement of classic approaches and the development of novel technologies and techniques in the domain of motion analysis across its various fields of application.
Requirement analysis and sensor specifications – First version
In this first version of the deliverable, we make the following contributions: to design the
WEKIT capturing platform and the associated experience capturing API, we use a
methodology for system engineering that is relevant for different domains such as: aviation,
space, and medical and different professions such as: technicians, astronauts, and medical
staff. Furthermore, within the methodology, we explore the system engineering process and how
it can be used in the project to support the different work packages and, more importantly,
the different deliverables that will follow the current one.
Next, we provide a mapping of high level functions or tasks (associated with experience
transfer from expert to trainee) to low level functions such as: gaze, voice, video, body
posture, hand gestures, bio-signals, fatigue levels, and location of the user in the
environment. In addition, we link the low level functions to their associated sensors.
Moreover, we provide a brief overview of the state-of-the-art sensors in terms of their
technical specifications, possible limitations, standards, and platforms.
We outline a set of recommendations pertaining to the sensors that are most relevant for
the WEKIT project taking into consideration the environmental, technical and human
factors described in other deliverables. We recommend the Microsoft HoloLens (for augmented
reality glasses), the MyndBand with NeuroSky chipset (for EEG), the Microsoft Kinect and Lumo Lift
(for body posture tracking), and the Leap Motion, Intel RealSense, and Myo armband (for hand
gesture tracking). For eye tracking, an existing eye-tracking system can be customised to
complement the augmented reality glasses, and the built-in microphone of the augmented
reality glasses can capture the expert's voice. We propose a modular approach for the design
of the WEKIT experience capturing system, and recommend that the capturing system
should have sufficient storage or transmission capabilities.
Finally, we highlight common issues associated with the use of the different sensors. We
consider that this set of recommendations can be useful for the design and integration of the
WEKIT capturing platform and the WEKIT experience capturing API, expediting the
selection of the combination of sensors to be used in the first prototype.