
    Moment Invariant Features Extraction for Hand Gesture Recognition of Sign Language based on SIBI

    The Myo Armband has become an immersive technology that helps deaf people communicate with each other. A problem with the Myo sensor is its unstable clock rate, which yields data of different lengths for the same time period, even for the same gesture. This research proposes the moment invariant method to extract features from the Myo sensor data; the method reduces the amount of data and produces feature vectors of uniform length. The research is user-dependent, in keeping with the characteristics of the Myo Armband. Testing was performed using the alphabet A to Z of SIBI, the Indonesian Sign Language, with both static and dynamic finger movements, giving 26 alphabet classes with 10 variants in each class. Min-max normalization is used to guarantee the range of the data, and the K-Nearest Neighbor method is used to classify the dataset. Performance analysis with leave-one-out validation produced an accuracy of 82.31%. A more advanced classification method is required to improve the detection results.
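    As a rough illustration of the evaluation pipeline this abstract describes (not the authors' code; the dataset, feature dimension, and choice of k below are assumptions), the following sketch combines min-max scaling, k-NN classification, and leave-one-out validation using scikit-learn:

```python
# Minimal sketch: min-max normalization + k-NN with leave-one-out validation.
# X and y are hypothetical placeholders for the moment-invariant features
# (assumed 7-dimensional here) extracted from Myo sensor data.
import numpy as np
from sklearn.preprocessing import MinMaxScaler
from sklearn.neighbors import KNeighborsClassifier
from sklearn.model_selection import LeaveOneOut, cross_val_score
from sklearn.pipeline import make_pipeline

rng = np.random.default_rng(0)
X = rng.normal(size=(260, 7))        # 26 classes x 10 variants, 7 features (assumed)
y = np.repeat(np.arange(26), 10)     # alphabet class labels A..Z

# Scale each feature to [0, 1] inside the pipeline so scaling is re-fit per fold,
# then classify with k-NN (k = 3 is an illustrative choice).
model = make_pipeline(MinMaxScaler(), KNeighborsClassifier(n_neighbors=3))
scores = cross_val_score(model, X, y, cv=LeaveOneOut())
print(f"leave-one-out accuracy: {scores.mean():.4f}")
```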

    Sensing via signal analysis, analytics, and cyberbiometric patterns

    Internet-connected, or Internet of Things (IoT), sensor technologies have been increasingly incorporated into everyday technology and processes. Their functions are situationally dependent, and they have been used for vital recordings such as electrocardiograms, gait analysis and step counting, fall detection, and environmental analysis. For instance, environmental sensors, which exist in various technologies, are used to monitor numerous domains, including but not limited to pollution, water quality, and the presence of biota. Past research into IoT sensors has varied depending on the technology. For instance, previous environmental gas sensor IoT research has focused on (i) the development of these sensors for increased sensitivity and longer lifetimes, (ii) integration of these sensors into sensor arrays to combat cross-sensitivity and background interferences, and (iii) sensor network development, including communication between widely dispersed sensors in a large-scale environment. IoT inertial measurement units (IMUs), such as accelerometers and gyroscopes, have previously been researched for gait analysis, movement detection, and gesture recognition, which are often related to human-computer interfaces (HCI). Methods of IoT device feature-based pattern recognition for machine learning (ML) and artificial intelligence (AI) are frequently investigated as well, including primitive classification methods and deep learning techniques. The result of this research gives insight into each of these topics individually, e.g., using a specific sensor technology to detect carbon monoxide in an indoor environment, or using accelerometer readings for gesture recognition. Less research has been performed on the systems aspects of the IoT sensors themselves. However, an important part of attaining overall situational awareness is authenticating the surroundings, which in the case of IoT means the individual sensors, the humans interacting with the sensors, and other elements of the surroundings. There is a clear opportunity for the systematic evaluation of the identity and performance of an IoT sensor/sensor array within a system that is to be utilized for "full situational awareness". This awareness may include (i) non-invasive diagnostics (i.e., what is occurring inside the body), (ii) exposure analysis (i.e., what has gone into the body through both respiratory and eating/drinking pathways), and (iii) potential risk of exposure (i.e., what the body is exposed to environmentally). Simultaneously, the system has the capability to harbor security measures through the same situational assessment in the form of multiple levels of biometrics. Through the interconnective abilities of the IoT sensors, it is possible to integrate these capabilities into one portable, hand-held system. The system will exist within a "magic wand", which will be used to collect the various data needed to assess the environment of the user, both inside and outside of their body. The device can also be used to authenticate the user, as well as the system components, to discover potential deception within the system. This research introduces levels of biometrics for various scenarios through the investigation of challenge-based biometrics; that is, biometrics based upon how the sensor, user, or subject of study responds to a challenge.
These will be applied to multiple facets of "situational awareness" for living beings, non-human beings, and non-living items or objects (which we have termed "abiometrics"). Gesture recognition for intent of sensing was first investigated as a means of deliberate activation of sensors/sensor arrays for situational awareness while providing a level of user authentication through biometrics. Equine gait analysis was examined next, and the level of injury in the lame limbs of the horse was quantitatively measured and classified using data from IoT sensors. Finally, a method of evaluating the identity and health of a sensor/sensor array was examined through different challenges to their environments.
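    The challenge-response idea behind this "abiometrics" concept can be sketched as follows; all names, the similarity metric, and the threshold are illustrative assumptions, not the thesis's method:

```python
# Hypothetical sketch of challenge-based sensor authentication: a known
# stimulus (challenge) is applied to a sensor, and its measured response is
# compared against an enrolled response template.
import numpy as np

def response_similarity(measured: np.ndarray, template: np.ndarray) -> float:
    """Normalized cross-correlation between measured and enrolled responses."""
    m = (measured - measured.mean()) / (measured.std() + 1e-12)
    t = (template - template.mean()) / (template.std() + 1e-12)
    return float(np.dot(m, t) / len(m))

def authenticate_sensor(measured: np.ndarray, template: np.ndarray,
                        threshold: float = 0.9) -> bool:
    """Accept the sensor's identity only if its challenge response matches closely."""
    return response_similarity(measured, template) >= threshold
```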

    Time-Elastic Generative Model for Acceleration Time Series in Human Activity Recognition

    Body-worn sensors in general, and accelerometers in particular, have been widely used to detect human movements and activities. The execution of each type of movement by each particular individual generates sequences of time series of sensed data from which specific movement-related patterns can be assessed. Several machine learning algorithms have been applied over windowed segments of sensed data to detect such patterns in activity recognition, based on intermediate features (either hand-crafted or automatically learned from data). The underlying assumption is that the computed features will capture statistical differences that can properly classify different movements and activities after a training phase based on sensed data. In order to achieve high accuracy and recall rates (and to guarantee the generalization of the system to new users), the training data have to contain enough information to characterize all possible ways of executing the activity or movement to be detected. This may imply large amounts of data and a complex, time-consuming training phase, which has been shown to be even more relevant when the optimal features are learned automatically. In this paper, we present a novel generative model that is able to generate sequences of time series characterizing a particular movement, based on the time-elasticity properties of the sensed data. The model is used to train a stack of auto-encoders in order to learn the particular features able to detect human movements. The results of movement detection using a newly generated database with information on five users performing six different movements are presented. The generalization of results using an existing database is also presented in the paper. The results show that the proposed mechanism is able to obtain acceptable recognition rates (F = 0.77) even when different people execute a different sequence of movements on different hardware.
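    The core idea of time-elastic generation can be illustrated with a much simpler sketch than the paper's model: new training sequences are produced by resampling a seed acceleration series along a random, strictly increasing time warp, preserving the movement's shape but not its exact timing. The warping scheme and parameters below are assumptions for illustration only:

```python
# Sketch of time-elastic data augmentation for acceleration time series.
import numpy as np

def time_warp(series: np.ndarray, strength: float = 0.2, rng=None) -> np.ndarray:
    """Return a time-elastic variant of a 1-D series via random monotonic warping."""
    if rng is None:
        rng = np.random.default_rng()
    n = len(series)
    # Random positive increments give a strictly increasing warp function.
    increments = np.clip(1.0 + strength * rng.standard_normal(n), 0.1, None)
    warp = np.cumsum(increments)
    warp = (warp - warp[0]) / (warp[-1] - warp[0]) * (n - 1)  # rescale to [0, n-1]
    # Resample the series along the warped time axis.
    return np.interp(np.arange(n), warp, series)

seed = np.sin(np.linspace(0, 4 * np.pi, 200))   # stand-in for an accelerometer trace
augmented = [time_warp(seed, rng=np.random.default_rng(i)) for i in range(100)]
```

    Variants generated this way could then serve as training data for a stack of auto-encoders, as the abstract describes.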

    On the role of gestures in human-robot interaction

    This thesis investigates the gestural interaction problem, and in particular the use of gestures for human-robot interaction. The lack of a clear problem statement and a common terminology has resulted in a fragmented field of research in which building upon prior work is rare. The scope of the research presented in this thesis therefore consists of laying the foundation to help the community build a more homogeneous research field. The main contributions of this thesis are twofold: (i) a taxonomy for defining gestures; and (ii) an engineering definition of the gestural interaction problem. These contributions result in a schema that represents the existing literature in a more organic way, helping future researchers identify existing technologies and applications, supported by an extensive literature review. Furthermore, the defined problem has been studied in two of its specializations: (i) direct control and (ii) teaching of a robotic manipulator, leading to the development of technological solutions for gesture sensing, detection, and classification, which can potentially be applied to other contexts.

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, and therefore some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion-tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness, and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for the capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that degrade the signal over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography sensors (MMGs). The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe the muscular contractions in the forearm that are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several pattern recognition methods were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
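    The reported LDA/SVM comparison can be sketched roughly as follows; this is not the thesis code, and the window shape, feature set, and data are illustrative assumptions:

```python
# Sketch: windowed MMG signals reduced to simple per-channel features,
# then classified with LDA and an SVM for comparison.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

def mmg_features(window: np.ndarray) -> np.ndarray:
    """RMS and mean absolute value per channel for a (samples, 6) MMG window."""
    rms = np.sqrt(np.mean(window ** 2, axis=0))
    mav = np.mean(np.abs(window), axis=0)
    return np.concatenate([rms, mav])

rng = np.random.default_rng(0)
windows = rng.normal(size=(600, 200, 6))   # 600 hypothetical windows, 6 MMG channels
X = np.array([mmg_features(w) for w in windows])
y = rng.integers(0, 12, size=600)          # 12 gesture labels (placeholder)

for name, clf in [("LDA", LinearDiscriminantAnalysis()), ("SVM", SVC(kernel="rbf"))]:
    print(name, cross_val_score(clf, X, y, cv=5).mean())
```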
It has previously been noted that MMG sensors are susceptible to motion-induced interference; this thesis established that arm pose also changes the measured signal. A new method of fusing IMU and MMG data is introduced to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that naturally indicates intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent, and as the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
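    The thesis proposes its own orientation estimation algorithm; as a generic illustration of the kind of IMU orientation estimation at issue, a standard complementary filter blends short-term gyroscope integration with the accelerometer's gravity-based tilt for long-term stability:

```python
# Standard complementary filter (illustrative only, not the thesis algorithm).
import numpy as np

def complementary_filter(gyro, accel, dt=0.01, alpha=0.98):
    """Estimate pitch (rad) from gyro pitch rate (rad/s) and 3-axis accel samples."""
    pitch = 0.0
    estimates = []
    for w, a in zip(gyro, accel):
        # Tilt angle implied by the gravity vector measured by the accelerometer.
        accel_pitch = np.arctan2(-a[0], np.hypot(a[1], a[2]))
        # Trust the integrated gyro in the short term, the accelerometer long term.
        pitch = alpha * (pitch + w * dt) + (1 - alpha) * accel_pitch
        estimates.append(pitch)
    return np.array(estimates)
```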