232 research outputs found

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators can function with complete autonomy, so some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors, and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments, due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for capturing human intent and commands. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors that degrade the signal over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous IMU configurations to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several pattern recognition methods were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, an accuracy of 95.6% was achieved in 5-gesture classification.
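    As a rough illustration of the pattern recognition stage described in this abstract, the following sketch trains Linear Discriminant Analysis and Support Vector Machine classifiers on windowed multi-channel signals. The per-channel RMS and mean-absolute-value features, the window dimensions, and the synthetic data are assumptions made for illustration; the thesis's actual feature set is not specified in the abstract.

```python
# Hedged sketch: gesture classification from windowed MMG features with
# LDA and SVM, in the spirit of the pipeline described above. Feature
# choice (per-channel RMS and MAV) and all shapes are assumptions.
import numpy as np
from sklearn.discriminant_analysis import LinearDiscriminantAnalysis
from sklearn.svm import SVC
from sklearn.model_selection import train_test_split
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n_windows, n_channels, win_len, n_gestures = 1200, 6, 200, 12

# Placeholder data standing in for segmented 6-channel MMG windows.
X_raw = rng.normal(size=(n_windows, n_channels, win_len))
y = rng.integers(0, n_gestures, size=n_windows)

# Simple time-domain features: RMS and mean absolute value per channel.
rms = np.sqrt((X_raw ** 2).mean(axis=2))
mav = np.abs(X_raw).mean(axis=2)
X = np.hstack([rms, mav])

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)

for name, clf in [("LDA", LinearDiscriminantAnalysis()),
                  ("SVM", SVC(kernel="rbf"))]:
    clf.fit(X_tr, y_tr)
    print(name, "accuracy:", accuracy_score(y_te, clf.predict(X_te)))
```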
It has previously been noted that MMG sensors are susceptible to motion-induced interference. The thesis further established that arm pose also changes the measured signal. This thesis introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb that naturally indicates intent to perform a specific hand pose, and of triggering that pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to significantly impact the quality of life of prosthetic users and others.
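    The thesis proposes its own orientation estimation algorithm, which the abstract does not detail. Purely as a generic stand-in, the sketch below shows a standard complementary filter that fuses integrated gyroscope rates with accelerometer tilt estimates; the blend weight, sampling rate, and synthetic data are assumptions.

```python
# Hedged sketch of IMU orientation estimation via a complementary filter,
# offered only as a generic stand-in for the (unspecified) algorithm the
# thesis proposes. The 0.98 blend weight and sensor values are assumptions.
import numpy as np

def complementary_filter(gyro, accel, dt, alpha=0.98):
    """Estimate roll/pitch (rad) from gyro rates (rad/s) and accel (m/s^2)."""
    roll = pitch = 0.0
    out = []
    for (gx, gy, _), (ax, ay, az) in zip(gyro, accel):
        # Accelerometer-only tilt estimate (valid when acceleration ~ gravity).
        acc_roll = np.arctan2(ay, az)
        acc_pitch = np.arctan2(-ax, np.hypot(ay, az))
        # Blend integrated gyro (good short-term) with accel (good long-term).
        roll = alpha * (roll + gx * dt) + (1 - alpha) * acc_roll
        pitch = alpha * (pitch + gy * dt) + (1 - alpha) * acc_pitch
        out.append((roll, pitch))
    return out

# Toy usage with synthetic, stationary data at an assumed 100 Hz.
n = 500
gyro = np.zeros((n, 3))
accel = np.tile([0.0, 0.0, 9.81], (n, 1))
print(complementary_filter(gyro, accel, dt=0.01)[-1])
```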

    The "Federica" hand: a simple, very efficient prothesis

    Hand prostheses partially restore hand appearance and functionality. Not everyone can afford expensive prostheses, and many low-cost prostheses have been proposed. In particular, 3D printers have provided great opportunities by simplifying the manufacturing process and reducing costs. Generally, active prostheses use multiple motors for finger movement and are controlled by electromyographic (EMG) signals. The "Federica" hand is a single-motor prosthesis, equipped with an adaptive grasp and controlled by a force-myographic signal. The "Federica" hand is 3D printed and has an anthropomorphic morphology with five fingers, each consisting of three phalanges. The movement generated by a single servomotor is transmitted to the fingers by inextensible tendons that form a closed chain; practically, no springs are used for passive hand opening. A differential mechanical system simultaneously distributes the motor force in predefined portions to each finger, regardless of their actual positions. Proportional control of hand closure is achieved by measuring the contraction of residual limb muscles by means of a force sensor, replacing the EMG. The electrical current of the servomotor is monitored to provide the user with sensory feedback of the grip force, through a small vibration motor. A simple Arduino board was adopted as the processing unit. The differential mechanism guarantees an efficient transfer of mechanical energy from the motor to the fingers and a secure grasp of any object, regardless of its shape and deformability. The force sensor, being extremely thin, can be easily embedded into the prosthesis socket and positioned over both muscles and tendons; it offers some advantages over the EMG as it does not require any electrical contact or signal processing to extract information about the muscle contraction intensity. The grip speed is high enough to allow the user to grab objects on the fly: from the muscle trigger until complete hand closure, "Federica" takes about half a second. The cost of the device is about US$100. Preliminary tests carried out on a patient with transcarpal amputation showed high performance in controlling the prosthesis after a very rapid training session. The "Federica" hand turned out to be a lightweight, low-cost and extremely efficient prosthesis. The project is intended to be open source: all the information needed to produce the prosthesis (e.g. CAD files, circuit schematics, software) can be downloaded from a public repository, thus allowing everyone to use the "Federica" hand and to customize or improve it.
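    The control scheme described above (force sensor in, proportional servo closure out, current-based vibration feedback) can be sketched as a simple loop. The following is a hedged illustration only: the ADC range, calibration thresholds, and all hardware-access functions (read_force_adc, read_motor_current, set_servo, set_vibration) are hypothetical stubs, not the project's actual Arduino code.

```python
# Hedged sketch of proportional hand closure from a force (FSR) reading,
# with motor-current-driven vibration feedback. Hardware access is faked
# with stubs; thresholds and the 0..1023 ADC range are assumptions.
import random
import time

def read_force_adc():      # stub for the socket-mounted force sensor
    return random.randint(0, 1023)

def read_motor_current():  # stub for servo current sensing (arbitrary units)
    return random.uniform(0.0, 2.0)

def set_servo(angle_deg):  # stub for driving the single servomotor
    print(f"servo -> {angle_deg:5.1f} deg")

def set_vibration(level):  # stub for the small vibration motor (0..1)
    print(f"vibe  -> {level:.2f}")

REST_ADC, FULL_ADC = 100, 900   # assumed contraction calibration points

for _ in range(5):
    force = read_force_adc()
    # Proportional mapping: contraction intensity -> hand closure fraction.
    closure = min(max((force - REST_ADC) / (FULL_ADC - REST_ADC), 0.0), 1.0)
    set_servo(closure * 180.0)
    # Grip-force feedback: scale vibration with measured motor current.
    set_vibration(min(read_motor_current() / 2.0, 1.0))
    time.sleep(0.05)
```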

    A Piezoresistive Sensor to Measure Muscle Contraction and Mechanomyography

    Measurement of muscle contraction is mainly achieved through electromyography (EMG) and is an area of interest for many biomedical applications, including prosthesis control and human-machine interfaces. However, EMG has some drawbacks, and there are alternative methods for measuring muscle activity, such as monitoring the mechanical variations that occur during contraction. In this study, a new, simple, non-invasive sensor based on a force-sensitive resistor (FSR), able to measure muscle contraction, is presented. The sensor, applied to the skin through a rigid dome, senses the mechanical force exerted by the underlying contracting muscles. Although FSR creep causes output drift, it was found that appropriate FSR conditioning reduces the drift by fixing the voltage across the FSR and provides a voltage output proportional to force. In addition to the larger contraction signal, the sensor was able to detect the mechanomyogram (MMG), i.e., the small vibrations that occur during muscle contraction. The frequency response of the FSR sensor was found to be wide enough to correctly measure the MMG. Simultaneous recordings from the flexor carpi ulnaris showed a high correlation (Pearson's r > 0.9) between the FSR output and the EMG linear envelope. Preliminary validation tests on healthy subjects showed the ability of the FSR sensor, used in place of the EMG, to proportionally control a hand prosthesis, achieving comparable performance.
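    To make the two signal components concrete, the sketch below separates a slow contraction envelope from an MMG-band vibration in a single sensor stream using standard Butterworth filters. The sampling rate and cutoff frequencies (5 Hz low-pass, 5-100 Hz band-pass) are assumptions chosen for illustration, not values taken from the paper.

```python
# Hedged sketch: splitting one FSR output stream into a slow contraction
# component and an MMG vibration band with simple digital filters. The
# cutoffs and sampling rate are illustrative assumptions.
import numpy as np
from scipy.signal import butter, filtfilt

fs = 1000  # assumed sampling rate, Hz
t = np.arange(0, 2, 1 / fs)
# Synthetic stand-in signal: slow contraction ramp plus 25 Hz "MMG" ripple.
signal = np.clip(t - 0.5, 0, None) + 0.05 * np.sin(2 * np.pi * 25 * t)

b_lo, a_lo = butter(4, 5 / (fs / 2), btype="low")                     # force
b_bp, a_bp = butter(4, [5 / (fs / 2), 100 / (fs / 2)], btype="band")  # MMG

contraction = filtfilt(b_lo, a_lo, signal)  # slow contraction envelope
mmg = filtfilt(b_bp, a_bp, signal)          # vibration component
print(contraction[-5:], mmg[-5:])
```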

    Design of a low-cost sensor matrix for use in human-machine interactions on the basis of myographic information

    Myographic sensor matrices in the field of human-machine interfaces are often poorly developed and do not push the limits of spatial resolution. Many studies use sensor matrices merely as a tool to access myographic data for intention prediction algorithms, regardless of human anatomy and the sensor principles used. More sophisticated sensor matrices are essential for myographic human-machine interfaces, and the community has already called for new sensor solutions. This work follows human neuromechanics and designs customized sensor principles to capture the phenomena that occur. Three low-cost sensor modalities (Electromyography, Mechanomyography, and Force Myography) were developed in a miniaturized size and tested in a pre-evaluation study. Each of the three sensors captures the characteristic myographic information of its modality. Based on the pre-evaluated sensors, a sensor matrix with 32 exchangeable, high-density sensor modules was designed. The sensor matrix can be applied around the human limbs and takes human anatomy into account. A data transmission protocol was customized for interfacing the sensor matrix to the periphery with reduced wiring. The designed sensor matrix offers high-density, multimodal myographic information for the field of human-machine interfaces. The fields of prosthetics and telepresence in particular can benefit from the higher spatial resolution of the sensor matrix.
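    The abstract mentions a customized data transmission protocol with reduced wiring but gives no wire format. Purely as a hypothetical illustration of how 32 modules with three modalities each might be framed over one link, the sketch below packs samples behind a sync word, sequence number, and checksum; every field in this layout is an invention for the example, not the paper's protocol.

```python
# Hedged sketch of a possible framing scheme for streaming multimodal
# samples from 32 sensor modules; header layout, field widths, and the
# checksum are invented for illustration.
import struct

N_MODULES = 32  # one (EMG, MMG, FMG) triple per module, 16 bits each

def pack_frame(seq, samples):
    """samples: list of 32 (emg, mmg, fmg) tuples of uint16."""
    flat = [v for triple in samples for v in triple]
    body = struct.pack("<H" + "H" * 3 * N_MODULES, seq, *flat)
    checksum = sum(body) & 0xFF
    return b"\xAA\x55" + body + bytes([checksum])  # sync + payload + sum

def unpack_frame(frame):
    assert frame[:2] == b"\xAA\x55" and sum(frame[2:-1]) & 0xFF == frame[-1]
    seq, *flat = struct.unpack("<H" + "H" * 3 * N_MODULES, frame[2:-1])
    return seq, [tuple(flat[i:i + 3]) for i in range(0, len(flat), 3)]

frame = pack_frame(1, [(100, 200, 300)] * N_MODULES)
print(unpack_frame(frame)[0], len(frame), "bytes per frame")
```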

    Configuring Corporeality: Performing bodies, vibrations and new musical instruments.

    How should the relationship between human bodies, sound, and technological instruments in musical performance be defined? This enquiry investigates the issue through an iterative mode of research. Aesthetic and technical insights on sound and body art performance with new musical instruments combine with analytical views on technological embodiment in philosophy and cultural studies. The focus is on corporeality: the physiological, phenomenological and cultural basis of embodied practices. The thesis proposes configuration as an analytical device and a blueprint for artistic creation. Configuration defines the relationship of the human being and technology as one where they affect each other's properties through a continuous, situated negotiation. In musical performance, this involves a performer's intuition, cognition and sensorimotor skills, an instrument's material, musical and computational properties, and sound's vibrational and auditive qualities. Two particular kinds of configuration feature in this enquiry. One arises from an experiment on the effect of vibration on the sensorimotor system and is fully developed through a subsequent installation for one visitor at a time. The other emerges from a scientific study of gesture expressivity through muscle physiological sensing and is consolidated into an ensuing body art performance for sound and light. Both artworks rely upon intensely intimate sensorial and physical experiences, uses and abuses of the performer's body, and bioacoustic sound feedback as a material force. This work contends that particular configurations in musical performance reinforce, alter or disrupt the societal criteria against which human bodies and technologies are assessed. Its contributions are: the notion of configuration, which affords an understanding of human-machine co-dependence and its politics; two sound-based artworks, joining and expanding musical performance and body art; and two experiments, with their hardware and software tools, providing insights into physiological computing methods for corporeal human-computer interaction.

    Proficiency-aware systems

    In an increasingly digital world, technological developments such as data-driven algorithms and context-aware applications create opportunities for novel human-computer interaction (HCI). We argue that these systems have the latent potential to stimulate users and encourage personal growth. However, users increasingly rely on the intelligence of interactive systems. Thus, it remains a challenge to design for proficiency awareness, which essentially demands increased user attention whilst preserving user engagement. Designing and implementing systems that allow users to become aware of their own proficiency and encourage them to recognize learning benefits is the primary goal of this research. In this thesis, we introduce the concept of proficiency-aware systems as one solution. In our definition, proficiency-aware systems use estimates of the user's proficiency to tailor the interaction in a domain and facilitate a reflective understanding of this proficiency. We envision that proficiency-aware systems leverage collected data for learning benefit. Here, we see self-reflection as key for users to become aware of the efforts necessary to advance their proficiency. A key challenge for proficiency-aware systems is the fact that users often have a self-perception of their proficiency that differs from the system's estimate. The benefits of personal growth and of advancing one's repertoire might not be apparent to users, alienating them and possibly leading them to abandon the system. To tackle this challenge, this work does not rely on learning strategies but rather focuses on the capabilities of interactive systems to provide users with the necessary means to reflect on their proficiency, such as showing calculated text difficulty to a newspaper editor or visualizing muscle activity to a passionate sportsperson. We first elaborate on how proficiency can be detected and quantified in the context of interactive systems using physiological sensing technologies. Through developing interaction scenarios, we demonstrate the feasibility of gaze- and electromyography-based proficiency-aware systems by utilizing machine learning algorithms that can estimate users' proficiency levels for stationary vision-dominant tasks (reading, information intake) and dynamic manual tasks (playing instruments, fitness exercises). Secondly, we show how to facilitate proficiency awareness for users, including the design challenges of when and how to communicate proficiency. We complement this second part by highlighting the necessity of toolkits for sensing modalities to enable the implementation of proficiency-aware systems for a wide audience. In this thesis, we contribute a definition of proficiency-aware systems, which we illustrate by designing and implementing interactive systems. We derive technical requirements for real-time, objective proficiency assessment and identify design qualities for communicating proficiency through user reflection. We summarize our findings in a set of design and engineering guidelines for proficiency awareness in interactive systems, highlighting that proficiency feedback makes performance interpretable for the user.
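    As a minimal sketch of the proficiency estimation idea, assuming invented gaze and EMG features (mean fixation duration, saccade rate, EMG RMS) and a three-level proficiency label, the following trains a standard classifier on placeholder data; the thesis's actual features and models are not reproduced here.

```python
# Hedged sketch: estimating a user's proficiency level from physiological
# features with a generic classifier. Features, labels, and data are all
# illustrative assumptions, not the thesis's setup.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(1)
n = 300
# Placeholder features: [mean fixation ms, saccades/s, EMG RMS].
X = rng.normal(loc=[250, 3.0, 0.4], scale=[60, 0.8, 0.1], size=(n, 3))
y = rng.integers(0, 3, size=n)  # 0=novice, 1=intermediate, 2=expert

clf = RandomForestClassifier(n_estimators=100, random_state=0)
print("CV accuracy:", cross_val_score(clf, X, y, cv=5).mean())
```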

    Classification of 41 Hand and Wrist Movements via Surface Electromyogram Using Deep Neural Network

    Surface electromyography (sEMG) is a non-invasive and straightforward way to allow the user to actively control a prosthesis. However, results reported by previous studies on using sEMG for hand and wrist movement classification vary by a large margin, due to several factors including, but not limited to, the number of classes and the acquisition protocol. The objective of this paper is to investigate a deep neural network approach to the classification of 41 hand and wrist movements based on the sEMG signal. The proposed models were trained and evaluated using the publicly available database from the Ninapro project, one of the largest public sEMG databases for advanced hand myoelectric prosthetics. Two datasets, DB5 (a low-cost setup with 16 channels and a 200 Hz sampling rate) and DB7 (12 channels and a 2 kHz sampling rate), were used for this study. Our approach achieved an overall accuracy of 93.87 ± 1.49% and 91.69 ± 4.68%, with a balanced accuracy of 84.00 ± 3.40% and 84.66 ± 4.78%, for DB5 and DB7, respectively. We also observed a performance gain when considering only a subset of the movements, namely the six main hand movements based on six prehensile patterns from the Southampton Hand Assessment Procedure (SHAP), a clinically validated hand functional assessment protocol. Classification on only the SHAP movements in DB5 attained an overall accuracy of 98.82 ± 0.58% with a balanced accuracy of 94.48 ± 2.55%. With the same set of movements, our model also achieved an overall accuracy of 99.00% with a balanced accuracy of 91.27% on data from one of the amputee participants in DB7. These results suggest that, with more data on amputee subjects, our proposal could be a promising approach for controlling versatile prosthetic hands with a wide range of predefined hand and wrist movements.
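    As a hedged sketch of such a deep-network classifier, not the paper's published architecture, the following small 1D CNN maps windowed multi-channel sEMG to 41 class logits; the window length (200 samples), channel count (16, matching DB5), and layer sizes are illustrative assumptions.

```python
# Hedged sketch of a 1D CNN for windowed sEMG classification, in the
# spirit of the study above; layer sizes and window shape are assumptions.
import torch
import torch.nn as nn

class EmgCnn(nn.Module):
    def __init__(self, n_channels=16, n_classes=41):
        super().__init__()
        self.features = nn.Sequential(
            nn.Conv1d(n_channels, 32, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.MaxPool1d(2),
            nn.Conv1d(32, 64, kernel_size=5, padding=2),
            nn.ReLU(),
            nn.AdaptiveAvgPool1d(1),  # pool over time -> fixed-size vector
        )
        self.classifier = nn.Linear(64, n_classes)

    def forward(self, x):          # x: (batch, channels, time)
        return self.classifier(self.features(x).squeeze(-1))

model = EmgCnn()
window = torch.randn(8, 16, 200)   # batch of 8 placeholder sEMG windows
logits = model(window)
print(logits.shape)                # torch.Size([8, 41])
```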