
    Wearable Wristworn Gesture Recognition Using Echo State Network

    This paper presents a novel gesture-sensing system for prosthetic limb control based on a pressure sensor array embedded in a wristband. Tendon movement, which produces pressure changes around the wrist, is detected by the pressure sensors. A microcontroller gathers the data from the sensors and transmits them to a computer. A user interface developed in LabVIEW presents the value of each sensor and displays the waveform in real time. Moreover, the data pattern of each gesture varies across users due to non-uniform, subtle tendon movement. To overcome this challenge, an Echo State Network (ESN), a supervised learning network, is applied to the data to calibrate for different users. The gesture recognition results show that the ESN performs well in multi-dimensional classification. On experimental data collected from six participants, the proposed system classifies five gestures with an accuracy of 87.3%.
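The pipeline described above, a fixed random reservoir driven by the wristband's pressure channels with only a linear readout trained, can be sketched as follows. This is a minimal illustration, not the paper's implementation: sensor count, reservoir size and all hyperparameters are assumptions.

```python
import numpy as np

# Minimal Echo State Network (ESN) sketch: a fixed random reservoir driven by
# wristband pressure samples, with only the linear readout trained.
# Sensor count, reservoir size and hyperparameters are illustrative assumptions.

rng = np.random.default_rng(0)

N_SENSORS = 8      # pressure channels in the wristband (assumed)
N_RESERVOIR = 100  # reservoir neurons (assumed)
N_GESTURES = 5     # gestures to classify, as in the paper

# Fixed random weights; spectral radius scaled below 1 for the echo state property.
W_in = rng.uniform(-0.5, 0.5, (N_RESERVOIR, N_SENSORS))
W = rng.uniform(-0.5, 0.5, (N_RESERVOIR, N_RESERVOIR))
W *= 0.9 / max(abs(np.linalg.eigvals(W)))

def reservoir_state(sequence):
    """Drive the reservoir with a (T, N_SENSORS) pressure sequence and
    return the final reservoir state as a feature vector."""
    x = np.zeros(N_RESERVOIR)
    for u in sequence:
        x = np.tanh(W_in @ u + W @ x)
    return x

def train_readout(sequences, labels, ridge=1e-6):
    """Train only the linear readout, by ridge regression on one-hot targets."""
    X = np.array([reservoir_state(s) for s in sequences])
    Y = np.eye(N_GESTURES)[labels]
    return np.linalg.solve(X.T @ X + ridge * np.eye(N_RESERVOIR), X.T @ Y)

def classify(sequence, W_out):
    """Predict the gesture index for one pressure sequence."""
    return int(np.argmax(reservoir_state(sequence) @ W_out))
```

Because the reservoir weights stay fixed, per-user calibration reduces to retraining the cheap linear readout, which is one reason an ESN suits this setting.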

    A Human-Machine Interface Using Electrical Impedance Tomography for Hand Prosthesis Control

    This paper presents a human-machine interface that establishes a link between the user and a hand prosthesis. It uses electrical impedance tomography, a conventional bio-impedance imaging technique, with an array of electrodes contained in a wristband on the user's forearm. Using a high-performance analog front-end application specific integrated circuit (ASIC), the inner bio-impedance redistribution of the user's forearm is accurately assessed. These bio-signatures are strongly related to hand motions, and using artificial neural networks they can be learned so as to recognize the user's intention in real time for prosthesis operation. In this work, eleven hand motions are designed for prosthesis operation with a gesture-switching-enabled sub-grouping method. Experiments with five subjects show that the system can achieve 98.5% accuracy with a grouping of three gestures and 94.4% accuracy with two sets of five gestures. The ASIC comprises a current driver with common-mode reduction capability and a current feedback instrumentation amplifier. The ASIC operates from ±1.65 V power supplies, occupies an area of 0.07 mm², and has a minimum bio-impedance sensitivity of 12.7 mΩ peak-to-peak.

    Advances in Integrated Circuits and Systems for Wearable Biomedical Electrical Impedance Tomography

    Electrical impedance tomography (EIT) is an impedance mapping technique that can be used to image the inner impedance distribution of a subject under test. It is non-invasive, inexpensive and radiation-free, while at the same time facilitating long-term, real-time dynamic monitoring. Thus, EIT lends itself particularly well to the development of bio-signal monitoring/imaging systems in the form of wearable technology. This work focuses on EIT system hardware advancement using complementary metal oxide semiconductor (CMOS) technology. It presents the design and testing of application specific integrated circuits (ASICs) and their successful use in two biomedical applications, namely neonatal lung function monitoring and a human-machine interface (HMI) for prosthetic hand control. Each year fifteen million babies are born prematurely, and up to 30% suffer from lung disease. Although respiratory support, especially mechanical ventilation, can improve their survival, it can also injure their vulnerable lungs, resulting in severe and chronic pulmonary morbidity lasting into adulthood; an integrated wearable EIT system for neonatal lung function monitoring is therefore urgently needed. In this work, two wearable belt systems are presented. The first belt features a miniaturized active electrode module built around an analog front-end ASIC fabricated in a 0.35-µm high-voltage process technology with ±9 V power supplies, occupying a total die area of 3.9 mm². The ASIC offers a high-power active current driver capable of up to 6 mA peak-to-peak output, and a wideband active buffer for EIT recording as well as contact impedance monitoring. The belt has a bandwidth of 500 kHz and an image frame rate of 107 frames/s. To further improve the system, the active electrode module is integrated into a single ASIC. 
It contains a fully differential current driver, a current feedback instrumentation amplifier (IA), a digital controller and multiplexers, with a total die area of 9.6 mm². Compared to the conventional active electrode architecture employed in the first EIT belt, the second belt features a new architecture. It allows programmable, flexible electrode current-drive and voltage-sense patterns under simple digital control. It has intimate connections to the electrodes for the current drive and to the IA for direct differential voltage measurement, providing a superior common-mode rejection ratio (CMRR) of up to 74 dB; with active gain, the noise level can be reduced by a factor of √3 using the adjacent scan. The second belt has a wider operating bandwidth of 1 MHz and multi-frequency operation. The image frame rate is 122 frames/s, the fastest wearable EIT reported to date. It measures impedance with 98% accuracy and has less than 0.5 Ω and 1° variation across all channels. In addition, the ASIC facilitates several other functionalities to provide supplementary clinical information at the bedside. With the advancement of technology and the ever-increasing fusion of computers and machines into daily life, a seamless HMI system that can recognize hand gestures and motions and allow the control of robotic machines or prostheses to perform dexterous tasks is a target of research. Originally developed as an imaging technique, EIT can be combined with machine learning to track bone and muscle movement, towards understanding the user's intentions and ultimately controlling prosthetic hand applications. For this application, an analog front-end ASIC was designed in a 0.35-µm standard process technology with ±1.65 V power supplies. It comprises a current driver capable of differential drive and a low-noise (9 µVrms) IA with a CMRR of 80 dB. The function modules occupy an area of 0.07 mm². 
Using the ASIC, a complete HMI system based on the EIT principle for hand prosthesis control is presented, and the inner bio-impedance redistribution of the user's forearm is assessed. Using artificial neural networks, the bio-impedance redistribution can be learned so as to recognize the user's intention in real time for prosthesis operation. In this work, eleven hand motions are designed for prosthesis operation. Experiments with five subjects show that the system can achieve an overall recognition accuracy of 95.8%.
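As a rough illustration of the recognition step described above, the sketch below trains a small feedforward network (one hidden layer, full-batch gradient descent) to map an EIT measurement frame to a gesture class. The architecture, layer sizes and training details are assumptions for illustration; the thesis does not specify this exact model.

```python
import numpy as np

# Illustrative one-hidden-layer network mapping an EIT bio-impedance frame
# (one value per electrode-pair measurement) to a gesture class.
# All sizes and hyperparameters are assumptions, not from the thesis.

rng = np.random.default_rng(1)

def softmax(z):
    z = z - z.max(axis=1, keepdims=True)
    e = np.exp(z)
    return e / e.sum(axis=1, keepdims=True)

class SmallMLP:
    def __init__(self, n_in, n_hidden, n_classes, lr=0.5):
        self.W1 = 0.1 * rng.standard_normal((n_in, n_hidden))
        self.b1 = np.zeros(n_hidden)
        self.W2 = 0.1 * rng.standard_normal((n_hidden, n_classes))
        self.b2 = np.zeros(n_classes)
        self.lr = lr

    def forward(self, X):
        self.h = np.tanh(X @ self.W1 + self.b1)   # hidden activations
        return softmax(self.h @ self.W2 + self.b2)

    def step(self, X, Y):
        """One full-batch gradient step on the cross-entropy loss (Y one-hot)."""
        P = self.forward(X)
        dz2 = (P - Y) / len(X)
        dz1 = (dz2 @ self.W2.T) * (1.0 - self.h ** 2)  # tanh derivative
        self.W2 -= self.lr * (self.h.T @ dz2)
        self.b2 -= self.lr * dz2.sum(axis=0)
        self.W1 -= self.lr * (X.T @ dz1)
        self.b1 -= self.lr * dz1.sum(axis=0)

    def predict(self, X):
        return self.forward(X).argmax(axis=1)
```

In a real deployment the input would be the measured impedance frame from the wristband ASIC and the output would drive the prosthesis controller.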

    Wearable pressure sensing for intelligent gesture recognition

    The development of wearable sensors has become a major area of interest due to their wide range of promising applications, including health monitoring, human motion detection, human-machine interfaces, electronic skin and soft robotics. Pressure sensors in particular have attracted considerable attention in wearable applications. However, traditional pressure sensing systems use rigid sensors to detect human motion; lightweight and flexible pressure sensors are required to improve the comfort of such devices. Furthermore, in comparison with conventional sensing techniques without smart algorithms, machine learning-assisted wearable systems can intelligently analyse data for classification or prediction, making the system 'smarter' for more demanding tasks. Combining flexible pressure sensors with machine learning is therefore a promising approach to human motion recognition. This thesis focuses on fabricating flexible pressure sensors and developing wearable applications that recognize human gestures. Firstly, a comprehensive literature review was conducted, covering the current state of the art in pressure sensing techniques and machine learning algorithms. Secondly, a piezoelectric smart wristband was developed to distinguish finger typing movements. Three machine learning algorithms, K-Nearest Neighbour (KNN), Decision Tree (DT) and Support Vector Machine (SVM), were used to classify the movements of different fingers. The SVM outperformed the other classifiers, with an overall accuracy of 98.67% on raw data and 100% on extracted features. Thirdly, a piezoresistive wristband was fabricated based on a flake-sphere composite configuration, in which reduced graphene oxide fragments are doped with polystyrene spheres to achieve both high sensitivity and flexibility. The flexible wristband measured the pressure distribution around the wrist for accurate and comfortable hand gesture classification. 
The intelligent wristband was able to classify 12 hand gestures with 96.33% accuracy across five participants using a machine learning algorithm. Moreover, to demonstrate the practical applications of the proposed method, a real-time system was developed to control a robotic hand according to the classification results. Finally, this thesis also demonstrates an intelligent piezoresistive sensor that recognizes different throat movements during pronunciation. The piezoresistive sensor was fabricated from two polydimethylsiloxane (PDMS) layers coated with silver nanowires and reduced graphene oxide films, with microstructures formed by polystyrene spheres between the layers. The highly sensitive sensor was able to distinguish the throat vibrations of five different spoken words with an accuracy of 96% using an artificial neural network.
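The classification stage compared in this thesis (KNN vs. DT vs. SVM over features extracted from pressure windows) can be sketched minimally with a k-nearest-neighbour classifier. Channel counts and the chosen features (per-channel mean and peak) are assumptions for illustration, not the thesis's actual feature set.

```python
import numpy as np

# Minimal KNN sketch for wristband pressure data: reduce each time window of
# pressure readings to simple per-channel statistics, then vote among the
# k nearest training examples. Features and k are illustrative assumptions.

def extract_features(window):
    """Reduce a (T, channels) pressure window to per-channel mean and peak."""
    return np.concatenate([window.mean(axis=0), window.max(axis=0)])

def knn_predict(X_train, y_train, x, k=3):
    """Classify feature vector x by majority vote of its k nearest neighbours."""
    dists = np.linalg.norm(X_train - x, axis=1)
    nearest = y_train[np.argsort(dists)[:k]]
    return int(np.bincount(nearest).argmax())
```

An SVM (the best performer in the thesis) would replace `knn_predict` with a trained maximum-margin classifier over the same feature vectors.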

    From wearable towards epidermal computing: soft wearable devices for rich interaction on the skin

    Human skin provides a large, always-available, and easy-to-access real estate for interaction. Recent advances in new materials, electronics, and human-computer interaction have led to the emergence of electronic devices that reside directly on the user's skin. These conformal devices, referred to as Epidermal Devices, have mechanical properties compatible with human skin: they are very thin, often thinner than human hair; they deform elastically when the body is moving, and stretch with the user's skin. Firstly, this thesis provides a conceptual understanding of Epidermal Devices in the HCI literature. We compare and contrast them with other technical approaches that enable novel on-skin interactions. Then, through a multi-disciplinary analysis of Epidermal Devices, we identify the design goals and challenges that need to be addressed to advance this emerging research area in HCI. Following this, our fundamental empirical research investigated how epidermal devices of different rigidity levels affect passive and active tactile perception. Generally, a correlation was found between device rigidity and tactile sensitivity thresholds as well as roughness discrimination ability. Based on these findings, we derive design recommendations for realizing epidermal devices. Secondly, this thesis contributes novel Epidermal Devices that enable rich on-body interaction. SkinMarks contributes to the fabrication and design of novel Epidermal Devices that are highly skin-conformal and enable touch, squeeze, and bend sensing with co-located visual output. These devices can be deployed on highly challenging body locations, enabling novel interaction techniques and expanding the design space of on-body interaction. Multi-Touch Skin enables high-resolution multi-touch input on the body. We present the first non-rectangular and high-resolution multi-touch sensor overlays for use on skin and introduce a design tool that generates such sensors in custom shapes and sizes. 
Empirical results from two technical evaluations confirm that the sensor achieves a high signal-to-noise ratio on the body under various grounding conditions and has high spatial accuracy even when subjected to strong deformations. Thirdly, since Epidermal Devices are in contact with the skin, they offer opportunities for sensing rich physiological signals from the body. To leverage this unique property, this thesis presents rapid fabrication and computational design techniques for realizing Multi-Modal Epidermal Devices that can measure multiple physiological signals from the human body. Devices fabricated through these techniques can measure ECG (electrocardiogram), EMG (electromyogram), and EDA (electro-dermal activity). We also contribute a computational design and optimization method, based on underlying human anatomical models, that creates optimized device designs providing an optimal trade-off between physiological signal acquisition capability and device size. The graphical tool allows designers to easily specify design preferences and to visually analyze the generated designs in real time, enabling designer-in-the-loop optimization. Experimental results show high quantitative agreement between the predictions of the optimizer and experimentally collected physiological data. Finally, taking a multi-disciplinary perspective, we outline a roadmap for future research in this area by highlighting the next important steps, opportunities, and challenges. Taken together, this thesis contributes towards a holistic understanding of Epidermal Devices: it provides an empirical and conceptual understanding as well as technical insights through contributions in DIY (do-it-yourself), rapid fabrication, and computational design techniques.

    Classification of Myopotentials of Hand's Motion to Control Applications

    This master's thesis describes the realization of a system for classifying the myopotentials of hand gestures. The first goal was to create hardware able to deliver a clean, properly amplified myopotential signal to a PC for processing. The second goal was to program an algorithm that classifies the myopotentials into specific hand gestures. A hardware prototype with four measuring channels was created by combining 2nd-order filters with the right amount of amplification. Because active electrodes are used, the user is galvanically isolated from the power source. An Arduino Nano microcontroller was selected to digitize and transmit the data and was programmed according to a defined communication protocol. The computer application is written in C#. Signal processing and graphical display of the measured signal run in real time. An adaptive segmentation algorithm detects the boundaries of a performed gesture. Using the designed fuzzy sets and a weighting system, one of five hand gestures (or none) is identified as the gesture that was performed.
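The fuzzy-set decision step might look like the sketch below, assuming triangular membership functions over per-channel EMG features and a simple weighted score with a rejection threshold. The membership shapes, weights and threshold are illustrative assumptions; the thesis's actual fuzzy sets may differ.

```python
# Sketch of a fuzzy-set gesture decision: each gesture is described by one
# triangular membership function per EMG channel; the gesture with the highest
# weighted membership wins, or none if every score is below the threshold.
# All parameters here are illustrative assumptions.

def triangular(x, a, b, c):
    """Triangular membership function: 0 outside [a, c], peaking at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x <= b else (c - x) / (c - b)

def decide_gesture(features, gesture_params, weights, threshold=0.3):
    """features: one value per EMG channel.
    gesture_params: {name: [(a, b, c) per channel]}.
    Returns the best-scoring gesture name, or None if no score passes."""
    scores = {}
    for name, params in gesture_params.items():
        memberships = [triangular(f, *p) for f, p in zip(features, params)]
        scores[name] = sum(w * m for w, m in zip(weights, memberships))
    best = max(scores, key=scores.get)
    return best if scores[best] >= threshold else None
```

The `None` branch corresponds to the thesis's "no gesture" outcome when the segmented signal matches none of the five defined gestures well enough.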

    The "Federica" hand: a simple, very efficient prosthesis

    Hand prostheses partially restore hand appearance and functionality. Not everyone can afford expensive prostheses, and many low-cost designs have been proposed. In particular, 3D printers have provided great opportunities by simplifying the manufacturing process and reducing costs. Generally, active prostheses use multiple motors for finger movement and are controlled by electromyographic (EMG) signals. The "Federica" hand is a single-motor prosthesis, equipped with an adaptive grasp and controlled by a force-myographic signal. The "Federica" hand is 3D printed and has an anthropomorphic morphology with five fingers, each consisting of three phalanges. The movement generated by a single servomotor is transmitted to the fingers by inextensible tendons that form a closed chain; practically, no springs are used for passive hand opening. A differential mechanical system simultaneously distributes the motor force in predefined portions to each finger, regardless of the fingers' actual positions. Proportional control of hand closure is achieved by measuring the contraction of residual limb muscles by means of a force sensor, replacing the EMG. The electrical current of the servomotor is monitored to provide the user with sensory feedback on the grip force through a small vibration motor. A simple Arduino board serves as the processing unit. The differential mechanism guarantees an efficient transfer of mechanical energy from the motor to the fingers and a secure grasp of any object, regardless of its shape and deformability. The force sensor, being extremely thin, can easily be embedded into the prosthesis socket and positioned on both muscles and tendons; it offers some advantages over EMG, as it requires no electrical contact or signal processing to extract information about the intensity of muscle contraction. 
The grip speed is high enough to allow the user to grab objects on the fly: from the muscle trigger to complete hand closure, "Federica" takes about half a second. The cost of the device is about 100 US$. Preliminary tests carried out on a patient with transcarpal amputation showed high performance in controlling the prosthesis after a very rapid training session. The "Federica" hand turned out to be a lightweight, low-cost and extremely efficient prosthesis. The project is intended to be open source: all the information needed to produce the prosthesis (e.g. CAD files, circuit schematics, software) can be downloaded from a public repository, allowing everyone to use the "Federica" hand and to customize or improve it.
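The control scheme described above (force sensor proportionally driving hand closure, with servomotor current mapped to vibration feedback) can be sketched in hardware-free form. The real firmware runs on an Arduino; all ranges and limits below are assumptions for illustration, not values from the project.

```python
# Sketch of the "Federica" control loop: muscle-contraction force is mapped
# proportionally to a servo closure angle, and servo current (a proxy for
# grip force) is mapped to vibration-motor intensity for sensory feedback.
# Ranges and limits are illustrative assumptions.

SERVO_MIN, SERVO_MAX = 0.0, 180.0   # servo angle range in degrees (assumed)
FORCE_MIN, FORCE_MAX = 50, 900      # raw force-sensor ADC range (assumed)

def force_to_angle(raw):
    """Proportional control: clamp the raw force reading, then map it
    linearly onto the servo's closure angle."""
    raw = min(max(raw, FORCE_MIN), FORCE_MAX)
    frac = (raw - FORCE_MIN) / (FORCE_MAX - FORCE_MIN)
    return SERVO_MIN + frac * (SERVO_MAX - SERVO_MIN)

def feedback_intensity(motor_current_ma, max_current_ma=1000):
    """Map measured servo current to a vibration-motor duty cycle in [0, 1]."""
    return min(motor_current_ma / max_current_ma, 1.0)
```

In the actual device the differential tendon mechanism, not software, apportions the single motor's force among the fingers; the code above only covers the sensing-to-actuation mapping.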

    Requirement analysis and sensor specifications – First version

    In this first version of the deliverable, we make the following contributions: to design the WEKIT capturing platform and the associated experience capturing API, we use a system engineering methodology relevant to different domains (aviation, space, and medicine) and different professions (technicians, astronauts, and medical staff). Furthermore, within the methodology we explore the system engineering process and how it can be used in the project to support the different work packages and, more importantly, the deliverables that will follow the current one. Next, we provide a mapping of high-level functions or tasks (associated with experience transfer from expert to trainee) to low-level functions such as gaze, voice, video, body posture, hand gestures, bio-signals, fatigue levels, and the location of the user in the environment. In addition, we link the low-level functions to their associated sensors. Moreover, we provide a brief overview of state-of-the-art sensors in terms of their technical specifications, possible limitations, standards, and platforms. We outline a set of recommendations for the sensors that are most relevant to the WEKIT project, taking into consideration the environmental, technical and human factors described in other deliverables. We recommend the Microsoft HoloLens (for augmented reality glasses), the MyndBand with NeuroSky chipset (for EEG), the Microsoft Kinect and Lumo Lift (for body posture tracking), and the Leap Motion, Intel RealSense and Myo armband (for hand gesture tracking). For eye tracking, an existing eye-tracking system can be customised to complement the augmented reality glasses, and the built-in microphone of the augmented reality glasses can capture the expert's voice. We propose a modular approach for the design of the WEKIT experience capturing system, and recommend that the capturing system should have sufficient storage or transmission capabilities. 
Finally, we highlight common issues associated with the use of the different sensors. We consider that this set of recommendations can be useful for the design and integration of the WEKIT capturing platform and the WEKIT experience capturing API, expediting the selection of the combination of sensors to be used in the first prototype.