6 research outputs found

    FabricTouch: A Multimodal Fabric Assessment Touch Gesture Dataset to Slow Down Fast Fashion

    Get PDF
    Touch exploration of fabric is used to evaluate its properties, and it could further be leveraged to understand a consumer's sensory experience and preference so as to support them in real time in making careful clothing purchase decisions. In this paper, we open up opportunities to explore the use of technology to provide such support with our FabricTouch dataset, i.e., a multimodal dataset of fabric assessment touch gestures. The dataset consists of bilateral forearm movement and muscle activity data captured while 15 people explored 114 different garments in total to evaluate them according to 5 properties (warmth, thickness, smoothness, softness, and flexibility). The dataset further includes subjective ratings of the garments with respect to each property and ratings of the pleasure experienced in exploring the garment through touch. We further report baseline results on automatic recognition. Our results suggest that it is possible to recognise the type of fabric property that a consumer is exploring based on their touch behaviour: we obtained a mean F1 score of 0.61 for unseen garments across the 5 types of fabric property. The results also highlight the possibility of additionally recognising the consumer's subjective rating of the fabric when the property being rated is known, with a mean F1 score of 0.97 for unseen subjects across 3 rating levels.
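The mean F1 scores reported above are macro-averages over the classes. A minimal sketch of how such a property classifier might be evaluated, using synthetic stand-ins for the windowed movement/EMG features (the model and features here are illustrative, not the authors' pipeline):

```python
# Hypothetical sketch: evaluating a 5-class fabric-property classifier
# with a macro-averaged (mean) F1 score. Data, features, and model are
# illustrative stand-ins, not the FabricTouch baseline itself.
import numpy as np
from sklearn.ensemble import RandomForestClassifier
from sklearn.metrics import f1_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

# Synthetic stand-in for windowed forearm-movement/EMG features:
# 500 touch-gesture windows, 24 features, 5 fabric-property labels
# (warmth, thickness, smoothness, softness, flexibility).
X = rng.normal(size=(500, 24))
y = rng.integers(0, 5, size=500)
X[np.arange(500), y] += 2.0  # make the classes weakly separable

X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=0)
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X_tr, y_tr)

# Macro averaging weights all 5 property classes equally.
mean_f1 = f1_score(y_te, clf.predict(X_te), average="macro")
print(round(mean_f1, 2))
```

Macro averaging is the natural choice here because the 5 property classes matter equally regardless of how often each was explored.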

    Spatial Information Enhances Myoelectric Control Performance with Only Two Channels

    Get PDF
    Automatic gesture recognition (AGR) is investigated as an effortless human-machine interaction method with potential applications in many industrial sectors. When surface electromyography (sEMG) is used for AGR, i.e. myoelectric control, a minimum of four EMG channels is typically required. In practical applications, however, fewer electrodes are preferred, particularly for mobile and wearable devices. No published research has focused on improving the performance of a myoelectric system with only two sEMG channels. In this study, we present a systematic investigation to fill this gap. Specifically, we demonstrate that through spatial filtering and electrode position optimization, the myoelectric control performance was significantly improved (p < 0.05) and comparable to that achieved with four electrodes. Further, we found a significant correlation between offline and online performance metrics in the two-channel system, indicating that offline performance is transferable to online performance, a finding highly relevant to algorithm development for sEMG-based AGR applications. Funding: Natural Sciences and Engineering Research Council of Canada (Discovery Grant 072169); National Natural Science Foundation of China (Grants 51620105002 and 91748119); State Key Lab of Railway Control and Safety Open Topics Fund of China (Grant RCS2017K008).
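One common form of spatial filtering for a small electrode set is the bipolar (single-differential) filter, which subtracts adjacent monopolar channels to suppress common-mode interference. A minimal sketch of the idea; the paper's actual filter design and electrode layout may differ:

```python
# Illustrative sketch of bipolar (single-differential) spatial filtering
# for a two-channel sEMG setup. This is one standard way spatial
# information is exploited; it is not necessarily the paper's filter.
import numpy as np

def bipolar_filter(emg: np.ndarray) -> np.ndarray:
    """Subtract adjacent monopolar channels to suppress common-mode
    noise. emg has shape (n_channels, n_samples); the output has shape
    (n_channels - 1, n_samples)."""
    return np.diff(emg, axis=0)

# Two monopolar channels sharing a common-mode interference term.
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 1000)
common_noise = 0.5 * np.sin(2 * np.pi * 50 * t)  # 50 Hz line interference
ch1 = rng.normal(scale=0.2, size=t.size) + common_noise
ch2 = rng.normal(scale=0.2, size=t.size) + common_noise
emg = np.stack([ch1, ch2])

filtered = bipolar_filter(emg)
# The shared 50 Hz component cancels in the differential signal,
# so the filtered signal has lower overall amplitude than either channel.
print(filtered.shape)  # (1, 1000)
```

The cancellation of the shared interference term is why differential recording can recover usable signal quality even when only two electrode sites are available.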

    From Unimodal to Multimodal: improving the sEMG-Based Pattern Recognition via deep generative models

    Full text link
    Multimodal hand gesture recognition (HGR) systems can achieve higher recognition accuracy than unimodal ones. However, acquiring multimodal gesture data typically requires users to wear additional sensors, thereby increasing hardware costs. This paper proposes a novel generative approach that improves surface electromyography (sEMG)-based HGR accuracy via virtual inertial measurement unit (IMU) signals. Specifically, we first trained a deep generative model, exploiting the intrinsic correlation between forearm sEMG and forearm IMU signals, to generate virtual forearm IMU signals from the input forearm sEMG signals. The sEMG signals and virtual IMU signals were then fed into a multimodal convolutional neural network (CNN) for gesture recognition. To evaluate the proposed approach, we conducted experiments on 6 databases: 5 publicly available databases and our own collected database of 28 subjects performing 38 gestures, containing both sEMG and IMU data. The results show that our approach outperforms the sEMG-based unimodal HGR method (with accuracy increases of 2.15%-13.10%), demonstrating that virtual IMU signals generated by deep generative models can significantly enhance the accuracy of sEMG-based HGR. The proposed approach represents a successful transition from unimodal to multimodal HGR without additional sensor hardware.
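The pipeline described above can be sketched with simple linear models standing in for the deep generative model and the multimodal CNN. All dimensions, data, and model choices below are illustrative assumptions, not the paper's architecture:

```python
# Hedged sketch of the virtual-IMU idea: learn an sEMG -> IMU mapping
# from paired training data, then at test time feed real sEMG plus the
# *generated* virtual IMU into one multimodal classifier. Linear models
# stand in for the deep generative model and the CNN.
import numpy as np
from sklearn.linear_model import LinearRegression, LogisticRegression

rng = np.random.default_rng(2)

# Paired training data where both modalities were recorded.
n, d_emg, d_imu = 400, 16, 6
emg = rng.normal(size=(n, d_emg))
W = rng.normal(size=(d_emg, d_imu))
imu = emg @ W + 0.1 * rng.normal(size=(n, d_imu))  # correlated modalities
labels = (emg[:, 0] + imu[:, 0] > 0).astype(int)   # toy gesture labels

# Step 1: the "generator" learns sEMG -> IMU from the paired data.
generator = LinearRegression().fit(emg, imu)

# Step 2: at test time only sEMG is available; synthesize virtual IMU.
emg_test = rng.normal(size=(100, d_emg))
virtual_imu = generator.predict(emg_test)

# Step 3: the multimodal classifier consumes [sEMG | virtual IMU].
clf = LogisticRegression(max_iter=1000).fit(np.hstack([emg, imu]), labels)
pred = clf.predict(np.hstack([emg_test, virtual_imu]))
print(pred.shape)  # (100,)
```

The key property the sketch captures is that the extra modality costs nothing at deployment time: the IMU channel exists only as a function of the sEMG input.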

    Wearable pressure sensing for intelligent gesture recognition

    Get PDF
    The development of wearable sensors has become a major area of interest due to their wide range of promising applications, including health monitoring, human motion detection, human-machine interfaces, electronic skin and soft robotics. In particular, pressure sensors have attracted considerable attention in wearable applications. However, traditional pressure sensing systems use rigid sensors to detect human motion; lightweight and flexible pressure sensors are required to improve the comfort of such devices. Furthermore, in comparison with conventional sensing techniques without smart algorithms, machine learning-assisted wearable systems can intelligently analyse data for classification or prediction, making the system 'smarter' for more demanding tasks. Combining flexible pressure sensors with machine learning is therefore a promising approach to human motion recognition. This thesis focuses on fabricating flexible pressure sensors and developing wearable applications to recognize human gestures. Firstly, a comprehensive literature review was conducted, covering the current state of the art in pressure sensing techniques and machine learning algorithms. Secondly, a piezoelectric smart wristband was developed to distinguish finger typing movements. Three machine learning algorithms, K-Nearest Neighbour (KNN), Decision Tree (DT) and Support Vector Machine (SVM), were used to classify the movements of different fingers. The SVM algorithm outperformed the other classifiers, with overall accuracies of 98.67% and 100% when processing raw data and extracted features, respectively. Thirdly, a piezoresistive wristband was fabricated based on a flake-sphere composite configuration in which reduced graphene oxide fragments are doped with polystyrene spheres to achieve both high sensitivity and flexibility. The flexible wristband measured the pressure distribution around the wrist for accurate and comfortable hand gesture classification.
The intelligent wristband was able to classify 12 hand gestures with 96.33% accuracy for five participants using a machine learning algorithm. Moreover, to demonstrate the practical applications of the proposed method, a real-time system was developed to control a robotic hand according to the classification results. Finally, this thesis also demonstrates an intelligent piezoresistive sensor that recognizes different throat movements during pronunciation. The piezoresistive sensor was fabricated using two polydimethylsiloxane (PDMS) layers coated with silver nanowires and reduced graphene oxide films, with microstructures formed by polystyrene spheres between the layers. The highly sensitive sensor was able to distinguish throat vibrations from five different spoken words with an accuracy of 96% using an artificial neural network.
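The thesis benchmarks KNN, DT, and SVM against each other on the wristband data. A minimal sketch of that comparison using synthetic stand-ins for the pressure-channel features (the data, channel count, and resulting accuracies here are illustrative only):

```python
# Illustrative comparison of the three classifiers the thesis evaluates
# (KNN, decision tree, SVM) via cross-validation, on synthetic stand-ins
# for wristband pressure features. Real data and accuracies differ.
import numpy as np
from sklearn.neighbors import KNeighborsClassifier
from sklearn.tree import DecisionTreeClassifier
from sklearn.svm import SVC
from sklearn.model_selection import cross_val_score

rng = np.random.default_rng(3)

# Hypothetical setup: 12 hand-gesture classes, 8 pressure channels
# distributed around the wrist, 600 recorded samples.
X = rng.normal(size=(600, 8))
y = rng.integers(0, 12, size=600)
X[np.arange(600), y % 8] += 3.0  # inject weak class structure

results = {}
for name, clf in [("KNN", KNeighborsClassifier()),
                  ("DT", DecisionTreeClassifier(random_state=0)),
                  ("SVM", SVC())]:
    results[name] = cross_val_score(clf, X, y, cv=5).mean()
    print(f"{name}: {results[name]:.2f}")
```

Cross-validated mean accuracy is a reasonable proxy for the per-participant accuracies reported, since it averages out the split-dependent variance of a single train/test partition.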

    Développement d’algorithmes et d’outils logiciels pour l’assistance technique et le suivi en réadaptation

    Get PDF
    This Master's thesis presents two development projects on algorithms and software tools that provide practical solutions to problems commonly encountered in rehabilitation. The first project is the development of a sequence-matching algorithm that integrates with control interfaces commonly used in practice. The implementation of this algorithm provides a flexible solution that can be adapted to any assistive technology user. Controlling such devices is a significant challenge because they usually have high dimensionality (i.e., many degrees of freedom, modes, or commands) yet are operated through interfaces based on low-dimensionality sensors, so the number of distinct physical commands available to the user is low. The proposed algorithm therefore recognizes short time-domain signals that can be arranged into sequences; the range of possible combinations increases the effective dimensionality of the interface. Two applications of the algorithm were developed and tested: a sip-and-puff control interface for a robotic assistive arm, and a hand-gesture interface for controlling a computer's mouse and keyboard. The second project addresses data collection and analysis in rehabilitation. Whether in a clinical setting, a laboratory, or at home, there are many situations that call for gathering data. The proposed solution is an ecosystem of connected applications comprising a server together with web, mobile, and embedded applications. These custom-built software tools offer a unique, inexpensive, lightweight, and fast workflow for collecting, visualizing, and retrieving data.
This document describes a first version, detailing the architecture, the technologies used, and the reasons behind those choices, while guiding future iterations.
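The sequence-matching idea, expanding a sensor with very few physical commands into a richer command set by recognizing short token sequences, can be sketched as follows. The token names and the command table are hypothetical, not the thesis's actual mapping:

```python
# Minimal sketch of the sequence-matching idea: a two-command sensor
# (e.g. sip/puff) is expanded into many commands by matching short
# token sequences. Tokens and the command table are hypothetical.
COMMANDS = {
    ("sip",): "move_left",
    ("puff",): "move_right",
    ("sip", "puff"): "open_gripper",
    ("puff", "sip"): "close_gripper",
    ("sip", "sip"): "mode_next",
}

def match_sequence(tokens, table=COMMANDS):
    """Return the command for the longest matching suffix of the
    recognized token stream, or None if no sequence matches."""
    for length in range(len(tokens), 0, -1):
        cmd = table.get(tuple(tokens[-length:]))
        if cmd is not None:
            return cmd
    return None

print(match_sequence(["sip", "puff"]))  # open_gripper
print(match_sequence(["puff"]))         # move_right
```

With two physical inputs and sequences of length up to n, the interface can in principle distinguish up to 2 + 4 + ... + 2^n commands, which is how sequencing raises the dimensionality of a low-dimensionality sensor.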