4 research outputs found

    Convolutional Neural Network Array for Sign Language Recognition using Wearable IMUs

    Advancements in gesture recognition algorithms have led to significant growth in sign language translation. By making use of efficient intelligent models, signs can be recognized with precision. The proposed work presents a novel one-dimensional Convolutional Neural Network (CNN) array architecture for recognizing signs from Indian sign language using signals recorded from a custom-designed wearable IMU device. The IMU device uses a tri-axial accelerometer and a gyroscope. The recorded signals are segregated by context: whether they correspond to signing a general sentence or an interrogative sentence. The array comprises two individual CNNs, one classifying general sentences and the other classifying interrogative sentences. The performance of each CNN in the array is compared to that of a conventional CNN classifying the unsegregated dataset. Peak classification accuracies of 94.20% for general sentences and 95.00% for interrogative sentences with the proposed CNN array, against 93.50% for the conventional CNN, support the suitability of the proposed approach. Comment: https://doi.org/10.1109/SPIN.2019.871174
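The abstract gives no code; as a minimal numpy sketch of the context-segregated array idea, the snippet below routes a signal to one of two (untrained, randomly initialised) 1-D CNN branches according to its sentence context. All names (`TinyCNN`, `CNNArray`), shapes and parameters are illustrative assumptions, not the authors' implementation.

```python
import numpy as np

def conv1d(signal, kernel):
    """Valid-mode 1-D convolution followed by a ReLU, the basic CNN building block."""
    n = len(signal) - len(kernel) + 1
    out = np.array([np.dot(signal[i:i + len(kernel)], kernel) for i in range(n)])
    return np.maximum(out, 0.0)

class TinyCNN:
    """One 1-D CNN branch: a single conv filter, global average pooling, linear head."""
    def __init__(self, n_classes, kernel_len=5, seed=0):
        rng = np.random.default_rng(seed)
        self.kernel = rng.normal(size=kernel_len)
        self.w = rng.normal(size=n_classes)   # toy linear head on the pooled feature
        self.b = np.zeros(n_classes)

    def predict(self, signal):
        pooled = conv1d(signal, self.kernel).mean()  # global average pooling
        return int(np.argmax(self.w * pooled + self.b))

class CNNArray:
    """Routes a recorded IMU signal to the branch matching its sentence context."""
    def __init__(self, n_general, n_interrogative):
        self.branches = {"general": TinyCNN(n_general, seed=1),
                         "interrogative": TinyCNN(n_interrogative, seed=2)}

    def predict(self, signal, context):
        return self.branches[context].predict(signal)
```

In the paper the branches would be trained Conv1D networks over six-axis accelerometer/gyroscope streams; here they only illustrate the routing structure.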

    Perfectionnement des algorithmes de contrôle-commande des robots manipulateur électriques en interaction physique avec leur environnement par une approche bio-inspirée

    Automated production lines integrate robots that are isolated from workers, so there is no physical interaction between humans and robots. In the near future, humanoid robots will become part of the human environment as companions that help or work with humans. Such coexistence presupposes physical and social interaction between robot and human. In humanoid robotics, further progress depends on knowledge of the cognitive mechanisms of interpersonal interaction, since robots interact with humans both physically and socially. An illustrative example of interpersonal interaction is the handshake, which plays a substantial social role. The particularity of this form of interaction is that it is based on physical and social coupling, which leads to synchronization of motion and effort. Studying the handshake is interesting for robotics because it can expand robots' behavioral repertoire for interacting with humans in a more natural way. The first chapter of this thesis presents the state of the art in the fields of social science, medicine and humanoid robotics on the handshake phenomenon. The second chapter is dedicated to the physical nature of the phenomenon between humans, studied via quantitative measurements. A new wearable system to measure a handshake was built at Donetsk National Technical University (Ukraine). It consists of a set of sensors attached to a glove that records the angular velocities and accelerations of the hand, and the forces at certain points of hand contact during interaction. The measurement campaigns showed a phenomenon of mutual synchrony, preceded by a phase of physical contact that initiates this synchrony. Given the rhythmic nature of the phenomenon, a controller based on the Rowat-Selverston rhythmic neuron model, which learns the interaction frequency online, is proposed and studied in the third chapter.
Chapter four addresses experiments in physical human-robot interaction. Experiments with the Katana robot arm show that, with the proposed bio-inspired controller model, the robot can learn to synchronize its rhythm with the rhythm imposed by a human during a handshake. A general conclusion and perspectives summarize and close this work.
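The abstract does not give the controller's equations. As a hedged sketch of the frequency-learning idea, the snippet below uses an adaptive Hopf oscillator, a standard stand-in for rhythmic-neuron controllers (not the Rowat-Selverston equations themselves), whose intrinsic frequency drifts toward that of a rhythmic input, much as the robot's rhythm entrains to the human's during a handshake. Function name and gains are assumptions.

```python
import numpy as np

def entrain(omega0, omega_teacher, T=200.0, dt=1e-3,
            gamma=8.0, mu=1.0, eps=0.9):
    """Euler-integrate an adaptive Hopf oscillator driven by the rhythmic input
    F(t) = sin(omega_teacher * t). The frequency state omega drifts toward the
    input frequency, i.e. the oscillator 'learns' the imposed rhythm."""
    x, y, omega = 1.0, 0.0, omega0
    t = 0.0
    for _ in range(int(T / dt)):
        F = np.sin(omega_teacher * t)          # the partner's rhythmic forcing
        r2 = x * x + y * y
        dx = gamma * (mu - r2) * x - omega * y + eps * F
        dy = gamma * (mu - r2) * y + omega * x
        domega = -eps * F * y / max(np.sqrt(r2), 1e-9)  # frequency adaptation
        x += dx * dt
        y += dy * dt
        omega += domega * dt
        t += dt
    return omega
```

After a sufficiently long interaction the returned frequency sits near the teacher's, which is the behaviour the thesis demonstrates on the Katana arm with its own controller.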

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans adapt to uncertainties easily, the skill-based performance that relates to their tacit knowledge usually cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, and then transferred to the robot. This thesis aims to reduce robot programming effort significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework for transferring skills from a human teacher to a robot, using probability-encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising means of recording state-action examples without restricting human experts during the skilled execution of their tasks. Firstly, to track human motion accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system.
The data fusion method was able to overcome the occlusion and frame-flipping problems in the two-camera Vicon setup and the drift problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy, in the range 0.8° to 6.4°, compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method of indicating contact feedback during manual manipulation. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimates and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state recognition method to encode and generalise the spatial and temporal information of the skilled executions, allowing a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to reproduce motions using the learned state-action policy. To simplify the validation procedure, additional demonstrations from the teacher, rather than the robot, were used to verify the reproduction performance of the policy, under the assumption that the human teacher and the robot learner are physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the motions reproduced by GMR were acceptable in these additional tests.
The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where human skilled performance is required to cope robustly with various uncertainties during task execution.
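The thesis's modified GMR is not reproduced here; the following is a minimal sketch of plain Gaussian Mixture Regression for one input and one output dimension, the step that turns a learned joint model over (input, output) pairs, e.g. (time, position), into a motion command E[output | input]. The function name and the two-dimensional component layout are illustrative assumptions.

```python
import numpy as np

def gmr(x, priors, means, covs):
    """Plain 1-in/1-out Gaussian Mixture Regression: condition a GMM over
    (input, output) pairs on input x and return the conditional mean E[y | x].
    means[k] = (mu_x, mu_y); covs[k] is the 2x2 covariance of component k."""
    # Responsibility of each component for the query input x.
    h = np.array([p * np.exp(-0.5 * (x - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h /= h.sum()
    # Blend the per-component conditional means (each linear in x).
    return sum(hk * (m[1] + c[1, 0] / c[0, 0] * (x - m[0]))
               for hk, m, c in zip(h, means, covs))
```

With a single component this reduces to linear regression, y = mu_y + (sigma_xy / sigma_xx)(x - mu_x); with several components it smoothly switches between local linear models along the demonstrated trajectory.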

    Métodos de classificação confiável e resiliente de movimentos de membros superiores baseado em extreme learning machines e sinais de eletromiografia de superfície

    Despite recent advances, reliable classification of surface electromyography (sEMG) signals remains an arduous task from the perspective of Machine Learning. sEMG signals have an inherent class overlap that prevents perfect separation of samples and produces classification noise. Alternatives usually rely on filtering the sEMG or on post-processing methods such as Major Voting, solutions that necessarily insert additional delays into signal classification and often do not yield substantial improvements. The approach of this work focuses on developing reliable and resilient methods, used in combination with an Extreme Learning Machines (ELM) classifier, to generate more stable and consistent outputs. Methods of pre-processing and post-processing, a smoothed arg max version of the classifier, adaptive thresholds, and an auxiliary binary classifier were used to process signals from 12 sEMG channels across three different databases. In total, 99 trials were performed, each containing 17 distinct movements of the hand-arm segment. In the best results, the methods reached an average overall accuracy of 66.99 ± 23.6% for the amputee volunteers' database, 87.10 ± 5.89% for the non-amputee volunteers' database, and rates above 99% for all variations of the trials composing the lab-acquired database. For the class-weighted average accuracy, the best results were 53.36 ± 18.2% for the amputee volunteers' database, 77.94 ± 6.22% for the non-amputee volunteers' database, and rates above 91% for the trials of the lab-acquired database. On both accuracy metrics considered, the results match or outperform alternatives described in the literature, using approaches that do not require significant structural changes in the classifier.
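As a rough illustration of the post-processing idea (a smoothed arg max with a rejection threshold; a sketch, not the thesis's exact method), the snippet below exponentially averages per-class scores over time and withholds a decision when the winner is not sufficiently dominant, suppressing single-frame classification noise. The function name, parameters and threshold rule are assumptions.

```python
import numpy as np

def stable_decisions(scores, alpha=0.8, reject_below=0.5):
    """Smoothed arg max with rejection: exponentially average per-class scores
    over time, then emit the winning class, or -1 ('no decision') when the
    winner holds less than reject_below of the total smoothed score.
    scores: array of shape (T, n_classes), one score vector per time frame."""
    s = np.zeros(scores.shape[1])
    decisions = []
    for frame in scores:
        s = alpha * s + (1 - alpha) * frame   # exponential smoothing
        k = int(np.argmax(s))
        decisions.append(k if s[k] >= reject_below * s.sum() else -1)
    return decisions
```

A one-frame spike of a wrong class is absorbed by the moving average, whereas a raw arg max would flip, which is the kind of output stability the abstract targets.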