147 research outputs found

    Detecting, locating and recognising human touches in social robots with contact microphones

    There are many situations in our daily life where touch gestures take place during natural human–human interaction: meeting people (shaking hands), personal relationships (caresses), moments of celebration or sadness (hugs), etc. Considering that robots are expected to form part of our daily life in the future, they should be endowed with the capacity to recognise these touch gestures and the part of their body that has been touched, since the gesture’s meaning may differ with its location. This work therefore presents a learning system for both purposes: detecting and recognising the type of touch gesture (stroke, tickle, tap and slap) and localising it. The interpretation of the meaning of the gesture is out of the scope of this paper. Different technologies have been applied to let a social robot perceive touch, commonly using a large number of sensors. Instead, our approach uses 3 contact microphones installed inside some parts of the robot. The audio signals generated when the user touches the robot are sensed by the contact microphones and processed using Machine Learning techniques. We acquired information from sensors installed in two social robots, Maggie and Mini (both developed by the RoboticsLab at the Carlos III University of Madrid), and a real-time version of the whole system has been deployed in the robot Mini. The system allows the robot to sense whether it has been touched, to recognise the kind of touch gesture, and to estimate its approximate location. The main advantage of using contact microphones as touch sensors is that a single microphone can “cover” a whole solid part of the robot. Besides, the sensors are unaffected by ambient noise, such as human voices, TV or music. Nevertheless, using several contact microphones makes it possible for a touch gesture to be detected by all of them, with each recognising a different gesture at the same time. The results show that the system is robust against this phenomenon. Moreover, the accuracy obtained for both robots is about 86%. The research leading to these results has received funding from the project “Robots Sociales para Estimulación Física, Cognitiva y Afectiva de Mayores (ROSES)”, funded by the Spanish Ministerio de Ciencia, Innovación y Universidades, and from RoboCity2030-DIH-CM, Madrid Robotics Digital Innovation Hub, S2018/NMT-4331, funded by “Programas de Actividades I+D en la Comunidad de Madrid” and cofunded by Structural Funds of the EU.
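
    A minimal sketch of this kind of pipeline is given below: short audio windows sensed by a contact microphone are summarised with a few generic descriptors and classified with an SVM. The feature set, window handling and classifier are illustrative assumptions, not the authors' implementation.

    # Hedged sketch: touch-gesture classification from contact-microphone audio.
    # Features, window length and classifier are assumptions for illustration.
    import numpy as np
    from sklearn.svm import SVC
    from sklearn.model_selection import cross_val_score

    def features(window, fs=16000):
        """Summarise one audio window with a few generic descriptors."""
        rms = np.sqrt(np.mean(window ** 2))                  # signal energy
        zcr = np.mean(np.abs(np.diff(np.sign(window))) > 0)  # zero-crossing rate
        spectrum = np.abs(np.fft.rfft(window))
        freqs = np.fft.rfftfreq(len(window), d=1.0 / fs)
        centroid = np.sum(freqs * spectrum) / (np.sum(spectrum) + 1e-12)
        return np.array([rms, zcr, centroid])

    # X_audio: windows cut around detected contacts; y: gesture labels
    # (stroke, tickle, tap, slap). Both are hypothetical placeholders.
    def evaluate(X_audio, y):
        X = np.vstack([features(w) for w in X_audio])
        return cross_val_score(SVC(kernel="rbf"), X, y, cv=10).mean()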

    Comparison between low-cost and high-end sEMG sensors for the control of a transradial myoelectric prosthesis

    Integrated master's thesis in Biomedical Engineering and Biophysics, presented to the Universidade de Lisboa through the Faculdade de Ciências, 2017. Amputation can completely change anyone's life. The autonomy to perform everyday tasks, which most of us take for granted, is drastically reduced. Beyond the added difficulty of such tasks, the individual's self-confidence also suffers a severe blow, which can even lead to depression. For all these reasons, the quality of life of a transradial amputee is severely and negatively affected. Fortunately, several types of prosthetic solutions already exist to cope with the obstacles that follow an amputation, among them myoelectric prostheses. These prostheses can use pattern recognition algorithms to associate patterns observed in sEMG signals from the stump with different hand gestures, offering the transradial amputee the possibility of regaining some autonomy through a device with functionality similar to that of the human hand. However, there are obstacles related to the accessibility of these devices, most notably their price. The prices of commercially available myoelectric prostheses are currently too high, a major setback for economically disadvantaged individuals living with transradial amputation. There is therefore a need to reduce production costs and, consequently, the market price. Some efforts are already under way to bring these figures down, such as 3D-printing some of the components. To the same end, low-cost sEMG sensors might be used instead of high-end ones. It must be ensured, however, that the control performance of a myoelectric prosthesis achieved with low-cost sensors can be as good as, or better than, that achieved with high-end sensors. This is precisely the main focus of this dissertation. To perform this comparison, the Myo Armband and OttoBock sensors were used. The Myo Armband is a low-cost commercial armband that allows the control of multimedia applications and contains eight sEMG sensors; the OttoBock sensors, on the other hand, are the electrodes of choice for prosthetic applications. The two types of sensors were applied in two distinct sEMG systems, and two experiments were carried out to evaluate the performance of each. In the first experiment, sEMG measurements were taken from the forearms of nine healthy subjects with both systems. Different pattern recognition algorithms were used to classify segments of the sEMG signal corresponding to four different hand gestures. Five sensors were used in each system. The experiment was divided into two sessions, each following exactly the same protocol, and data acquisition was performed continuously. Each subject was asked to watch a video and replicate each of the gestures shown in it. Each of the four selected gestures was repeated 10 times, for 10 seconds each. This procedure was repeated for each system in each session. Although each gesture was recorded for 10 seconds, only the last 6 seconds were used for classification. This was done to use only the steady-state sEMG signal, not the transient produced by the subject's movement between gestures. Different signal processing and feature extraction techniques were applied to the acquired signals, and the resulting data were classified with six different algorithms, including Linear Discriminant Analysis, Naïve Bayes, k Nearest Neighbours and three variations of Support Vector Machines. The purpose of this experiment was thus to assess which combinations of signal processing techniques and classifiers would be the most favourable for achieving the highest possible classification accuracy. Two evaluation methods were used to assess the computed accuracies: 10-fold cross-validation and train-test evaluation. Statistical tests on the results showed no significant differences between the two systems, which supports the main hypothesis of this dissertation, although this hypothesis still had to be validated with data from transradial amputees, the end users of such systems. In the second experiment, sEMG measurements were therefore taken from twelve transradial amputees and twelve healthy subjects. As in the first experiment, two sessions with an identical protocol were held, but the protocol itself was changed in several respects: the number of sensors in each system was increased to eight, the number of hand gestures was increased to five, data were acquired discontinuously, and the duration of each acquisition per gesture was reduced to 2 seconds, so as to capture only the steady-state sEMG signal. Fifteen acquisitions were made for each of the five hand gestures, a total of 75 acquisitions. The combinations of signal processing techniques and classifiers used in this experiment were selected according to the results of the first: four of the six signal processing combinations, and two classifiers, one of the Support Vector Machine variants and k Nearest Neighbours. The computed accuracies were again evaluated with 10-fold cross-validation and train-test evaluation. The results showed no significant differences between the accuracies obtained with the two systems, except in the cross-validation results, where the OttoBock system yielded higher accuracies than the Myo Armband; even then, the accuracies of the latter proved quite competitive. Higher accuracies were observed for the healthy subjects with both systems. This was to be expected, since the lack of daily use of the phantom limb (the sensation that the amputated limb is still present) leads the amputee to “forget” how certain gestures were performed with the amputated hand. Overall, no significant differences were found between the results obtained with the two systems, which validates the main hypothesis of this dissertation: the low-cost sensors allowed classification results as good as those obtained with high-end sensors. It should be noted, however, that this is only possible when certain signal processing techniques are applied to the data from the Myo sensors, namely an envelope and a low-pass filter with a cut-off frequency of 1 Hz; without any processing, the results obtained with these sensors were rather poor. The OttoBock sensors, by contrast, yielded very good results even without any signal processing, because they output a signal that is already filtered, enveloped and amplified, i.e. a high-quality signal. Considering these results, low-cost sensors in a myoelectric prosthesis control system may indeed offer performance as good as that of high-end sensors, provided that appropriate signal processing and a suitable classifier are used. In short, the sensors currently used in prosthetic applications can be replaced with lower-cost ones to obtain more affordable devices without compromising their quality of operation. Before such sensors are applied in a myoelectric prosthesis, however, the system must be tested in real time and a robust control strategy must be designed to provide good communication between the user's intentions and the inherent functionality of the prosthesis.

    The loss of a hand due to amputation can completely change anyone's life. The autonomy to perform daily life tasks, which most of us take for granted, is drastically reduced, as is one's quality of life. Fortunately, the use of a myoelectric prosthesis can help in overcoming the problems a transradial amputee must face every day. However, the current cost of such devices can limit their accessibility for economically less favoured people. In this dissertation, it is hypothesised that low-cost sensors can perform as well as, or even better than, the high-end sensors currently used to control a myoelectric prosthesis. If this hypothesis can be validated, it may help in decreasing the cost of a myoelectric prosthesis and in making it more accessible to the final user, the transradial amputee. To compare both types of sensors, two experiments were performed. The first was performed only on able-bodied subjects and had the objective of selecting the best combination of signal processing techniques and classifiers to use on the obtained sEMG signals. In the second experiment, sEMG measurements were performed on both able-bodied and transradial amputated subjects, and the signal processing techniques and classifiers that yielded the best results in the first experiment were used to classify the acquired data from all subjects. Overall, the accuracies calculated with the low-cost sensors, using some of the signal processing techniques, proved not to be significantly different from those obtained with the high-end sensors. This indicates that low-cost sensors in a system to control a myoelectric prosthesis might indeed provide performance as efficient as high-end sensors, and may make it possible to lower the overall cost of the currently available devices.
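
    The Myo preprocessing mentioned above, rectification followed by a 1 Hz low-pass envelope, can be sketched as follows; the sampling rate and filter order are assumptions for illustration, not the dissertation's exact settings.

    # Hedged sketch of the envelope extraction described in the abstract.
    import numpy as np
    from scipy.signal import butter, filtfilt

    def emg_envelope(emg, fs=200.0, order=4):
        """Return the low-frequency envelope of one raw sEMG channel."""
        rectified = np.abs(emg - np.mean(emg))               # remove offset, rectify
        b, a = butter(order, 1.0 / (fs / 2.0), btype="low")  # 1 Hz cut-off
        return filtfilt(b, a, rectified)                     # zero-phase filtering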

    Fused mechanomyography and inertial measurement for human-robot interface

    Human-Machine Interfaces (HMI) are the technology through which we interact with the ever-increasing quantity of smart devices surrounding us. The fundamental goal of an HMI is to facilitate robot control by uniting a human operator, as the supervisor, with a machine, as the task executor. Sensors, actuators, and onboard intelligence have not reached the point where robotic manipulators may function with complete autonomy, so some form of HMI is still necessary in unstructured environments. These may include environments where direct human action is undesirable or infeasible, and situations where a robot must assist and/or interface with people. Contemporary literature has introduced concepts such as body-worn mechanical devices, instrumented gloves, inertial or electromagnetic motion tracking sensors on the arms, head, or legs, electroencephalographic (EEG) brain activity sensors, electromyographic (EMG) muscular activity sensors and camera-based (vision) interfaces to recognize hand gestures and/or track arm motions for assessment of operator intent and generation of robotic control signals. While these developments offer a wealth of future potential, their utility has been largely restricted to laboratory demonstrations in controlled environments due to issues such as a lack of portability and robustness and an inability to extract operator intent for both arm and hand motion. Wearable physiological sensors hold particular promise for the capture of human intent/command. EMG-based gesture recognition systems in particular have received significant attention in recent literature. As wearable pervasive devices, they offer benefits over camera or physical input systems in that they neither inhibit the user physically nor constrain the user to a location where the sensors are deployed. Despite these benefits, EMG alone has yet to demonstrate the capacity to recognize both gross movement (e.g. arm motion) and finer grasping (e.g. hand movement). As such, many researchers have proposed fusing muscle activity (EMG) and motion tracking (e.g. inertial measurement) to combine arm motion and grasp intent as HMI input for manipulator control. However, such work has arguably reached a plateau, since EMG suffers from interference from environmental factors which cause signal degradation over time, demands an electrical connection with the skin, and has not demonstrated the capacity to function outside controlled environments for long periods of time. This thesis proposes a new form of gesture-based interface utilising a novel combination of inertial measurement units (IMUs) and mechanomyography (MMG) sensors. The modular system permits numerous configurations of IMUs to derive body kinematics in real time and uses this to convert arm movements into control signals. Additionally, bands containing six mechanomyography sensors were used to observe muscular contractions in the forearm which are generated by specific hand motions. This combination of continuous and discrete control signals allows a large variety of smart devices to be controlled. Several methods of pattern recognition were implemented to provide accurate decoding of the mechanomyographic information, including Linear Discriminant Analysis and Support Vector Machines. Based on these techniques, accuracies of 94.5% and 94.6% respectively were achieved for 12-gesture classification. In real-time tests, accuracies of 95.6% were achieved for 5-gesture classification.
It has previously been noted that MMG sensors are susceptible to motion-induced interference. This thesis further establishes that arm pose also changes the measured signal, and introduces a new method of fusing IMU and MMG data to provide a classification that is robust to both of these sources of interference. Additionally, an improvement to orientation estimation and a new orientation estimation algorithm are proposed. These improvements to the robustness of the system provide the first solution able to reliably track both motion and muscle activity for extended periods of time for HMI outside a clinical environment. Applications in robot teleoperation in both real-world and virtual environments were explored. With multiple degrees of freedom, robot teleoperation provides an ideal test platform for HMI devices, since it requires a combination of continuous and discrete control signals. The field of prosthetics also represents a unique challenge for HMI applications. In an ideal situation, the sensor suite should be capable of detecting the muscular activity in the residual limb which is naturally indicative of intent to perform a specific hand pose, and of triggering this pose in the prosthetic device. Dynamic environmental conditions within a socket, such as skin impedance, have delayed the translation of gesture control systems into prosthetic devices; mechanomyography sensors, however, are unaffected by such issues. There is huge potential for a system like this to be utilised as a controller as ubiquitous computing systems become more prevalent and the desire for a simple, universal interface increases. Such systems have the potential to impact significantly on the quality of life of prosthetic users and others.
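
    The sketch below illustrates feature-level fusion of MMG and IMU data in the spirit of this thesis: arm orientation is included in the feature vector so the classifier can account for pose. The window features, input shapes and the LDA classifier are assumptions, not the thesis design.

    # Hedged sketch: fusing MMG band signals with IMU orientation for gesture
    # classification. Inputs are hypothetical (samples x channels) arrays.
    import numpy as np
    from sklearn.discriminant_analysis import LinearDiscriminantAnalysis

    def window_features(mmg, imu_quat):
        """mmg: (samples, 6) band channels; imu_quat: (samples, 4) orientation."""
        mav = np.mean(np.abs(mmg), axis=0)       # mean absolute value per channel
        var = np.var(mmg, axis=0)                # signal power per channel
        pose = np.mean(imu_quat, axis=0)         # mean arm orientation in window
        return np.concatenate([mav, var, pose])  # fused feature vector

    def train(windows, labels):
        X = np.vstack([window_features(m, q) for m, q in windows])
        return LinearDiscriminantAnalysis().fit(X, labels)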

    Machine Learning in Sensors and Imaging

    Machine learning is extending its applications in various fields, such as image processing, the Internet of Things, user interfaces, big data, manufacturing, management, etc. As data are required to build machine learning networks, sensors are one of the most important technologies. In addition, machine learning networks can contribute to improving sensor performance and to creating new sensor applications. This Special Issue addresses all types of machine learning applications related to sensors and imaging. It covers computer vision-based control, activity recognition, fuzzy label classification, failure classification, motor temperature estimation, the camera calibration of intelligent vehicles, error detection, color prior models, compressive sensing, wildfire risk assessment, shelf auditing, forest-growing stem volume estimation, road management, image denoising, and touchscreens.

    Extending the Design Space of E-textile Assistive Smart Environment Applications

    The thriving field of Smart Environments has allowed computing devices to gain new capabilities and develop new interfaces, thus becoming more and more part of our lives. In many of these areas it has become unthinkable to forgo assistive functionality, such as comfort and safety functions while driving, safety functions while working in an industrial plant, or the self-optimisation of daily activities with a smartwatch. Adults spend a lot of time on flexible surfaces, such as the office chair, the bed or the car seat; these are crucial parts of our environments. Yet even though environments have become smarter through integrated computing, it is mostly rigid surfaces and objects that have gained new capabilities and interfaces. In this thesis, I build on the advantages that flexible and bendable surfaces have to offer and look into the creation process of assistive Smart Environment applications leveraging these surfaces. I do this with three main contributions. First, since most Smart Environment applications are built into rigid surfaces, I extend the body of knowledge by designing new assistive applications integrated in flexible surfaces such as comfortable chairs, beds, or any type of soft, flexible object. These applications offer assistance through preventive functionality such as decubitus ulcer prevention while lying in bed, back pain prevention while sitting on a chair, or emotion detection from movements on a couch. Second, I propose a new framework for the design process of flexible-surface prototypes, which addresses the cost, in working time and materials, of building hardware prototypes over multiple iterations. I tackle this research challenge by creating a simulation framework which can be used to design applications with changing surface shape. In a first step I validate the simulation framework by building a real prototype and a simulated prototype and comparing the results in terms of sensor count and sensor placement. Furthermore, I use the simulation framework to analyse how the developer's level of experience influences an application design. Finally, since sensor capabilities play a major role in the design process and humans often come into contact with surfaces made of fabric, I combine the integration advantages of fabric with those of capacitive proximity sensing electrodes. Through a multitude of capacitive proximity sensing measurements, I determine the performance of electrodes with varying properties such as material, shape, size, pattern density, stitching type, and supporting fabric. I discuss the results of this performance evaluation and condense them into e-textile capacitive sensing electrode guidelines, applied as an example to the use case of a bed sheet for breathing-rate detection.
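
    The bed-sheet use case can be illustrated with a short sketch estimating breathing rate from a single capacitive proximity channel; the sampling rate and frequency band are assumptions, and the processing in the thesis may differ.

    # Hedged sketch: breathing rate from one capacitive channel via the
    # dominant spectral peak in a plausible respiration band.
    import numpy as np

    def breathing_rate_bpm(signal, fs=20.0):
        """Estimate breaths per minute from the dominant spectral peak."""
        x = signal - np.mean(signal)             # remove the DC offset
        spectrum = np.abs(np.fft.rfft(x))
        freqs = np.fft.rfftfreq(len(x), d=1.0 / fs)
        band = (freqs >= 0.1) & (freqs <= 0.5)   # ~6 to 30 breaths per minute
        f_peak = freqs[band][np.argmax(spectrum[band])]
        return 60.0 * f_peak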

    Development of new intelligent autonomous robotic assistant for hospitals

    Continuous technological development in modern societies has increased the quality of life and average life-span of people. This imposes an extra burden on the current healthcare infrastructure, which also creates the opportunity to develop new, autonomous, assistive robots to help alleviate the extra workload. The research question explored the extent to which a prototypical robotic platform can be created and implemented in a hospital environment with the aim of assisting the hospital staff with daily tasks, such as guiding patients and visitors, following patients to ensure safety, and making deliveries to and from rooms and workstations. In terms of major contributions, this thesis outlines five domains of the development of an actual robotic assistant prototype. Firstly, a comprehensive schematic design is presented in which mechanical, electrical, motor control and kinematics solutions have been examined in detail. Next, a new method is proposed for assessing the intrinsic properties of different flooring types, using machine learning to classify mechanical vibrations. Thirdly, the technical challenge of enabling the robot to simultaneously map and localise itself in a dynamic environment is addressed, whereby leg detection is introduced to ensure that, whilst mapping, the robot is able to distinguish between people and the background. The fourth contribution is the integration of geometric collision prediction into stabilised dynamic navigation methods, optimising the robot's ability to update its path planning in real time in a dynamic environment. Lastly, the problem of detecting gaze at long distances is addressed by means of a new eye-tracking hardware solution which combines infra-red eye tracking and depth sensing. The research serves both to provide a template for the development of comprehensive mobile assistive-robot solutions, and to address some of the inherent challenges of introducing autonomous assistive robots into hospital environments.
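
    The flooring-classification contribution can be sketched as follows: spectral band energies of a vibration window feed a generic classifier. The band-energy features and the random forest are illustrative assumptions, not the method's actual design.

    # Hedged sketch: classifying flooring types from mechanical vibrations.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier

    def band_energies(vibration, n_bands=8):
        """Split the power spectrum into equal bands; return each band's energy."""
        spectrum = np.abs(np.fft.rfft(vibration - np.mean(vibration))) ** 2
        return np.array([b.sum() for b in np.array_split(spectrum, n_bands)])

    # X_vib: vibration windows recorded while driving; y: floor labels
    # (e.g. carpet, tile, concrete); both are hypothetical placeholders.
    def train_floor_classifier(X_vib, y):
        X = np.vstack([band_energies(v) for v in X_vib])
        return RandomForestClassifier(n_estimators=200).fit(X, y)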

    Multiple Action Recognition for Video Games (MARViG)

    Action recognition research has historically focused on increasing accuracy on datasets captured in highly controlled environments, and perfect or near-perfect offline action recognition accuracy on scripted datasets has been achieved. The aim of this thesis is to deal with the more complex problem of online action recognition with low latency in real-world scenarios. To fulfil this aim, two new multi-modal gaming datasets were captured and three novel algorithms for online action recognition were proposed. The two gaming datasets, G3D and G3Di, for real-time action recognition with multiple actions and multi-modal data, were captured and publicly released. Furthermore, G3Di was captured using a novel game-sourcing method, so the actions are realistic. The three novel algorithms for online action recognition with low latency are as follows. Firstly, Dynamic Feature Selection combines the discriminative power of Random Forests for feature selection with an ensemble of AdaBoost classifiers for dynamic classification. Secondly, Clustered Spatio-Temporal Manifolds model the dynamics of human actions with style-invariant action templates, combined with Dynamic Time Warping for execution-rate invariance. Finally, a Hierarchical Transfer Learning framework comprises a novel transfer learning algorithm to detect compound actions, in addition to hierarchical interaction detection to recognise the actions and interactions of multiple subjects. The proposed algorithms run in real time with low latency, ensuring they are suitable for a wide range of natural user interface applications, including gaming. State-of-the-art results were achieved for online action recognition. Experimental results indicate the higher complexity of the G3Di dataset in comparison to existing gaming datasets, highlighting the importance of this dataset for designing algorithms suitable for realistic interactive applications. This thesis has advanced the study of realistic action recognition and is expected to serve as a basis for further study within the research community.
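
    Dynamic Time Warping, used above for execution-rate invariance, can be sketched in a few lines; the per-frame feature sequences (e.g. skeleton descriptors) and the nearest-template classifier are hypothetical placeholders.

    # Hedged sketch: DTW distance and nearest-template action classification.
    import numpy as np

    def dtw_distance(a, b):
        """Accumulated cost of the best monotonic alignment of a against b."""
        n, m = len(a), len(b)
        D = np.full((n + 1, m + 1), np.inf)
        D[0, 0] = 0.0
        for i in range(1, n + 1):
            for j in range(1, m + 1):
                cost = np.linalg.norm(a[i - 1] - b[j - 1])  # local frame distance
                D[i, j] = cost + min(D[i - 1, j],           # insertion
                                     D[i, j - 1],           # deletion
                                     D[i - 1, j - 1])       # match
        return D[n, m]

    def classify(sequence, templates):
        """templates: dict mapping an action label to a template sequence."""
        return min(templates, key=lambda lbl: dtw_distance(sequence, templates[lbl]))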

    Machine learning-based dexterous control of hand prostheses

    Upper-limb myoelectric prostheses are controlled using muscle activity information recorded on the skin surface via electromyography (EMG). Intuitive prosthetic control can be achieved by deploying statistical and machine learning (ML) tools to decipher the user’s movement intent from EMG signals. This thesis proposes various means of advancing the capabilities of non-invasive, ML-based control of myoelectric hand prostheses. Two main directions are explored, namely classification-based hand grip selection and proportional finger position control using regression methods. Several practical aspects are considered with the aim of maximising the clinical impact of the proposed methodologies, which are evaluated with offline analyses as well as real-time experiments involving both able-bodied and transradial amputee participants. It has been generally accepted that the EMG signal may not always be a reliable source of control information for prostheses, mainly due to its stochastic and non-stationary properties. One particular issue associated with the use of surface EMG signals for upper-extremity myoelectric control is the limb position effect, that is, the lack of decoding generalisation under novel arm postures. To address this challenge, it is proposed to make concurrent use of EMG sensors and inertial measurement units (IMUs). It is demonstrated that this can lead to a significant improvement in both classification accuracy (CA) and real-time prosthetic control performance. Additionally, the relationship between surface EMG and inertial measurements is investigated, and it is found that these modalities are partially related, as they reflect different manifestations of the same underlying phenomenon: muscular activity. In the field of upper-limb myoelectric control, the linear discriminant analysis (LDA) classifier has arguably been the most popular choice for movement intent decoding, mainly owing to its ease of implementation, low computational requirements, and acceptable decoding performance. Nevertheless, this method makes a strong fundamental assumption, namely that data observations from different classes share a common covariance structure. Although this assumption may often be violated in practice, the performance of the method has been found comparable to that of more sophisticated algorithms. In this thesis, it is proposed to remove this assumption by making use of general class-conditional Gaussian models and appropriate regularisation to avoid overfitting. Through an exhaustive analysis on benchmark datasets, it is demonstrated that the proposed approach based on regularised discriminant analysis (RDA) can offer an impressive increase in decoding accuracy. By combining RDA classification with a novel confidence-based rejection policy that aims to minimise the rate of unintended hand motions, it is shown to be feasible to attain robust myoelectric grip control of a prosthetic hand using a single pair of surface EMG-IMU sensors. Most present-day commercial prosthetic hands offer the mechanical ability to support individual digit control; however, classification-based methods can only produce pre-defined grip patterns, which results in prosthesis under-actuation. Although classification-based grip control provides a great advantage over conventional strategies, it is far from intuitive and natural to the user.
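
    A minimal sketch of class-conditional Gaussian classification with covariance regularisation and confidence-based rejection follows; scikit-learn's QDA with its reg_param is used here as a simple stand-in for full RDA, and the rejection threshold is an illustrative choice, not the thesis' policy.

    # Hedged sketch: regularised Gaussian classification with rejection.
    import numpy as np
    from sklearn.discriminant_analysis import QuadraticDiscriminantAnalysis

    REJECT = -1  # sentinel: keep the hand in its current state

    def fit_rda_like(X, y, reg=0.1):
        """Class-conditional Gaussians with shrunken per-class covariances."""
        return QuadraticDiscriminantAnalysis(reg_param=reg).fit(X, y)

    def decide(clf, x, threshold=0.8):
        """Return a grip class, or REJECT when the posterior is not confident
        enough; rejection suppresses unintended hand motions."""
        p = clf.predict_proba(x.reshape(1, -1))[0]
        k = int(np.argmax(p))
        return k if p[k] >= threshold else REJECT
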
A potential way of approaching the level of dexterity enjoyed by the human hand is via continuous and individual control of multiple joints. To this end, an exhaustive analysis is performed on the feasibility of reconstructing multidimensional hand joint angles from surface EMG signals. A supervised method based on the eigenvalue formulation of multiple linear regression (MLR) is then proposed to simultaneously reduce the dimensionality of the input and output variables, and its performance is compared to that of the typically used unsupervised methods, which may produce suboptimal results in this context. An experimental paradigm is finally designed to evaluate the efficacy of the proposed finger position control scheme during real-time prosthesis use. This thesis provides insight into the capacity of deploying a range of computational methods for non-invasive myoelectric control. It contributes towards developing intuitive interfaces for dexterous control of multi-articulated prosthetic hands by transradial amputees.
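
    One eigenvalue-based route to supervised dimensionality reduction in MLR is reduced-rank regression, sketched below under stated assumptions; the exact formulation used in the thesis may differ. X holds EMG features and Y hand joint angles, both hypothetical inputs.

    # Hedged sketch: reduced-rank regression mapping EMG features to joint angles.
    import numpy as np

    def reduced_rank_regression(X, Y, rank, ridge=1e-6):
        """Return a coefficient matrix of at most the given rank."""
        Xc, Yc = X - X.mean(axis=0), Y - Y.mean(axis=0)
        # Full-rank ridge solution, then projection onto the leading
        # directions of the fitted outputs (their right singular vectors).
        B_full = np.linalg.solve(Xc.T @ Xc + ridge * np.eye(X.shape[1]), Xc.T @ Yc)
        _, _, Vt = np.linalg.svd(Xc @ B_full, full_matrices=False)
        P = Vt[:rank].T @ Vt[:rank]   # rank-constrained output projection
        return B_full @ P             # predict: (X_new - X.mean(0)) @ B + Y.mean(0)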