13 research outputs found

    Measuring operator’s pain: toward evaluating musculoskeletal disorders at work

    Musculoskeletal disorders (MSDs) affect an increasing number of people in the general working population. In this perspective, we developed a measuring tool that records muscle activity in specific regions of the body and standing posture, the latter through the center of pressure under the feet and the positions of the feet. The tool also comprises an instrumented helmet containing an electroencephalogram (EEG) to measure brain activity and an accelerometer to report head movements; it thus combines a non-invasive instrumented insole with a safety helmet. Muscle activity is measured with an electromyogram (EMG). The aim is to combine all the data in order to identify consistent patterns between brain activity, posture, movement and muscle activity, and then to understand their connection to the development of MSDs. This paper presents three situations reported to pose a risk of MSDs, together with an analysis of the recorded signals aimed at differentiating adequate from abnormal postures.
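    As an illustration of the posture part of such a tool, the center of pressure under the feet is simply the pressure-weighted mean of the insole sensor positions. A minimal sketch, assuming a hypothetical four-cell insole layout (the actual sensor geometry is not specified in the abstract):

```python
def center_of_pressure(pressures, positions):
    """Pressure-weighted mean of sensor positions (coordinates in cm)."""
    total = sum(pressures)
    if total == 0:
        raise ValueError("no pressure detected")
    x = sum(p * pos[0] for p, pos in zip(pressures, positions)) / total
    y = sum(p * pos[1] for p, pos in zip(pressures, positions)) / total
    return x, y

# Four hypothetical cells: heel-left, heel-right, toe-left, toe-right
cop = center_of_pressure([10.0, 10.0, 30.0, 30.0],
                         [(0.0, 0.0), (4.0, 0.0), (0.0, 20.0), (4.0, 20.0)])
# more weight on the toe cells pulls the CoP toward the toe end
```

    A shift of the CoP away from the foot's midline over time is the kind of signal such a tool would correlate with the EEG and EMG channels.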

    A Smart Safety Helmet using IMU and EEG sensors for analysis of worker’s fatigue

    It is known that head gestures and mental states can reflect human behaviors related to a risk of accident when using machine tools. The research presented in this paper aims to reduce the number of injuries and thus increase worker safety. Instead of using a camera, this paper presents a Smart Safety Helmet (SSH) that tracks the head gestures and mental states of a worker in order to recognize anomalous behavior. Information extracted from the SSH is used to compute a risk level of accident (a safety level) for preventing and reducing injuries or accidents. The SSH is an inexpensive, non-intrusive, non-invasive, non-vision-based system consisting of a 9DOF Inertial Measurement Unit (IMU) and dry EEG electrodes. A haptic device, such as a vibrotactile motor, is integrated into the helmet to alert the operator when the computed risk level (fatigue, high stress or error) reaches a threshold. Once the risk level exceeds the threshold, a signal is sent wirelessly to stop the relevant machine tool or process.
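    The abstract does not give the SSH's actual risk model; as a hedged sketch, a weighted combination of normalized fatigue, stress and error indicators with two thresholds (haptic alert, then machine stop) could look like the following. The weights, threshold values and action names are illustrative assumptions, not the paper's design:

```python
def risk_level(fatigue, stress, error_rate, weights=(0.4, 0.3, 0.3)):
    """Weighted combination of indicators normalized to [0, 1]."""
    return (weights[0] * fatigue + weights[1] * stress
            + weights[2] * error_rate)

def monitor(fatigue, stress, error_rate,
            alert_threshold=0.6, stop_threshold=0.8):
    """Return the risk level and the safety actions it triggers."""
    level = risk_level(fatigue, stress, error_rate)
    actions = []
    if level >= alert_threshold:
        actions.append("vibrate_helmet")   # haptic warning on the helmet
    if level >= stop_threshold:
        actions.append("stop_machine")     # wireless stop of the machine tool
    return level, actions
```

    The two-threshold design mirrors the abstract's escalation: warn the operator first, and stop the process only when the risk stays or goes higher.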

    Use of ecological gestures in soccer games running on mobile devices

    The strong integration of “intelligent mobile devices” into modern societies offers great potential for the widespread distribution of mobile serious games. As with Virtual Reality based systems, these serious games need to be validated ecologically in order to be useful and efficient. In this context, this paper addresses the use of ecological interactions for a mobile serious game. We exploit a wearable insole to let users interact with a virtual soccer game via real-world soccer movements. We analyze the concept of ecological interactions and detail the system used for the recognition of ecological gestures. A preliminary study showed that the proposed system can be exploited for real-time gesture recognition on a mobile device.
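    The recognition method itself is only summarized in the abstract; a minimal stand-in for real-time gesture spotting on an insole signal is peak detection with a refractory period. The threshold, refractory window and the kick-as-peak assumption are all illustrative, not the paper's algorithm:

```python
def detect_kick(samples, threshold=2.5, refractory=10):
    """Return indices where a kick-like peak starts.
    samples: forward-acceleration magnitudes (g); the refractory window
    suppresses duplicate detections of the same gesture."""
    events, cooldown = [], 0
    for i, a in enumerate(samples):
        if cooldown > 0:
            cooldown -= 1
        elif a > threshold:
            events.append(i)
            cooldown = refractory
    return events
```

    The single pass over the stream is what makes this kind of detector cheap enough to run in real time on a mobile device.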

    Toward an augmented shoe for preventing falls related to physical conditions of the soil

    It is known that the physical conditions of an environment can represent an important risk of falling. In this paper, we report an ongoing project toward the creation of intelligent clothing aimed at preventing falls related to such conditions. The package described here is centered on an intelligent shoe. The developed prototype comprises two main parts: hardware and software. The hardware is composed of a set of sensors and actuators distributed in strategic positions on the shoe, while the software is a soft real-time system running on a smartphone. Our prototype has been used to differentiate the physical properties of soils (concrete, broken stone, sand and dust stone).

    Detection and prediction of falls among elderly people using walkers

    Falls among elderly people are a major health burden, especially because of their long-term consequences. Existing research already describes how elderly people fall and the reasons for those falls. We aimed to develop a system that not only detects falls and sends alerts to relatives and doctors, addressing one of the biggest fears of the elderly: falling without the ability to call for help, but also attempts fall prevention. The prevention component is based on “relatively safe walking patterns” that the system tries to detect during walking. We used SensorTag 2.0 CC2650 sensors, an iPhone and an Apple Watch to collect motion data (gyroscope, accelerometer and magnetometer) and compared the accuracy of each device. Having chosen the iPhone and Apple Watch, we used the Core ML framework to integrate a neural network model generated with Keras into a prototype app. The iPhone app detects falls reliably, but more accurate data collection is needed to improve the machine learning model and thus fall prediction. The Apple Watch app does not perform acceptably, despite a well-prepared Keras model, and requires revision.
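    The study integrates a Keras-trained neural network via Core ML; as a dependency-free illustration of the underlying signal problem, here is the classical free-fall-then-impact threshold baseline often used in this literature. The thresholds (in g) and window length are illustrative assumptions, not the paper's model:

```python
import math

def detect_fall(samples, free_fall=0.5, impact=2.5, window=20):
    """Flag a fall when a near-free-fall dip in total acceleration is
    followed by an impact spike within `window` samples.
    `samples` are (ax, ay, az) tuples in g."""
    mags = [math.sqrt(ax * ax + ay * ay + az * az) for ax, ay, az in samples]
    for i, m in enumerate(mags):
        if m < free_fall and any(v > impact for v in mags[i + 1:i + 1 + window]):
            return True
    return False
```

    A learned model replaces the hand-picked thresholds with decision boundaries fitted to recorded falls, which is where the paper's accuracy gap between the iPhone and Apple Watch data comes in.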

    Method to determine physical properties of the ground

    The method determines physical properties of the ground stepped upon by a user wearing footwear incorporating an accelerometer, and includes: receiving a raw signal from the accelerometer during at least one step taken by the user on the ground; identifying, in the received raw signal, at least one characteristic signature; associating the at least one characteristic signature with physical properties of the ground; and generating a signal indicating those physical properties based on said association. The generated signal can further be used to advise the user of a risk of falling based on at least the physical properties of the ground.
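    The claimed steps, receive a raw signal, extract a characteristic signature, associate it with ground properties, can be sketched as a nearest-signature lookup. The features and reference values below are invented for illustration; the patent does not disclose them:

```python
def step_features(signal):
    """Toy signature of one step: (peak magnitude, overall variance)."""
    peak = max(signal)
    mean = sum(signal) / len(signal)
    var = sum((s - mean) ** 2 for s in signal) / len(signal)
    return peak, var

# Hypothetical reference signatures per ground type (would be calibrated):
# hard ground gives sharp impact peaks, soft ground damps them.
REFERENCE = {
    "concrete": (3.0, 0.9),
    "sand": (1.2, 0.2),
}

def classify_ground(signal):
    """Associate a step's signature with the closest reference ground."""
    p, v = step_features(signal)
    return min(REFERENCE,
               key=lambda g: (REFERENCE[g][0] - p) ** 2
                             + (REFERENCE[g][1] - v) ** 2)
```

    The output of `classify_ground` plays the role of the "generated signal" that downstream logic would turn into a fall-risk advisory.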

    Analysis of Android Device-Based Solutions for Fall Detection

    Falls are a major cause of health and psychological problems, as well as hospitalization costs, among older adults. Thus, the investigation of automatic Fall Detection Systems (FDSs) has received special attention from the research community during the last decade. In this area, the widespread popularity, decreasing price, computing capabilities, built-in sensors and multiplicity of wireless interfaces of Android-based devices (especially smartphones) have fostered the adoption of this technology for deploying wearable and inexpensive fall detection architectures. This paper presents a critical and thorough analysis of existing fall detection systems based on Android devices. The review systematically classifies and compares the proposals in the literature according to criteria such as system architecture, employed sensors, detection algorithm and response in case of a fall alarm. The study emphasizes the analysis of the evaluation methods employed to assess the effectiveness of the detection process. The review reveals the complete lack of a reference framework to validate and compare the proposals. In addition, the study shows that most research works do not evaluate the actual applicability of Android devices (with their limited battery and computing resources) to fall detection solutions. Ministerio de Economía y Competitividad TEC2013-42711-

    Embodied interaction with visualization and spatial navigation in time-sensitive scenarios

    Paraphrasing the theory of embodied cognition, all aspects of our cognition are determined primarily by contextual information and the means of physical interaction with data and information. In hybrid human-machine systems involving complex decision making, continuously maintaining a high level of attention while employing a deep understanding of the task performed, as well as its context, is essential. Utilizing embodied interaction to interact with machines has the potential to promote thinking and learning, according to the theory of embodied cognition proposed by Lakoff. Additionally, a hybrid human-machine system utilizing natural and intuitive communication channels (e.g., gestures, speech, and body stances) should afford an array of cognitive benefits outstripping more static forms of interaction (e.g., a computer keyboard). This research proposes such a computational framework based on a Bayesian approach; the framework infers the operator's focus of attention from the operator's physical expressions. Specifically, this work aims to assess the effect of embodied interaction on attention during the solution of complex, time-sensitive, spatial navigational problems. Toward the goal of assessing the operator's level of attention, we present a method linking the operator's interaction utility, inference, and reasoning. The level of attention was inferred through networks coined Bayesian Attentional Networks (BANs). BANs are structures describing cause-effect relationships between the operator's attention, physical actions and decision-making. The proposed framework also generates a representative BAN, called the Consensus (Majority) Model (CMM); the CMM consists of an iteratively derived and agreed graph among candidate BANs obtained from experts and from an automatic learning process. Finally, the best combinations of interaction modalities and feedback were determined by the use of particular utility functions.
    This methodology was applied to a spatial navigational scenario wherein the operators interacted with dynamic images through a series of decision-making processes. Real-world experiments were conducted to assess the framework's ability to infer the operator's levels of attention. Users were instructed to complete a series of spatial-navigational tasks using an assigned pairing of an interaction modality out of five categories (vision-based gesture, glove-based gesture, speech, feet, or body balance) and a feedback modality out of two (visual-based or auditory-based). Experimental results confirmed that physical expressions are a determining factor in the quality of the solutions to a spatial navigational problem. Moreover, the combination of foot gestures with visual feedback resulted in the best task performance (p < .001). Results also showed that the embodied interaction-based multimodal interface decreased execution errors in the cyber-physical scenarios (p < .001). We therefore conclude that appropriate use of interaction and feedback modalities allows operators to maintain their focus of attention, reduce errors, and enhance task performance in solving decision-making problems.
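    The BAN structures themselves are learned and expert-derived; a minimal sketch of the Bayesian idea, inferring P(focused) from observed physical actions, is shown below. The action vocabulary and likelihood values are invented for illustration and are not the study's networks:

```python
# P(action | attention state): illustrative likelihoods only.
LIKELIHOOD = {
    "focused":    {"quick_gesture": 0.7, "hesitation": 0.2, "idle": 0.1},
    "distracted": {"quick_gesture": 0.2, "hesitation": 0.4, "idle": 0.4},
}

def infer_attention(actions, prior=0.5):
    """Posterior P(focused | actions) via repeated Bayes updates."""
    p = prior
    for a in actions:
        lf = LIKELIHOOD["focused"][a]
        ld = LIKELIHOOD["distracted"][a]
        p = p * lf / (p * lf + (1 - p) * ld)   # Bayes' rule, two hypotheses
    return p
```

    A full BAN adds more nodes (decision quality, modality, feedback) and cause-effect edges, but each inference step is this same posterior update propagated through the graph.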

    Machine Learning-based Detection of Compensatory Balance Responses and Environmental Fall Risks Using Wearable Sensors

    Falls are the leading cause of fatal and non-fatal injuries among seniors worldwide, with serious and costly consequences. Compensatory balance responses (CBRs) are reactions to recover stability following a loss of balance, potentially resulting in a fall if sufficient recovery mechanisms are not activated. While the performance of CBRs is a demonstrated risk factor for falls in seniors, the frequency, type, and underlying causes of these incidents in everyday life have not been well investigated. This study was spawned by the lack of research on fall risk assessment methods that can be used for continuous, long-term mobility monitoring of the geriatric population during activities of daily living and in their dwellings. Wearable sensor systems (WSS) offer a promising approach for continuous real-time detection of gait and balance behavior to assess the risk of falling during activities of daily living. To detect CBRs, we record movement signals (e.g., acceleration) and the activity patterns of four muscles involved in maintaining balance, using wearable inertial measurement units (IMUs) and surface electromyography (sEMG) sensors. To develop more robust detection methods, we investigate machine learning approaches (e.g., support vector machines, neural networks) and successfully detect lateral CBRs during normal gait with accuracies of 92.4% and 98.1% using sEMG and IMU signals, respectively. Moreover, to detect environmental fall-related hazards that are associated with CBRs and affect the balance control behavior of seniors, we employ an egocentric mobile vision system mounted on participants' chests. Two algorithms (Gabor Barcodes and Convolutional Neural Networks) are developed. Our vision-based method detects 17 different classes of environmental risk factors (e.g., stairs, ramps, curbs) with 88.5% accuracy.
    To the best of the authors' knowledge, this study is the first to develop and evaluate an automated vision-based method for fall hazard detection.
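    The study trains SVMs and neural networks on sEMG and IMU feature sets; as a toy stand-in for the classification step, a nearest-centroid rule on a single invented IMU feature illustrates the window-by-window decision. The feature choice and centroid values are assumptions, not the paper's trained models:

```python
def mean_abs(window):
    """Mean absolute value of a signal window."""
    return sum(abs(x) for x in window) / len(window)

# Illustrative class centroids of one feature (mean absolute
# medio-lateral acceleration, in g), assumed calibrated from data.
CENTROIDS = {"normal_gait": 0.15, "lateral_cbr": 0.85}

def classify_window(window):
    """Assign a window to the class with the nearest centroid."""
    f = mean_abs(window)
    return min(CENTROIDS, key=lambda c: abs(CENTROIDS[c] - f))
```

    The real pipeline replaces this single feature with multichannel sEMG + IMU features and the centroid rule with an SVM or neural-network decision boundary.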

    Étude d’un système interactif sécuritaire dédié à l’interaction humain-robot appliqué à des mécanismes parallèles entraînés par des câbles

    Since the introduction of the first interactive robots in industry, the collaborative robots (COBOTs) originally intended to assist humans with physically demanding tasks, the field of human-robot interaction has made considerable progress. Today, robots and humans can coexist in a hybrid workspace in order to share production tasks or share time in the execution of an activity. However, new industrial needs call for research to adapt production lines and make them more flexible and reactive to changes in product characteristics. One solution consists in adapting an industrial manipulator already present in the production line for interaction and collaboration purposes, such as kinetic learning of assembly tasks or an adaptive third hand. However, the presence of a human in the manipulator's workspace (a hybrid work cell) represents a real challenge in the field of human-robot interaction, since it requires the integration of a variety of so-called intelligent sensors, especially when a cable-driven parallel mechanism (CDPM) is used for the interaction task. For these reasons, several issues have been raised for which little or no research has been done: this new technology is introduced without operator training, safety during interaction has been very little explored, and the performance of its use remains poorly evaluated in the context of reducing musculoskeletal disorders (MSDs). This research project aims to study and design an interactive system that improves the safety and intuitiveness of people interacting with cable-driven parallel mechanisms. Two modes of interaction are studied in the interactive system, namely activity sharing and physical interaction.
    First, a trajectory generation method with collision avoidance is proposed for the activity-sharing mode. The end-effector of the manipulator follows a path in the operational space through waypoints generated by a back-propagation neural network and connected by quintic (fifth-degree) polynomials, optimized by least squares to compute a smooth optimal trajectory; the deformable geometry of the obstacle and the dynamic environment are taken into account. Second, a mathematical approach is presented to determine the minimum distance between cables and to identify those that are interfering. The distance computation is executed in real time by a dedicated algorithm. Furthermore, the physical constraints of the cables are included in the mathematical model, which is formulated as a nonlinear optimization problem solved with the Karush-Kuhn-Tucker (KKT) approach. This distance computation is integrated into an interactive control law that manages interfering cables during physical interaction with the mechanism: a repulsive force is computed and introduced into the control loop to prevent interfering cables from crossing or slackening, so that the task can be executed up to the geometric and kinematic limits of the mechanism. This strategy is based on admittance control to allow physical interaction with a CDPM. An algorithm for selecting between the two modes is also proposed. It relies on smart clothing, an instrumented insole placed in the shoe, for simple, quick and intuitive mode changes; the algorithm runs in real time and identifies gestures using a trigonometric interpolation polynomial.
    Finally, the different algorithms and strategies are validated in simulations and through experiments on a parallel mechanism driven by seven cables. This thesis makes several contributions to the field of human-robot interaction, notably the ability of the interactive system to adapt to industrial tasks.
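    The thesis formulates cable interference as a nonlinear optimization solved via KKT conditions; for straight, taut cables modeled as line segments, the minimum distance can be sketched with the classic segment-segment routine below. This is a simplified stand-in for the thesis's real-time algorithm, assuming non-degenerate segments:

```python
def seg_seg_distance(p1, q1, p2, q2):
    """Minimum distance between segments p1-q1 and p2-q2 (3-D points).
    Assumes neither segment degenerates to a point."""
    def sub(a, b): return [a[i] - b[i] for i in range(3)]
    def dot(a, b): return sum(a[i] * b[i] for i in range(3))
    def clamp(v): return max(0.0, min(1.0, v))
    d1, d2, r = sub(q1, p1), sub(q2, p2), sub(p1, p2)
    a, e = dot(d1, d1), dot(d2, d2)        # squared segment lengths
    b, c, f = dot(d1, d2), dot(d1, r), dot(d2, r)
    denom = a * e - b * b                  # zero when cables are parallel
    s = 0.0 if denom == 0 else clamp((b * f - c * e) / denom)
    t = (b * s + f) / e
    if t < 0.0:                            # clamp t, then recompute s
        t, s = 0.0, clamp(-c / a)
    elif t > 1.0:
        t, s = 1.0, clamp((b - c) / a)
    w = [p1[i] + s * d1[i] - p2[i] - t * d2[i] for i in range(3)]
    return dot(w, w) ** 0.5

# Two parallel "cables" one diagonal unit apart
d = seg_seg_distance((0, 0, 0), (1, 0, 0), (0, 1, 1), (1, 1, 1))
```

    In an interference-management loop, a distance below a safety margin would identify the interfering cable pair and trigger the repulsive force described in the abstract.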