2,308 research outputs found

    Automatic Recognition of Concurrent and Coupled Human Motion Sequences

    We developed methods and algorithms for all parts of a motion recognition system, i.e., feature extraction, motion segmentation and labeling, motion primitive and context modeling, and decoding. We collected several datasets to compare our proposed methods with the state of the art in human motion recognition. The main contributions of this thesis are a structured functional motion decomposition and a flexible, scalable motion recognition system suitable for a humanoid robot.

    Symbol Emergence in Robotics: A Survey

    Humans can learn the use of language through physical interaction with their environment and semiotic communication with other people. It is very important to obtain a computational understanding of how humans form a symbol system and obtain semiotic skills through their autonomous mental development. Recently, many studies have been conducted on the construction of robotic systems and machine-learning methods that can learn the use of language through embodied multimodal interaction with their environment and other systems. Understanding human social interactions and developing a robot that can smoothly communicate with human users in the long term both require an understanding of the dynamics of symbol systems, and are crucially important. The embodied cognition and social interaction of participants gradually change a symbol system in a constructive manner. In this paper, we introduce a field of research called symbol emergence in robotics (SER). SER is a constructive approach towards an emergent symbol system. The emergent symbol system is socially self-organized through both semiotic communications and physical interactions with autonomous cognitive developmental agents, i.e., humans and developmental robots. Specifically, we describe some state-of-the-art research topics concerning SER, e.g., multimodal categorization, word discovery, and double articulation analysis, that enable a robot to obtain words and their embodied meanings from raw sensory-motor information, including visual information, haptic information, auditory information, and acoustic speech signals, in a totally unsupervised manner. Finally, we suggest future directions of research in SER.

    A probabilistic approach to learn activities of daily living of a mobility aid device user

    © 2014 IEEE. The problem of inferring human behaviour is naturally complex: people interact with the environment and each other in many different ways, and the often incomplete and uncertain sensed data by which the actions are perceived only compounds the difficulty of the problem. In this paper, we propose a framework whereby these elaborate behaviours can be naturally simplified by decomposing them into smaller activities, whose temporal dependencies can be represented more efficiently via probabilistic hierarchical learning models. In this regard, patterns of a number of activities typically carried out by users of an ambulatory aid device have been identified with the aid of a Hierarchical Hidden Markov Model (HHMM) framework. By decomposing the complex behaviours into multiple layers of abstraction, the approach is shown to be capable of modelling and learning these tightly coupled human-machine interactions. The inference accuracy of the proposed model is shown to compare favourably against more traditional discriminative models, as well as other compatible generative strategies, providing a complete picture that highlights the benefits of the proposed approach and opens the door to more intelligent assistance with a robotic mobility aid.
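    The hierarchical decomposition above ultimately rests on HMM-style decoding of activity sequences. As a minimal illustration of the flat (single-layer) case, the sketch below decodes a discrete observation sequence with the Viterbi algorithm; the two activities and their transition and emission probabilities are invented for demonstration and are not the paper's fitted HHMM:

```python
import numpy as np

# Hypothetical two-activity example: "walk" and "turn" phases of
# walker-assisted motion, observed through a discretized force sensor.
states = ["walk", "turn"]

pi = np.array([0.8, 0.2])                 # initial state probabilities
A = np.array([[0.9, 0.1],                 # transition matrix: activities
              [0.3, 0.7]])                # tend to persist over time
B = np.array([[0.9, 0.1],                 # emission matrix: P(obs | state)
              [0.1, 0.9]])

def viterbi(obs, pi, A, B):
    """Most likely activity sequence given a discrete observation sequence."""
    T, N = len(obs), len(pi)
    delta = np.zeros((T, N))              # best log-probability per state
    psi = np.zeros((T, N), dtype=int)     # backpointers
    delta[0] = np.log(pi) + np.log(B[:, obs[0]])
    for t in range(1, T):
        scores = delta[t - 1][:, None] + np.log(A)
        psi[t] = scores.argmax(axis=0)
        delta[t] = scores.max(axis=0) + np.log(B[:, obs[t]])
    path = [int(delta[-1].argmax())]      # backtrack from the best final state
    for t in range(T - 1, 0, -1):
        path.append(int(psi[t][path[-1]]))
    return path[::-1]

obs = [0, 0, 1, 1, 0]                     # 0 = low force, 1 = high force
print([states[s] for s in viterbi(obs, pi, A, B)])
```

    An HHMM adds a layer of abstract "behaviour" states that each emit such activity sub-sequences, but the per-layer decoding follows the same dynamic-programming pattern.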

    Segmentation and Recognition of Meeting Events using a Two-Layered HMM and a Combined MLP-HMM Approach


    Probabilistic identification of sit-to-stand and stand-to-sit with a wearable sensor

    Identification of human movements is crucial for the design of intelligent devices capable of providing assistance. In this work, a Bayesian formulation, together with a sequential analysis method, is presented for the identification of sit-to-stand (SiSt) and stand-to-sit (StSi) activities. The method performs autonomous, iterative accumulation of sensor measurements and decision-making while dealing with the noise and uncertainty present in sensors. First, the Bayesian formulation identifies the sit, transition and stand activity states. Second, the transition state, divided into transition phases, is used to identify the state of the human body during SiSt and StSi. These processes employ acceleration signals from an inertial measurement unit attached to the thigh of participants. Validation of our method with offline, real-time and simulated-environment experiments shows its capability to identify the state of the human body during SiSt and StSi with an accuracy of 100% and a mean response time of 50 ms (5 sensor measurements). In the simulated environment, our approach shows its potential to interact with the low-level methods required for robot control. Overall, this work offers a robust framework for intelligent and autonomous systems capable of recognising the human intent to rise from and sit on a chair, which is essential to provide accurate and fast assistance.
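    The sequential accumulation of evidence described above can be illustrated with a simple recursive Bayes filter over the candidate states. All states, Gaussian likelihood parameters and the decision threshold below are assumptions for illustration, not the paper's values:

```python
import numpy as np

# Illustrative sketch of sequential Bayesian state identification from
# thigh acceleration. The likelihood parameters and threshold are invented.
states = ["sit", "transition", "stand"]
means = np.array([9.8, 5.0, 0.5])   # assumed mean acceleration per state
stds = np.array([0.5, 1.5, 0.5])

def gaussian_pdf(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def identify(measurements, threshold=0.99):
    """Accumulate evidence until one state's posterior crosses the threshold."""
    posterior = np.ones(len(states)) / len(states)   # uniform prior
    for k, z in enumerate(measurements, start=1):
        posterior *= gaussian_pdf(z, means, stds)    # Bayes update
        posterior /= posterior.sum()                 # normalise
        if posterior.max() >= threshold:             # confident enough: decide
            return states[int(posterior.argmax())], k
    return states[int(posterior.argmax())], len(measurements)

state, n_used = identify([5.2, 4.7, 5.5, 4.9, 5.1])
print(state, n_used)
```

    Accumulating over several samples before deciding is what lets such a filter trade a short response delay (50 ms, i.e., a handful of measurements, in the paper) for robustness against single-sample sensor noise.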

    TOWARD INTELLIGENT WELDING BY BUILDING ITS DIGITAL TWIN

    To meet the increasing requirements on production individualization, efficiency and quality, traditional manufacturing processes are evolving into smart manufacturing with support from information technology advancements including cyber-physical systems (CPS), the Internet of Things (IoT), big industrial data, and artificial intelligence (AI). The prerequisite for integration with these advanced information technologies is to digitalize manufacturing processes so that they can be analyzed, controlled, and made to interact with other digitalized components. The digital twin has been developed as a general framework for this purpose, building digital replicas of physical entities. This work takes welding manufacturing as the case study to accelerate its transition to intelligent welding by building its digital twin, and contributes to digital twin research in two aspects: (1) increasing the information analysis and reasoning ability by integrating deep learning; (2) enhancing human users' ability to operate the physical welding process via the digital twin by integrating human-robot interaction (HRI). Firstly, a digital twin of pulsed gas tungsten arc welding (GTAW-P) is developed by integrating deep learning to provide strong feature extraction and analysis abilities. In such a system, the direct information, including weld pool images, arc images, welding current and arc voltage, is collected by cameras and arc sensors. The indirect information determining the welding quality, i.e., the weld joint's top-side bead width (TSBW) and back-side bead width (BSBW), is computed by a traditional image processing method and a deep convolutional neural network (CNN), respectively. Based on that, the weld joint's geometrical size is controlled to meet the quality requirement under various welding conditions.
    In the meantime, the developed digital twin is visualized through a graphical user interface (GUI) that gives human users an effective and intuitive perception of the physical welding process. Secondly, to enhance the human ability to operate the physical welding process via the digital twin, HRI is integrated using virtual reality (VR) as an interface that transmits information bidirectionally, i.e., transmitting human commands to the welding robots and visualizing the digital twin to human users. Six welders, skilled and unskilled, tested this system by completing the same welding job, demonstrating different operation patterns and resulting welding qualities. To differentiate their skill levels (skilled or unskilled) from their demonstrated operations, a data-driven approach, FFT-PCA-SVM, combining the fast Fourier transform (FFT), principal component analysis (PCA) and a support vector machine (SVM), is developed and achieves 94.44% classification accuracy. The robots can also act as assistants that help human welders complete welding tasks by recognizing and executing the intended welding operations. This is done with a human intention recognition algorithm based on a hidden Markov model (HMM), and the welding experiments show that the developed robot-assisted welding helps improve welding quality. To further exploit the robots' advantages, i.e., movement accuracy and stability, the robot's role is upgraded from assistant to collaborator, completing a subtask independently, i.e., torch weaving and automatic seam tracking in weaving GTAW. The other subtask, moving the welding torch along the weld seam, is completed by the human users, who can adjust the travel speed to control the heat input and ensure good welding quality. In this way, the advantages of humans (intelligence) and robots (accuracy and stability) are combined under this human-robot collaboration framework.
    The developed digital twin for welding manufacturing helps to promote next-generation intelligent welding and, with small modifications, can easily be applied to other similar manufacturing processes, including painting, spraying and additive manufacturing.
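    The FFT-PCA-SVM combination described above maps naturally onto a standard scikit-learn pipeline. The sketch below uses synthetic signals whose frequency content differs between the two classes; the signal model, labels and pipeline settings are illustrative assumptions, not the paper's data or tuned model:

```python
import numpy as np
from sklearn.decomposition import PCA
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import FunctionTransformer, StandardScaler
from sklearn.svm import SVC

rng = np.random.default_rng(0)

# Synthetic stand-in for welder motion traces: "skilled" movements are
# assumed smoother (low-frequency), "unskilled" ones jitter at a higher
# frequency. Both the signal shapes and labels are invented.
def make_trace(skilled):
    t = np.linspace(0, 1, 256)
    freq = 2 if skilled else 12
    return np.sin(2 * np.pi * freq * t) + 0.3 * rng.standard_normal(t.size)

X = np.array([make_trace(s) for s in [True, False] * 40])
y = np.array([1, 0] * 40)  # 1 = skilled, 0 = unskilled

# FFT -> PCA -> SVM, mirroring the FFT-PCA-SVM combination in the text.
fft_features = FunctionTransformer(lambda x: np.abs(np.fft.rfft(x, axis=1)))
clf = make_pipeline(fft_features, StandardScaler(), PCA(n_components=5), SVC())
clf.fit(X[:60], y[:60])
print("held-out accuracy:", clf.score(X[60:], y[60:]))
```

    The FFT stage turns skill-dependent temporal patterns into frequency-magnitude features, PCA compresses them to a few informative components, and the SVM draws the decision boundary in that reduced space.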

    Gesteme-free context-aware adaptation of robot behavior in human–robot cooperation

    Background: Cooperative robotics is receiving greater acceptance because the typical advantages provided by manipulators are combined with intuitive usage. In particular, hands-on robotics may benefit from adapting the assistant's behavior to the activity currently performed by the user. A fast and reliable classification of human activities is required, as well as strategies to smoothly modify the control of the manipulator. In this scenario, gesteme-based motion classification is inadequate because it requires observing a large portion of the signal and defining a rich vocabulary. Objective: In this work, a system is presented that recognizes the user's current activity without a vocabulary of gestemes and adapts the manipulator's dynamic behavior accordingly. Methods and material: An underlying stochastic model fits variations in the user's guidance forces and the resulting trajectories of the manipulator's end-effector with a set of Gaussian distributions. The high-level switching between these distributions is captured with hidden Markov models. The dynamics of the KUKA lightweight robot, a torque-controlled manipulator, are modified with respect to the classified activity using sigmoid-shaped functions. The presented system is validated on a pool of 12 naive users in a scenario that addresses surgical targeting tasks on soft tissue. The robot's assistance is adapted to obtain stiff behavior during activities that require critical accuracy constraints, and higher compliance during wide movements. Both the ability to provide the correct classification at each moment (sample accuracy) and the capability to identify the correct sequence of activities (sequence accuracy) were evaluated. Results: The proposed classifier is fast and accurate in all the experiments conducted (80% sample accuracy after observing approximately 450 ms of signal).
    Moreover, the correct sequence of activities is recognized without unwanted transitions (sequence accuracy of approximately 90% when computed away from user-desired transitions). Finally, the proposed activity-based adaptation of the robot's dynamics does not lead to non-smooth behavior (high smoothness, i.e., normalized jerk score < 0.01). Conclusion: The provided system is able to dynamically assist the operator during cooperation in the presented scenario.
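    The per-sample classification of Gaussian force regimes described above can be sketched as forward filtering in a small HMM. In the sketch below, a two-state model ("fine targeting" vs. "wide movement") filters a guidance-force magnitude trace; the states, transition and emission parameters, and the force values are invented for illustration, not the paper's fitted model:

```python
import numpy as np

# Two hypothetical activity regimes over guidance-force magnitude.
A = np.array([[0.95, 0.05],          # "fine targeting" tends to persist
              [0.10, 0.90]])         # "wide movement" tends to persist
means, stds = np.array([2.0, 10.0]), np.array([1.0, 2.0])

def gaussian(x, mu, sigma):
    return np.exp(-0.5 * ((x - mu) / sigma) ** 2) / (sigma * np.sqrt(2 * np.pi))

def filter_posteriors(forces):
    """Forward algorithm: P(state_t | forces_1..t) for each sample t."""
    belief = np.array([0.5, 0.5])    # uniform initial belief
    out = []
    for f in forces:
        belief = (A.T @ belief) * gaussian(f, means, stds)  # predict + update
        belief /= belief.sum()
        out.append(belief.copy())
    return np.array(out)

post = filter_posteriors([2.1, 1.8, 2.5, 9.0, 11.0, 10.5])
labels = ["targeting" if p[0] > 0.5 else "wide" for p in post]
print(labels)
```

    Such a filtered posterior is available at every sample, which is what makes online, per-moment classification (the paper's "sample accuracy") possible; the classified regime could then drive a stiffness schedule, e.g. via a sigmoid-shaped blending function as in the text.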