
    Movement primitives as a robotic tool to interpret trajectories through learning-by-doing

    Articulated movements are fundamental in many human and robotic tasks. While humans can learn and generalise arbitrarily long sequences of movements, and in particular can optimise them to fit the constraints and features of their body, robots are often programmed to execute precise but fixed point-to-point patterns. This study proposes a new approach to interpreting and reproducing articulated and complex trajectories as a set of known robot-based primitives. Instead of aiming for accurate reproduction, the proposed approach interprets data in an agent-centred fashion, according to an agent's primitive movements. The method improves the accuracy of a reproduction with an incremental process that first seeks a rough approximation capturing the most essential features of a demonstrated trajectory. Observing the discrepancy between the demonstrated and reproduced trajectories, the process then proceeds with incremental decompositions and new searches in sub-optimal parts of the trajectory. The aim is an agent-centred interpretation and progressive learning that fits the robot's capabilities in the first place, as opposed to a data-centred decomposition analysis. Tests on both geometric and human-generated trajectories reveal that the use of the agent's own primitives gives the method remarkable robustness and generalisation properties. In particular, because trajectories are understood and abstracted by means of agent-optimised primitives, the method has two main features: 1) reproduced trajectories are general and represent an abstraction of the data; 2) the algorithm can reconstruct highly noisy or corrupted data without pre-processing, thanks to an implicit and emergent noise suppression and feature detection. This study suggests a novel bio-inspired approach to interpreting, learning and reproducing articulated movements and trajectories. Possible applications include drawing, writing, movement generation, object manipulation, and other tasks where performance requires human-like interpretation and generalisation capabilities.
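    The incremental process described above — a rough global approximation, then recursive refinement of the worst-fitting parts — can be sketched with straight-line segments standing in for the robot's primitive set. The primitive type, tolerance, and test trajectory below are illustrative assumptions, not the paper's actual primitives:

    ```python
    import numpy as np

    def decompose(traj, tol=0.1):
        """Approximate a trajectory incrementally: start with one rough
        straight-line primitive, then recursively split the segment at its
        worst-fitting point until every part fits within tol."""
        start, end = traj[0], traj[-1]
        chord = end - start
        norm = np.linalg.norm(chord)
        d = traj - start
        if norm == 0:
            dists = np.linalg.norm(d, axis=1)
        else:
            # perpendicular distance of each point to the chord start->end
            dists = np.abs(chord[0] * d[:, 1] - chord[1] * d[:, 0]) / norm
        worst = int(np.argmax(dists))
        if dists[worst] > tol:  # refine only where the reproduction is poor
            left = decompose(traj[:worst + 1], tol)
            return left[:-1] + decompose(traj[worst:], tol)
        return [tuple(map(float, start)), tuple(map(float, end))]

    # a noisy right-angle corner: the rough chord is refined once, leaving
    # three key points that abstract the demonstrated shape
    t = np.linspace(0.0, 1.0, 50)
    path = np.concatenate([np.c_[t, np.zeros_like(t)],
                           np.c_[np.ones_like(t), t]])
    path += 0.01 * np.random.default_rng(0).standard_normal(path.shape)
    print(decompose(path, tol=0.1))
    ```

    Because refinement only happens where the discrepancy exceeds the tolerance, small perturbations are absorbed by the primitives themselves, which is one way to read the paper's claim of emergent noise suppression.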

    Bootstrapping movement primitives from complex trajectories

    Lemme A. Bootstrapping movement primitives from complex trajectories. Bielefeld: Bielefeld University; 2014

    The Meaning of Action: a review on action recognition and mapping

    In this paper, we analyze the different approaches taken to date within the computer vision, robotics and artificial intelligence communities for the representation, recognition, synthesis and understanding of action. We deal with action at different levels of complexity and provide the reader with the necessary related literature references. We put the literature references further into context and outline a possible interpretation of action by taking into account the different aspects of action recognition, action synthesis and task-level planning

    Human skill capturing and modelling using wearable devices

    Industrial robots are delivering more and more manipulation services in manufacturing. However, when the task is complex, it is difficult to programme a robot to fulfil all the requirements, because even a relatively simple task such as a peg-in-hole insertion contains many uncertainties, e.g. clearance, initial grasping position and insertion path. Humans, on the other hand, can deal with these variations using their vision and haptic feedback. Although humans can adapt to uncertainties easily, most of the time the skill-based performances that relate to their tacit knowledge cannot be easily articulated. Even though an automation solution need not fully imitate human motion, since some of it is unnecessary, it would be useful if the skill-based performance of a human could first be interpreted and modelled, allowing it then to be transferred to the robot. This thesis aims to reduce robot programming efforts significantly by developing a methodology to capture, model and transfer manual manufacturing skills from a human demonstrator to the robot. Recently, Learning from Demonstration (LfD) has been gaining interest as a framework to transfer skills from a human teacher to a robot, using probability-encoding approaches to model observations and state-transition uncertainties. In close- or actual-contact manipulation tasks, it is difficult to reliably record state-action examples without interfering with the human senses and activities. Therefore, wearable sensors are investigated as a promising device to record state-action examples without restricting human experts during the skilled execution of their tasks. Firstly, to track human motions accurately and reliably in a defined 3-dimensional workspace, a hybrid system of Vicon and IMUs is proposed to compensate for the known limitations of each individual system.
The data fusion method was able to overcome the occlusion and frame-flipping problems of the two-camera Vicon setup and the drifting problem associated with the IMUs. The results indicated that the occlusion and frame-flipping problems associated with Vicon can be mitigated by using the IMU measurements. Furthermore, the proposed method improves the Mean Square Error (MSE) tracking accuracy by 0.8° to 6.4° compared with the IMU-only method. Secondly, to record haptic feedback from a teacher without physically obstructing their interactions with the workpiece, wearable surface electromyography (sEMG) armbands were used as an indirect method to indicate contact feedback during manual manipulations. A muscle-force model using a Time Delayed Neural Network (TDNN) was built to map the sEMG signals to the known contact force. The results indicated that the model was capable of estimating the force from the sEMG armbands in the applications of interest, namely peg-in-hole and beater-winding tasks, with MSEs of 2.75 N and 0.18 N respectively. Finally, given the force estimation and the motion trajectories, a Hidden Markov Model (HMM) based approach was utilised as a state-recognition method to encode and generalise the spatial and temporal information of the skilled executions. This method allows a more representative control policy to be derived. A modified Gaussian Mixture Regression (GMR) method was then applied to enable motion reproduction using the learned state-action policy. To simplify the validation procedure, instead of using the robot, additional demonstrations from the teacher were used to verify the reproduction performance of the policy, assuming the human teacher and robot learner to be physically identical systems. The results confirmed the generalisation capability of the HMM model across a number of demonstrations from different subjects, and the reproduced motions from GMR were acceptable in these additional tests. The proposed methodology provides a framework for producing a state-action model from skilled demonstrations that can be translated into robot kinematics and joint states for the robot to execute. The implication for industry is reduced effort and time in programming robots for applications where human skilled performance is required to cope robustly with various uncertainties during task execution.
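    The regression step can be sketched as follows: fit a mixture model over (time, position) pairs from repeated demonstrations, then predict position conditioned on time. This is plain GMR rather than the thesis's modified variant, and the demonstration data and component count are hypothetical:

    ```python
    import numpy as np
    from sklearn.mixture import GaussianMixture

    def gmr(gmm, t):
        """Plain Gaussian Mixture Regression: given a GMM fitted on
        (time, position) pairs, predict position conditioned on time t."""
        mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
        # responsibility of each component for the query time t
        h = np.array([w[k] * np.exp(-0.5 * (t - mu[k, 0]) ** 2 / cov[k, 0, 0])
                      / np.sqrt(cov[k, 0, 0]) for k in range(len(w))])
        h /= h.sum()
        # blend the per-component conditional means
        return sum(h[k] * (mu[k, 1] + cov[k, 1, 0] / cov[k, 0, 0] * (t - mu[k, 0]))
                   for k in range(len(w)))

    # hypothetical demonstrations: five noisy repetitions of x = sin(t)
    rng = np.random.default_rng(0)
    ts = np.tile(np.linspace(0.0, np.pi, 100), 5)
    xs = np.sin(ts) + 0.05 * rng.standard_normal(ts.size)
    gmm = GaussianMixture(n_components=5, random_state=0).fit(np.c_[ts, xs])
    print(gmr(gmm, np.pi / 2))  # reproduced position near the demonstrated peak
    ```

    Averaging over several demonstrations in this way is what gives the reproduced motion its generalisation across subjects: the regression returns a smooth consensus trajectory rather than any single noisy recording.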

    Cognitive Principles of Schematisation for Wayfinding Assistance

    People often need assistance to successfully perform wayfinding tasks in unfamiliar environments. Nowadays, a huge variety of wayfinding assistance systems exists. All these systems intend to present the information needed in a certain wayfinding situation in an adequate form. Some wayfinding assistance systems utilise findings from the field of cognitive science to develop and design cognitively ergonomic approaches. These approaches aim to be systems with which users can effortlessly interact and which present the needed information in a way the user can acquire naturally. It is therefore necessary to determine the information needs of the user in a certain wayfinding task, and to investigate how this information is processed and conceptualised by the wayfinder, in order to present it adequately. Cognitively motivated schematic maps are an example that employs this knowledge, emphasising relevant information and presenting it in an easily readable way. In my thesis I present a transfer approach to reuse well-grounded knowledge of schematisation techniques from one externalisation, such as maps, in another, such as virtual environments. An analysis of the informational needs of the specific wayfinding task of route following is carried out by means of a functional decomposition, together with an in-depth representation-theoretic analysis of the external representations of maps and virtual environments. From these results, guidelines for transferring schematisation principles between different representation types are proposed. Specifically, this thesis uses the exemplary transfer of the schematisation technique of wayfinding choremes from a map presentation into a virtual environment to present the theoretical requirements for a successful transfer. Wayfinding choremes are abstract mental concepts of turning actions which are accessible as graphical externalisations integrated into route maps.
These wayfinding choreme maps emphasise the turning actions along the route by displaying angular information as prototypes of 45° or 90°. This schematisation technique enhances wayfinding performance by supporting the matching process between the map representation and the internal mental representation of the user. I embed the concept of wayfinding choremes into a virtual environment and present a study testing whether the transferred schematisation technique also enhances wayfinding performance. The empirical investigations demonstrate a successful transfer of the concept of wayfinding choremes. Depending on the complexity of the route, the embedded schematisation enhances the wayfinding performance of participants who try to follow a route from memory. Participants who trained on and recalled the route in a schematised virtual environment made fewer errors than participants in the unmodified virtual world. This thesis sets an example of the close research cycle from cognitive behavioural studies, through representation-theoretical considerations, to applications in wayfinding assistance and their evaluation, and back to new conclusions in cognitive science. It contributes a comprehensive interdisciplinary examination of the interplay of environmental factors and mental processes, using the example of angular information and its mental distortion.
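    The core of the 45°/90° prototype schematisation can be illustrated with a one-line snapping function — a minimal sketch of the idea, not the thesis's implementation:

    ```python
    def schematise_turn(angle_deg):
        """Snap a turn angle along a route to its nearest wayfinding-choreme
        prototype, i.e. a multiple of 45 degrees."""
        return round(angle_deg / 45.0) * 45

    # slightly irregular turns collapse onto the prototypical directions
    print(schematise_turn(52), schematise_turn(80), schematise_turn(-37))
    ```

    Replacing each measured turn with its prototype is what lets the externalised route match the wayfinder's already-schematised mental representation.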

    A survey of visual preprocessing and shape representation techniques

    Many recent theories and methods proposed for visual preprocessing and shape representation are summarized. The survey brings together research from the fields of biology, psychology, computer science, electrical engineering, and most recently, neural networks. It was motivated by the need to preprocess images for a sparse distributed memory (SDM), but the techniques presented may also prove useful for applying other associative memories to visual pattern recognition. The material of this survey is divided into three sections: an overview of biological visual processing; methods of preprocessing (extracting parts of shape, texture, motion, and depth); and shape representation and recognition (form invariance, primitives and structural descriptions, and theories of attention)

    Semantic Scene Understanding for Prediction of Action Effects in Humanoid Robot Manipulation Tasks


    Unsupervised learning of vocal tract sensory-motor synergies

    The degrees of freedom problem is ubiquitous in motor control; it arises from the redundancy inherent in motor systems and raises the question of how control actions are determined when there exist infinitely many ways to perform a task. Speech production is a complex motor control task and suffers from this problem, but it has not drawn the research attention that reaching movements or walking gaits have. Motivated by the use of dimensionality reduction algorithms in learning muscle synergies and perceptual primitives that reflect the structure in biological systems, an approach to learning sensory-motor synergies via dynamic factor analysis for control of a simulated vocal tract is presented here. This framework is shown to mirror the articulatory phonology model of speech production, and evidence is provided that articulatory gestures arise from learning an optimal encoding of vocal tract dynamics. Broad phonetic categories are discovered within the low-dimensional factor space, indicating that sensory-motor synergies will enable the application of reinforcement learning to the problem of speech acquisition.
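    The dimensionality-reduction idea — recovering a small set of latent synergies from high-dimensional articulator data — can be sketched with ordinary factor analysis standing in for the dynamic variant used in the work; the channel count, factor count, and data are invented for illustration:

    ```python
    import numpy as np
    from sklearn.decomposition import FactorAnalysis

    # hypothetical articulator recordings: eight vocal-tract channels driven
    # by two latent "gestures" plus sensor noise
    rng = np.random.default_rng(0)
    latent = rng.standard_normal((500, 2))      # underlying synergies
    loadings = rng.standard_normal((2, 8))      # how each gesture moves the channels
    data = latent @ loadings + 0.1 * rng.standard_normal((500, 8))

    # factor analysis recovers a two-dimensional factor space in which
    # coordinated channel movements are encoded as single latent variables
    fa = FactorAnalysis(n_components=2, random_state=0).fit(data)
    codes = fa.transform(data)
    print(codes.shape)
    ```

    Controlling the two factors instead of the eight channels is what shrinks the action space enough for reinforcement learning to become tractable.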