987 research outputs found

    Computational neural learning formalisms for manipulator inverse kinematics

    Get PDF
    An efficient, adaptive neural learning paradigm for addressing the inverse kinematics of redundant manipulators is presented. The proposed methodology exploits the infinite local stability of terminal attractors - a new class of mathematical constructs which provide unique information processing capabilities to artificial neural systems. For robotic applications, synaptic elements of such networks can rapidly acquire the kinematic invariances embedded within the presented samples. Subsequently, joint-space configurations required to follow arbitrary end-effector trajectories can readily be computed. In a significant departure from prior neuromorphic learning algorithms, this methodology provides mechanisms for incorporating an in-training skew to handle kinematic and environmental constraints.
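
    The terminal attractors the abstract relies on are equilibria reached in finite rather than asymptotic time. A minimal sketch of the idea, using a scalar toy system with arbitrary parameters rather than the paper's network, is:

        import numpy as np

        def terminal_attractor_settling_time(x0=1.0, k=2.0, dt=1e-3, t_max=5.0):
            # Euler simulation of dx/dt = -k * sign(x) * |x|**(1/3).
            # Unlike the ordinary linear attractor dx/dt = -k*x, which converges
            # only asymptotically, this dynamics reaches x = 0 in finite time,
            # analytically at t = (3 / (2*k)) * x0**(2/3).
            x, t = x0, 0.0
            while abs(x) > 1e-3 and t < t_max:
                x += dt * (-k * np.sign(x) * abs(x) ** (1.0 / 3.0))
                t += dt
            return t

        print(terminal_attractor_settling_time())  # ~0.74, close to the analytic 0.75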

    Development of a sensor coordinated kinematic model for neural network controller training

    Get PDF
    A robotic benchmark problem useful for evaluating alternative neural network controllers is presented. Specifically, two camera models and the kinematic equations of a multiple-degree-of-freedom manipulator whose end effector is under observation are derived. The mappings developed include forward and inverse translations from binocular images to 3-D target position, and the inverse kinematics mapping point positions into manipulator commands in joint space. Implementation is detailed for a three-degree-of-freedom manipulator with one revolute joint at the base and two prismatic joints on the arms. The example is restricted to operate within a unit cube, with arm links of 0.6 and 0.4 units respectively. The development is presented in the context of more complex simulations, and a logical path for extending the benchmark to higher-degree-of-freedom manipulators is presented.
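
    The abstract fixes the arm's structure (one revolute base joint, two prismatic joints, links of 0.6 and 0.4 units) but not its frame assignments, so the sketch below assumes a simple cylindrical geometry purely for illustration:

        import numpy as np

        L1, L2 = 0.6, 0.4  # link lengths from the abstract (units of the workspace cube)

        def forward_kinematics(theta, d1, d2):
            # Assumed geometry: theta rotates the base about the vertical axis,
            # d1 slides the first link vertically, d2 extends the second link radially.
            r = L2 + d2
            return np.array([r * np.cos(theta), r * np.sin(theta), L1 + d1])

        def inverse_kinematics(p):
            # Closed-form inverse of the same (assumed) geometry.
            x, y, z = p
            return np.arctan2(y, x), z - L1, np.hypot(x, y) - L2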

    Novel Artificial Neural Network Application for Prediction of Inverse Kinematics of Robot Manipulator

    Get PDF
    The robot control problem can be divided into two main areas: kinematic control (coordinating the links of the kinematic chain to produce the desired motion of the robot) and dynamic control (driving the actuators of the mechanism to follow the commanded positions and velocities). In general, the control strategies used in robots involve position coordination in Cartesian space by direct or indirect kinematic methods. Inverse kinematics comprises the computation needed to find the joint angles for a given Cartesian position and orientation of the end effector. This computation is fundamental to the control of robot arms, but calculating an inverse kinematics solution for a robot manipulator is very difficult. For this reason, most industrial robot arms are designed so that the inverse kinematics solution can be obtained through non-linear algebraic computation. The literature makes clear that there is no unique inverse kinematics solution, which is why artificial neural network models are worth applying. Here, an approach based on structured artificial neural network (ANN) models is proposed to control the motion of a robot manipulator. Two types of ANN models were used. The first is the MLP (multi-layer perceptron), commonly known as the back-propagation neural network model, trained with gradient-descent learning rules. The second is the PPN (polynomial poly-processor neural network), which uses a polynomial equation. Work has been undertaken to find the best ANN configuration for the problem. Using the average percentage error as the performance index, the MLP was found to give better results than the PPN.
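
    The abstract does not give the manipulator geometry or the network sizes, so the sketch below trains an off-the-shelf MLP on a hypothetical two-link planar arm and scores it with the average percentage error the authors use as the performance index:

        import numpy as np
        from sklearn.neural_network import MLPRegressor  # gradient-descent-trained MLP

        # Hypothetical 2-link planar arm used only to generate training pairs.
        L1, L2 = 1.0, 0.8

        def forward(q):
            q1, q2 = q[..., 0], q[..., 1]
            x = L1 * np.cos(q1) + L2 * np.cos(q1 + q2)
            y = L1 * np.sin(q1) + L2 * np.sin(q1 + q2)
            return np.stack([x, y], axis=-1)

        rng = np.random.default_rng(0)
        q_train = rng.uniform(0.1, np.pi / 2, size=(5000, 2))  # one elbow configuration only
        net = MLPRegressor(hidden_layer_sizes=(64, 64), max_iter=2000, random_state=0)
        net.fit(forward(q_train), q_train)  # learn Cartesian position -> joint angles

        q_test = rng.uniform(0.1, np.pi / 2, size=(500, 2))
        q_pred = net.predict(forward(q_test))
        ape = 100.0 * np.mean(np.abs((q_pred - q_test) / q_test))
        print(f"average percentage error: {ape:.2f}%")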

    Learning Task Priorities from Demonstrations

    Full text link
    Bimanual operations in humanoids offer the possibility to carry out more than one manipulation task at the same time, which in turn introduces the problem of task prioritization. We address this problem from a learning from demonstration perspective, by extending the Task-Parameterized Gaussian Mixture Model (TP-GMM) to Jacobian and null space structures. The proposed approach is tested on bimanual skills but can be applied in any scenario where the prioritization between potentially conflicting tasks needs to be learned. We evaluate the proposed framework in two different humanoid tasks that require learning priorities and in a loco-manipulation scenario, showing that the approach can be exploited to learn the prioritization of multiple tasks in parallel. Comment: Accepted for publication in the IEEE Transactions on Robotics.
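
    The Jacobian and null space structures the model is extended to follow the classical strict-priority resolution. A minimal sketch of that structure (the learned TP-GMM priorities themselves are not reproduced here) is:

        import numpy as np

        def prioritized_joint_velocities(J1, dx1, J2, dx2):
            # Strict two-level task priority: the secondary task (J2, dx2) is resolved
            # only inside the null space of the primary task (J1, dx1).
            J1_pinv = np.linalg.pinv(J1)
            N1 = np.eye(J1.shape[1]) - J1_pinv @ J1  # null-space projector of task 1
            dq = J1_pinv @ dx1 + np.linalg.pinv(J2 @ N1) @ (dx2 - J2 @ J1_pinv @ dx1)
            return dq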

    Model Based Control of Soft Robots: A Survey of the State of the Art and Open Challenges

    Full text link
    Continuum soft robots are mechanical systems entirely made of continuously deformable elements. This design solution aims to bring robots closer to invertebrate animals and to the soft appendices of vertebrate animals (e.g., an elephant's trunk, a monkey's tail). This work aims to introduce the control theorist's perspective to this novel development in robotics. We aim to remove the barriers to entry into this field by presenting existing results and future challenges using a unified language and within a coherent framework. Indeed, the main difficulty in entering this field is the wide variability of terminology and scientific backgrounds, making it quite hard to acquire a comprehensive view of the topic. Another limiting factor is that it is not obvious where to draw a clear line between the limitations imposed by the technology not yet being mature and the challenges intrinsic to this class of robots. In this work, we argue that the intrinsic effects are the continuum or multi-body dynamics, the presence of a non-negligible elastic potential field, and the variability in sensing and actuation strategies. Comment: 69 pages, 13 figures.
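
    A lumped-parameter form commonly used in this model-based setting, rigid-robot dynamics augmented with an elastic potential and damping, can be sketched as follows (generic notation, not the survey's):

        def soft_robot_torque(q, dq, ddq, M, C, g, K, D):
            # tau = M(q) ddq + C(q, dq) dq + g(q) + K q + D dq
            # M, C, g are callables returning the multi-body terms; K and D are
            # constant stiffness and damping matrices modelling the elastic
            # potential field and its dissipation.
            return M(q) @ ddq + C(q, dq) @ dq + g(q) + K @ q + D @ dq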

    Industrial Robotics

    Get PDF
    This book covers a wide range of topics relating to advanced industrial robotics, sensors and automation technologies. Although highly technical and complex in nature, the papers presented in this book represent some of the latest cutting-edge technologies and advancements in industrial robotics technology. This book covers topics such as networking, properties of manipulators, forward and inverse robot arm kinematics, motion path-planning, machine vision and many other practical topics too numerous to list here. The authors and editor of this book wish to inspire people, especially young ones, to get involved with robotic and mechatronic engineering technology and to develop new and exciting practical applications, perhaps using the ideas and concepts presented herein.

    Proceedings of the NASA Conference on Space Telerobotics, volume 1

    Get PDF
    The theme of the Conference was man-machine collaboration in space. Topics addressed include: redundant manipulators; man-machine systems; telerobot architecture; remote sensing and planning; navigation; neural networks; fundamental AI research; and reasoning under uncertainty.

    A Self-Organizing Neural Model of Motor Equivalent Reaching and Tool Use by a Multijoint Arm

    Full text link
    This paper describes a self-organizing neural model for eye-hand coordination. Called the DIRECT model, it embodies a solution of the classical motor equivalence problem. Motor equivalence computations allow humans and other animals to flexibly employ an arm with more degrees of freedom than the space in which it moves to carry out spatially defined tasks under conditions that may require novel joint configurations. During a motor babbling phase, the model endogenously generates movement commands that activate the correlated visual, spatial, and motor information that is used to learn its internal coordinate transformations. After learning occurs, the model is capable of controlling reaching movements of the arm to prescribed spatial targets using many different combinations of joints. When allowed visual feedback, the model can automatically perform, without additional learning, reaches with tools of variable lengths, with clamped joints, with distortions of visual input by a prism, and with unexpected perturbations. These compensatory computations occur within a single accurate reaching movement. No corrective movements are needed. Blind reaches using internal feedback have also been simulated. The model achieves its competence by transforming visual information about target position and end effector position in 3-D space into a body-centered spatial representation of the direction in 3-D space that the end effector must move to contact the target. The spatial direction vector is adaptively transformed into a motor direction vector, which represents the joint rotations that move the end effector in the desired spatial direction from the present arm configuration. Properties of the model are compared with psychophysical data on human reaching movements, neurophysiological data on the tuning curves of neurons in the monkey motor cortex, and alternative models of movement control. National Science Foundation (IRI 90-24877); Office of Naval Research (N00014-92-J-1309); Air Force Office of Scientific Research (F49620-92-J-0499); National Science Foundation (IRI 90-24877)
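
    The core computation the abstract describes, turning a spatial direction vector into a motor direction vector, can be sketched as follows. DIRECT learns this map during motor babbling, so the Jacobian pseudoinverse below is only an analytic stand-in for the learned transform:

        import numpy as np

        def reach_step(target, end_effector, jacobian, gain=0.1):
            # Spatial direction vector: where the end effector must move in 3-D space.
            spatial_dir = target - end_effector
            # Motor direction vector: joint rotations realizing that spatial direction
            # from the present arm configuration (learned in DIRECT; analytic here).
            motor_dir = np.linalg.pinv(jacobian) @ spatial_dir
            return gain * motor_dir  # incremental joint update toward the target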

    A neural network-based exploratory learning and motor planning system for co-robots

    Get PDF
    Collaborative robots, or co-robots, are semi-autonomous robotic agents designed to work alongside humans in shared workspaces. To be effective, co-robots require the ability to respond and adapt to dynamic scenarios encountered in natural environments. One way to achieve this is through exploratory learning, or "learning by doing," an unsupervised method in which co-robots are able to build an internal model for motor planning and coordination based on real-time sensory inputs. In this paper, we present an adaptive neural network-based system for co-robot control that employs exploratory learning to achieve the coordinated motor planning needed to navigate toward, reach for, and grasp distant objects. To validate this system we used the 11-degree-of-freedom RoPro Calliope mobile robot. Through motor babbling of its wheels and arm, the Calliope learned how to relate visual and proprioceptive information to achieve hand-eye-body coordination. By continually evaluating sensory inputs and externally provided goal directives, the Calliope was then able to autonomously select the appropriate wheel and joint velocities needed to perform its assigned task, such as following a moving target or retrieving an indicated object.
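
    A minimal "learning by doing" loop in the spirit of the abstract might look like the sketch below; the robot interface and the model choice are placeholders, not the Calliope system's actual architecture:

        import numpy as np
        from sklearn.neighbors import KNeighborsRegressor

        def motor_babbling(robot, n_samples=500, n_dof=11, seed=0):
            # Issue random motor commands and record the sensory outcome of each.
            # `robot.execute(cmd)` is a hypothetical interface returning the observed
            # sensory state (e.g. visual plus proprioceptive features) after the command.
            rng = np.random.default_rng(seed)
            commands = rng.uniform(-1.0, 1.0, size=(n_samples, n_dof))
            outcomes = np.array([robot.execute(c) for c in commands])
            # Fit an inverse model: desired sensory outcome -> motor command.
            return KNeighborsRegressor(n_neighbors=5).fit(outcomes, commands)

        def select_command(inverse_model, goal_observation):
            # Pick the command whose babbled outcome was closest to the goal.
            return inverse_model.predict(np.atleast_2d(goal_observation))[0]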