
    Forward Kinematic Modelling with Radial Basis Function Neural Network Tuned with a Novel Meta-Heuristic Algorithm for Robotic Manipulators

    The complexity of forward kinematic modelling increases with the degrees of freedom of a manipulator. To reduce the computational weight and time lag of the desired output transformation, this paper proposes a forward kinematic model mapped with the help of the Radial Basis Function Neural Network (RBFNN) architecture, tuned by a novel meta-heuristic algorithm, the Cooperative Search Optimisation Algorithm (CSOA). The architecture presented is able to automatically learn the kinematic properties of the manipulator. Learning is accomplished iteratively based only on observation of the input–output relationship. Related simulations are carried out on a 3-Degrees-of-Freedom (DOF) manipulator in the Robot Operating System (ROS). The dataset created from the simulation is divided 65–35 for training and testing of the proposed model. The metrics used for model validation include spread value, cost, and runtime for the training dataset, and Mean Relative Error, Normalised Mean Square Error, and Mean Absolute Error for the testing dataset. A comparative analysis of the CSOA-RBFNN model against an artificial neural network, a support vector regression model, and other meta-heuristic RBFNN models, i.e., PSO-RBFNN and GWO-RBFNN, shows the effectiveness and superiority of the proposed technique.
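
    As a rough illustration of the mapping this abstract describes, the sketch below fits a Gaussian RBF network to forward-kinematics data of a toy 3-DOF planar arm with a 65–35 train–test split. The link lengths, number of centres, fixed spread value, and closed-form least-squares weight solve are all our assumptions for the sketch; the paper instead tunes the network with CSOA.

    ```python
    import numpy as np

    rng = np.random.default_rng(0)

    # Hypothetical 3-DOF planar arm (link lengths are assumptions, not from the paper)
    L = np.array([0.3, 0.25, 0.15])

    def forward_kinematics(Q):
        """End-effector (x, y) of a 3-link planar arm for joint angles Q of shape (N, 3)."""
        cum = np.cumsum(Q, axis=1)                 # absolute link angles
        x = (L * np.cos(cum)).sum(axis=1)
        y = (L * np.sin(cum)).sum(axis=1)
        return np.stack([x, y], axis=1)

    # Dataset with the 65-35 train-test split used in the paper
    Q = rng.uniform(-np.pi, np.pi, size=(2000, 3))
    P = forward_kinematics(Q)
    n_train = int(0.65 * len(Q))
    Q_tr, Q_te, P_tr, P_te = Q[:n_train], Q[n_train:], P[:n_train], P[n_train:]

    # Gaussian hidden layer with a fixed spread (the paper tunes this via CSOA;
    # fixing it here is purely an illustrative simplification)
    centres = Q_tr[rng.choice(n_train, size=200, replace=False)]
    spread = 1.0

    def rbf_features(Q):
        d2 = ((Q[:, None, :] - centres[None, :, :]) ** 2).sum(axis=2)
        return np.exp(-d2 / (2 * spread ** 2))

    # Output weights by linear least squares (closed form, no iterative tuning)
    H = rbf_features(Q_tr)
    W, *_ = np.linalg.lstsq(H, P_tr, rcond=None)

    pred = rbf_features(Q_te) @ W
    mae = np.abs(pred - P_te).mean()
    print(f"test MAE: {mae:.4f} m")
    ```

    The same pipeline generalises to a spatial 3-DOF arm by replacing `forward_kinematics` with the corresponding homogeneous-transform chain; only the output dimension of the least-squares solve changes.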

    From visuomotor control to latent space planning for robot manipulation

    Deep visuomotor control is emerging as an active research area for robot manipulation. Recent advances in learning sensory and motor systems in an end-to-end manner have achieved remarkable performance across a range of complex tasks. Nevertheless, a few limitations restrict visuomotor control from being more widely adopted as the de facto choice when facing a manipulation task on a real robotic platform. First, imitation learning-based visuomotor control approaches tend to suffer from the inability to recover from out-of-distribution states caused by compounding errors. Second, the lack of versatility in task definition limits skill generalisability. Finally, the training data acquisition process and domain transfer are often impractical. In this thesis, individual solutions are proposed to address each of these issues. In the first part, we find policy uncertainty to be an effective indicator of potential failure cases in which the robot is stuck in out-of-distribution states. On this basis, we introduce a novel uncertainty-based approach to detect potential failure cases and a recovery strategy based on action-conditioned uncertainty predictions. Then, we propose to incorporate visual dynamics approximation into our model architecture to capture the motion of the robot arm rather than the static scene background, making it possible to learn versatile skill primitives. In the second part, taking inspiration from recent progress in latent space planning, we propose a gradient-based optimisation method operating within the latent space of a deep generative model for motion planning. Our approach bypasses the traditional computational challenges encountered by established planning algorithms, can specify novel constraints easily, and can handle multiple constraints simultaneously. Moreover, the training data comes from simple random motor-babbling of kinematically feasible robot states. Our real-world experiments further illustrate that our latent space planning approach can handle both open- and closed-loop planning in challenging environments such as heavily cluttered or dynamic scenes. This leads to the first, to our knowledge, closed-loop motion planning algorithm that can incorporate novel custom constraints, and lays the foundation for more complex manipulation tasks.
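
    The core idea of gradient-based planning in a latent space can be sketched in miniature. Below, a fixed linear map stands in for the thesis's learned deep generative decoder, a toy planar arm stands in for the real robot, and finite differences stand in for autodiff; the goal position, link lengths, and step-size schedule are all our assumptions. Extra constraints would simply be added to the cost as further penalty terms.

    ```python
    import numpy as np

    rng = np.random.default_rng(1)
    D = rng.normal(size=(3, 2))             # toy linear "decoder": latent z -> joint angles
    L = np.array([0.3, 0.25, 0.15])         # assumed link lengths of a planar arm

    def end_effector(q):
        cum = np.cumsum(q)
        return np.array([(L * np.cos(cum)).sum(), (L * np.sin(cum)).sum()])

    goal = np.array([0.4, 0.3])             # desired end-effector position (assumed)

    def cost(z):
        # Goal-reaching term; obstacle or joint-limit constraints would be
        # added here as extra penalties, all optimised jointly over z
        return np.sum((end_effector(D @ z) - goal) ** 2)

    def grad(z, eps=1e-6):
        # Central finite differences as a stand-in for autodiff through a decoder
        g = np.zeros_like(z)
        for i in range(len(z)):
            dz = np.zeros_like(z)
            dz[i] = eps
            g[i] = (cost(z + dz) - cost(z - dz)) / (2 * eps)
        return g

    z = np.zeros(2)
    c, lr = cost(z), 0.2
    for _ in range(300):                    # gradient descent in the latent space
        z_new = z - lr * grad(z)
        c_new = cost(z_new)
        if c_new < c:
            z, c = z_new, c_new
        else:
            lr *= 0.5                       # crude backtracking on the step size
    print(f"cost: {cost(np.zeros(2)):.4f} -> {c:.4f}")
    ```

    With a learned decoder, the same descent stays on the manifold of plausible configurations, which is what lets the method bypass explicit collision-checking machinery.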

    Model-based recurrent neural network for redundancy resolution of manipulator with remote centre of motion constraints

    Redundancy resolution is a critical issue in achieving accurate kinematic control for manipulators. End-effectors of manipulators can track desired paths well with suitably resolved joint variables. Some manipulation applications, such as selecting insertion paths to drill through a set of points, require the distal link of a manipulator to translate along such a fixed point and then perform manipulation tasks. The point is known as the remote centre of motion (RCM) and constrains motion planning and kinematic control of manipulators. While the end-effector carries out path-tracking tasks, the redundancy resolution of a manipulator has to maintain the RCM to produce reliable resolved joint angles. However, existing redundancy resolution schemes for manipulators based on recurrent neural networks (RNNs) mainly focus on unrestricted motion and do not consider RCM constraints. In this paper, an RNN-based approach is proposed to solve the redundancy resolution issue with RCM constraints, developing a new general dynamic optimisation formulation containing the RCM constraints. Theoretical analysis establishes the derivation and convergence of the proposed RNN for redundancy resolution of manipulators with RCM constraints. Simulation results further demonstrate the efficiency of the proposed method in end-effector path-tracking control under RCM constraints on an industrial redundant manipulator system.
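
    One common way to pose such a constrained resolution, shown below, is a minimum-norm problem over joint velocities with the tracking and RCM conditions stacked as equality constraints; the notation and the closed-form pseudoinverse solve are ours, not necessarily the paper's RNN formulation, and the Jacobian values are placeholders rather than a real manipulator model.

    ```python
    import numpy as np

    # Minimum-norm redundancy resolution with stacked equality constraints:
    #   minimise ||qdot||^2  s.t.  J qdot = xdot_d   (end-effector path tracking)
    #                              J_c qdot = 0      (RCM point stays fixed)
    # The closed-form solution stacks both constraints and takes a pseudoinverse.

    def resolve(J, xdot_d, J_c):
        A = np.vstack([J, J_c])
        b = np.concatenate([xdot_d, np.zeros(J_c.shape[0])])
        return np.linalg.pinv(A) @ b    # minimum-norm qdot satisfying both

    # Illustrative numbers only: a 7-joint arm, 3-D task velocity, 2 RCM rows
    rng = np.random.default_rng(2)
    J = rng.normal(size=(3, 7))         # task Jacobian (placeholder values)
    J_c = rng.normal(size=(2, 7))       # RCM constraint Jacobian (placeholder)
    xdot_d = np.array([0.1, 0.0, -0.05])

    qdot = resolve(J, xdot_d, J_c)
    print("tracking error:", np.linalg.norm(J @ qdot - xdot_d))
    print("RCM violation: ", np.linalg.norm(J_c @ qdot))
    ```

    The paper's contribution is, in effect, solving this kind of programme online with a recurrent network whose dynamics converge to the optimum, rather than computing a pseudoinverse at every control step.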

    Approximation of the inverse kinematics of a robotic manipulator using a neural network

    A fundamental property of a robotic manipulator system is that it is capable of accurately following complex position trajectories in three-dimensional space. An essential component of the robotic control system is the solution of the inverse kinematics problem, which allows determination of the joint angle trajectories from the desired trajectory in Cartesian space. There are several traditional methods, based on the known geometry of robotic manipulators, for solving the inverse kinematics problem. These methods can become impractical in a robot-vision control system where the environmental parameters can alter. Artificial neural networks, with their inherent learning ability, can approximate the inverse kinematics function and do not require any knowledge of the manipulator geometry. This thesis concentrates on developing a practical solution using a radial basis function network to approximate the inverse kinematics of a robot manipulator. This approach is distinct from existing approaches in that the centres of the hidden-layer units are regularly distributed in the workspace, constrained training data is used, and the training phase is performed using either the strict interpolation or the least-mean-squares algorithm. An online retraining approach is also proposed to modify the network function approximation to cope with the situation where the initial training and application environments differ. Simulation results for two- and three-link manipulators verify the approach. A novel real-time visual measurement system, based on a video camera and image processing software, has been developed to measure the position of the robotic manipulator in the three-dimensional workspace. Practical experiments have been performed with a Mitsubishi PA10-6CE manipulator and this visual measurement system. The performance of the radial basis function network is analysed for the manipulator operating in two- and three-dimensional space, and the practical results are compared to the simulation results. Advantages and disadvantages of the proposed approach are discussed.
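
    The strict-interpolation variant the abstract mentions can be sketched for a two-link planar arm: the training positions themselves serve as RBF centres, so the Gaussian Gram matrix is square and the network reproduces the training pairs exactly. The link lengths, grid resolution, spread, and ridge term are our assumptions; the elbow angle is confined to one sign, mirroring the "constrained training data" idea, so the inverse kinematics is single-valued.

    ```python
    import numpy as np

    rng = np.random.default_rng(3)
    l1, l2 = 0.4, 0.3                    # assumed link lengths

    def fk(Q):
        x = l1 * np.cos(Q[:, 0]) + l2 * np.cos(Q[:, 0] + Q[:, 1])
        y = l1 * np.sin(Q[:, 0]) + l2 * np.sin(Q[:, 0] + Q[:, 1])
        return np.stack([x, y], axis=1)

    # Constrained training data: elbow kept in one configuration so the
    # inverse kinematics is single-valued over the sampled workspace
    Q = np.stack(np.meshgrid(np.linspace(-np.pi / 2, np.pi / 2, 15),
                             np.linspace(0.3, np.pi - 0.3, 15)), -1).reshape(-1, 2)
    P = fk(Q)

    # Strict interpolation: centres are the training positions themselves
    spread = 0.15
    d2 = ((P[:, None, :] - P[None, :, :]) ** 2).sum(-1)
    G = np.exp(-d2 / (2 * spread ** 2))
    W = np.linalg.solve(G + 1e-9 * np.eye(len(P)), Q)   # tiny ridge for conditioning

    def ik(p):
        d2 = ((p[:, None, :] - P[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * spread ** 2)) @ W

    # Validate on held-out targets strictly inside the training region
    Q_test = rng.uniform([-1.2, 0.5], [1.2, np.pi - 0.5], size=(50, 2))
    targets = fk(Q_test)
    err = np.linalg.norm(fk(ik(targets)) - targets, axis=1).mean()
    print(f"mean Cartesian error: {err:.4f} m")
    ```

    The least-mean-squares alternative would instead adapt `W` sample by sample, which is what makes the thesis's online retraining in a changed environment possible.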

    Learning to represent surroundings, anticipate motion and take informed actions in unstructured environments

    Contemporary robots have become exceptionally skilled at achieving specific tasks in structured environments. However, they often fail when faced with the limitless permutations of real-world unstructured environments. This motivates robotics methods which learn from experience, rather than follow a pre-defined set of rules. In this thesis, we present a range of learning-based methods aimed at enabling robots, operating in dynamic and unstructured environments, to better understand their surroundings, anticipate the actions of others, and take informed actions accordingly

    Real-Time Hybrid Visual Servoing of a Redundant Manipulator via Deep Reinforcement Learning

    Fixtureless assembly may be necessary in some manufacturing tasks and environments due to various constraints, but it poses challenges for automation due to non-deterministic characteristics not favoured by traditional approaches to industrial automation. Visual servoing methods of robotic control could be effective for sensitive manipulation tasks where the desired end-effector pose can be ascertained via visual cues. Visual data is complex and computationally expensive to process, but deep reinforcement learning has shown promise for robotic control in vision-based manipulation tasks. However, these methods are rarely used in industry due to the resources and expertise required to develop application-specific systems and prohibitive training costs. Training reinforcement learning models in simulated environments offers a number of benefits for the development of robust robotic control algorithms by reducing training time and costs, and by providing repeatable benchmarks against which algorithms can be tested, developed, and eventually deployed on real robotic control environments. In this work, we present a new simulated reinforcement learning environment for developing accurate robotic manipulation control systems in fixtureless environments. Our environment incorporates a contemporary collaborative industrial robot, the KUKA LBR iiwa, with the goal of positioning its end-effector in a generic fixtureless environment based on a visual cue. Observational inputs comprise the robotic joint positions and velocities, as well as two cameras whose positioning reflects hybrid visual servoing: one camera is attached to the robotic end-effector and the other observes the workspace. We propose a state-of-the-art deep reinforcement learning approach to solving the task environment and make preliminary assessments of the efficacy of this approach to hybrid visual servoing methods for the defined problem environment. We also conduct a series of experiments exploring the hyperparameter space of the proposed reinforcement learning method. Although our initial results could not prove the efficacy of a deep reinforcement learning approach to solving the task environment, we remain confident that such an approach could be feasible for this industrial manufacturing challenge, and that our contributions in terms of the novel software provide a good basis for the exploration of reinforcement learning approaches to hybrid visual servoing in accurate manufacturing contexts.
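
    The observation layout described above — proprioception plus an eye-in-hand and a workspace camera — can be mirrored in a minimal environment skeleton. Everything below is illustrative: the class name, the placeholder kinematics, the stubbed 64x64 camera frames, and the dense distance reward are our assumptions, not the published environment, which is built around a simulated KUKA LBR iiwa.

    ```python
    import numpy as np

    class ToyHybridServoEnv:
        """Minimal stand-in for a hybrid visual servoing RL environment:
        observations combine joint state with two (here, stubbed) camera
        views, one eye-in-hand and one fixed workspace view."""

        def __init__(self, seed=0):
            self.rng = np.random.default_rng(seed)
            self.n_joints = 7                      # the KUKA LBR iiwa has 7 joints

        def reset(self):
            self.q = np.zeros(self.n_joints)
            self.goal = self.rng.uniform(-1, 1, size=3)   # visual-cue target (toy)
            return self._obs()

        def _eef(self):
            # Placeholder "forward kinematics"; any smooth map works for the sketch
            return np.tanh(self.q[:3] + 0.1 * self.q[3:6])

        def _obs(self):
            return {
                "joints": self.q.copy(),
                "eef_cam": np.zeros((64, 64, 3)),       # eye-in-hand view (stub)
                "workspace_cam": np.zeros((64, 64, 3)), # workspace view (stub)
            }

        def step(self, action):
            self.q = self.q + 0.05 * np.asarray(action)  # joint-velocity control
            dist = np.linalg.norm(self._eef() - self.goal)
            reward = -dist                               # dense distance-based reward
            done = bool(dist < 0.05)                     # success threshold (assumed)
            return self._obs(), reward, done, {}

    env = ToyHybridServoEnv()
    obs = env.reset()
    obs, r, done, info = env.step(np.ones(7))
    print("reward after one step:", r)
    ```

    A real implementation would follow the Gym/Gymnasium `reset`/`step` interface so off-the-shelf deep RL algorithms can be dropped in unchanged.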

    Derivative-free online learning of inverse dynamics models

    This paper discusses online algorithms for inverse dynamics modelling in robotics. Several model classes, including rigid body dynamics (RBD) models, data-driven models, and semiparametric models (a combination of the previous two classes), are placed in a common framework. While model classes used in the literature typically exploit joint velocities and accelerations, which must be approximated by resorting to numerical differentiation schemes, this paper proposes a new 'derivative-free' framework that does not require this preprocessing step. An extensive experimental study with real data from the right arm of the iCub robot is presented, comparing different model classes and estimation procedures and showing that the proposed 'derivative-free' methods outperform existing methodologies.
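
    The derivative-free idea can be demonstrated on a toy one-joint system: rather than estimating velocity and acceleration by numerical differentiation, regress torque directly on a short window of raw past positions, since linear combinations of delayed positions carry the same information as finite-difference estimates. The dynamics parameters, trajectory, and window length below are our assumptions for the sketch, not the paper's iCub setup.

    ```python
    import numpy as np

    # Toy 1-DOF inverse dynamics: tau = I*qddot + b*qdot (parameter values assumed)
    dt, I, b = 0.01, 0.05, 0.2
    t = np.arange(0.0, 10.0, dt)
    q = np.sin(1.3 * t) + 0.5 * np.sin(0.7 * t)
    qdot = 1.3 * np.cos(1.3 * t) + 0.35 * np.cos(0.7 * t)
    qddot = -1.3 ** 2 * np.sin(1.3 * t) - 0.5 * 0.7 ** 2 * np.sin(0.7 * t)
    tau = I * qddot + b * qdot          # ground-truth torque from analytic derivatives

    # Derivative-free features: a window of raw past positions [q_t, q_{t-1}, q_{t-2}].
    # No explicit differentiation step is ever applied to the measurements.
    X = np.stack([q[2:], q[1:-1], q[:-2]], axis=1)
    y = tau[2:]
    w, *_ = np.linalg.lstsq(X, y, rcond=None)

    rmse = np.sqrt(np.mean((X @ w - y) ** 2))
    print(f"derivative-free fit RMSE: {rmse:.2e}")
    ```

    The fitted window weights implicitly absorb the differencing coefficients, which is why the approach sidesteps the noise amplification that plagues numerical differentiation of measured joint positions.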