
    Neural Networks in Mobile Robot Motion

    This paper deals with path planning and intelligent control of an autonomous robot that must move safely in a partially structured environment. The environment may contain any number of obstacles of arbitrary shape and size, some of which are allowed to move. We describe our approach to solving the motion-planning problem in mobile robot control using a neural-network-based technique. Our method for constructing a collision-free path for a robot moving among obstacles is based on two neural networks. The first neural network determines the "free" space from ultrasound range finder data. The second neural network finds a safe direction for the next section of the robot's path through the workspace while avoiding the nearest obstacles. Simulation examples of paths generated with the proposed techniques are presented.
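    As a rough illustration of such a two-network pipeline (the sensor count, layer sizes, and goal-bearing input below are assumptions, not details from the paper), one stage can score "free" sectors from range data and a second stage can turn those scores into a steering direction:

```python
# Hypothetical two-stage pipeline: network 1 labels sonar sectors as free or
# occupied; network 2 maps that occupancy pattern plus the goal bearing to a
# steering direction. Sizes, activations, and the untrained random weights
# are illustrative assumptions.
import numpy as np


def mlp_forward(x, weights, biases):
    """Forward pass through a small fully connected net with tanh hiddens."""
    for W, b in zip(weights[:-1], biases[:-1]):
        x = np.tanh(W @ x + b)
    return weights[-1] @ x + biases[-1]


rng = np.random.default_rng(0)
n_sonar = 16  # assumed number of ultrasound range readings

# Network 1: range readings -> per-sector "free space" scores.
free_W = [rng.normal(size=(24, n_sonar)), rng.normal(size=(n_sonar, 24))]
free_b = [np.zeros(24), np.zeros(n_sonar)]

# Network 2: free-space scores + goal bearing -> steering angle.
steer_W = [rng.normal(size=(16, n_sonar + 1)), rng.normal(size=(1, 16))]
steer_b = [np.zeros(16), np.zeros(1)]

ranges = rng.uniform(0.2, 3.0, size=n_sonar)   # metres from a sonar ring
goal_bearing = np.array([0.3])                 # radians to the goal

free_scores = 1 / (1 + np.exp(-mlp_forward(ranges, free_W, free_b)))
steering = mlp_forward(np.concatenate([free_scores, goal_bearing]),
                       steer_W, steer_b)
print("steering command (rad):", float(steering[0]))
```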

    Deep neural network approach in human-like redundancy optimization for anthropomorphic manipulators

    Human-like behavior has emerged as a theme in robotics for improving the quality of Human-Robot Interaction (HRI). For imitating human-like behavior, kinematic mapping between a human arm and a robot manipulator is one of the popular solutions. To this end, a reconstruction method called swivel motion was adopted to achieve human-like imitation. This approach models the regression relationship between robot pose and swivel motion angle, then reproduces human-like swivel motion using the manipulator's redundant degrees of freedom. This characteristic holds for most redundant anthropomorphic robots. Although artificial neural network (ANN) based approaches show moderate robustness, their predictive performance is limited. In this paper, we propose a novel deep convolutional neural network (DCNN) structure that enhances reconstruction and reduces online prediction time. Finally, we used the trained DCNN model for redundancy control of a 7-DoF anthropomorphic robot arm (LWR4+, KUKA, Germany) for validation. A demonstration shows the human-like behavior on the anthropomorphic manipulator. The proposed approach can also be applied to control other anthropomorphic robot manipulators in industrial or biomedical engineering applications.
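    A minimal sketch of the pose-to-swivel-angle regression idea: the paper uses a deep convolutional network, whereas this stand-in uses a small fully connected net, and the pose encoding (position plus quaternion) and training data below are placeholders:

```python
# Stand-in for the pose -> swivel-angle regression described above.
# Architecture, pose encoding, and synthetic data are all assumptions.
import torch
import torch.nn as nn


class SwivelRegressor(nn.Module):
    def __init__(self, pose_dim: int = 7):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim, 64), nn.ReLU(),
            nn.Linear(64, 64), nn.ReLU(),
            nn.Linear(64, 1),  # predicted swivel angle (radians)
        )

    def forward(self, pose: torch.Tensor) -> torch.Tensor:
        return self.net(pose)


model = SwivelRegressor()
opt = torch.optim.Adam(model.parameters(), lr=1e-3)

# Hypothetical training pairs captured from human demonstrations:
poses = torch.randn(256, 7)     # end-effector position + quaternion
angles = torch.randn(256, 1)    # recorded human swivel angles

for _ in range(100):            # fit offline, then query online in control
    opt.zero_grad()
    loss = nn.functional.mse_loss(model(poses), angles)
    loss.backward()
    opt.step()
```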

    Neural Potential Field for Obstacle-Aware Local Motion Planning

    Model predictive control (MPC) can provide local motion planning for mobile robotic platforms. The challenging aspect is the analytic representation of collision cost when both the obstacle map and the robot footprint are arbitrary. We propose a Neural Potential Field: a neural network model that returns a differentiable collision cost based on robot pose, obstacle map, and robot footprint. The differentiability of our model allows its use within an MPC solver. Since problems with very many parameters are computationally hard to solve, our architecture includes neural image encoders that transform obstacle maps and robot footprints into embeddings, reducing the problem dimensionality by two orders of magnitude. The reference data for network training are generated by algorithmic calculation of a signed distance function. Comparative experiments showed that the proposed approach is on par with existing local planners: it provides trajectories with superior smoothness, comparable path length, and a safe distance from obstacles. An experiment on a Husky UGV mobile robot showed that our approach allows real-time, safe local planning. The code for our approach is available at https://github.com/cog-isa/NPField together with a demo video.
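    A compact sketch of the core mechanism, a learned, differentiable collision cost dropped into a trajectory optimizer. The embedding sizes, network shape, and plain gradient-descent solver below are illustrative assumptions standing in for the paper's trained encoders and full MPC solver:

```python
# Differentiable learned collision cost inside a toy trajectory optimizer.
import torch
import torch.nn as nn


class CollisionCost(nn.Module):
    """Maps (robot pose, map embedding, footprint embedding) -> scalar cost."""
    def __init__(self, pose_dim=3, emb_dim=32):
        super().__init__()
        self.net = nn.Sequential(
            nn.Linear(pose_dim + 2 * emb_dim, 128), nn.ReLU(),
            nn.Linear(128, 1), nn.Softplus(),  # non-negative cost
        )

    def forward(self, pose, map_emb, foot_emb):
        return self.net(torch.cat([pose, map_emb, foot_emb], dim=-1))


cost_net = CollisionCost()
map_emb = torch.randn(32)    # would come from the obstacle-map encoder
foot_emb = torch.randn(32)   # would come from the footprint encoder
goal = torch.tensor([2.0, 1.0, 0.0])

# Optimize a short horizon of poses; gradients flow through the cost network,
# which is exactly what its differentiability buys in the MPC setting.
traj = torch.zeros(10, 3, requires_grad=True)
opt = torch.optim.SGD([traj], lr=0.05)
for _ in range(200):
    opt.zero_grad()
    tracking = ((traj[-1] - goal) ** 2).sum()
    collision = cost_net(traj,
                         map_emb.expand(10, -1),
                         foot_emb.expand(10, -1)).sum()
    smooth = ((traj[1:] - traj[:-1]) ** 2).sum()
    (tracking + collision + 0.1 * smooth).backward()
    opt.step()
```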

    Motion Planning and Control of Dynamic Humanoid Locomotion

    Inspired by humans, humanoid robots have the potential to become general-purpose platforms that live alongside humans. Technological advances in many fields, such as actuation, sensing, control, and intelligence, have finally enabled humanoid robots to possess capabilities comparable to humans'. However, humanoid locomotion remains a challenging research field. The large number of degrees of freedom makes the system difficult to coordinate online, and the various contact constraints and the hybrid nature of locomotion tasks make planning harder to solve.

    A template-model anchoring approach has been adopted to bridge the gap between simple model behavior and the whole-body motion of a humanoid robot. Control policies are first developed for simple template models such as the Linear Inverted Pendulum Model (LIPM) or the Spring Loaded Inverted Pendulum (SLIP); the resulting controlled behaviors are then mapped to the whole-body motion of the humanoid robot through optimization-based task-space control strategies. The whole-body humanoid control framework has been verified in various contact situations, such as unknown uneven terrain, multi-contact scenarios, and moving platforms, demonstrating its generality and versatility.

    For walking, an existing Model Predictive Control approach based on the LIPM has been extended to let the robot walk without any reference foot-placement anchoring, a kind of discrete version of "walking without thinking". As a result, the robot achieves versatile locomotion modes: automatic foot placement from a single reference velocity command, reactive stepping under large external disturbances, guided walking with small constant external pushing forces, robust walking on unknown uneven terrain, and reactive stepping in place when blocked by an external barrier. As an extension of this framework, and to increase the push-recovery capability of the humanoid robot, two new configurations have been proposed to enable the robot to perform cross-step motions.

    For more dynamic hopping and running motions, the SLIP model is chosen as the template model. Unlike the traditional model-based analytical approach, a data-driven approach is proposed to encode the dynamics of this model: a deep neural network is trained offline on a large amount of simulation data from the SLIP model, and the trained network is applied online to generate reference foot placements for the humanoid robot. Simulations evaluate the effectiveness of the proposed approach in generating bio-inspired and robust running motions. The method, developed for the 2D SLIP model, can be generalized to the 3D SLIP model, and this extension is briefly discussed at the end.
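    For context, the LIPM that such walking MPC schemes build on has a simple closed-form solution, and the "capture point" gives the classic reactive foot-placement rule. The constants below (CoM height, step time, initial state) are illustrative, and the mapping to whole-body motion is not shown:

```python
# Linear Inverted Pendulum Model (LIPM): x_ddot = omega^2 * (x - p),
# where p is the foot (CoP) position and omega = sqrt(g / z_c).
import math

g, z_c = 9.81, 0.9             # gravity, assumed constant CoM height
omega = math.sqrt(g / z_c)     # natural frequency of the LIPM


def lipm_step(x, xd, p, t):
    """Closed-form propagation of CoM position/velocity with foot at p."""
    c, s = math.cosh(omega * t), math.sinh(omega * t)
    x_t = p + (x - p) * c + (xd / omega) * s
    xd_t = (x - p) * omega * s + xd * c
    return x_t, xd_t


def capture_point(x, xd):
    """Foot placement that brings the LIPM to rest (instantaneous capture)."""
    return x + xd / omega


x, xd = 0.0, 0.5               # CoM state: position (m), velocity (m/s)
p = capture_point(x, xd)       # reactive step under a push
print("step to", round(p, 3), "m")
x, xd = lipm_step(x, xd, p, t=0.4)
print("after 0.4 s:", round(x, 3), round(xd, 3))   # velocity decays to 0
```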

    Words into Action: Learning Diverse Humanoid Robot Behaviors using Language Guided Iterative Motion Refinement

    Humanoid robots are well suited to human habitats due to their morphological similarity to humans, but developing controllers for them is a challenging task that involves multiple sub-problems, such as control, planning, and perception. In this paper, we introduce a method to simplify controller design by enabling users to train and fine-tune robot control policies using natural language commands. We first learn a neural network policy that generates behaviors given a natural language command, such as "walk forward", by combining Large Language Models (LLMs), motion retargeting, and motion imitation. Based on the synthesized motion, we iteratively fine-tune by updating the text prompt and querying LLMs to find the best checkpoint associated with the closest motion in history. We validate our approach using a simulated Digit humanoid robot and demonstrate learning of diverse motions, such as walking, hopping, and kicking, without the burden of complex reward engineering. In addition, we show that our iterative refinement enables learning 3x faster than a naive formulation that learns from scratch.
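    The refinement loop can be summarized schematically as follows. Every function here is a hypothetical stub standing in for a real component (LLM query, motion retargeting, imitation training); none of the names come from the paper's code:

```python
# Schematic of the language-guided iterative refinement loop.
def generate_reference_motion(llm, prompt: str):
    """Ask the LLM pipeline for a retargeted motion matching the command (stub)."""
    ...


def train_policy(reference_motion, steps: int):
    """Train an imitation policy and return saved checkpoints (stub)."""
    ...


def pick_best_checkpoint(llm, checkpoints, prompt: str):
    """Query the LLM to rank checkpoint motions against the prompt (stub)."""
    ...


def refine_prompt(llm, prompt: str, best_motion):
    """Update the text prompt based on the closest motion so far (stub)."""
    ...


def learn_behavior(llm, command: str, rounds: int = 5):
    prompt = command                    # e.g. "walk forward"
    best = None
    for _ in range(rounds):
        motion = generate_reference_motion(llm, prompt)
        checkpoints = train_policy(motion, steps=10_000)
        best = pick_best_checkpoint(llm, checkpoints, prompt)
        prompt = refine_prompt(llm, prompt, best)
    return best
```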

    Use of 3D vision for fine robot motion

    Integrating 3-D vision systems with robot manipulators will allow robots to operate in a poorly structured environment by visually locating targets and obstacles. However, using computer vision for object acquisition makes the problem of overall system calibration even more difficult. Indeed, in CAD-based manipulation a control architecture has to find an accurate mapping between the 3-D Euclidean workspace and the robot configuration space (joint angles). If stereo vision is involved, one needs to map a pair of 2-D video images directly into the robot configuration space. Neural-network approaches aside, a common solution to this problem is to calibrate the vision system and the manipulator independently, and then tie them together via a common mapping into the task space. In other words, both vision and robot refer to some common absolute Euclidean coordinate frame via their individual mappings. This approach has two major difficulties. First, the vision system has to be calibrated over the entire workspace. Second, the absolute frame, which is usually quite arbitrary, has to be the same, with a high degree of precision, for both the robot and vision subsystem calibrations. The use of computer vision to allow robust fine-motion manipulation in a poorly structured world, work currently in progress, is described along with preliminary results and the problems encountered.
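    The "common absolute frame" chain the abstract describes can be written out in a few lines. All transforms and coordinates below are made-up placeholders:

```python
# Chaining the two independent calibrations through a shared absolute frame.
import numpy as np


def make_transform(R, t):
    """Build a 4x4 homogeneous transform from rotation R and translation t."""
    T = np.eye(4)
    T[:3, :3], T[:3, 3] = R, t
    return T


# Placeholder calibration results: each maps its subsystem's frame into the
# (arbitrary) absolute frame.
T_abs_from_vision = make_transform(np.eye(3), np.array([0.1, 0.0, 0.5]))
T_abs_from_robot = make_transform(np.eye(3), np.array([-0.4, 0.2, 0.0]))

# A target triangulated by the stereo pair, in the vision frame
# (homogeneous coordinates).
p_vision = np.array([0.3, -0.1, 1.2, 1.0])

# Vision frame -> absolute frame -> robot base frame.
T_robot_from_vision = np.linalg.inv(T_abs_from_robot) @ T_abs_from_vision
p_robot = T_robot_from_vision @ p_vision
print("target in robot base frame:", p_robot[:3])
# Any error in either calibration, or disagreement about the absolute frame,
# propagates directly into p_robot -- the difficulty the abstract points out.
```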

    Use of human gestures for controlling a mobile robot via adaptive CMAC network and fuzzy logic controller

    Mobile robots with manipulators are more and more commonly applied in extreme and hostile environments to assist or even replace human operators in complex tasks. In addition to autonomous abilities, mobile robots need a human–robot interaction control mode that enables human users to easily control or collaborate with them. This paper proposes a system that uses human gestures to control an autonomous mobile robot integrating a manipulator and a video surveillance platform. A human user can control the mobile robot just as one drives an actual vehicle from its driving cab. The proposed system obtains the human's skeleton-joint information using a motion-sensing input device, which is then recognized and interpreted into a set of control commands. This is implemented, based on the availability of training data and the requirement of real-time performance, by an adaptive cerebellar model articulation controller (CMAC) neural network, a finite state machine, a fuzzy controller, and purpose-designed gesture recognition and control command generation systems. These algorithms work together to implement steering and velocity control of the mobile robot in real time. The experimental results demonstrate that the proposed approach can conveniently control a mobile robot using this virtual driving method, with smooth manoeuvring trajectories at various speeds.
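    A minimal one-input CMAC sketch, the kind of network named above. The tiling count, resolution, learning rate, and toy target function are simplifying assumptions:

```python
# Minimal CMAC: overlapping offset tilings quantize the input, each tiling
# contributes one active weight, and training is a simple LMS update.
import numpy as np


class CMAC:
    def __init__(self, n_tilings=8, n_bins=32, lo=0.0, hi=1.0, lr=0.1):
        self.n_tilings, self.n_bins, self.lr = n_tilings, n_bins, lr
        self.lo, self.hi = lo, hi
        self.w = np.zeros((n_tilings, n_bins))  # one weight table per tiling

    def _active(self, x):
        """Index of the one active bin in each offset tiling."""
        span = (self.hi - self.lo) / self.n_bins
        idx = []
        for t in range(self.n_tilings):
            offset = span * t / self.n_tilings  # shift each tiling slightly
            idx.append(int((x - self.lo + offset) / span) % self.n_bins)
        return idx

    def predict(self, x):
        return sum(self.w[t, i] for t, i in enumerate(self._active(x)))

    def train(self, x, target):
        err = target - self.predict(x)
        for t, i in enumerate(self._active(x)):
            self.w[t, i] += self.lr * err / self.n_tilings


# Toy use: learn a smooth mapping from a gesture feature to a velocity command.
net = CMAC()
rng = np.random.default_rng(0)
for _ in range(2000):
    x = rng.uniform(0, 1)                        # e.g. normalized arm angle
    net.train(x, target=np.sin(2 * np.pi * x))   # stand-in target function
print(round(net.predict(0.25), 3))               # approx sin(pi/2) = 1.0
```

    The local, table-based structure is why CMACs suit this setting: each update touches only a handful of weights, so training and prediction are fast enough for real-time control with limited training data.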

    Deep Q-Learning for Humanoid Walking

    Existing methods for humanoid robot walking suffer from a lack of adaptability to new and unexpected environments because they rely on higher-level motion control with relatively fixed sub-motions, such as taking an individual step. These conventional methods require significant control knowledge and assumptions about the expected surroundings. Humans, however, walk very efficiently and adapt well to new environments thanks to learned behaviors. Our approach is to create a reinforcement learning framework that continuously chooses an action to perform, using a neural network to rate sets of joint values based on the current state of the robot. We successfully train the Boston Dynamics Atlas robot to walk with this framework.
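    A sketch of the described scheme, in which a Q-network scores candidate joint-value sets for the current state and the highest-scoring candidate is executed. The state and action sizes, candidate set, reward, and single-transition update below are assumptions:

```python
# Q-network rates (state, joint values) pairs; epsilon-greedy selection plus
# one illustrative Bellman update on a fake transition.
import torch
import torch.nn as nn

state_dim, joint_dim, n_candidates = 30, 12, 16

q_net = nn.Sequential(                  # Q(state, joint values) -> score
    nn.Linear(state_dim + joint_dim, 128), nn.ReLU(),
    nn.Linear(128, 1),
)
opt = torch.optim.Adam(q_net.parameters(), lr=1e-3)


def choose_action(state, candidates, epsilon=0.1):
    """Epsilon-greedy pick of the best-rated candidate joint set."""
    if torch.rand(1).item() < epsilon:
        return candidates[torch.randint(len(candidates), (1,)).item()]
    scores = q_net(torch.cat([state.expand(len(candidates), -1),
                              candidates], dim=1))
    return candidates[scores.argmax()]


state = torch.randn(state_dim)
cands = torch.randn(n_candidates, joint_dim)
action = choose_action(state, cands)
reward, next_state, gamma = 1.0, torch.randn(state_dim), 0.99

with torch.no_grad():                   # bootstrap target
    next_q = q_net(torch.cat([next_state.expand(n_candidates, -1),
                              cands], dim=1)).max()
    target = reward + gamma * next_q

q_val = q_net(torch.cat([state, action]))
loss = nn.functional.mse_loss(q_val, target.expand_as(q_val))
opt.zero_grad()
loss.backward()
opt.step()
```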