
    MOTION CONTROL SIMULATION OF A HEXAPOD ROBOT

    This thesis addresses hexapod robot motion control. Insect morphology and locomotion patterns inform the design of a robotic model, and motion control is achieved via trajectory planning and bio-inspired principles. In addition, deep learning and multi-agent reinforcement learning are employed to train the robot's motion control strategy, with leg coordination achieved through a multi-agent deep reinforcement learning framework. The thesis makes the following contributions: First, research on legged robots is synthesized, with a focus on hexapod robot motion control. Insect anatomy analysis informs the design of a hexagonal robot body and a three-joint robotic leg, which are assembled in SolidWorks. Different gaits are studied and compared, and the robot leg kinematics are derived and experimentally verified, culminating in a three-legged gait for motion control. Second, an animal-inspired approach employs a central pattern generator (CPG) control unit based on the Hopf oscillator, enabling robot motion control in complex environments, including stable walking and climbing. The robot's motion is quantitatively evaluated in terms of displacement and body pitch angle. Third, a value function decomposition algorithm, QPLEX, is applied to hexapod robot motion control. The QPLEX architecture treats each leg as a separate agent with a local control module, trained using reinforcement learning. QPLEX outperforms decentralized approaches, achieving coordinated rhythmic gaits and increased robustness on uneven terrain. The significance of terrain curriculum learning is assessed, with QPLEX demonstrating superior stability and faster convergence. The foot-end trajectory planning method enables robot motion control through inverse kinematic solutions but has limited generalization to diverse terrains. The animal-inspired CPG-based method offers a versatile control strategy but remains constrained in certain respects. In contrast, the multi-agent deep reinforcement learning-based approach affords adaptable motion strategy adjustments, rendering it a superior control policy. These methods can be combined to develop a customized robot motion control policy for specific scenarios.
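    The Hopf-oscillator CPG unit mentioned in the abstract can be illustrated with a minimal numerical sketch. The code below is not the thesis's implementation; the parameter values, the Euler integration, and the mapping of the oscillator output to a joint angle are assumptions for illustration only.

```python
# Minimal sketch of a single Hopf-oscillator CPG unit (illustrative only;
# parameter values and any inter-leg coupling scheme are assumed, not taken
# from the thesis).
import numpy as np

def hopf_step(x, y, mu=1.0, omega=2.0 * np.pi, alpha=10.0, dt=0.001):
    """One Euler step of a Hopf oscillator.

    The oscillator converges to a stable limit cycle of radius sqrt(mu),
    so x(t) provides a smooth rhythmic signal for one leg joint.
    """
    r2 = x * x + y * y
    dx = alpha * (mu - r2) * x - omega * y   # radial contraction + rotation
    dy = alpha * (mu - r2) * y + omega * x
    return x + dx * dt, y + dy * dt

# Generate one second of rhythmic output for a single leg.
x, y = 0.1, 0.0
signal = []
for _ in range(1000):
    x, y = hopf_step(x, y)
    signal.append(x)   # could be mapped, e.g., to a hip joint angle offset
```

    In a full hexapod CPG, six such units would be coupled with phase offsets encoding the chosen gait (for example, two alternating groups of three legs).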

    Realisation of an energy efficient walking robot

    In this video the walking robot ‘Dribbel’ is presented, which has been built at the Control Engineering group of the University of Twente, the Netherlands. The robot has been designed with a focus on minimal energy consumption, using a passive dynamic approach. It is a so-called four-legged 2D walker; the use of four legs prevents it from falling sideways. During the design phase extensive use was made of 20-sim. This power-port-based modeling package was used to simulate the dynamic behaviour of the robot in order to estimate the design parameters for the prototype. The parameters obtained from the simulation were then used as a basis for the real robot. The real robot is made of aluminum and weighs 9.5 kg. Each of the nine joints (one hip, four knees, four feet) has a dedicated electronic driver board for interfacing the joint sensors. For walking, a simple control loop is used: when the front feet touch the ground, the rear legs are swung forward. The control parameters can be adjusted online via a serial link. Using this simple control loop, the robot walks at a speed of 1.2 km/h with a step frequency of 1.1 Hz; the hip actuator consumes 6.7 W. The walking behaviour of the robot is very similar to the simulation, regarding both walking motion and power consumption. Via the serial link, real-time data acquisition in the simulation package (running on the PC) is possible, which allows for advanced verification and fine-tuning of the control algorithm. The simulation package can also be used directly within the control loop. Future research is planned on energy-based control of the walking motion, using impedance control for the hip actuator, and on the design of more advanced (and actuated) foot shapes.
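    The gait rule described above ("when the front feet touch the ground, the rear legs are swung forward") can be sketched as an event-driven loop. The code below is an assumption-laden illustration: read_foot_contact and swing_hip are hypothetical placeholders, not Dribbel's actual driver interface, and the alternation of the two leg pairs is an interpretation of the stated rule.

```python
# Illustrative sketch of the touchdown-triggered gait rule.
# read_foot_contact() and swing_hip() are hypothetical callbacks standing in
# for the real sensor/actuator drivers; they are not part of Dribbel's API.
import time

def walk_loop(read_foot_contact, swing_hip, step_period=0.9):
    """Swing the trailing leg pair forward whenever the leading pair touches down.

    A step_period of about 0.9 s corresponds to the reported 1.1 Hz step frequency.
    """
    leading, trailing = "outer", "inner"   # assumed names for the two leg pairs
    while True:
        if read_foot_contact(leading):     # leading pair has hit the ground
            swing_hip(trailing)            # swing the other pair forward
            leading, trailing = trailing, leading
            time.sleep(step_period)        # crude wait until the next touchdown
```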

    A Framework of Hybrid Force/Motion Skills Learning for Robots

    Human factors and human-centred design philosophy are highly desired in today’s robotics applications such as human-robot interaction (HRI). Several studies have shown that endowing robots with human-like interaction skills can not only make them more likeable but also improve their performance. In particular, skill transfer by imitation learning can increase the usability and acceptability of robots for users without computer programming skills. In fact, besides positional information, the muscle stiffness of the human arm and the contact force with the environment also play important roles in understanding and generating human-like manipulation behaviours for robots, e.g., in physical HRI and tele-operation. To this end, we present a novel robot learning framework based on Dynamic Movement Primitives (DMPs) that takes into consideration both the positional and the contact force profiles for human-robot skill transfer. In contrast to conventional methods involving only motion information, the proposed framework combines two sets of DMPs, which are built to model the motion trajectory and the force variation of the robot manipulator, respectively. A hybrid force/motion control approach is then taken to ensure accurate tracking and reproduction of the desired positional and force motor skills. Meanwhile, in order to simplify the control system, a momentum-based force observer is applied to estimate the contact force instead of employing force sensors. To deploy the learned motion-force manipulation skills in a broader variety of tasks, the generalization of these DMP models to new situations is also considered. Comparative experiments have been conducted using a Baxter Robot to verify the effectiveness of the proposed learning framework in real-world scenarios such as cleaning a table.
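    A single Dynamic Movement Primitive of the kind combined in this framework can be sketched as below. This is a generic, textbook-style discrete DMP, not the paper's code; the gain values and the zero forcing term are placeholder assumptions. In the framework, one such system would model a position dimension while a second, structurally identical one models the contact-force profile.

```python
# Generic sketch of one discrete DMP transformation system (not the paper's
# implementation; gains and the zero forcing term are placeholder assumptions).
import numpy as np

def dmp_rollout(y0, goal, tau=1.0, alpha_y=25.0, beta_y=6.25,
                alpha_s=3.0, dt=0.001, forcing=lambda s: 0.0):
    """Integrate  tau*dv = alpha_y*(beta_y*(goal - y) - v) + f(s),  tau*dy = v."""
    y, v, s = float(y0), 0.0, 1.0        # state, scaled velocity, canonical phase
    trajectory = []
    for _ in range(int(tau / dt)):
        f = forcing(s)                   # learned from demonstrations in practice
        dv = (alpha_y * (beta_y * (goal - y) - v) + f) / tau
        v += dv * dt
        y += (v / tau) * dt
        s += (-alpha_s * s / tau) * dt   # canonical system: s decays from 1 to 0
        trajectory.append(y)
    return np.array(trajectory)

# Example: converge from 0.0 to a new goal of 0.3 over one time constant.
position_profile = dmp_rollout(y0=0.0, goal=0.3)
```

    Changing goal or tau re-scales the learned shape to new targets and durations, which is the generalization property such DMP models rely on for both the motion and the force profiles.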