
    Current sensing feedback for humanoid stability

    Get PDF
    For humanoid robots to function in changing environments, they must be able to maintain balance much as human beings do. At present, humanoids recover from pushes by using either the ankles or the hips while keeping a rigid body. This method has been proven to work, but it places excessive strain on the robot's joints and does not take full advantage of a humanlike body. The focus of this paper is to enable advanced dynamic balancing through torque classification and balance-improving positional changes. For the robot to balance dynamically, external torques must be determined accurately. The method proposed in this paper uses current sensing feedback at the humanoid's power source to classify external torques. By understanding the current draw of each joint, an external torque can be modeled. Once modeled, the external torque can be nullified with balancing techniques. Current sensing adds detailed feedback while requiring only small adjustments to the robot and minimal additional sensors, cost, and weight. The current sensing hardware sits between the power supply and the drive motors, so it can be implemented without altering the robot. After an external torque has been modeled, the robot assumes balancing positions to reduce the instability. These specialized positions increase the robot's balance while reducing the workload of each joint. The balancing positions exploit the humanlike body of the robot and the torque from each of the leg servos. The best balancing positions were generated with a genetic algorithm and simulated in Webots, which provided an accurate physical model and physics engine. The genetic algorithm reduced the workload of searching the workspace of a robot with ten degrees of freedom below the waist. The current sensing theory was experimentally tested on the TigerBot, a humanoid produced by the Rochester Institute of Technology (RIT). The TigerBot has twenty-three degrees of freedom that fully simulate human motion. The robot stands thirty-one inches tall and weighs close to nine pounds. Each leg has six degrees of freedom, fully mimicking the human leg. The robot was awarded first place in the 2012 IEEE design competition for innovation in New York.
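    The torque classification above can be pictured with a minimal sketch: assuming each joint is driven by a geared DC servo, the joint torque is roughly proportional to the measured current, and whatever exceeds the expected gravity and friction load is attributed to an external push. The constants, names, and the simple friction model below are illustrative assumptions, not the paper's implementation.

    # Minimal sketch: estimate the external torque on each joint from its
    # current draw (all constants are assumed example values).
    KT = 0.35          # assumed motor torque constant [N*m/A]
    GEAR_RATIO = 150   # assumed servo gear reduction
    FRICTION = 0.05    # assumed Coulomb friction torque at the joint [N*m]

    def estimate_external_torque(current_a, gravity_torque, velocity):
        """Residual torque on one joint, given its measured current draw.

        current_a      -- current measured at the power source [A]
        gravity_torque -- torque the joint should need to hold its pose [N*m]
        velocity       -- joint velocity, used only for the friction sign
        """
        produced = KT * current_a * GEAR_RATIO   # torque delivered by the servo
        friction = FRICTION * (1 if velocity > 0 else -1 if velocity < 0 else 0)
        # Anything produced beyond gravity and friction is attributed to an
        # external disturbance such as a push.
        return produced - gravity_torque - friction

    def classify_push(currents, gravity_torques, velocities, threshold=0.5):
        """Flag joints whose residual torque exceeds a threshold [N*m]."""
        residuals = [estimate_external_torque(i, g, v)
                     for i, g, v in zip(currents, gravity_torques, velocities)]
        return [abs(r) > threshold for r in residuals], residuals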

    Design and Development of a Vision System Interface for Three Degree of Freedom Agricultural Robot

    Get PDF
    In this study, a vision-system-interfaced 3-DOF agricultural harvester robot was designed, developed, and tested. The robot was actuated by hydraulic power for heavy tasks such as picking and harvesting oil palm fresh fruit bunches (FFB). The design was based on the robot's task, the type of actuators, and the overall size. Attention was given to stability, portability, and kinematic simplicity in relation to the hydraulic actuators. The forward kinematic model was derived using matrix algebra, and the inverse kinematics problem was solved analytically. The D-H representation was used to express the coordinates of the end-effector as a function of the joint angles, and the joint angles were computed as a function of the end-effector coordinates to obtain the inverse kinematic model. A mathematical model relating the joint angles to the actuator lengths was derived using geometric and trigonometric formulations. A differential system was derived for the manipulator; this differential system represents the dynamic model, which describes the relationships between robot motion and the forces causing that motion. The Lagrange-Euler formulation with the D-H representation was applied to formulate the differential system. The kinematic model is important for developing the control strategy, while the dynamic model helps in real-time simulation. The robot was enhanced with a CCD camera as a vision sensor to recognise a red object as a target; the red object represented a matured oil palm FFB. The recognition process was implemented in the C++ programming language using MIL functions. An algorithm based on empirical results was developed to convert the target coordinates from the image plane (pixels) into the robot plane (cm). The image plane is two-dimensional while the robot plane is three-dimensional, so at least one coordinate of the target in the robot plane must be known. An interface program was developed in Visual Basic to control and simulate 2D motion of the manipulator.
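    As a hedged illustration of the forward kinematic derivation described above, the sketch below chains standard D-H link transforms to obtain the end-effector pose from the joint angles; the D-H parameter values are placeholders rather than the harvester's actual link dimensions.

    import numpy as np

    def dh_transform(theta, d, a, alpha):
        """Standard Denavit-Hartenberg transform from link i-1 to link i."""
        ct, st = np.cos(theta), np.sin(theta)
        ca, sa = np.cos(alpha), np.sin(alpha)
        return np.array([[ct, -st * ca,  st * sa, a * ct],
                         [st,  ct * ca, -ct * sa, a * st],
                         [0.0,      sa,       ca,      d],
                         [0.0,     0.0,      0.0,    1.0]])

    def forward_kinematics(joint_angles, dh_params):
        """Chain the link transforms; returns the 4x4 base-to-end-effector pose."""
        T = np.eye(4)
        for theta, (d, a, alpha) in zip(joint_angles, dh_params):
            T = T @ dh_transform(theta, d, a, alpha)
        return T

    # Three revolute joints with made-up (d, a, alpha) parameters [m, m, rad].
    DH_PARAMS = [(0.10, 0.50, np.pi / 2), (0.00, 0.40, 0.0), (0.00, 0.30, 0.0)]
    pose = forward_kinematics([0.1, -0.3, 0.2], DH_PARAMS)
    print(pose[:3, 3])   # end-effector position in the robot (base) frame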

    Robot Simulation for Control Design

    Get PDF

    Book announcements

    Get PDF
    You can consult the Spanish version at: http://hdl.handle.net/11703/10236

    Development of advanced control schemes for telerobot manipulators

    Get PDF
    To study space applications of telerobotics, the Goddard Space Flight Center (NASA) has recently built a testbed composed mainly of a pair of redundant slave arms, each with seven degrees of freedom, and a master hand controller system. The mathematical developments required for the computerized simulation study and motion control of the slave arms are presented. The slave arm forward kinematic transformation, derived using the D-H notation, is presented and then reduced to its most simplified form suitable for real-time control applications. The vector cross product method is then applied to obtain the slave arm Jacobian matrix. Using the developed forward kinematic transformation and a quaternion representation of the slave arm end-effector orientation, computer simulation is conducted to evaluate the efficiency of the Jacobian in converting joint velocities into Cartesian velocities and to investigate the accuracy of the Jacobian pseudo-inverse for various sampling times. In addition, the equivalence between Cartesian velocities and quaternions is verified using computer simulation. The motion control of the slave arm is then examined. Three control schemes are proposed for controlling the motion of the slave arm end-effector: a joint-space adaptive control scheme, a Cartesian adaptive control scheme, and a hybrid position/force control scheme. The development of the Cartesian adaptive control scheme is presented, and some preliminary results for the remaining control schemes are presented and discussed.
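    A brief sketch of the pseudo-inverse evaluation described above: for a redundant seven-degree-of-freedom arm the 6x7 Jacobian has no ordinary inverse, so the Moore-Penrose pseudo-inverse maps a desired Cartesian velocity to joint velocities, which are then integrated over one sampling interval. The random Jacobian and the step size below are placeholders for illustration, not the slave arm's actual model.

    import numpy as np

    def joint_velocities(jacobian, cartesian_velocity):
        """Map a desired 6x1 Cartesian twist to joint rates.

        For a redundant 7-DOF arm the 6x7 Jacobian has no ordinary inverse, so
        the Moore-Penrose pseudo-inverse gives the minimum-norm joint velocity.
        """
        return np.linalg.pinv(jacobian) @ cartesian_velocity

    def integrate_step(q, jacobian, cartesian_velocity, dt):
        """Advance the joint angles by one sampling interval (first-order step)."""
        return q + joint_velocities(jacobian, cartesian_velocity) * dt

    # Placeholder 6x7 Jacobian and a pure translation command along x.
    J = np.random.default_rng(0).standard_normal((6, 7))
    q = np.zeros(7)
    v_des = np.array([0.05, 0.0, 0.0, 0.0, 0.0, 0.0])   # 5 cm/s along x, no rotation
    q = integrate_step(q, J, v_des, dt=0.01)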

    Deep Reinforcement Learning for Tensegrity Robot Locomotion

    Full text link
    Tensegrity robots, composed of rigid rods connected by elastic cables, have a number of unique properties that make them appealing for use as planetary exploration rovers. However, control of tensegrity robots remains a difficult problem due to their unusual structures and complex dynamics. In this work, we show how locomotion gaits can be learned automatically using a novel extension of mirror descent guided policy search (MDGPS) applied to periodic locomotion movements, and we demonstrate the effectiveness of our approach on tensegrity robot locomotion. We evaluate our method with real-world and simulated experiments on the SUPERball tensegrity robot, showing that the learned policies generalize to changes in system parameters, unreliable sensor measurements, and variation in environmental conditions, including varied terrains and a range of different gravities. Our experiments demonstrate that our method not only learns fast, power-efficient feedback policies for rolling gaits, but that these policies can succeed with only the limited onboard sensing provided by SUPERball's accelerometers. We compare the learned feedback policies to learned open-loop policies and hand-engineered controllers, and demonstrate that the learned policy enables the first continuous, reliable locomotion gait for the real SUPERball robot. Our code and other supplementary materials are available from http://rll.berkeley.edu/drl_tensegrity
    Comment: International Conference on Robotics and Automation (ICRA), 2017. Project website link is http://rll.berkeley.edu/drl_tensegrity
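    The kind of feedback policy learned here can be pictured as a small network mapping SUPERball's onboard accelerometer readings to cable motor commands. The sketch below evaluates such a policy; the layer sizes, tanh squashing, and random weights are illustrative assumptions, not the paper's trained architecture.

    import numpy as np

    class FeedbackPolicy:
        """Tiny two-layer policy: accelerometer readings in, cable commands out."""

        def __init__(self, obs_dim=12, hidden=32, act_dim=12, seed=0):
            rng = np.random.default_rng(seed)
            # In MDGPS these weights would be fit to imitate local trajectory
            # optimizers; here they are random placeholders.
            self.W1 = 0.1 * rng.standard_normal((hidden, obs_dim))
            self.b1 = np.zeros(hidden)
            self.W2 = 0.1 * rng.standard_normal((act_dim, hidden))
            self.b2 = np.zeros(act_dim)

        def act(self, accelerometer_obs):
            """Map one observation vector to normalized motor commands in [-1, 1]."""
            h = np.tanh(self.W1 @ accelerometer_obs + self.b1)
            return np.tanh(self.W2 @ h + self.b2)

    policy = FeedbackPolicy()
    obs = np.zeros(12)        # placeholder accelerometer vector
    u = policy.act(obs)       # 12 cable actuation commands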

    An intelligent, free-flying robot

    Get PDF
    The ground-based demonstration of the extravehicular activity (EVA) Retriever, a voice-supervised, intelligent, free-flying robot, is designed to evaluate the capability to retrieve objects (astronauts, equipment, and tools) that have accidentally separated from the Space Station. The major objective of the EVA Retriever Project is to design, develop, and evaluate an integrated robotic hardware and on-board software system which autonomously: (1) performs system activation and check-out; (2) searches for and acquires the target; (3) plans and executes a rendezvous while continuously tracking the target; (4) avoids stationary and moving obstacles; (5) reaches for and grapples the target; (6) returns to transfer the object; and (7) returns to base.
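    The seven autonomous functions listed above can be read as a mission sequence; the sketch below encodes them as a simple state machine, with the transition checks and the obstacle-avoidance pre-emption as hypothetical placeholders rather than the project's actual software design.

    from enum import Enum, auto

    class Phase(Enum):
        CHECKOUT = auto()     # (1) system activation and check-out
        SEARCH = auto()       # (2) search for and acquire the target
        RENDEZVOUS = auto()   # (3) plan/execute rendezvous while tracking
        AVOID = auto()        # (4) avoid stationary and moving obstacles
        GRAPPLE = auto()      # (5) reach for and grapple the target
        TRANSFER = auto()     # (6) return to transfer the object
        RETURN = auto()       # (7) return to base

    NEXT = {Phase.CHECKOUT: Phase.SEARCH, Phase.SEARCH: Phase.RENDEZVOUS,
            Phase.RENDEZVOUS: Phase.GRAPPLE, Phase.GRAPPLE: Phase.TRANSFER,
            Phase.TRANSFER: Phase.RETURN}

    def step(phase, obstacle_detected, phase_complete):
        """Advance the mission; obstacle avoidance pre-empts the nominal sequence."""
        if obstacle_detected and phase is not Phase.AVOID:
            return Phase.AVOID
        if phase is Phase.AVOID:
            return Phase.RENDEZVOUS   # assumed: resume tracking once clear
        return NEXT.get(phase, phase) if phase_complete else phase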

    An overview of artificial intelligence and robotics. Volume 2: Robotics

    Get PDF
    This report provides an overview of the rapidly changing field of robotics. The report incorporates definitions of the various types of robots, a summary of the basic concepts utilized in each of the many technical areas, a review of the state of the art, and statistics on robot manufacture and usage. Particular attention is paid to the status of robot development, the organizations involved, their activities, and their funding.

    On Sensor-Controlled Robotized One-off Manufacturing

    Get PDF
    A semi-automatic, task-oriented system structure has been developed and tested on an arc welding application. In normal industrial robot programming, the path is created first and the process is based upon the decided path. Here, a process-oriented method is proposed instead. It is natural to focus on the process, since the path is in reality a result of process needs. Another benefit of a process focus is that it naturally leads to a task-oriented view, in which the task can be split into sub-tasks, one for each part of the process with similar process characteristics. By carefully choosing and encapsulating the information needed to execute a sub-task, that component can be re-used whenever the sub-task occurs. By using virtual sensors and generic interfaces to robots and sensors, applications built upon the system design do not change between simulation and actual shop floor runs, and the system allows a mix of real and simulated components during both simulation and run-time.
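    The virtual sensor idea can be sketched as a generic sensor interface with interchangeable real and simulated implementations, so a sub-task runs unchanged in simulation and on the shop floor; the class and method names below are illustrative, not the system's actual API.

    from abc import ABC, abstractmethod

    class SeamSensor(ABC):
        """Generic interface a weld-seam tracking sensor must satisfy."""

        @abstractmethod
        def read_offset(self) -> float:
            """Lateral deviation of the torch from the seam, in mm."""

    class RealSeamSensor(SeamSensor):
        def __init__(self, device):
            self.device = device              # e.g. a laser scanner driver (assumed)

        def read_offset(self) -> float:
            return self.device.measure()

    class SimulatedSeamSensor(SeamSensor):
        def __init__(self, seam_model):
            self.seam_model = seam_model      # geometric seam model used in simulation

        def read_offset(self) -> float:
            return self.seam_model.offset_at_current_pose()

    def weld_subtask(sensor: SeamSensor, robot, gain=0.5):
        """One encapsulated sub-task: correct the path from sensor feedback.
        The same code runs whether the sensor is real or simulated."""
        robot.shift_path_lateral(-gain * sensor.read_offset())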