
    Mechanical engineering challenges in humanoid robotics

    Thesis (S.B.)--Massachusetts Institute of Technology, Dept. of Mechanical Engineering, 2011. Cataloged from PDF version of thesis. Includes bibliographical references (p. 36-39).

    Humanoid robots are artificial constructs designed to emulate the human body in form and function. They are a unique class of robots whose anthropomorphic nature renders them particularly well suited to interacting with humans in a world designed for humans. The present work examines a subset of the plethora of engineering challenges that face modern developers of humanoid robots, with a focus on challenges that fall within the domain of mechanical engineering. The challenge of emulating human bipedal locomotion on a robotic platform is reviewed in the context of the evolutionary origins of human bipedalism and the biomechanics of walking and running. Precise joint-angle-controlled bipedal robots and passive-dynamic walkers, the two most prominent classes of modern bipedal robots, are found to have their own strengths and shortcomings. An integration of the strengths of both classes is likely to characterize the next generation of humanoid robots. The challenge of replicating human arm and hand dexterity with a robotic system is reviewed in the context of the evolutionary origins and kinematic structure of human forelimbs. Form-focused design and function-focused design, two distinct approaches to the design of modern robotic arms and hands, are found to have their own strengths and shortcomings. An integration of the strengths of both approaches is likely to characterize the next generation of humanoid robots.

    by Peter Guang Yi Lu. S.B.

    Design of a Multimodal Fingertip Sensor for Dynamic Manipulation

    We introduce a spherical fingertip sensor for dynamic manipulation. It is based on barometric pressure and time-of-flight proximity sensors and is low-latency, compact, and physically robust. The sensor uses a trained neural network to estimate the contact location and three-axis contact forces based on data from the pressure sensors, which are embedded within the sensor's sphere of polyurethane rubber. The time-of-flight sensors face in three different outward directions, and an integrated microcontroller samples each of the individual sensors at up to 200 Hz. To quantify the effect of system latency on dynamic manipulation performance, we develop and analyze a metric called the collision impulse ratio and characterize the end-to-end latency of our new sensor. We also present experimental demonstrations with the sensor, including measuring contact transitions, performing coarse mapping, maintaining a contact force with a moving object, and reacting to avoid collisions.

    Comment: 6 pages, 2 pages of references, supplementary video at https://youtu.be/HGSdcW_aans. Submitted to ICRA 202
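    The pressure-to-contact mapping described above can be illustrated with a small multi-output regression network. Everything below — the eight pressure channels, the five regression targets (two contact-location angles plus three force components), and the synthetic training data — is an illustrative assumption, not the authors' actual architecture or data:

```python
import numpy as np
from sklearn.neural_network import MLPRegressor

rng = np.random.default_rng(0)

# Hypothetical: 8 barometric pressure channels embedded in the rubber sphere.
# Targets: 2 contact-location angles on the sphere + 3-axis contact force.
X = rng.normal(size=(500, 8))                  # stand-in pressure readings
W = rng.normal(size=(8, 5))
y = X @ W + 0.01 * rng.normal(size=(500, 5))   # synthetic pressure->target map

# Small MLP regressor standing in for the paper's trained network.
model = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
model.fit(X, y)

pred = model.predict(X[:1])   # one 5-vector: (theta, phi, Fx, Fy, Fz)
print(pred.shape)
```

    In a real sensor the training pairs would come from calibration runs against a force/torque reference rather than a synthetic linear map.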

    Sensors for Robotic Hands: A Survey of State of the Art

    Recent decades have seen significant progress in the field of artificial hands. Most of the surveys that try to capture the latest developments in this field have focused on the actuation and control systems of these devices. In this paper, our goal is to provide a comprehensive survey of the sensors for artificial hands. In order to present the evolution of the field, we cover five-year periods starting at the turn of the millennium. For each period, we present the robot hands with a focus on their sensor systems, dividing them into categories such as prosthetics, research devices, and industrial end-effectors. We also cover the sensors developed for robot hand usage in each era. Finally, the period between 2010 and 2015 introduces the reader to the state of the art and also hints at future directions in sensor development for artificial hands.

    The implications of embodiment for behavior and cognition: animal and robotic case studies

    In this paper, we will argue that if we want to understand the function of the brain (or the control in the case of robots), we must understand how the brain is embedded into the physical system, and how the organism interacts with the real world. While embodiment has often been used in its trivial meaning, i.e. 'intelligence requires a body', the concept has deeper and more important implications, concerned with the relation between physical and information (neural, control) processes. A number of case studies are presented to illustrate the concept. These involve animals and robots and are concentrated around locomotion, grasping, and visual perception. A theoretical scheme that can be used to embed the diverse case studies will be presented. Finally, we will establish a link between the low-level sensory-motor processes and cognition. We will present an embodied view on categorization, and propose the concepts of 'body schema' and 'forward models' as a natural extension of the embodied approach toward first representations.

    Comment: Book chapter in W. Tschacher & C. Bergomi, ed., 'The Implications of Embodiment: Cognition and Communication', Exeter: Imprint Academic, pp. 31-5

    Reactive Motions In A Fully Autonomous CRS Catalyst 5 Robotic Arm Based On RGBD Data

    This study proposes a method to estimate velocity from motion blur in a single image frame along the x and y axes of the camera coordinate system and to intercept a moving object with a robotic arm. It will be shown that velocity estimation from a single image frame improves the system's performance. The majority of previous studies in this area require at least two image frames to measure the target's velocity. In addition, they mostly employ specialized equipment able to generate high torques and accelerations. Our setup consists of a 5-degree-of-freedom robotic arm and a Kinect camera. The RGBD (Red, Green, Blue and Depth) camera provides the RGB and depth information used to detect the position of the target. As the object is moving within a single image frame, the image contains motion blur. To recognize and differentiate the object from the blurred area, the image intensity profiles are studied. The method determines the blur parameters, namely the length of the object and the length of the partial blur, from the changes in the intensity profile. From the motion blur, the velocities along the x and y camera coordinate axes are estimated. However, as the depth frame cannot record motion blur, the velocity along the z axis of the camera coordinate frame is initially unknown. The position and velocity vectors are transformed into the world coordinate frame, and the prospective position of the object after a predefined time interval is predicted. To intercept the object, the end-effector of the robotic arm must reach this predicted position within the same time interval. The robot's joint angles and accelerations are determined through inverse kinematics, and the robotic arm then starts its motion.
    Once the second depth frame is obtained, the object's velocity along the z axis can be calculated as well. Accordingly, the predicted position of the object is recalculated and the motion of the manipulator is modified. The proposed method is compared with existing methods that need at least two image frames to estimate the velocity of the target. It is shown that under identical kinematic conditions, the functionality of the system is improved by times for our setup. In addition, the experiment is repeated for times and the velocity data is recorded. According to the experimental results, there are two major limitations in our system and setup. The system cannot determine the velocity along z in the camera coordinate system from the initial image frame; consequently, the faster the object travels along this axis, the more susceptible the interception is to failure. In addition, our manipulator is unspecialized equipment not designed to produce high torques and accelerations, which makes the task more challenging. The main cause of error in the experiments was the operator's throw: the object must pass through the working volume of the robot and still be inside it after the predefined time interval, but the operator may throw the object into the designated working volume in such a way that it leaves earlier than the specified interval.
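    The single-frame estimation and interception steps above can be sketched under a pinhole-camera assumption: blur length over exposure time gives image-plane speed, depth and focal length convert it to metric velocity, and the interception point is a constant-velocity extrapolation. The blur length, exposure time, focal length, and 0.5 s interval below are illustrative stand-ins, not values from the thesis:

```python
import numpy as np

def blur_velocity(blur_len_px, exposure_s, depth_m, focal_px):
    """Image-plane blur length -> metric velocity (pinhole camera model)."""
    return (blur_len_px / exposure_s) * depth_m / focal_px

# Hypothetical numbers: 12 px / 5 px blur over a 30 ms exposure at 1.5 m depth,
# with a Kinect-like focal length of 525 px.
vx = blur_velocity(12.0, 0.030, 1.5, 525.0)
vy = blur_velocity(5.0, 0.030, 1.5, 525.0)

p0 = np.array([0.2, 0.1, 1.5])   # current object position (world frame, m)
v = np.array([vx, vy, 0.0])      # vz unknown until the second depth frame
dt = 0.5                         # predefined interception interval (s)

p_pred = p0 + v * dt             # target position for the end-effector
```

    Once a second depth frame arrives, `v[2]` would be replaced by the measured z velocity and `p_pred` recomputed, mirroring the mid-motion correction described in the abstract.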

    Nonprehensile Manipulation via Multisensory Learning from Demonstration

    The dexterous manipulation problem concerns control of a robot hand to manipulate an object in a desired manner. While classical dexterous manipulation strategies are based on stable grasping (or force closure), many human-like manipulation tasks do not maintain grasp stability and often exploit the intrinsic dynamics of the object rather than a closed-form kinematic relation between the object and the robotic fingers. Such manipulation strategies are referred to as nonprehensile or dynamic dexterous manipulation in the literature. Nonprehensile manipulation typically involves fast and agile movements such as throwing and flipping. Due to the complexity of such motions (which may involve impulsive dynamics) and the uncertainties associated with them, it has been challenging to realize nonprehensile manipulation tasks in a reliable way. In this paper, we propose a new control strategy to realize practical nonprehensile manipulation tasks using a robot hand. The main idea of our control strategy is two-fold. First, we make explicit use of multiple modalities of sensory data in the design of the control law: force data is employed for feedforward control, while position data is used for feedback (i.e. reactive) control. Second, the control signals (both feedback and feedforward) are obtained from multisensory learning from demonstration (LfD) experiments designed and performed for the specific nonprehensile manipulation tasks of concern. We utilize LfD frameworks such as the Gaussian mixture model with Gaussian mixture regression (GMM/GMR) and the hidden Markov model with GMR (HMM/GMR) to reproduce generalized motion profiles from the human expert's demonstrations. The proposed control strategy has been verified experimentally on a dynamic spinning task using a sensory-rich two-finger robotic hand. The control performance (i.e. the speed and accuracy of the spinning task) has also been compared with that of classical dexterous manipulation based on finger gaiting.
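    The GMM/GMR reproduction step can be sketched as follows: fit a joint Gaussian mixture over (time, position) pairs pooled from several demonstrations, then condition on time to recover an averaged motion command. The sine-wave "demonstrations" and the 5-component mixture below are illustrative assumptions, not the paper's data or task:

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Five noisy "demonstrations" of a 1-D motion profile x(t), e.g. a spin angle.
rng = np.random.default_rng(0)
t = np.tile(np.linspace(0.0, 1.0, 100), 5)
x = np.sin(2.0 * np.pi * t) + 0.05 * rng.normal(size=t.size)
data = np.column_stack([t, x])

# Joint model p(t, x) as a Gaussian mixture.
gmm = GaussianMixture(n_components=5, random_state=0).fit(data)

def gmr(t_query):
    """Standard GMR: condition the joint GMM on t to get E[x | t]."""
    mu, cov, w = gmm.means_, gmm.covariances_, gmm.weights_
    # Responsibility of each component at t_query (1-D Gaussian in t).
    h = np.array([wk * np.exp(-0.5 * (t_query - m[0]) ** 2 / c[0, 0])
                  / np.sqrt(c[0, 0]) for wk, m, c in zip(w, mu, cov)])
    h /= h.sum()
    # Per-component conditional mean of x given t.
    xk = np.array([m[1] + c[1, 0] / c[0, 0] * (t_query - m[0])
                   for m, c in zip(mu, cov)])
    return float(h @ xk)

x_hat = gmr(0.25)   # should lie near the demonstrated profile's peak
```

    In the paper's setting the same regression would be applied per sensory modality, producing the feedforward force profile and the feedback position reference from the recorded demonstrations.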