Abstract
Researchers have made tremendous advances in robotic grasping in the past decades. On the hardware side, many robot hand designs have been proposed, covering a large spectrum of dexterity (from simple parallel grippers to anthropomorphic hands), actuation (from underactuated to fully actuated), and sensing capabilities (from simple open/close state sensing to rich tactile sensing). On the software side, grasping techniques have also evolved significantly, from open-loop control and classical feedback control to learning-based policies. However, most studies and applications follow a one-way paradigm in which mechanical engineers design the hardware first and control/learning experts then write the code that uses the hand. In contrast, we aim to study the interplay between the mechanical and computational aspects of robotic grasping. We believe both sides are important but cannot solve grasping problems on their own; they are tightly connected by the laws of physics and should not be developed separately. We use the term "Mechanical Intelligence" to refer to the ability, realized by mechanisms, to respond appropriately to external inputs, and we show that combining Mechanical Intelligence with Computational Intelligence is beneficial for grasping.
The first part of this thesis derives hand underactuation mechanisms from grasp data. Mechanical coordination in robot hands, one form of Mechanical Intelligence, corresponds to the concept of dimensionality reduction in Machine Learning; however, the resulting low-dimensional manifolds must be realizable with underactuated mechanisms. In this project, we first collect simulated grasp data without accounting for underactuation, then apply a dimensionality reduction technique (which we term "Mechanically Realizable Manifolds") that considers both pre-contact postural synergies and post-contact joint torque coordination, and finally build robot hands based on the resulting low-dimensional models. We also demonstrate a real-world application on a free-flying robot for the International Space Station.
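As a rough illustration of the dimensionality-reduction step only (not the full Mechanically Realizable Manifolds formulation, which additionally enforces mechanical realizability and post-contact torque coordination), the sketch below extracts linear postural synergies from simulated grasp postures using PCA. The data, joint counts, and function names are hypothetical.

# Minimal sketch: pre-contact postural synergies via PCA on grasp data.
# The thesis's actual method imposes further constraints so that the
# low-dimensional model can be built with underactuated mechanisms.
import numpy as np

def postural_synergies(grasp_postures: np.ndarray, n_synergies: int = 2):
    """grasp_postures: (n_grasps, n_joints) joint angles from simulated grasps.
    Returns (mean posture, synergy matrix S of shape (n_joints, n_synergies)).
    A posture is approximated as q ~ mean + S @ a, with a the low-dimensional
    actuator command."""
    mean = grasp_postures.mean(axis=0)
    centered = grasp_postures - mean
    # Right singular vectors of the centered data are the synergy directions.
    _, _, vt = np.linalg.svd(centered, full_matrices=False)
    return mean, vt[:n_synergies].T

if __name__ == "__main__":
    rng = np.random.default_rng(0)
    # Hypothetical data: 200 simulated grasps of a 10-joint hand whose joints
    # co-vary along two hidden coordination patterns.
    latent = rng.normal(size=(200, 2))
    mixing = rng.normal(size=(2, 10))
    data = latent @ mixing + 0.05 * rng.normal(size=(200, 10))
    mean, S = postural_synergies(data, n_synergies=2)
    print("synergy matrix shape:", S.shape)  # (10, 2): 2 actuators drive 10 joints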
The second part addresses proprioceptive grasping of unknown objects by taking advantage of hand compliance. Mechanical compliance is intrinsically connected to force/torque sensing and control. In this work, we propose a series-elastic hand that provides embodied compliance and proprioception, along with an associated grasping policy built on a network of proportional-integral controllers. We show that, without any prior model of the object and with only proprioceptive sensing, a robot hand can make stable grasps in a reactive fashion.
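The sketch below illustrates, under assumed gains and a hypothetical interface, how per-finger proportional-integral loops can servo contact forces that are estimated purely from series-elastic spring deflection; it is a simplified stand-in for the hand and policy described in the thesis.

# Minimal sketch: reactive, model-free grasping with PI force control on a
# series-elastic hand. Forces are not measured directly; each tendon force is
# estimated from spring deflection (f ~ k * (motor_pos - joint_pos)), i.e.,
# embodied proprioception. Gains and the interface are illustrative only.
from dataclasses import dataclass

@dataclass
class PIController:
    kp: float
    ki: float
    integral: float = 0.0

    def step(self, error: float, dt: float) -> float:
        self.integral += error * dt
        return self.kp * error + self.ki * self.integral

def estimated_tendon_force(spring_k: float, motor_pos: float, joint_pos: float) -> float:
    # Series-elastic proprioception: spring deflection encodes force.
    return spring_k * (motor_pos - joint_pos)

def grasp_step(controllers, spring_k, motor_pos, joint_pos, target_force, dt):
    """One control tick: each finger's PI loop servos its estimated contact
    force toward a common target, yielding a reactive grasp without any
    object model."""
    commands = []
    for pi, m, q in zip(controllers, motor_pos, joint_pos):
        f_est = estimated_tendon_force(spring_k, m, q)
        commands.append(pi.step(target_force - f_est, dt))
    return commands  # one motor command per finger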
The last part develops Mechanical and Computational Intelligence jointly, co-optimizing mechanisms and control policies using deep Reinforcement Learning (RL). Traditional RL treats robot hardware as immutable and models it as part of the environment. In contrast, we move the robot hardware out of the environment, express its mechanics as auto-differentiable physics, and connect it with the computational policy to form a unified policy (we term this method "Hardware as Policy"). This allows RL algorithms to back-propagate gradients with respect to both hardware and computational parameters and to optimize them in the same fashion. We present a mass-spring toy problem to illustrate the idea, as well as a real-world design case of an underactuated hand.
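The following is a minimal sketch of the Hardware-as-Policy idea on a one-dimensional mass-spring toy, with PyTorch as an assumed dependency: the spring stiffness is a trainable parameter in the same computation graph as the neural policy, so a single backward pass yields gradients for both. It illustrates the gradient flow only, not the thesis's actual RL algorithm or hand design.

# Minimal sketch: hardware parameter and neural policy optimized jointly.
import torch
import torch.nn as nn

class HardwareAsPolicy(nn.Module):
    def __init__(self):
        super().__init__()
        self.policy = nn.Sequential(nn.Linear(2, 16), nn.Tanh(), nn.Linear(16, 1))
        # Hardware parameter (spring stiffness), trained alongside the policy.
        self.log_k = nn.Parameter(torch.tensor(0.0))

    def rollout(self, steps=50, dt=0.05, mass=1.0, target=1.0):
        k = self.log_k.exp()                      # keep stiffness positive
        x = torch.zeros(1); v = torch.zeros(1)    # position, velocity
        loss = 0.0
        for _ in range(steps):
            u = self.policy(torch.cat([x, v]))    # computational policy action
            a = (u.squeeze() - k * x) / mass      # differentiable "hardware" physics
            v = v + a * dt
            x = x + v * dt
            loss = loss + (x - target).pow(2).sum()
        return loss / steps

model = HardwareAsPolicy()
opt = torch.optim.Adam(model.parameters(), lr=1e-2)
for _ in range(100):
    opt.zero_grad()
    loss = model.rollout()
    loss.backward()      # gradients reach both policy weights and log_k
    opt.step()
print("optimized stiffness:", model.log_k.exp().item())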
The three projects presented in this thesis are meaningful examples of the interplay between the mechanical and computational aspects of robotic grasping. In the Conclusion, we summarize high-level philosophies and suggestions for integrating Mechanical and Computational Intelligence, as well as the challenges that remain in pushing this area forward.