13 research outputs found

    Efficient Caging Planning Handling Uncertainty Using the Configuration Space of the Target Object and Finger Placements

    Degree type: Doctoral (course-based), University of Tokyo (東京大学)

    Grasping for the Task: Human Principles for Robot Hands

    The significant advances made in the design and construction of anthropomorphic robot hands endow them with prehensile abilities approaching those of humans. However, using these powerful hands with the same level of expertise that humans display is a big challenge for robots. Traditional approaches use fingertip (precision) or enveloping (power) methods to generate the best force-closure grasps. However, this ignores the variety of prehensile postures available to the hand and also the larger context of arm action. This thesis explores a paradigm for grasp formation based on generating oppositional pressure within the hand, which has been proposed as a functional basis for grasping in humans (MacKenzie and Iberall, 1994). A set of opposition primitives encapsulates the hand's ability to generate oppositional forces. The oppositional intention encoded in a primitive serves as a guide to match the hand to the object, quantify its functional ability, and relate this to the arm. In this thesis we leverage the properties of opposition primitives both to interpret grasps formed by humans and to construct grasps for a robot considering the larger context of arm action.

    In the first part of the thesis we examine the hypothesis that hand representation schemes based on opposition are correlated with hand function. We propose hand parameters describing oppositional intention and compare these with commonly used methods such as joint angles, joint synergies and shape features. We expect that opposition-based parameterizations, which take an interaction-based perspective of a grasp, are able to discriminate between grasps that are similar in shape but different in functional intent. We test this hypothesis using qualitative assessments of the precision and power capabilities found in existing grasp taxonomies.

    The next part of the thesis presents a general method to recognize oppositional intention manifested in human grasp demonstrations. A data glove instrumented with tactile sensors provides the raw information regarding hand configuration and interaction force. For a grasp combining several cooperating oppositional intentions, hand surfaces can be simultaneously involved in multiple oppositional roles. We characterize the low-level interactions between different surfaces of the hand based on the captured interaction force and reconstructed hand surface geometry. This is subsequently used to separate out and prioritize multiple, possibly overlapping, oppositional intentions present in the demonstrated grasp. We evaluate our method on several human subjects across a wide range of hand functions.

    The last part of the thesis applies the properties encoded in opposition primitives to optimize task performance of the arm, for tasks where the arm assumes the dominant role. For these tasks, choosing the strongest power grasp available (in a force-closure sense) may constrain the arm to a sub-optimal configuration. Weaker grasp components impose fewer constraints on the hand, and can therefore explore a wider region of the object-relative pose space. We take advantage of this to find good arm configurations from a task perspective. The final hand-arm configuration is obtained by trading off overall robustness in the grasp against the ability of the arm to perform the task. We validate our approach, using the tasks of cutting, hammering, screw-driving and opening a bottle cap, for both human and robotic hand-arm systems.
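    The trade-off described in the final paragraph lends itself to a simple scalarized search over candidate grasps. The sketch below is a hypothetical illustration, not the thesis' algorithm: the candidate names, score fields, and weight w are invented; in practice the robustness score would come from a grasp-quality metric and the task-ability score from an arm-level measure such as manipulability along the task direction.

```python
# Hypothetical illustration of the hand-arm trade-off: each candidate
# pairs a grasp (with a robustness score, e.g. from a force-closure
# quality metric) and an arm configuration (with a task score, e.g.
# manipulability along the task direction). All names and numbers are
# invented; both scores are assumed normalized to [0, 1].
candidates = [
    {"grasp": "power",    "robustness": 0.9, "task_ability": 0.4},
    {"grasp": "tripod",   "robustness": 0.6, "task_ability": 0.8},
    {"grasp": "pad-side", "robustness": 0.4, "task_ability": 0.9},
]

def tradeoff_score(c, w):
    """Scalarize grasp robustness vs. arm task ability with weight w."""
    return w * c["robustness"] + (1.0 - w) * c["task_ability"]

# For an arm-dominant task such as hammering, weight task ability higher:
best = max(candidates, key=lambda c: tradeoff_score(c, w=0.3))
print(best["grasp"])  # -> 'pad-side': a weaker but better-placed grasp wins
```

    With w = 0.3 the weaker grasp is chosen because it leaves the arm in a better task configuration, which is exactly the behaviour the abstract describes for arm-dominant tasks.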

    Tactile Perception And Visuotactile Integration For Robotic Exploration

    As the close perceptual sibling of vision, the sense of touch has historically received less attention than it deserves in both human psychology and robotics. In robotics, this may be attributed to at least two reasons. First, touch suffers from a vicious cycle of immature sensor technology: industry demand stays low, so there is little incentive to make the sensors existing in research labs easy to manufacture and marketable. Second, the situation stems from a fear of making contact with the environment, which is avoided in every way so that visually perceived states do not change before a carefully estimated and ballistically executed physical interaction. Fortunately, the latter viewpoint is starting to change. Work in interactive perception and contact-rich manipulation is on the rise. Good reasons are steering the manipulation and locomotion communities' attention towards deliberate physical interaction with the environment prior to, during, and after a task.

    We approach the problem of perception prior to manipulation, using the sense of touch, for the purpose of understanding the surroundings of an autonomous robot. The overwhelming majority of work in perception for manipulation is based on vision. While vision is a fast and global modality, it is insufficient as the sole modality, especially in environments where the ambient light or the objects therein do not lend themselves to vision, such as darkness, smoky or dusty rooms in search and rescue, underwater scenes, transparent and reflective objects, and retrieving items inside a bag. Even in normal lighting conditions, during a manipulation task, the target object and fingers are usually occluded from view by the gripper. Moreover, vision-based grasp planners, typically trained in simulation, often make errors that cannot be foreseen until contact.

    As a step towards addressing these problems, we first present a global shape-based feature descriptor for object recognition using non-prehensile tactile probing alone. Then, we investigate making the tactile modality, local and slow by nature, more efficient for the task by predicting the most cost-effective moves using active exploration. To combine the local and physical advantages of touch with the fast and global advantages of vision, we propose and evaluate a learning-based method for visuotactile integration for grasping.
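    To make "predicting the most cost-effective moves" concrete, here is a minimal sketch of one common formulation of active tactile exploration: choose the next probe location by expected information gain over an object-class posterior, penalized by movement cost. This is an illustrative stand-in, not the thesis' method; the likelihood tables and the cost weight lam are invented for the toy example.

```python
import numpy as np

def entropy(p):
    p = p[p > 0]
    return -np.sum(p * np.log(p))

def expected_info_gain(posterior, likelihood):
    """likelihood[c, o] = p(observation o | class c) at one probe location."""
    h_now = entropy(posterior)
    p_obs = posterior @ likelihood                    # predictive p(o)
    gain = 0.0
    for o, p_o in enumerate(p_obs):
        if p_o <= 0:
            continue
        post_o = posterior * likelihood[:, o] / p_o   # Bayes update
        gain += p_o * (h_now - entropy(post_o))
    return gain

def next_probe(posterior, likelihoods, move_costs, lam=0.1):
    """Pick the probe that maximizes info gain minus movement cost."""
    scores = [expected_info_gain(posterior, L) - lam * c
              for L, c in zip(likelihoods, move_costs)]
    return int(np.argmax(scores))

# Toy example: 3 object classes, 2 candidate probe locations, binary outcome.
posterior = np.array([1/3, 1/3, 1/3])
likelihoods = [np.array([[0.9, 0.1], [0.5, 0.5], [0.1, 0.9]]),   # discriminative
               np.array([[0.5, 0.5], [0.5, 0.5], [0.5, 0.5]])]   # uninformative
print(next_probe(posterior, likelihoods, move_costs=[1.0, 0.2]))  # -> 0
```

    The discriminative probe wins despite its higher movement cost, which is the "cost-effective move" intuition: spend motion only where touch is expected to reduce uncertainty.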

    Autonomous Robotic Grasping in Unstructured Environments

    A crucial problem in robotics is interacting with known or novel objects in unstructured environments. While the convergence of a multitude of research advances is required to address this problem, our goal is to describe a framework that employs the robot's visual perception to identify and execute an appropriate grasp to pick and place novel objects. Analytical approaches search for solutions through kinematic and dynamic formulations. Data-driven methods, on the other hand, retrieve grasps according to prior knowledge of the target object, human experience, or information obtained from acquired data.

    In this dissertation, we propose a framework based on the supporting principle that potential contact regions for a stable grasp can be found by searching for (i) sharp discontinuities and (ii) regions of locally maximal principal curvature in the depth map. In addition to suggestions from empirical evidence, we discuss this principle by applying the concepts of force closure and wrench convexes. The key point is that no prior knowledge of objects is used in the grasp planning process; nevertheless, the obtained results show that the approach deals successfully with objects of different shapes and sizes. We believe that the proposed work is novel because describing the visible portion of objects by the aforementioned edges in the depth map facilitates grasp set-point extraction in the same way as image-processing methods that focus on small 2D image areas, rather than clustering and analyzing huge sets of 3D point-cloud coordinates. In fact, this approach dispenses with object reconstruction altogether. These features result in low computational costs and make it possible to run the proposed algorithm in real time. Finally, the performance of the approach is successfully validated on scenes with both single and multiple objects, in both simulation and real-world experimental setups.
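    A minimal sketch of the stated principle: flag candidate contact regions by finding (i) sharp depth discontinuities and (ii) high-curvature ridges directly in the depth map. The thresholds and the Laplacian-based curvature proxy are assumptions for illustration; the dissertation's exact operators may differ.

```python
import numpy as np
from scipy import ndimage

def grasp_candidate_mask(depth, edge_thresh=0.02, curv_thresh=0.5):
    """Flag pixels on (i) sharp depth discontinuities or (ii) ridges of
    high surface curvature, as candidate contact regions.

    Rough sketch: the thresholds and the curvature proxy (Laplacian of
    the depth map) are illustrative, not the thesis' exact choices.
    """
    # (i) Sharp discontinuities: large depth-gradient magnitude.
    gy, gx = np.gradient(depth)
    edges = np.hypot(gx, gy) > edge_thresh

    # (ii) Locally maximal curvature: Laplacian peaks as a cheap proxy
    # for principal-curvature maxima on the visible surface.
    lap = np.abs(ndimage.laplace(depth))
    ridges = lap > curv_thresh * lap.max()

    return edges | ridges

# Toy depth map: a box (near, flat top) resting on a table (far plane).
depth = np.full((64, 64), 1.0)
depth[20:44, 20:44] = 0.7
mask = grasp_candidate_mask(depth)
print(mask.sum(), "candidate contact pixels")  # pixels along the box rim
```

    Because everything happens on small 2D neighbourhoods of the depth image, the cost stays low, matching the abstract's argument for why the method avoids 3D point-cloud clustering and runs in real time.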

    Machine Learning for Robot Grasping and Manipulation

    Robotics as a technology has incredible potential for improving our everyday lives. Robots could perform household chores, such as cleaning, cooking, and gardening, in order to give us more time for other pursuits. Robots could also be used to perform tasks in hazardous environments, such as turning off a valve in an emergency or safely sorting our more dangerous trash. However, all of these applications would require the robot to perform manipulation tasks with various objects. Today's robots are used primarily for performing specialized tasks in controlled scenarios, such as manufacturing. The robots used in today's applications are typically designed for a single purpose and have been preprogrammed with all of the necessary task information. In contrast, a robot working in a more general environment will often be confronted with new objects and scenarios. Therefore, in order to reach their full potential as autonomous physical agents, robots must be capable of learning versatile manipulation skills for different objects and situations. We have therefore worked on a variety of manipulation skills to improve those capabilities of robots, and the results have led to several new approaches, which are presented in this thesis.

    Learning manipulation skills is, however, an open problem with many challenges that still need to be overcome. The first challenge is to acquire and improve manipulation skills with little to no human supervision. Rather than being preprogrammed, the robot should be able to learn from human demonstrations and through physical interactions with objects. Learning to improve skills through trial and error is a particularly important ability for an autonomous robot, as it allows the robot to handle new situations. This ability also removes the burden from the human demonstrator to teach a skill perfectly, as the robot is allowed to make mistakes if it can learn from them. In order to address this challenge, we present a continuum-armed bandits approach for learning to grasp objects (see the first sketch after this abstract). The robot learns to predict the performance of different grasps, as well as how certain it is of each prediction, and selects grasps accordingly. As the robot tries more grasps, its predictions become more accurate, and its grasps improve accordingly.

    A robot can master a manipulation skill by learning from different objects in various scenarios. Another fundamental challenge is therefore to efficiently generalize manipulations between different scenarios. Rather than relearning from scratch, the robot should find similarities between the current situation and previous scenarios in order to reuse manipulation skills and task information. For example, the robot can learn to adapt manipulation skills to new objects by finding similarities between them and known objects. However, only some similarities between objects will be relevant for a given manipulation. The robot must therefore also learn which similarities are important for adapting the manipulation skill. We present two object representations for generalizing between different situations. Contacts between objects are important for many manipulations, but it is difficult to define general features for representing sets of contacts. Instead, we define a kernel function for comparing contact distributions, which allows the robot to use kernel methods for learning manipulations (see the second sketch after this abstract). The second approach is to use warped parameters to define more abstract features, such as areas and volumes. These features are defined as functions of known object models. The robot can compute these parameters for novel objects by warping the shape of the known object to match the unknown object.

    Learning about objects also requires the robot to reconcile information from multiple sensor modalities, including touch, hearing, and vision. While some object properties will only be observed by specific sensor modalities, other object properties can be determined from multiple sensor modalities. For example, while color can only be determined by vision, the shape of an object can be observed using vision or touch. The robot should use information from all of its senses in order to quickly learn about objects. We explain how the robot can learn low-dimensional representations of tactile data by incorporating cues from vision data. As touching an object usually occludes its surface, the proposed method was designed to work with weak pairings between the data in the two sensor modalities.

    The robot can also learn more efficiently if it reuses skills between different tasks. Rather than relearning a skill for each new task, the robot should learn manipulation skills that can be reused for multiple tasks. For an autonomous robot, this requires dividing tasks into smaller steps. Dividing tasks into smaller parts makes it easier to learn the corresponding skills. If a step is part of many tasks, then the robot will have more opportunities to practice the associated skill, and more tasks will benefit from the resulting performance improvement. In order to learn a set of useful subtasks, we propose a probabilistic model for dividing manipulations into phases. This model captures the conditions for transitioning between different phases, which represent subgoals and constraints of the overall tasks. The robot can use the model together with model-based reinforcement learning in order to learn skills for moving between phases.

    When confronted with a new task, the robot will have to select a suitable sequence of skills to execute. The robot must therefore also learn to select which manipulation to execute in the current scenario. Selecting sequences of motor primitives is difficult, as the robot must take into consideration the current task, state, and future actions when selecting the next motor skill to execute. We therefore present a value-function method for selecting skills in an optimal manner. The robot learns the value function for the continuous state space using a flexible non-parametric model-based approach.

    Learning manipulation skills also poses certain challenges for learning methods. The robot will not have thousands of samples when learning a new manipulation skill, and must instead actively collect new samples or use data from similar scenarios. The learning methods presented in this thesis are therefore designed to work with relatively small amounts of data, and can generally be used during the learning process. Manipulation tasks also present a spectrum of different problem types. Hence, we present supervised, unsupervised, and reinforcement learning approaches in order to address the diverse challenges of learning manipulation skills.
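    Two of the components above are concrete enough to sketch. First, the continuum-armed bandit idea of predicting grasp performance together with its uncertainty maps naturally onto an upper-confidence-bound rule over a continuous grasp parameter. The sketch below uses a Gaussian-process surrogate and a UCB acquisition as a plausible stand-in; the 1-D parameterization, kernel choice, and kappa are illustrative assumptions, not the thesis' exact model.

```python
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF

# Grasp parameters tried so far (a hypothetical 1-D parameter, e.g. an
# approach angle in [0, 1]) and the grasp success scores observed.
X = np.array([[0.1], [0.5], [0.9]])
y = np.array([0.2, 0.8, 0.4])

gp = GaussianProcessRegressor(kernel=RBF(length_scale=0.2)).fit(X, y)

# UCB acquisition: prefer grasps with high predicted reward OR high
# uncertainty; kappa trades exploration against exploitation.
candidates = np.linspace(0, 1, 101).reshape(-1, 1)
mu, sigma = gp.predict(candidates, return_std=True)
kappa = 1.0
next_grasp = candidates[np.argmax(mu + kappa * sigma)]
print(float(next_grasp[0]))  # parameter of the next grasp to try
```

    Second, a kernel between contact distributions can be illustrated as a set kernel: the mean pairwise RBF similarity between two sets of contact points. This is a hypothetical instantiation for illustration; the thesis' actual kernel definition may differ.

```python
import numpy as np

def contact_kernel(A, B, sigma=0.05):
    """Set kernel between contact sets A, B (n x d arrays of contact
    positions, optionally concatenated with surface normals): the mean
    pairwise RBF similarity. A hypothetical stand-in for the thesis'
    contact-distribution kernel.
    """
    d2 = ((A[:, None, :] - B[None, :, :]) ** 2).sum(-1)
    return np.exp(-d2 / (2 * sigma ** 2)).mean()

# Two grasps with three contacts each (3-D positions, metres).
g1 = np.array([[0.00, 0.02, 0.10], [0.00, -0.02, 0.10], [0.03, 0.00, 0.08]])
g2 = g1 + 0.005                                            # similar grasp
g3 = np.random.default_rng(0).uniform(-0.1, 0.1, (3, 3))   # unrelated grasp

print(contact_kernel(g1, g2))  # high similarity
print(contact_kernel(g1, g3))  # lower similarity
```

    Such a kernel sidesteps the need for fixed-length contact features: any kernel method (e.g. a Gaussian process or SVM) can then compare manipulations directly by their contact distributions.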

    Space Exploration Robotic Systems - Orbital Manipulation Mechanisms

    In the future, orbital space robots will assist humans in space by constructing and maintaining space modules and structures. Robotic manipulators will play essential roles in orbital operations. This work is devoted to the implemented designs of two different orbital manipulation grippers, developed in collaboration with Thales Alenia Space Italy and the NASA Jet Propulsion Laboratory – California Institute of Technology.

    Consensus on a study phase for an IXV (Intermediate eXperimental Vehicle) successor, a pre-operational vehicle called SPACE RIDER (Space Rider Reusable Integrated Demonstrator for European Return), was recently broadened, as approved during the last EU Ministerial Council. One of the main project tasks consists in developing SPACE RIDER to conduct on-orbit servicing activity with no docking. SPACE RIDER would be provided with a robotic manipulator system (arm and gripper) able to transfer cargo, such as scientific payloads, from low-Earth-orbiting platforms to the SPACE RIDER cargo bay. The platform is part of a space tug designed to move small satellites and other payloads from Low Earth Orbit (LEO) to Geosynchronous Equatorial Orbit (GEO) and vice versa. The assumed housing cargo-bay requirements in terms of volume (< 100 l) and mass (< 50 kg), combined with the required overall arm dimensions (4 m length) and cargo mass (5-30 kg), force the development of an innovative robotic manipulator with a task-oriented end effector. The result is a seven-degree-of-freedom arm, ensuring a high degree of dexterity, and a dedicated end effector designed to grasp the cargo interface. The gripper concept developed consists of a multi-finger hand able to lock both translational and rotational cargo degrees of freedom through an innovative underactuation strategy that limits its mass and volume. A configuration study on the cargo handle interface was performed, together with computer-aided design models and multibody analysis of the whole system to prove its feasibility. Finally, the concept of the system control architecture, the test report and the gripper structural analysis were defined.

    In order to accurately analyze a sample of Martian soil and determine whether life was present on the red planet, many mission concepts have been formulated to reach Mars and bring back a terrain sample. NASA JPL has been studying such mission concepts for many years. This concept is made up of three intermediate mission accomplishments. Mars 2020 is the first mission, envisioned to collect the terrain sample and seal it in sample tubes. These sealed sample tubes could be inserted in a spherical envelope named the Orbiting Sample (OS). A Mars Ascent Vehicle (MAV) is the notional rocket designed to lift this sample off Mars, and a Rendezvous Orbiting Capture System (ROCS) is the mission conceived to bring the sample back to Earth through the Earth Entry Vehicle (EEV). MOSTT is the technical work study to create new concepts able to capture and reorient an OS. This maneuver is particularly important because the incoming orientation of an OS is unknown, and the system needs to be able to capture an OS, reorient it (2 rotational degrees of freedom), and retain it (3 translational degrees of freedom and 2 rotational ones). Planetary protection requirements generate a need to enclose an OS in two shells and to seal it through a process called Break-The-Chain (BTC). Since the EEV returns to Earth, the orientation and position of the tubes have to be known in detail to prevent any possible damage during the hard Earth landing (acceleration of ∼1300 g). Tests and analysis report that, in order for the hermetic seals of the sample tubes to survive the impact, they should be located above the OS equator. Due to other system uncertainties, an OS may also need to be properly reoriented before being inserted inside the EEV. Planetary protection issues and landing safety are critical mission points and impose potentially strict requirements on the MOSTT system configuration. This task deals with the concept, design, and testbed realization of an innovative electro-mechanical system to reorient an OS consistent with all the necessary potential requirements. One such electro-mechanical system consists of a controlled, motorized wiper that sweeps the entire OS surface until it engages with a pin on the OS surface and brings it to the final home location, thereby reorienting the OS. This mechanism is expected to be robust to the incoming OS orientation and to reorient the OS to the desired position using only one rotational degree-of-freedom actuator.

    Generative and predictive models for robust manipulation

    Probabilistic modelling of manipulation skills, perception and uncertainty poses many challenges at different stages of a typical robot manipulation pipeline. This thesis is about devising algorithms and strategies for improving robustness in object manipulation skills acquired from demonstration and derived from learnt physical models in non-prehensile tasks such as pushing. Manipulation skills can be made robust in different ways: first, by improving time performance for grasp synthesis; second, by employing active perceptual strategies that exploit generated grasp action hypotheses to gather task-relevant information for grasp generation more efficiently; and finally, by exploiting predictive uncertainty in learnt physical models. Hence, robust manipulation skills emerge from the interplay of a triad of capabilities: generative modelling for action synthesis, active perception, and learning and exploiting uncertainty in physical interactions. This thesis addresses these problems by:

    • showing how parametric models for approximating multimodal distributions can be used as a computationally faster method for generative grasp synthesis (sketched below);

    • exploiting generative methods for dexterous grasp synthesis and investigating how active vision strategies can be applied to improve grasp execution safety and success rate while using fewer camera views of an object for grasp generation;

    • outlining methods to model and exploit predictive uncertainty from learnt forward models to achieve robust, uncertainty-averse non-prehensile manipulation, such as push manipulation.

    In particular, the thesis: (i) presents a framework for generative grasp synthesis with applications for real-time grasp synthesis suitable for multi-fingered robot hands; (ii) describes a sensorisation method for under-actuated hands, such as the Pisa/IIT SoftHand, which allows us to deploy the aforementioned grasp synthesis framework on this type of robotic hand; (iii) provides an active vision approach for view selection that makes use of generative grasp synthesis methods to perform perceptual predictions in order to improve grasp performance, taking into account grasp execution safety and contact information; and (iv) finally, going beyond prehensile skills, provides an approach to model and exploit predictive uncertainty from learnt physics applied to push manipulation. Experimental results are presented in simulation and on real robot platforms to validate the proposed methods.
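    The first bullet, parametric models of multimodal distributions for fast generative grasp synthesis, can be illustrated with a Gaussian mixture over grasp poses: fit once offline, then draw new grasp hypotheses by cheap sampling. The training data, the 6-D pose encoding, and the two-component mixture are invented for this sketch; the thesis' actual model and features may differ.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

# Hypothetical training set: grasp poses that previously succeeded on
# similar objects, encoded as (x, y, z, roll, pitch, yaw).
rng = np.random.default_rng(0)
successful = np.vstack([
    rng.normal([0.0, 0.00, 0.10, 0, 0.0, 0.0], 0.01, (50, 6)),   # top grasps
    rng.normal([0.0, 0.05, 0.05, 0, 1.2, 0.0], 0.01, (50, 6)),   # side grasps
])

# Fit a parametric multimodal model of the grasp-pose distribution ...
gmm = GaussianMixture(n_components=2, covariance_type="full").fit(successful)

# ... then generate new grasp hypotheses by sampling, which is cheap
# enough for the real-time synthesis the abstract mentions.
samples, _ = gmm.sample(10)
best = samples[np.argmax(gmm.score_samples(samples))]  # most likely sample
print(best)
```

    The design point is that sampling from a fitted parametric model costs microseconds, whereas re-running a full non-parametric or optimization-based synthesis per grasp does not, which is the speed argument the abstract makes.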

    Extreme Violence On/In the Stage: Witnessing between History and Memory

    Extreme violence shows itself. It bursts through our screens. It surfs from one style and medium to another: news reports, documentaries, fiction, arts of all kinds. Yet theatre distinguishes itself from this mêlée while constantly returning to the subject, differently. Linked from its origins to the representation of cruelty, and having "miraculously" escaped the often sterile polemics on the interdiction (or not)... of representing the Holocaust, theatre still deals with extreme violence today with the same youthfulness, relentlessly pursuing the articulation of ethics and aesthetics.