250 research outputs found

    Whole-Hand Robotic Manipulation with Rolling, Sliding, and Caging

    Traditional manipulation planning and modeling rely on strong assumptions about contact. Specifically, it is common to assume that contacts are fixed and do not slide. This assumption ensures that objects are stably grasped during every step of the manipulation, avoiding ejection. However, it also limits achievable manipulation to the feasible motion of the closed-loop kinematic chains formed by the object and fingers. Relaxing contact constraints and allowing sliding has been shown to enhance dexterity, but in order to manipulate safely with shifting contacts, other safeguards must be used to protect against ejection. “Caging manipulation,” in which the object is geometrically trapped by the fingers, can be employed to guarantee that an object never leaves the hand, regardless of constantly changing contact conditions. Mechanical compliance and underactuated joint coupling, or carefully chosen design parameters, can passively create a caging grasp that protects against accidental ejection while all parts of the hand manipulate the object. With passive ejection avoidance, hand control schemes can be kept very simple while still accomplishing manipulation. In place of complex control, better design can improve manipulation capability through smart choices of parameters such as phalanx length, joint stiffness, joint coupling scheme, finger frictional properties, and actuator mode of operation. I will present an approach for modeling fully actuated and underactuated whole-hand manipulation with shifting contacts, show results demonstrating the relationship between design parameters and manipulation metrics, and show how this can produce highly dexterous manipulators.
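    To make the caging idea concrete, the sketch below checks whether a disc-shaped object can escape through any gap between the circular cross-sections of the fingers surrounding it. This is an illustrative simplification, not the talk's actual model: the object and fingers are reduced to 2D circles, the fingers are assumed to form a closed convex ring around the object, and all dimensions are hypothetical.

    ```python
    import math

    def disc_is_caged(object_radius, finger_centres, finger_radius):
        """Toy 2D caging test (hypothetical geometry): a disc is trapped by a
        closed convex ring of circular finger cross-sections if every gap
        between neighbouring fingers is too narrow for the disc to pass through."""
        n = len(finger_centres)
        for i in range(n):
            x1, y1 = finger_centres[i]
            x2, y2 = finger_centres[(i + 1) % n]        # neighbouring finger in the ring
            centre_gap = math.hypot(x2 - x1, y2 - y1)
            free_gap = centre_gap - 2.0 * finger_radius  # clearance between finger surfaces
            if free_gap >= 2.0 * object_radius:          # disc could slip through this gap
                return False
        return True

    # Three 10 mm diameter fingers on a 22 mm radius circle around a 30 mm diameter disc
    fingers = [(0.022 * math.cos(a), 0.022 * math.sin(a))
               for a in (0.0, 2.0 * math.pi / 3.0, 4.0 * math.pi / 3.0)]
    print(disc_is_caged(object_radius=0.015, finger_centres=fingers, finger_radius=0.005))  # True
    ```

    In this toy setting the caging question reduces to a purely geometric clearance check, which illustrates why a caging guarantee can hold even while contact conditions keep changing.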

    Towards a Realistic and Self-Contained Biomechanical Model of the Hand

    Workshop on "Robotic assembly of 3D MEMS".

    Proceedings of a workshop proposed at IEEE IROS 2007. The increasing functionality of MEMS often requires integrating the various technologies used for mechanical, optical, and electronic subsystems into a single system. These technologies usually have process incompatibilities, so the complete microsystem cannot be obtained monolithically and therefore requires microassembly steps. Microassembly of MEMS from micrometric components is one of the most promising approaches to achieving high-performance MEMS. Moreover, microassembly also makes it possible to develop suitable MEMS packaging as well as 3D components, whereas microfabrication technologies are usually limited to 2D and "2.5D" components. The study of microassembly methods is consequently of major importance for the growth of MEMS technologies. Two approaches are currently being developed for microassembly: self-assembly and robotic microassembly. In the first, assembly is highly parallel, but efficiency and flexibility remain low. The robotic approach has the potential to achieve precise and reliable assembly with high flexibility. The proposed workshop focuses on this second approach and takes stock of the corresponding microrobotic issues. Beyond microfabrication technologies, performing MEMS microassembly requires micromanipulation strategies, an understanding of microworld dynamics, and attachment technologies. The design and fabrication of the microrobot end-effectors, as well as of the assembled micro-parts, rely on microfabrication technologies. New micromanipulation strategies are also necessary to handle and position micro-parts with sufficiently high accuracy during assembly. The dynamic behaviour of micrometric objects has to be studied and controlled as well. Finally, after positioning the micro-part, attachment technologies are necessary.
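    The emphasis on microworld dynamics comes from the scale effect: below roughly a millimetre, surface forces such as van der Waals adhesion, capillarity, and electrostatics dominate gravity, which complicates release and positioning. The back-of-the-envelope sketch below is not from the workshop material; the Hamaker constant, separation distance, and silicon density are assumed typical values used to compare a sphere's weight with a sphere-plane van der Waals estimate F = A*R / (6*z^2) at a few sizes.

    ```python
    import math

    # Assumed, order-of-magnitude constants for a silicon sphere near a flat surface
    HAMAKER = 1e-19        # J, typical Hamaker constant for solids in air
    SEPARATION = 0.4e-9    # m, assumed minimum contact separation
    DENSITY_SI = 2330.0    # kg/m^3
    G = 9.81               # m/s^2

    for radius in (1e-3, 100e-6, 10e-6):   # 1 mm, 100 um, 10 um
        weight = DENSITY_SI * (4.0 / 3.0) * math.pi * radius**3 * G
        f_vdw = HAMAKER * radius / (6.0 * SEPARATION**2)   # sphere-plane van der Waals estimate
        print(f"R = {radius * 1e6:6.1f} um   weight = {weight:.2e} N   "
              f"vdW ~ {f_vdw:.2e} N   ratio = {f_vdw / weight:.0e}")
    ```

    With these assumptions the adhesion estimate already matches the weight at 1 mm and exceeds it by orders of magnitude at 10 um, which is why attachment and controlled release are treated as first-class problems in robotic microassembly.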

    Object Manipulation and Grip Force Control Using Tactile Sensors

    This dissertation describes a new type of tactile sensor and an improved dynamic tactile sensing approach that provides a regularly updated, accurate estimate of the minimum applied force for use in controlling gripper manipulation. A pre-slip sensing algorithm is proposed and implemented on a two-finger robot gripper. An algorithm that can discriminate between types of contact surface and recognize objects at the contact stage is also proposed. A technique for recognizing objects using tactile sensor arrays and a method for classifying grasped objects based on quadric surface parameters are described. Tactile arrays can recognize surface types on contact, making it possible for a tactile system to recognize translation, rotation, and scaling of an object independently.
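    The dissertation's exact quadric-surface parameters are not spelled out in this abstract, but the general idea can be sketched as follows: fit a quadric to the indentation profile measured by a tactile array and use the fitted coefficients as a compact shape descriptor, since the curvature terms separate spheres, cylinders, and flat faces. Everything below (array size, the synthetic contact profile, the fitting model) is an illustrative assumption.

    ```python
    import numpy as np

    def fit_quadric(x, y, z):
        """Least-squares fit of z = a*x^2 + b*y^2 + c*x*y + d*x + e*y + f
        to tactile-array indentation data; the returned coefficients serve
        as a compact shape descriptor for classification."""
        A = np.column_stack([x**2, y**2, x * y, x, y, np.ones_like(x)])
        coeffs, *_ = np.linalg.lstsq(A, z, rcond=None)
        return coeffs

    # Synthetic 8x8 taxel array pressed against a spherical cap (illustrative only)
    xs, ys = np.meshgrid(np.linspace(-1, 1, 8), np.linspace(-1, 1, 8))
    zs = 1.0 - 0.3 * (xs**2 + ys**2)            # idealised indentation profile
    coeffs = fit_quadric(xs.ravel(), ys.ravel(), zs.ravel())
    print(np.round(coeffs, 3))                   # curvature terms a, b, c distinguish
                                                 # spheres, cylinders and flat faces
    ```

    One useful property of such a descriptor: the quadratic coefficients are unchanged by in-plane translation of the contact, and the eigenvalues of the quadratic form are unchanged by in-plane rotation, which helps separate pose changes from object identity.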

    Learning-based robotic manipulation for dynamic object handling : a thesis presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Mechatronic Engineering at the School of Food and Advanced Technology, Massey University, Turitea Campus, Palmerston North, New Zealand

    Figures are re-used in this thesis with permission of their respective publishers or under a Creative Commons licence.

    Recent trends have shown that the lifecycles of modern products are shortening and their production volumes are shrinking. Consequently, many manufacturers subject to frequent change prefer flexible and reconfigurable production systems. Such schemes are often achieved by means of manual assembly, as conventional automated systems are perceived as lacking flexibility. Production lines that incorporate human workers are particularly common within consumer electronics and small appliances. Artificial intelligence (AI) is a possible avenue to achieve smart robotic automation in this context. In this research it is argued that a robust, autonomous object handling process plays a crucial role in future manufacturing systems that incorporate robotics, and is key to further closing the gap between manual and fully automated production.

    Grasping novel objects is a difficult task, confounded by many factors including object geometry, weight distribution, friction coefficients and deformation characteristics. Sensing and actuation accuracy can also significantly impact manipulation quality. A further challenge is understanding the relationship between these factors, a specific grasping strategy, the robotic arm and the employed end-effector. Manipulation has been a central research topic within robotics for many years. Some works focus on design, i.e. specifying a gripper-object interface such that the effects of imprecise gripper placement and other confounding control-related factors are mitigated. Many universal robotic gripper designs have been considered, including three-fingered grippers, anthropomorphic grippers, granular jamming end-effectors and underactuated mechanisms. While such approaches have maintained some interest, contemporary works predominantly utilise machine learning in conjunction with imaging technologies and generic force-closure end-effectors. Neural networks that utilise supervised and unsupervised learning schemes with an RGB or RGB-D input make up the bulk of publications within this field. Though many solutions have been studied, automatically generating a robust grasp configuration for objects not known a priori remains an open problem. Part of this issue relates to a lack of objective performance metrics to quantify the effectiveness of a solution; such metrics have traditionally driven the direction of community focus by highlighting gaps in the state of the art.

    This research employs monocular vision and deep learning to generate, and select from, a set of hypothesis grasps. A significant portion of this research relates to the process by which a final grasp is selected. Grasp synthesis is achieved by sampling the workspace using convolutional neural networks trained to recognise prospective grasp areas. Each potential pose is evaluated by the proposed method in conjunction with other input modalities, such as load cells and an alternate perspective. To overcome human bias and build upon traditional metrics, scores are established to objectively quantify the quality of an executed grasp trial. Learning frameworks that aim to maximise these scores are employed in the selection process to improve performance. The proposed methodology and associated metrics are empirically evaluated. A physical prototype system was constructed, employing a Dobot Magician robotic manipulator, vision enclosure, imaging system, conveyor, sensing unit and control system.
    Over 4,000 trials were conducted utilising 100 objects. Experimentation showed that robotic manipulation quality could be improved by 10.3% when grasp selection optimised for the proposed metrics, as quantified by a metric related to translational error. Trials further demonstrated a grasp success rate of 99.3% for known objects and 98.9% for objects for which a priori information was unavailable. For unknown objects, this equated to an improvement of approximately 10% relative to other similar methodologies in the literature. A 5.3% reduction in grasp rate was observed when the metrics were removed as selection criteria for the prototype system. The system operated at approximately 1 Hz when contemporary hardware was employed. Experimentation demonstrated that selecting a grasp pose based on the proposed metrics improved grasp rates by up to 4.6% for known objects and 2.5% for unknown objects, compared to selecting for grasp rate alone.

    This project was sponsored by the Richard and Mary Earle Technology Trust, the Ken and Elizabeth Powell Bursary and the Massey University Foundation. Without the financial support provided by these entities, it would not have been possible to construct the physical robotic system used for testing and experimentation. This research adds to the field of robotic manipulation, contributing to topics on grasp-induced error analysis, post-grasp error minimisation, grasp synthesis framework design and general grasp synthesis. Three journal publications and one IEEE Xplore paper have been published as a result of this research.
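    As a rough illustration of the selection stage described above, the sketch below ranks a set of hypothesis grasps by a weighted combination of a network confidence score and a score from a second modality. The data structure, weights, and score definitions are placeholders, not the thesis's published formulation.

    ```python
    from dataclasses import dataclass

    @dataclass
    class GraspHypothesis:
        x: float           # image-plane position, px
        y: float
        angle: float       # gripper rotation, rad
        confidence: float  # CNN score for this grasp region
        aux_score: float   # score from a second modality (e.g. load cell or second view)

    def select_grasp(hypotheses, w_cnn=0.7, w_aux=0.3):
        """Pick the hypothesis with the highest combined score.
        A minimal stand-in for the selection stage: the weights and
        score definitions here are illustrative only."""
        return max(hypotheses, key=lambda g: w_cnn * g.confidence + w_aux * g.aux_score)

    candidates = [
        GraspHypothesis(120, 80, 0.4, confidence=0.91, aux_score=0.55),
        GraspHypothesis(132, 74, 1.1, confidence=0.86, aux_score=0.90),
    ]
    print(select_grasp(candidates))
    ```

    The thesis goes further than fixed weights: its learning frameworks select grasps so as to maximise the objective scores assigned to executed grasp trials.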

    Annals of Scientific Society for Assembly, Handling and Industrial Robotics 2021

    This Open Access proceedings presents a good overview of the current research landscape of assembly, handling and industrial robotics. The objective of the MHI Colloquium is successful networking at both the academic and the management level. The colloquium therefore focuses on academic exchange at a high level in order to disseminate the obtained research results, identify synergy effects and trends, connect the actors in person and, in conclusion, strengthen the research field as well as the MHI community. In addition, there is the possibility to become acquainted with the organizing institute. The primary audience is formed by members of the scientific society for assembly, handling and industrial robotics (WGMHI).

    Computing gripping points in 2D parallel surfaces via polygon clipping

    Grasping, Perching, And Visual Servoing For Micro Aerial Vehicles

    Micro Aerial Vehicles (MAVs) have seen a dramatic growth in the consumer market because of their ability to provide new vantage points for aerial photography and videography. However, there is little consideration for physical interaction with the environment surrounding them. Onboard manipulators are absent, and onboard perception, if present, is used to avoid obstacles and maintain a minimum distance from them. There are many applications, however, which would benefit greatly from aerial manipulation or flight in close proximity to structures. This work is focused on facilitating these types of close interactions between quadrotors and surrounding objects. We first explore high-speed grasping, enabling a quadrotor to quickly grasp an object while moving at a high relative velocity. Next, we discuss planning and control strategies, empowering a quadrotor to perch on vertical surfaces using a downward-facing gripper. Then, we demonstrate that such interactions can be achieved using only onboard sensors by incorporating vision-based control and vision-based planning. In particular, we show how a quadrotor can use a single camera and an Inertial Measurement Unit (IMU) to perch on a cylinder. Finally, we generalize our approach to consider objects in motion, and we present relative pose estimation and planning, enabling tracking of a moving sphere using only an onboard camera and IMU.
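    One ingredient of monocular relative pose estimation for a sphere of known size is recovering range from its apparent radius in the image. The sketch below shows that geometric step for an idealised, calibrated pinhole camera with the sphere near the optical axis; the focal length and dimensions are made-up values, and the dissertation's actual estimator additionally fuses IMU measurements and handles off-axis projection.

    ```python
    import math

    def sphere_range_from_image(apparent_radius_px, focal_length_px, sphere_radius_m):
        """Distance from the camera centre to the centre of a sphere of known
        radius, assuming a calibrated pinhole camera and a near-axis sphere.
        The silhouette subtends a half-angle theta = atan(r_px / f); the sphere
        centre then lies at d = R / sin(theta)."""
        theta = math.atan(apparent_radius_px / focal_length_px)
        return sphere_radius_m / math.sin(theta)

    # Hypothetical numbers: a 0.10 m radius sphere imaged with f = 600 px appears
    # with a 40 px silhouette radius, placing its centre about 1.5 m away.
    print(sphere_range_from_image(40.0, 600.0, 0.10))
    ```

    Fusing this bearing-plus-range measurement with IMU propagation is what allows the relative state of the moving sphere to be tracked between camera frames.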