    Active vision for dexterous grasping of novel objects

    How should a robot direct active vision so as to ensure reliable grasping? We answer this question for the case of dexterous grasping of unfamiliar objects. By dexterous grasping we simply mean grasping by any hand with more than two fingers, such that the robot has some choice about where to place each finger. Such grasps typically fail in one of two ways: either unmodelled objects in the scene cause collisions, or the object reconstruction is insufficient to ensure that the grasp points provide stable force closure. These problems can be solved more easily if active sensing is guided by the anticipated actions. Our approach has three stages. First, we take a single view and generate candidate grasps from the resulting partial object reconstruction. Second, we drive the active vision approach to maximise surface reconstruction quality around the planned contact points; during this phase, the anticipated grasp is continually refined. Third, we direct gaze to improve the safety of the planned reach-to-grasp trajectory. We show, on a dexterous manipulator with a camera on the wrist, that our approach (80.4% success rate) outperforms a randomised algorithm (64.3% success rate).
    Comment: IROS 2016. Supplementary video: https://youtu.be/uBSOO6tMzw
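
    As a rough sketch of stage two (directing the camera to improve reconstruction around the planned contacts), the Python toy below picks the candidate view closest to the contact region. The distance heuristic and all arrays are illustrative stand-ins, not the authors' method:

        import numpy as np

        # Toy next-best-view selection: prefer candidate camera poses near
        # the planned fingertip contacts. A real system would instead
        # estimate reconstruction quality, e.g. by ray-casting against the
        # partial surface model.
        rng = np.random.default_rng(0)
        contact_points = rng.random((3, 3))   # planned fingertip contacts (x, y, z)
        views = rng.random((8, 3))            # candidate wrist-camera positions

        def expected_gain(view: np.ndarray) -> float:
            # Closer views of the contact region are assumed to image it better.
            return -float(np.linalg.norm(contact_points - view, axis=1).mean())

        next_view = max(views, key=expected_gain)
        print("next camera view:", np.round(next_view, 2))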

    Robots for Exploration, Digital Preservation and Visualization of Archeological Sites

    Monitoring and conservation of archaeological sites are important activities necessary to prevent damage to, or to perform restoration on, cultural heritage. Standard techniques, like mapping and digitizing, are typically used to document the status of such sites. While these tasks are normally accomplished manually by humans, this is not possible in hard-to-access areas. For example, due to the possibility of structural collapses, underground tunnels like catacombs are considered highly unstable environments. Moreover, they are full of the radioactive gas radon, which limits the presence of people to only a few minutes. Recent progress in artificial intelligence and robotics has opened new possibilities for mobile robots to be used in locations that humans are not allowed to enter. The ROVINA project aims at developing autonomous mobile robots to make the monitoring of archaeological sites faster, cheaper and safer. ROVINA will be evaluated on the catacombs of Priscilla (in Rome) and S. Gennaro (in Naples).

    Visuo-Haptic Grasping of Unknown Objects through Exploration and Learning on Humanoid Robots

    This thesis addresses the grasping of unknown objects by humanoid robots. Visual information is combined with haptic exploration to generate grasp hypotheses. Based on simulated training data, a grasp metric is also learned that rates each hypothesis by its probability of success and selects the one with the highest estimated probability. The selected hypothesis is then used to grasp the object with a reactive control strategy. The two core contributions of this work are the haptic exploration of unknown objects and the grasping of unknown objects using a novel data-driven grasp metric.
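
    A minimal sketch of the selection step described above, with a logistic score standing in for the grasp metric learned from simulated data (all features, weights and numbers are illustrative):

        import numpy as np

        # Score each grasp hypothesis with a learned success-probability
        # metric and pick the best one. The random features and weights
        # stand in for real hypothesis descriptors and learned parameters.
        rng = np.random.default_rng(0)
        hypotheses = rng.random((20, 6))   # 20 candidate grasps, 6 features each
        weights = rng.random(6)            # stand-in for the learned metric

        def success_probability(grasp: np.ndarray) -> float:
            return float(1.0 / (1.0 + np.exp(-weights @ grasp)))  # logistic score

        best = max(hypotheses, key=success_probability)
        print(f"estimated success probability: {success_probability(best):.3f}")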

    AcTExplore: Active Tactile Exploration on Unknown Objects

    Tactile exploration plays a crucial role in understanding object structures for fundamental robotics tasks such as grasping and manipulation. However, efficiently exploring such objects using tactile sensors is challenging, primarily due to the large-scale unknown environments and the limited sensing coverage of these sensors. To this end, we present AcTExplore, an active tactile exploration method driven by reinforcement learning for object reconstruction at scale, which automatically explores object surfaces in a limited number of steps. Through sufficient exploration, our algorithm incrementally collects tactile data and reconstructs 3D shapes of the objects, which can serve as a representation for higher-level downstream tasks. Our method achieves an average of 95.97% IoU coverage on unseen YCB objects while being trained only on primitive shapes. Project webpage: https://prg.cs.umd.edu/AcTExplore
    Comment: 8 pages, 6 figures
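
    The IoU coverage figure quoted above compares reconstructed and ground-truth occupancy; below is a self-contained toy of that metric, with random voxel grids standing in for YCB objects:

        import numpy as np

        def iou(recon: np.ndarray, truth: np.ndarray) -> float:
            # Intersection-over-union between two boolean occupancy grids.
            inter = np.logical_and(recon, truth).sum()
            union = np.logical_or(recon, truth).sum()
            return float(inter / union) if union else 1.0

        rng = np.random.default_rng(0)
        truth = rng.random((32, 32, 32)) < 0.1    # ground-truth object voxels
        touched = rng.random(truth.shape) < 0.96  # fraction reached by the sensor
        recon = np.logical_and(truth, touched)    # reconstruction misses ~4%
        print(f"IoU coverage: {iou(recon, truth):.2%}")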

    Past, Present, and Future of Simultaneous Localization And Mapping: Towards the Robust-Perception Age

    Simultaneous Localization and Mapping (SLAM) consists in the concurrent construction of a model of the environment (the map) and the estimation of the state of the robot moving within it. The SLAM community has made astonishing progress over the last 30 years, enabling large-scale real-world applications and witnessing a steady transition of this technology to industry. We survey the current state of SLAM. We start by presenting what is now the de-facto standard formulation for SLAM. We then review related work, covering a broad set of topics including robustness and scalability in long-term mapping, metric and semantic representations for mapping, theoretical performance guarantees, active SLAM and exploration, and other new frontiers. This paper simultaneously serves as a position paper and tutorial for those who are users of SLAM. By looking at the published research with a critical eye, we delineate open challenges and new research issues that still deserve careful scientific investigation. The paper also contains the authors' take on two questions that often animate discussions during robotics conferences: Do robots need SLAM? And is SLAM solved?
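
    For reference, the de-facto standard formulation referred to above is maximum-a-posteriori estimation over a factor graph, written roughly as

        \mathcal{X}^\star = \operatorname*{argmax}_{\mathcal{X}} \, p(\mathcal{X} \mid \mathcal{Z})
                          = \operatorname*{argmin}_{\mathcal{X}} \, \sum_{k} \lVert h_k(\mathcal{X}_k) - z_k \rVert^2_{\Sigma_k},

    where \mathcal{X} collects the robot poses and map variables, z_k are the measurements with measurement models h_k, and the Mahalanobis norm assumes Gaussian noise with covariance \Sigma_k. Nonlinear least-squares solvers over this objective underpin most modern SLAM back-ends.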

    3D Shape Perception from Monocular Vision, Touch, and Shape Priors

    Perceiving accurate 3D object shape is important for robots to interact with the physical world. Current research along this direction has primarily relied on visual observations. Vision, however useful, has inherent limitations due to occlusions and 2D-3D ambiguities, especially for perception with a monocular camera. In contrast, touch provides precise local shape information, though its efficiency for reconstructing the entire shape can be low. In this paper, we propose a novel paradigm that efficiently perceives accurate 3D object shape by incorporating visual and tactile observations, as well as prior knowledge of common object shapes learned from large-scale shape repositories. We use vision first, applying neural networks with learned shape priors to predict an object's 3D shape from a single-view color image. We then use tactile sensing to refine the shape: the robot actively touches the object regions where the visual prediction has high uncertainty. Our method efficiently builds the 3D shape of common objects from a color image and a small number of tactile explorations (around 10). Our setup is easy to apply and has the potential to help robots better perform grasping or manipulation tasks on real-world objects.
    Comment: IROS 2018. The first two authors contributed equally to this work.
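
    A toy version of the touch-selection loop described above: repeatedly probe the region where the visual prediction is most uncertain. All quantities are random stand-ins for the network's per-region estimates:

        import numpy as np

        rng = np.random.default_rng(1)
        uncertainty = rng.random(100)  # per-region uncertainty from the vision stage
        true_depth = rng.random(100)   # actual local shape (unknown to the robot)
        estimate = true_depth + rng.normal(0.0, uncertainty)  # noisy visual estimate

        for _ in range(10):                  # "around 10" tactile explorations
            i = int(np.argmax(uncertainty))  # most uncertain region
            estimate[i] = true_depth[i]      # touch yields precise local shape
            uncertainty[i] = 0.0             # region resolved
        print(f"max remaining uncertainty: {uncertainty.max():.3f}")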

    Dexterous manipulation of unknown objects using virtual contact points

    The manipulation of unknown objects is a problem of special interest in robotics, since it is not always possible to have exact models of the objects with which the robot interacts. This paper presents a simple strategy to manipulate unknown objects using a robotic hand equipped with tactile sensors. The hand configurations that allow the rotation of an unknown object are computed using only tactile and kinematic information obtained during the manipulation process, reasoning about the desired and actual positions of the fingertips. The desired fingertip positions are not physically reachable, since they lie in the interior of the manipulated object; they are therefore virtual positions with associated virtual contact points. The proposed approach was satisfactorily validated using three fingers of an anthropomorphic robotic hand (Allegro Hand), with the original fingertips replaced by tactile sensors (WTS-FT). In the experimental validation, several everyday objects with different shapes were successfully manipulated, rotating them without needing to know their shape or any other physical property.
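
    A minimal numeric illustration of the virtual-contact-point idea, with a sphere standing in for the unknown object: the fingertip target is placed slightly inside the surface, so the finger keeps pressing until the tactile sensor reports contact. Geometry and numbers are hypothetical:

        import numpy as np

        center, radius = np.zeros(3), 0.05      # stand-in object: a 5 cm-radius sphere
        fingertip = np.array([0.0, 0.0, 0.08])  # current fingertip position
        depth = 0.005                           # virtual penetration depth (5 mm)

        to_center = center - fingertip
        direction = to_center / np.linalg.norm(to_center)
        # Target lies `depth` beyond the surface, i.e. inside the object.
        virtual_target = fingertip + direction * (np.linalg.norm(to_center) - radius + depth)
        print("virtual contact target:", np.round(virtual_target, 4))  # -> [0. 0. 0.045]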

    In-Hand Manipulation of Unknown Objects with Tactile Sensing for Insertion

    In this paper, we present a method to manipulate unknown objects in-hand using tactile sensing, without relying on a known object model. In many cases, vision-only approaches may not be feasible, for example due to occlusion in cluttered spaces. We address this limitation by introducing a method that reorients unknown objects using tactile sensing, incrementally building a probabilistic estimate of the object shape and pose during task-driven manipulation. Our approach uses Bayesian optimization to balance exploration of the global object shape with efficient task completion. To demonstrate the effectiveness of our method, we apply it to a simulated Tactile-Enabled Roller Grasper, a gripper that rolls objects in hand while collecting tactile data. We evaluate our method on an insertion task with randomly generated objects and find that it reliably reorients objects while significantly reducing exploration time.
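
    A sketch of the exploration/exploitation trade-off described above, using a UCB-style acquisition as a stand-in for the paper's Bayesian optimization (all values are illustrative):

        import numpy as np

        rng = np.random.default_rng(2)
        mean = rng.random(50)  # expected task progress per candidate probe
        std = rng.random(50)   # uncertainty of the shape/pose estimate there
        kappa = 1.0            # exploration weight

        # Upper-confidence-bound acquisition: high expected progress OR
        # high shape uncertainty makes a probe attractive.
        acquisition = mean + kappa * std
        best = int(np.argmax(acquisition))
        print(f"next probe: region {best} (score {acquisition[best]:.2f})")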