
    Dynamic Active Constraints for Surgical Robots using Vector Field Inequalities

    Robotic assistance allows surgeons to perform dexterous and tremor-free procedures, but robotic aid is still underrepresented in procedures with constrained workspaces, such as deep brain neurosurgery and endonasal surgery. In these procedures, surgeons have restricted vision to areas near the surgical tooltips, which increases the risk of unexpected collisions between the shafts of the instruments and their surroundings. In this work, our vector-field-inequalities method is extended to provide dynamic active constraints to any number of robots and moving objects sharing the same workspace. The method is evaluated with experiments and simulations in which robot tools have to avoid collisions autonomously and in real time in a constrained endonasal surgical environment. Simulations show that with our method the combined trajectory error of two robotic systems is optimal. Experiments using a real robotic system show that the method can autonomously prevent collisions between the moving robots themselves and between the robots and the environment. Moreover, the framework is also successfully verified under teleoperation with tool-tissue interactions. (Comment: accepted to T-RO 2019, 19 pages.)
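    The dynamic active constraints above are, at their core, velocity-level inequalities that stop a signed distance to a restricted zone from shrinking too fast. Below is a minimal Python sketch of one way such a vector-field-inequality constraint can sit inside a least-squares velocity controller; the names (J_task, J_dist, eta) and the SLSQP solver are illustrative assumptions, not the paper's implementation.

    import numpy as np
    from scipy.optimize import minimize

    def vfi_joint_velocities(J_task, v_des, J_dist, dist, eta=2.0):
        # Minimise ||J_task @ qdot - v_des||^2 subject to J_dist @ qdot >= -eta * dist.
        # The inequality keeps each signed distance 'dist' to a restricted zone from
        # decreasing faster than an exponential decay, so the tool slows down near the
        # zone instead of crossing into it (illustrative sketch, not the paper's code).
        n = J_task.shape[1]

        def cost(qdot):
            e = J_task @ qdot - v_des
            return float(e @ e)

        cons = [{"type": "ineq",
                 "fun": lambda qdot, Jd=Jd, d=d: float(Jd @ qdot + eta * d)}
                for Jd, d in zip(J_dist, dist)]

        res = minimize(cost, np.zeros(n), method="SLSQP", constraints=cons)
        return res.x

    # Toy use: a 3-DoF point robot commanded straight toward a zone 5 cm away slows
    # to the fastest approach speed the constraint allows.
    qdot = vfi_joint_velocities(np.eye(3), np.array([0.0, 0.0, -1.0]),
                                [np.array([0.0, 0.0, 1.0])], [0.05])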

    Visual servoing of aerial manipulators

    The final publication is available at link.springer.com. This chapter describes the classical techniques for controlling an aerial manipulator by means of visual information and presents an uncalibrated image-based visual servo method to drive the aerial vehicle. The proposed technique has the advantage that it makes only mild assumptions about the camera's principal point and skew values, and it does not require prior knowledge of the focal length, in contrast to traditional image-based approaches. Peer reviewed. Postprint (author's final draft).
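    Because the servo above is image-based and uncalibrated, one natural realisation is a loop that estimates the image Jacobian online rather than deriving it from camera intrinsics. The sketch below uses a Broyden-style rank-1 update and a pseudo-inverse control law; the class name, gains, and update rule are assumptions for illustration, not the chapter's exact formulation.

    import numpy as np

    class UncalibratedIBVS:
        def __init__(self, n_features, n_dof, gain=0.5, alpha=0.1):
            # Rough initial Jacobian guess; no focal length or principal point needed.
            self.J = 0.1 * np.random.randn(n_features, n_dof)
            self.gain = gain      # proportional servo gain
            self.alpha = alpha    # Broyden update step size

        def update_jacobian(self, d_features, d_motion):
            # Rank-1 correction making J consistent with the latest observed feature
            # change d_features produced by the commanded motion d_motion.
            denom = float(d_motion @ d_motion) + 1e-9
            self.J += self.alpha * np.outer(d_features - self.J @ d_motion, d_motion) / denom

        def command(self, features, target):
            # Drive the image features toward the target with the current estimate.
            error = np.asarray(features) - np.asarray(target)
            return -self.gain * np.linalg.pinv(self.J) @ error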

    Manipulating Highly Deformable Materials Using a Visual Feedback Dictionary

    The complex physical properties of highly deformable materials such as clothes pose significant challenges for autonomous robotic manipulation systems. We present a novel visual feedback dictionary-based method for manipulating deformable objects towards a desired configuration. Our approach is based on visual servoing, and we use an efficient technique to extract key features from the RGB sensor stream in the form of a histogram of deformable model features. These histogram features serve as high-level representations of the state of the deformable material. Next, we collect manipulation data and use a visual feedback dictionary that maps the velocity in the high-dimensional feature space to the velocity of the robotic end-effectors for manipulation. We have evaluated our approach on a set of complex manipulation tasks and human-robot manipulation tasks on different cloth pieces with varying material characteristics. (Comment: the video is available at goo.gl/mDSC4)
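    The feedback dictionary described above maps velocities in the histogram-feature space to end-effector velocities. A minimal sketch of that lookup follows, assuming a simple k-nearest-neighbour average over recorded (feature velocity, end-effector velocity) pairs; the class and the averaging rule are illustrative, not the paper's exact procedure.

    import numpy as np

    class VisualFeedbackDictionary:
        def __init__(self):
            self.feature_vels = []    # feature-space velocities seen during demonstrations
            self.effector_vels = []   # end-effector velocities that produced them

        def add(self, feature_vel, effector_vel):
            self.feature_vels.append(np.asarray(feature_vel, dtype=float))
            self.effector_vels.append(np.asarray(effector_vel, dtype=float))

        def query(self, desired_feature_vel, k=3):
            # Average the end-effector velocities of the k closest dictionary entries.
            F = np.stack(self.feature_vels)
            d = np.linalg.norm(F - np.asarray(desired_feature_vel), axis=1)
            idx = np.argsort(d)[:k]
            return np.mean([self.effector_vels[i] for i in idx], axis=0)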

    Independent Motion Detection with Event-driven Cameras

    Unlike standard cameras that send intensity images at a constant frame rate, event-driven cameras asynchronously report pixel-level brightness changes, offering low latency and high temporal resolution (both on the order of microseconds). As such, they have great potential for fast and low-power vision algorithms for robots. Visual tracking, for example, is easily achieved even for very fast stimuli, as only moving objects cause brightness changes. However, when the camera is mounted on a moving robot, the same tracking problem becomes confounded by background clutter events due to the robot's ego-motion. In this paper, we propose a method for segmenting the motion of an independently moving object for event-driven cameras. Our method detects and tracks corners in the event stream and learns the statistics of their motion as a function of the robot's joint velocities when no independently moving objects are present. During robot operation, independently moving objects are identified by discrepancies between the corner velocities predicted from ego-motion and the measured corner velocities. We validate the algorithm on data collected from the neuromorphic iCub robot. We achieve a precision of ~90% and show that the method is robust to changes in speed of both the head and the target. (Comment: 7 pages, 6 figures.)
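    The detection step above reduces to a discrepancy test between corner velocities predicted from the robot's joint velocities and the velocities actually measured. A hedged sketch follows, assuming a simple least-squares ego-motion model and a fixed sigma threshold (both illustrative choices, not the paper's).

    import numpy as np

    class EgoMotionModel:
        def fit(self, joint_vels, corner_vels):
            # Least-squares map from joint velocities (N x n_joints) to corner image
            # velocities (N x 2), plus the residual spread on static-scene training data.
            X = np.asarray(joint_vels)
            Y = np.asarray(corner_vels)
            self.W, *_ = np.linalg.lstsq(X, Y, rcond=None)
            self.sigma = np.std(Y - X @ self.W, axis=0) + 1e-9

        def is_independent(self, joint_vel, measured_corner_vel, n_sigma=3.0):
            # Flag a corner whose measured velocity disagrees with the ego-motion prediction.
            predicted = np.asarray(joint_vel) @ self.W
            z = np.abs(np.asarray(measured_corner_vel) - predicted) / self.sigma
            return bool(np.any(z > n_sigma))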