3 research outputs found

    Automation of tissue piercing using circular needles and vision guidance for computer aided laparoscopic surgery

    Abstract—Although minimally invasive robotic surgery provides many advantages for patients, such as reduced tissue trauma and shorter hospitalization, complex tasks (e.g. tissue piercing or knot-tying) are still time-consuming, error-prone and fatiguing for the surgeon. Automating these recurrent tasks could greatly reduce total surgery time and disburden the surgeon, allowing them to focus on higher-level challenges. This work tackles the problem of autonomous tissue piercing in robot-assisted laparoscopic surgery with a circular needle and general-purpose surgical instruments. To command the instruments to an incision point, the surgeon uses a laser pointer to indicate the stitching area. Precise positioning of the needle is obtained by means of a switching visual servoing approach, and the subsequent stitch is performed in a circular motion. Index Terms—robot surgery, minimally invasive surgery, tissue piercing, visual servoing
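    The circular stitching motion mentioned in the abstract can be sketched as sampling waypoints along an arc of the needle's own circle. This is a minimal 2-D illustration in the stitching plane; the function name, frame and parameter values are hypothetical, not taken from the paper:

    ```python
    import numpy as np

    def circular_stitch_waypoints(center, radius, entry_angle, sweep, n=20):
        """Waypoints along a circular arc in the stitching plane.

        A circular needle pierces tissue by rotating about its own
        centre, so the tip traces an arc of the needle's radius from
        the entry angle through the commanded sweep.
        """
        angles = np.linspace(entry_angle, entry_angle + sweep, n)
        x = center[0] + radius * np.cos(angles)
        y = center[1] + radius * np.sin(angles)
        return np.stack([x, y], axis=1)

    # Hypothetical example: 12 mm needle radius, half-circle stitch
    wps = circular_stitch_waypoints(center=(0.0, 0.0), radius=0.012,
                                    entry_angle=0.0, sweep=np.pi, n=20)
    print(wps.shape)  # (20, 2)
    ```

    In practice such waypoints would be expressed in the camera or instrument frame resolved by the visual servoing loop; here the frame is left abstract.
    
    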

    Robot learning from demonstration of force-based manipulation tasks

    One of the main challenges in Robotics is to develop robots that can interact with humans in a natural way, sharing the same dynamic and unstructured environments. Such an interaction may be aimed at assisting, helping or collaborating with a human user. To achieve this, the robot must be endowed with a cognitive system that allows it not only to learn new skills from its human partner, but also to refine or improve those already learned. In this context, learning from demonstration appears as a natural and userfriendly way to transfer knowledge from humans to robots. This dissertation addresses such a topic and its application to an unexplored field, namely force-based manipulation tasks learning. In this kind of scenarios, force signals can convey data about the stiffness of a given object, the inertial components acting on a tool, a desired force profile to be reached, etc. Therefore, if the user wants the robot to learn a manipulation skill successfully, it is essential that its cognitive system is able to deal with force perceptions. The first issue this thesis tackles is to extract the input information that is relevant for learning the task at hand, which is also known as the what to imitate? problem. Here, the proposed solution takes into consideration that the robot actions are a function of sensory signals, in other words the importance of each perception is assessed through its correlation with the robot movements. A Mutual Information analysis is used for selecting the most relevant inputs according to their influence on the output space. In this way, the robot can gather all the information coming from its sensory system, and the perception selection module proposed here automatically chooses the data the robot needs to learn a given task. 
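The perception-selection step described above, scoring each sensory channel by its mutual information with the robot motion and keeping only the informative ones, can be sketched as follows. The histogram estimator, the synthetic channel names and the comparison are illustrative assumptions, not the dissertation's implementation:

```python
import numpy as np

def mutual_information(x, y, bins=16):
    """Plug-in (histogram) estimate of the mutual information I(x; y) in nats."""
    pxy, _, _ = np.histogram2d(x, y, bins=bins)
    pxy = pxy / pxy.sum()
    px = pxy.sum(axis=1, keepdims=True)   # marginal p(x)
    py = pxy.sum(axis=0, keepdims=True)   # marginal p(y)
    mask = pxy > 0
    return float(np.sum(pxy[mask] * np.log(pxy[mask] / (px @ py)[mask])))

rng = np.random.default_rng(0)
n = 5000
force = rng.normal(size=n)     # hypothetical force channel driving the action
noise = rng.normal(size=n)     # hypothetical irrelevant sensor channel
motion = 0.8 * force + 0.1 * rng.normal(size=n)   # robot movement output

mi_force = mutual_information(force, motion)
mi_noise = mutual_information(noise, motion)
print(mi_force > mi_noise)  # True: the force channel is the one worth keeping
```

A real perception-selection module would rank all channels by such scores and keep those above a threshold; the point here is only that mutual information separates a channel that drives the motion from one that does not.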
Having selected the relevant input information for the task, it is necessary to represent the human demonstrations in a compact way, encoding the relevant characteristics of the data, for instance sequential information, uncertainty, constraints, etc. This is the next problem addressed in this thesis. A probabilistic learning framework based on hidden Markov models and Gaussian mixture regression is proposed for learning force-based manipulation skills. The outstanding features of this framework are: (i) it is able to deal with the noise and uncertainty of force signals because of its probabilistic formulation, (ii) it exploits the sequential information embedded in the model for managing perceptual aliasing and time discrepancies, and (iii) it takes advantage of task variables to encode those force-based skills where the robot actions are modulated by an external parameter. The resulting learning structure is therefore able to robustly encode and reproduce different manipulation tasks. This thesis then goes a step further by proposing a novel framework for learning impedance-based behaviors from demonstrations. The key aspects here are that this new structure merges vision and force information to encode the data compactly, and that it allows the robot to exhibit different behaviors by shaping its compliance level over the course of the task. This is achieved by a parametric probabilistic model whose Gaussian components are the basis of a statistical dynamical system that governs the robot motion. From the force perceptions, the stiffnesses of the springs composing this system are estimated, allowing the robot to shape its compliance. This approach makes it possible to extend the learning paradigm to fields beyond common trajectory following.
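The Gaussian mixture regression step can be illustrated with a minimal scalar example: each mixture component stores a joint Gaussian over (time, output), and the regression blends the components' conditional means according to how well each component explains the query time. The component values below are invented for illustration and are not from the dissertation:

```python
import numpy as np

def gmr(t, priors, means, covs):
    """Gaussian mixture regression: E[x | t] for scalar input t, scalar output x.

    means[k] = [mu_t, mu_x]; covs[k] is the 2x2 joint covariance of (t, x).
    """
    # Responsibilities h_k(t) proportional to prior_k * N(t; mu_t_k, sigma_tt_k)
    h = np.array([p * np.exp(-0.5 * (t - m[0]) ** 2 / c[0, 0]) / np.sqrt(c[0, 0])
                  for p, m, c in zip(priors, means, covs)])
    h = h / h.sum()
    # Conditional means mu_x_k + sigma_xt_k / sigma_tt_k * (t - mu_t_k)
    cond = np.array([m[1] + c[1, 0] / c[0, 0] * (t - m[0])
                     for m, c in zip(means, covs)])
    return float(h @ cond)

# Two hypothetical components encoding an early and a late phase of a task
priors = [0.5, 0.5]
means = [np.array([0.0, 0.0]), np.array([4.0, 2.0])]
covs = [np.array([[1.0, 0.9], [0.9, 1.0]]), np.eye(2)]

print(round(gmr(0.0, priors, means, covs), 2))  # ≈ 0.0 (early phase dominates)
print(round(gmr(4.0, priors, means, covs), 2))  # ≈ 2.0 (late phase dominates)
```

In the thesis the input is not raw time alone but includes force and task variables, and the components come from a trained HMM; this sketch only shows the regression mechanics.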
    The proposed frameworks are tested in three scenarios, namely (a) the ball-in-box task, (b) drink pouring, and (c) a collaborative assembly. The experimental results evidence the importance of using force perceptions as well as the usefulness and strengths of the proposed methods.
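The compliance shaping described in the second framework can be sketched as a virtual spring-damper law whose stiffness is varied over the task. The gains and the critically damped damping rule below are assumptions for illustration, not the thesis's estimated values:

```python
import numpy as np

def impedance_force(x, x_dot, x_des, xd_dot, k, damping_ratio=1.0):
    """Virtual spring-damper force with time-varying stiffness k.

    Damping is derived from the stiffness (critically damped by
    default, d = 2 * zeta * sqrt(k)), so changing k alone shapes
    the robot's compliance.
    """
    d = 2.0 * damping_ratio * np.sqrt(k)
    return k * (x_des - x) + d * (xd_dot - x_dot)

# Same 1 cm tracking error, two compliance levels:
f_stiff = impedance_force(x=0.0, x_dot=0.0, x_des=0.01, xd_dot=0.0, k=1000.0)
f_soft = impedance_force(x=0.0, x_dot=0.0, x_des=0.01, xd_dot=0.0, k=50.0)
print(f_stiff, f_soft)  # 10.0 0.5
```

A stiff phase (e.g. free-space positioning) produces large corrective forces for the same error, while a compliant phase (e.g. contact with a person or object) yields gently; learning when to be which is the point of the framework.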

    Human-machine skill transfer extended by a scaffolding framework

    Abstract—The term scaffolding, with respect to human education, was first coined in the 1970s, although the basic concept dates back to the 1930s. The main idea is to formalize the superior knowledge of a teacher in a certain way so as to generate support for a trainee. In practice, this concept can be implemented in forms as concrete as a cloze text, which assists pupils, or as a social environment, which facilitates learning of specific tasks. This paper introduces a novel approach to robotic learning by means of such a scaffolding framework. In this case, the scaffolding is constituted by abstract patterns, which facilitate the structuring and segmentation of information during "Learning by Demonstration". The methodology was applied to a real-world scenario of robot-assisted surgery. Index Terms—learning by demonstration, scaffolding, situated learning