
    Polytopic Model Based Interaction Control for Soft Tissue Manipulation

    Reliable force control is one of the key components of modern robotic teleoperation. The performance of these systems in terms of safety and stability largely depends on the controller design, as it must account for various disturbing conditions, such as uncertainties in the model parameters or latency-induced problems. This work presents a polytopic qLPV model derived from a previously verified nonlinear soft tissue model, along with a model-based force control scheme that involves a tensor product polytopic state feedback controller. The derivation is based on the Tensor Product (TP) Model Transformation. The proposed force control scheme is verified and evaluated through numerical simulations.
    Index Terms: Soft tissue modeling, telesurgery control, polytopic model based control, TP Model Transformation, qLPV modeling, LMI-based controller design
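    The TP Model Transformation named in the abstract can be illustrated numerically: sample the qLPV state matrix over a grid of the scheduling parameter, then apply an HOSVD-style decomposition to recover vertex systems and weighting functions. The sketch below uses an assumed one-parameter spring-damper system, not the paper's tissue model, and stops at the raw decomposition (a full TP transformation would additionally convexify the weights).

```python
import numpy as np

def A(p):
    # Illustrative qLPV state matrix A(p) for a 1-DoF spring-damper with
    # parameter-dependent stiffness k(p) = 1 + p**2 (an assumption, not
    # the verified soft tissue model from the paper)
    return np.array([[0.0, 1.0], [-(1.0 + p**2), -0.5]])

# 1) Sample the LPV model over a dense grid of the scheduling parameter
grid = np.linspace(-1.0, 1.0, 101)
S = np.stack([A(p) for p in grid])          # tensor of shape (101, 2, 2)

# 2) HOSVD step along the parameter mode: SVD of the mode-1 unfolding
U, s, Vt = np.linalg.svd(S.reshape(len(grid), -1), full_matrices=False)
r = int(np.sum(s > 1e-10))                  # numerical rank = number of vertices

# 3) Weighting functions over the grid and the vertex LTI systems
W = U[:, :r] * s[:r]                        # (101, r) weights
vertices = Vt[:r].reshape(r, 2, 2)          # r vertex systems

# 4) Check exact reconstruction: S(p_i) = sum_j W[i, j] * vertices[j]
S_rec = np.einsum('ij,jkl->ikl', W, vertices)
err = np.max(np.abs(S - S_rec))
```

    Because A(p) here depends on the parameter only through p**2, the decomposition finds exactly two vertex systems; the polytopic controller would then be designed at those vertices.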

    Modular instrument for a haptically-enabled robotic surgical system (HeroSurg)

    To restore the sense of touch in robotic surgical systems, a modular force feedback-enabled laparoscopic instrument is developed and employed in a robotic-assisted minimally invasive surgical system (HeroSurg). Strain gauge technology is incorporated into the instrument to measure tip/tissue lateral interaction forces. The modularity of the proposed instrument makes it interchangeable between tip types of different functionalities, e.g., cutter, grasper, and dissector, without losing force sensing capability. A series of experiments is conducted and the results are reported to evaluate the force sensing capability of the instrument. The results reveal mean errors of 1.32 g and 1.98° in the measurements of tip/tissue load magnitude and direction, respectively, across all experiments.
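    The reported magnitude (grams) and direction (degrees) errors suggest the lateral load is resolved from two orthogonal strain-gauge channels. A minimal sketch of that conversion, with made-up calibration gains (the instrument's actual calibration is not given in the abstract):

```python
import math

def tip_load(vx, vy, gain_x, gain_y):
    """Convert two orthogonal strain-gauge bridge voltages into a lateral
    tip/tissue load magnitude and direction.

    gain_x / gain_y are calibration factors (grams-force per volt) that
    would come from a dead-weight calibration; the values used below are
    illustrative assumptions."""
    fx = gain_x * vx
    fy = gain_y * vy
    magnitude = math.hypot(fx, fy)                # grams-force
    direction = math.degrees(math.atan2(fy, fx))  # degrees from the x-axis
    return magnitude, direction

# Equal readings on both channels put the load at 45 degrees
mag, ang = tip_load(0.30, 0.30, 50.0, 50.0)
```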

    Impact of Ear Occlusion on In-Ear Sounds Generated by Intra-oral Behaviors

    We conducted a case study with one volunteer and a recording setup to detect sounds induced by the actions: jaw clenching, tooth grinding, reading, eating, and drinking. The setup consisted of two in-ear microphones, where the left ear was semi-occluded with a commercially available earpiece and the right ear was occluded with a mouldable silicone earpiece. Investigations in the time and frequency domains demonstrated that for behaviors such as eating, tooth grinding, and reading, sounds could be recorded with both sensors. For jaw clenching, however, occluding the ear with a mouldable piece was necessary to enable its detection. This can be attributed to the fact that the mouldable earpiece sealed the ear canal and isolated it from the environment, resulting in a detectable change in pressure. In conclusion, our work suggests that detecting behaviors such as eating, grinding, and reading with a semi-occluded ear is possible, whereas behaviors such as clenching require complete occlusion of the ear if the activity is to be easily detectable. Nevertheless, the latter approach may limit real-world applicability because it hinders hearing.
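    The time- and frequency-domain investigation described above can be approximated by a short-time band-energy detector: frame the in-ear signal, measure the energy in a low-frequency band, and threshold it. Everything below (sampling rate, band, threshold, and the synthetic "clench" burst) is an illustrative assumption, not the study's actual pipeline:

```python
import numpy as np

def band_energy(frames, fs, lo, hi):
    """Energy per frame inside the [lo, hi] Hz band, via an FFT per frame."""
    spec = np.abs(np.fft.rfft(frames, axis=1)) ** 2
    freqs = np.fft.rfftfreq(frames.shape[1], d=1.0 / fs)
    band = (freqs >= lo) & (freqs <= hi)
    return spec[:, band].sum(axis=1)

fs = 8000
t = np.arange(fs) / fs                         # 1 s of synthetic audio
sig = 0.01 * np.random.default_rng(0).standard_normal(fs)  # sensor noise
sig[2000:4000] += 0.5 * np.sin(2 * np.pi * 60 * t[2000:4000])  # "clench" burst

frames = sig.reshape(-1, 400)                  # 50 ms frames
e = band_energy(frames, fs, 20, 200)
events = e > 5.0 * np.median(e)                # simple adaptive threshold
```

    With a sealed ear canal the low-frequency pressure change is large relative to the noise floor, which is why a fixed multiple of the median energy suffices here.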

    A recurrent convolutional neural network approach for sensorless force estimation in robotic surgery

    Providing force feedback as relevant information in current Robot-Assisted Minimally Invasive Surgery systems constitutes a technological challenge due to the constraints imposed by the surgical environment. In this context, force estimation techniques represent a potential solution, enabling the sensing of interaction forces between the surgical instruments and soft tissues. Specifically, if visual feedback is available for observing soft-tissue deformation, it can be used to estimate the forces applied to these tissues. To this end, a force estimation model based on Convolutional Neural Networks and Long Short-Term Memory networks is proposed in this work. This model is designed to process both the spatiotemporal information present in video sequences and the temporal structure of tool data (the surgical tool-tip trajectory and its grasping status). A series of analyses is carried out to reveal the advantages of the proposal and the challenges that remain for real applications. This research focuses on two surgical task scenarios, referred to as pushing and pulling tissue. For these two scenarios, different input data modalities and their effect on the force estimation quality are investigated: tool data, video sequences, and a combination of both. The results suggest that the force estimation quality is better when both the tool data and the video sequences are processed by the neural network model. Moreover, this study reveals the need for a loss function designed to promote the modeling of the smooth and sharp details found in force signals. Finally, the results show that modeling the forces in pulling tasks is more challenging than in simpler pushing actions.
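    The point about a loss function promoting smooth and sharp details can be sketched as an MSE term plus a penalty on first differences, which punishes predictions that blur sharp force transients. The weighting and signals below are illustrative assumptions, not the authors' actual loss:

```python
import numpy as np

def force_loss(pred, target, alpha=0.5):
    """Illustrative loss in the spirit of the abstract: an MSE term fits
    the overall force level, while a term on first differences penalises
    errors in the signal's slope, so both smooth segments and sharp
    transients must be reproduced. `alpha` is an assumed weight."""
    mse = np.mean((pred - target) ** 2)
    edge = np.mean((np.diff(pred) - np.diff(target)) ** 2)
    return mse + alpha * edge

t = np.linspace(0, 1, 200)
target = np.where(t > 0.5, 1.0, 0.0)                 # sharp force step
smooth = 1.0 / (1.0 + np.exp(-30 * (t - 0.5)))       # over-smoothed prediction

loss_smooth = force_loss(smooth, target)
loss_exact = force_loss(target, target)
```

    Compared with plain MSE (alpha = 0), the derivative term adds an extra penalty for the blurred edge, steering training away from overly smooth estimates.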

    Control techniques for mechatronic assisted surgery

    The treatment response for traumatic head injured patients can be improved by using an autonomous robotic system to perform basic, time-critical emergency neurosurgery, reducing costs and saving lives. In this thesis, a concept for a neurosurgical robotic system is proposed to perform three specific emergency neurosurgical procedures: the placement of an intracranial pressure monitor, external ventricular drainage, and the evacuation of chronic subdural haematoma. The control methods for this system are investigated following a curiosity-led approach. Individual problems are interpreted in the widest sense and solutions posed that are general in nature. Three main contributions result from this approach: 1) a clinical evidence based review of surgical robotics and a methodology to assist in their evaluation, 2) a new controller for soft-grasping of objects, and 3) new propositions and theorems for chatter-suppression sliding mode controllers. These contributions directly assist in the design of the control system of the neurosurgical robot and, more broadly, impact other areas outside the narrow confines of the target application. A methodology for applied research in surgical robotics is proposed. The methodology sets out a hierarchy of criteria consisting of three tiers, with the bottom tier being the most important and the top tier the least. It is argued that a robotic system must adhere to these criteria in order to achieve acceptability. Recent commercial systems are reviewed against these criteria and are found to conform to at least the bottom and intermediate tiers. However, the lack of conformity to the criteria in the top tier, combined with the inability to conclusively prove increased clinical benefit, particularly symptomatic benefit, is shown to be hampering the potential of surgical robotics to gain wide establishment. A control scheme for soft-grasping objects is presented.
Grasping a soft or fragile object requires the use of minimum contact force to prevent damage or deformation. Without precise knowledge of object parameters, real-time feedback control must be used to regulate the contact force and prevent slip. Moreover, the controller must be designed to have good performance characteristics to rapidly modulate the fingertip contact force in response to a slip event. A fuzzy sliding mode controller combined with a disturbance observer is proposed for contact force control and slip prevention. The robustness of the controller is evaluated through both simulation and experiment. The control scheme was found to be effective and robust to parameter uncertainty. When tested on a real system, however, chattering phenomena, well known in sliding mode research, were induced by the unmodelled suboptimal components of the system (filtering, backlash, and time delays). This reduced the controller's performance. The problem of chattering and potential solutions are explored. Real systems using sliding mode controllers, such as the control scheme for soft-grasping, have a tendency to chatter at high frequencies. This is caused by the sliding mode controller interacting with unmodelled parasitic dynamics at the actuator input and sensor output of the plant. As a result, new chatter-suppression sliding mode controllers have been developed, which introduce new parameters into the system. However, the effect any particular choice of parameters has on system performance is unclear, and this can make tuning the parameters to meet a set of performance criteria difficult. In this thesis, common chatter-suppression sliding mode control strategies are surveyed and simple design and estimation methods are proposed. The estimation methods predict convergence, chattering amplitude, settling time, and maximum output bounds (overshoot) using harmonic linearizations and invariant ellipsoid sets.
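The chattering mechanism and one common remedy can be demonstrated on a toy plant: a discontinuous sign-based sliding mode law switches at high frequency once the sliding surface is reached, while a boundary-layer (saturation) variant, one of the standard chatter-suppression strategies of the kind surveyed in the thesis, smooths the control at the cost of a small residual error. All gains and the disturbance below are illustrative assumptions:

```python
import numpy as np

def simulate(smooth, steps=2000, dt=1e-3):
    """1-DoF plant x' = u + d with a bounded matched disturbance d.
    Classic SMC uses u = -K*sign(s); the boundary-layer variant replaces
    sign with a saturation of width phi, trading a small tracking error
    for chatter suppression. Gains are illustrative, not from the thesis."""
    K, phi = 2.0, 0.05
    x, u_hist = 1.0, []
    for k in range(steps):
        s = x                                  # sliding surface: regulate x to 0
        d = 0.5 * np.sin(10 * k * dt)          # disturbance with |d| < K
        if smooth:
            u = -K * np.clip(s / phi, -1.0, 1.0)   # saturation (boundary layer)
        else:
            u = -K * np.sign(s)                    # discontinuous switching
        x += dt * (u + d)                      # Euler step of the plant
        u_hist.append(u)
    switches = int(np.sum(np.diff(np.sign(u_hist)) != 0))
    return abs(x), switches

err_sign, sw_sign = simulate(smooth=False)     # chatters on the surface
err_sat, sw_sat = simulate(smooth=True)        # smooth control, small bias
```

Both variants regulate the state to a small neighbourhood of zero, but the sign-based law flips its control nearly every step once the surface is reached, which is exactly the high-frequency switching that excites unmodelled actuator and sensor dynamics on real hardware.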

    Advanced Strategies for Robot Manipulators

    Amongst robotic systems, robot manipulators have proven themselves to be of increasing importance and are widely adopted to substitute for humans in repetitive and/or hazardous tasks. Modern manipulators have complicated designs and must perform more precise, crucial and critical tasks, so simple traditional control methods are no longer sufficient, and advanced control strategies that account for special constraints need to be established. Despite the groundbreaking research carried out in this realm to date, many novel aspects remain to be explored.

    Using High-Level Processing of Low-Level Signals to Actively Assist Surgeons with Intelligent Surgical Robots

    Robotic surgical systems are increasingly used for minimally-invasive surgeries. As such, there is an opportunity for these systems to fundamentally change the way surgeries are performed by becoming intelligent assistants rather than simply acting as extensions of surgeons' arms. As a step towards intelligent assistance, this thesis looks at ways to represent different aspects of robot-assisted surgery (RAS). We identify three main components: the robot, the surgeon actions, and the patient scene dynamics. Traditional learning algorithms in these domains are predominantly supervised methods. This has several drawbacks. First, many of these domains are non-categorical, such as how soft tissue deforms, which makes labeling difficult. Second, surgeries vary greatly. Estimation of the robot state may be affected by how the robot is docked and by cable tensions in the instruments. Estimation of the patient anatomy and its dynamics is often inaccurate and, in any case, may change throughout a surgery. To obtain the most accurate information, these aspects must be learned during the procedure, which limits the amount of labeling that can be done. On the surgeon side, different surgeons may perform the same procedure differently, and the algorithm should provide personalized estimations for each surgeon. All of these considerations motivated the use of self-supervised learning throughout this thesis. We first build a representation of the robot system; in particular, we look at learning the dynamics model of the robot and evaluate the model by using it to estimate forces. Once we can estimate forces in free space, we extend the algorithm to take into account patient-specific interactions, namely with the trocar and the cannula seal. Accounting for surgery-specific interactions is possible because our method does not require additional sensors and can be trained in less than five minutes, including time for data collection.
Next, we use cross-modal training to understand surgeon actions by looking at the bottleneck layer when mapping video to kinematics. This layer should contain information about the latent space of surgeon actions, while discarding some medium-specific information about either the video or the kinematics. Lastly, to understand the patient scene, we start with modeling interactions between a robot instrument and a soft-tissue phantom. Such models are often inaccurate due to imprecise material parameters and boundary conditions, particularly in clinical scenarios. Therefore, we add a depth camera to observe deformations and correct the results of simulations. We also introduce a network that learns to simulate soft-tissue deformation from physics simulators in order to speed up the estimation. We demonstrate that self-supervised learning can be used for understanding each part of RAS. The representations it learns contain information about signals that are not directly measurable. The self-supervised nature of the methods presented in this thesis lends itself well to learning throughout a surgery. With such frameworks, we can overcome some of the main barriers to adopting learning methods in the operating room: the variety in surgery and the difficulty in labeling enough training data for each case.
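Evaluating a learned dynamics model by estimating forces, as described above, typically amounts to mapping the residual between measured and model-predicted joint torques through the Jacobian transpose. A minimal sketch with an assumed 2-link Jacobian and made-up torque values (not the thesis's learned model):

```python
import numpy as np

def estimate_tip_force(tau_measured, tau_model, J):
    """Sensorless force estimate: the residual between measured joint
    torques and the torques a dynamics model predicts for the same motion,
    mapped to the instrument tip through the Jacobian transpose,
        tau_meas = tau_model + J^T f_ext  =>  f_ext = (J^T)^+ (residual).
    The pseudo-inverse also handles non-square Jacobians."""
    residual = tau_measured - tau_model
    return np.linalg.pinv(J.T) @ residual

# Illustrative 2-link planar arm at a fixed pose (all numbers assumed)
J = np.array([[-0.3, -0.1],
              [ 0.5,  0.2]])                   # tip Jacobian at this pose
f_true = np.array([1.0, -2.0])                 # external tip force (N)
tau_model = np.array([0.4, 0.7])               # model's free-space torques
tau_measured = tau_model + J.T @ f_true        # what the joints actually feel

f_est = estimate_tip_force(tau_measured, tau_model, J)
```

The estimate is only as good as the dynamics model: any free-space torque the model fails to predict (cable tension, trocar friction) shows up as a phantom external force, which is why the thesis folds those patient-specific interactions into the learned model.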