
    Autonomous Tissue Scanning under Free-Form Motion for Intraoperative Tissue Characterisation

    In Minimally Invasive Surgery (MIS), tissue scanning with imaging probes is required for subsurface visualisation to characterise the state of the tissue. However, scanning large tissue surfaces in the presence of deformation is a challenging task for the surgeon. Recently, robot-assisted local tissue scanning has been investigated for motion stabilisation of imaging probes, to facilitate the capture of good-quality images and reduce the surgeon's cognitive load. Nonetheless, these approaches require the tissue surface to be static or to deform with periodic motion. To eliminate these assumptions, we propose a visual servoing framework for autonomous tissue scanning that can deal with free-form tissue deformation. The 3D structure of the surgical scene is recovered, and a feature-based method is proposed to estimate the motion of the tissue in real time. A desired scanning trajectory is manually defined on a reference frame and continuously updated using projective geometry to follow the tissue motion and control the movement of the robotic arm. The advantage of the proposed method is that it does not require learning the tissue motion prior to scanning and can deal with free-form deformation. We deployed this framework on the da Vinci surgical robot using the da Vinci Research Kit (dVRK) for ultrasound tissue scanning. Since the framework does not rely on information from the ultrasound data, it can be easily extended to other probe-based imaging modalities. Comment: 7 pages, 5 figures, ICRA 202
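The abstract does not spell out how the projective-geometry update works; a minimal numpy sketch of the idea (function names are illustrative, not the paper's implementation) could estimate a homography from surface features tracked between the reference frame and the current frame, then re-project the planned scan path through it:

```python
import numpy as np

def estimate_homography(src, dst):
    """Estimate the 3x3 homography mapping src -> dst (Nx2 arrays, N >= 4)
    with the direct linear transform: stack two equations per
    correspondence and take the SVD null-space vector."""
    A = []
    for (x, y), (u, v) in zip(src, dst):
        A.append([-x, -y, -1, 0, 0, 0, u * x, u * y, u])
        A.append([0, 0, 0, -x, -y, -1, v * x, v * y, v])
    _, _, vt = np.linalg.svd(np.asarray(A, dtype=float))
    H = vt[-1].reshape(3, 3)
    return H / H[2, 2]

def update_trajectory(points, H):
    """Re-project a planned scan trajectory (Nx2 image points) into the
    current frame using homogeneous coordinates."""
    pts_h = np.hstack([points, np.ones((len(points), 1))])
    mapped = pts_h @ H.T
    return mapped[:, :2] / mapped[:, 2:3]
```

At every frame the tracked feature pairs give a fresh `H`, so the scan path defined once on the reference frame keeps following the deforming surface.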

    Assistance strategies for robotized laparoscopy

    Robotizing laparoscopic surgery not only allows better accuracy when operating, thanks to the scale factor that can be applied between master and slave or to the use of tools with 3 DoF that cannot be used in conventional manual surgery, but also enables additional computer-based support. Relying on computer assistance, different strategies that facilitate the surgeon's task can be incorporated, either in the form of autonomous navigation or cooperative guidance, by providing sensory or visual feedback, or by introducing certain motion constraints. This paper describes different forms of assistance aimed at improving the surgeon's work capacity and achieving greater patient safety, together with the results obtained with the prototype developed at UPC. (Peer-reviewed; author's final draft postprint.)
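One of the motion-limitation strategies mentioned above can be illustrated with a virtual fixture: the controller projects any commanded tool-tip position back into a predefined safe region. This is a minimal sketch under the assumption of a spherical workspace, which is an illustrative choice, not the actual constraint of the UPC prototype:

```python
import numpy as np

def constrain_motion(commanded, center, radius):
    """Virtual-fixture clamp: if the commanded tool-tip position leaves
    the spherical safe zone, project it back onto the zone boundary;
    otherwise pass it through unchanged."""
    offset = commanded - center
    dist = np.linalg.norm(offset)
    if dist <= radius:
        return commanded.copy()
    return center + offset * (radius / dist)
```

Running the clamp on every commanded pose gives the "limitation of movements" behaviour: the master can still be moved freely, but the slave never crosses the fixture boundary.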

    Microscope Embedded Neurosurgical Training and Intraoperative System

    In recent years, neurosurgery has been strongly influenced by new technologies. Computer Aided Surgery (CAS) offers several benefits for patients' safety, but fine techniques targeted at minimally invasive and minimally traumatic treatments are required, since intra-operative false movements can be devastating and result in patient death. The precision of the surgical gesture is related both to the accuracy of the available technological instruments and to the surgeon's experience. In this frame, medical training is particularly important. From a technological point of view, the use of Virtual Reality (VR) for surgeon training and Augmented Reality (AR) for intra-operative treatments offers the best results. In addition, traditional techniques for training in surgery include the use of animals, phantoms and cadavers. The main limitations of these approaches are that live tissue has different properties from dead tissue and that animal anatomy differs significantly from human anatomy. From the medical point of view, Low-Grade Gliomas (LGGs) are intrinsic brain tumours that typically occur in younger adults. The objective of the related treatment is to remove as much of the tumour as possible while minimizing damage to the healthy brain. Pathological tissue may closely resemble normal brain parenchyma when viewed through the neurosurgical microscope. The tactile appreciation of the different consistency of the tumour compared to normal brain requires considerable experience on the part of the neurosurgeon, and it is a vital point. The first part of this PhD thesis presents a system for realistic simulation (visual and haptic) of spatula palpation of an LGG. This is the first prototype of a training system using VR, haptics and a real microscope for neurosurgery. This architecture can also be adapted for intra-operative purposes.
In this instance, the surgeon needs the basic setup for Image Guided Therapy (IGT) interventions: microscope, monitors and navigated surgical instruments. The same virtual environment can be AR-rendered onto the microscope optics. The objective is to enhance the surgeon's ability for better intra-operative orientation by giving him a three-dimensional view and other information necessary for safe navigation inside the patient. These considerations served as motivation for the second part of this work, which has been devoted to improving a prototype of an AR stereoscopic microscope for neurosurgical interventions, developed in our institute in a previous work. Completely new software has been developed in order to reuse the microscope hardware, enhancing both rendering performance and usability. Since both AR and VR share the same platform, the system can be referred to as a Mixed Reality system for neurosurgery. All the components are open source or at least based on a GPL license.
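The tactile difference between tumour and parenchyma that the simulator must render can be sketched with the simplest haptic model, a linear spring whose stiffness depends on the tissue under the spatula. The stiffness values below are purely illustrative, not taken from the thesis:

```python
def palpation_force(penetration_mm, stiffness_n_per_mm):
    """Linear spring law: the resistive force grows with indentation
    depth; no force is rendered when the tool is not in contact."""
    return max(0.0, penetration_mm) * stiffness_n_per_mm

# Illustrative stiffness values only: an LGG often feels different from
# normal parenchyma, which is exactly what the trainee must learn to sense.
STIFFNESS = {"parenchyma": 0.30, "tumour": 0.18}
```

A haptic loop would evaluate this law at the device's update rate (typically around 1 kHz) so the spatula feels stiffer or softer as it crosses the tumour boundary.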

    Computational ultrasound tissue characterisation for brain tumour resection

    In brain tumour resection, it is vital to know where critical neurovascular structures and tumours are located to minimise surgical injuries and cancer recurrence. The aim of this thesis was to improve intraoperative guidance during brain tumour resection by integrating both ultrasound standard imaging and elastography in the surgical workflow. Brain tumour resection requires surgeons to identify the tumour boundaries to preserve healthy brain tissue and prevent cancer recurrence. This thesis proposes to use ultrasound elastography in combination with conventional ultrasound B-mode imaging to better characterise tumour tissue during surgery. Ultrasound elastography comprises a set of techniques that measure tissue stiffness, which is a known biomarker of brain tumours. The objectives of the research reported in this thesis are to implement novel learning-based methods for ultrasound elastography and to integrate them in an image-guided intervention framework. Accurate and real-time intraoperative estimation of tissue elasticity can guide towards better delineation of brain tumours and improve the outcome of neurosurgery. We first investigated current challenges in quasi-static elastography, which evaluates tissue deformation (strain) by estimating the displacement between successive ultrasound frames, acquired before and after applying manual compression. Recent approaches in ultrasound elastography have demonstrated that convolutional neural networks can capture ultrasound high-frequency content and produce accurate strain estimates. We proposed a new unsupervised deep learning method for strain prediction, where the training of the network is driven by a regularised cost function, composed of a similarity metric and a regularisation term that preserves displacement continuity by directly optimising the strain smoothness.
We further improved the accuracy of our method by proposing a recurrent network architecture with convolutional long short-term memory decoder blocks to improve displacement estimation and spatio-temporal continuity between time-series ultrasound frames. We then demonstrate initial results towards extending our ultrasound displacement estimation method to shear wave elastography, which provides a quantitative estimation of tissue stiffness. Furthermore, this thesis describes the development of an open-source image-guided intervention platform, specifically designed to combine intra-operative ultrasound imaging with a neuronavigation system and perform real-time ultrasound tissue characterisation. The integration was conducted using commercial hardware and validated on an anatomical phantom. Finally, preliminary results on the feasibility and safety of the use of a novel intraoperative ultrasound probe designed for pituitary surgery are presented. Prior to the clinical assessment of our image-guided platform, the ability of the ultrasound probe to be used alongside standard surgical equipment was demonstrated in 5 pituitary cases.
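The regularised cost function described above, a similarity metric plus a strain-smoothness term, can be sketched in numpy for a 1-D displacement field. The weighting and the exact smoothness penalty are illustrative choices, not the thesis's actual formulation:

```python
import numpy as np

def strain(displacement):
    """Axial strain: the spatial derivative of the displacement field."""
    return np.gradient(displacement, axis=0)

def elastography_loss(fixed, moving_warped, displacement, lam=0.1):
    """Regularised cost in the spirit of the abstract: an image
    similarity term plus a term that penalises non-smooth strain
    (here via the spatial gradient of the strain)."""
    similarity = np.mean((fixed - moving_warped) ** 2)
    smoothness = np.mean(np.gradient(strain(displacement), axis=0) ** 2)
    return similarity + lam * smoothness
```

Because the penalty acts on the strain rather than the raw displacement, a linearly varying displacement (constant strain, the physically plausible case under uniform compression) incurs no regularisation cost.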

    Neurosurgery and brain shift: review of the state of the art and main contributions of robotics

    This paper presents a review of neurosurgery, robotic assistants in this type of procedure, and the approaches to the problem of brain tissue displacement, including techniques for obtaining medical images. It focuses especially on the phenomenon of brain displacement, commonly known as brain shift, which causes a loss of reference between the preoperative images and the volumes to be treated during image-guided surgery. Hypothetically, with brain shift prediction and correction for the neuronavigation system, minimally invasive trajectories could be planned and followed.
This would reduce damage to functional tissues and possibly lower morbidity and mortality in delicate and demanding medical procedures such as the removal of a brain tumor. This paper also mentions other issues associated with neurosurgery and shows how robotized systems have helped solve these problems. Finally, it highlights the future perspectives of neurosurgery, a branch of medicine that seeks to treat the ailments of the main organ of the human body from the perspective of many disciplines.

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), most devices remain focused on improving end-effector dexterity and precision, as well as on improved access to minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions for increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We attempt to outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) whilst providing mitigating solutions to internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
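As a concrete example of the tool-to-organ collision detection the review discusses, a minimal geometric check (illustrative, not the algorithm of any reviewed system) models the tool shaft as a line segment and flags organ surface points that come within a safety margin:

```python
import numpy as np

def point_segment_distance(p, a, b):
    """Euclidean distance from point p to the segment from a to b
    (all 3-vectors), clamping the projection to the segment."""
    ab = b - a
    t = np.clip(np.dot(p - a, ab) / np.dot(ab, ab), 0.0, 1.0)
    return np.linalg.norm(p - (a + t * ab))

def tool_collides(tool_tip, tool_base, organ_points, margin):
    """Flag a collision when any organ surface point comes within
    `margin` of the tool shaft, modelled as a line segment."""
    return any(point_segment_distance(p, tool_base, tool_tip) < margin
               for p in organ_points)
```

Real systems replace the brute-force point loop with spatial acceleration structures (bounding-volume hierarchies or voxel grids), but the distance test itself is the same primitive.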

    Estimation of interaction forces in robotic surgery using a semi-supervised deep neural network model

    Providing force feedback as a feature in current Robot-Assisted Minimally Invasive Surgery systems still remains a challenge. In recent years, Vision-Based Force Sensing (VBFS) has emerged as a promising approach to address this problem. Existing methods have been developed in a Supervised Learning (SL) setting. Nonetheless, most of the video sequences related to robotic surgery are not provided with ground-truth force data, which can be easily acquired in a controlled environment. A powerful approach to process unlabeled video sequences and find a compact representation for each video frame relies on using an Unsupervised Learning (UL) method. Afterward, a model trained in an SL setting can take advantage of the available ground-truth force data. In the present work, UL and SL techniques are used to investigate a model in a Semi-Supervised Learning (SSL) framework, consisting of an encoder network and a Long Short-Term Memory (LSTM) network. First, a Convolutional Auto-Encoder (CAE) is trained to learn a compact representation for each RGB frame in a video sequence. To facilitate the reconstruction of the high and low frequencies found in images, this CAE is optimized using an adversarial framework and an L1 loss, respectively. Thereafter, the encoder network of the CAE is serially connected with an LSTM network and trained jointly to minimize the difference between ground-truth and estimated force data. Datasets addressing the force estimation task are scarce. Therefore, the experiments have been validated on a custom dataset. The results suggest that the proposed approach is promising. (Peer-reviewed; author's final draft postprint.)
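The data flow of the encoder-plus-LSTM stage can be sketched with a hand-rolled LSTM cell in numpy. The shapes and the `encoder` stand-in are illustrative placeholders, not the paper's trained CAE:

```python
import numpy as np

def lstm_step(x, h, c, W, U, b):
    """One LSTM step; W, U, b stack the input, forget, cell and output
    gate parameters along their first axis."""
    z = W @ x + U @ h + b
    i, f, g, o = np.split(z, 4)
    sig = lambda a: 1.0 / (1.0 + np.exp(-a))
    c = sig(f) * c + sig(i) * np.tanh(g)
    h = sig(o) * np.tanh(c)
    return h, c

def estimate_forces(frames, encoder, W, U, b, W_out):
    """Encode each video frame to a compact code, run the code sequence
    through the LSTM, and map every hidden state to a scalar force."""
    n_h = U.shape[1]
    h = c = np.zeros(n_h)
    forces = []
    for frame in frames:
        h, c = lstm_step(encoder(frame), h, c, W, U, b)
        forces.append(float(W_out @ h))
    return forces
```

The recurrence is what lets the estimator exploit temporal context: the force at frame t depends on the whole history of encoded frames, not just the current image.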

    Intraoperative Navigation Systems for Image-Guided Surgery

    Recent technological advancements in medical imaging equipment have resulted in a dramatic improvement in image accuracy, now capable of providing useful information previously not available to clinicians. In the surgical context, intraoperative imaging provides crucial value for the success of the operation. Many nontrivial scientific and technical problems need to be addressed in order to efficiently exploit the different information sources available in today's advanced operating rooms. In particular, it is necessary to provide: (i) accurate tracking of surgical instruments, (ii) real-time matching of images from different modalities, and (iii) reliable guidance toward the surgical target. Satisfying all of these requisites is needed to realize effective intraoperative navigation systems for image-guided surgery. Various solutions have been proposed and successfully tested in the field of image navigation systems in the last ten years; nevertheless, several problems still arise in most applications regarding the precision, usability and capabilities of existing systems. Identifying and solving these issues represents an urgent scientific challenge. This thesis investigates the current state of the art in the field of intraoperative navigation systems, focusing in particular on the challenges related to efficient and effective usage of ultrasound imaging during surgery. The main contributions of this thesis to the state of the art are: (i) techniques for automatic motion compensation and therapy monitoring applied to a novel ultrasound-guided surgical robotic platform in the context of abdominal tumor thermoablation; and (ii) novel image-fusion-based navigation systems for ultrasound-guided neurosurgery in the context of brain tumor resection, highlighting their applicability as off-line surgical training instruments.
The proposed systems, which were designed and developed in the framework of two international research projects, have been tested in real or simulated surgical scenarios, showing promising results toward their application in clinical practice.
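A core building block of the motion-compensation work described above is estimating frame-to-frame displacement. A toy 1-D version, phase correlation over scanlines, illustrates the principle; the real platform operates on 2-D/3-D ultrasound and is not reduced to this:

```python
import numpy as np

def estimate_shift(reference, current):
    """Estimate the integer circular displacement between two 1-D
    scanlines by locating the peak of their cross-correlation,
    computed efficiently via the FFT."""
    corr = np.fft.ifft(np.fft.fft(current) * np.conj(np.fft.fft(reference))).real
    shift = int(np.argmax(corr))
    n = len(reference)
    # Map peaks in the upper half of the spectrum to negative shifts.
    return shift - n if shift > n // 2 else shift
```

Feeding the estimated shift back to the robot (or to the image resampler) keeps the therapy target aligned despite respiratory or probe motion.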

    Advancing Intra-operative Precision: Dynamic Data-Driven Non-Rigid Registration for Enhanced Brain Tumor Resection in Image-Guided Neurosurgery

    During neurosurgery, medical images of the brain are used to locate tumors and critical structures, but brain tissue shifts make pre-operative images unreliable for accurate removal of tumors. Intra-operative imaging can track these deformations but is not a substitute for pre-operative data. To address this, we use Dynamic Data-Driven Non-Rigid Registration (NRR), a complex and time-consuming image processing operation that adjusts the pre-operative image data to account for intra-operative brain shift. Our review explores a specific NRR method for registering brain MRI during image-guided neurosurgery and examines various strategies for improving the accuracy and speed of the NRR method. We demonstrate that our implementation enables NRR results to be delivered within clinical time constraints while leveraging Distributed Computing and Machine Learning to enhance registration accuracy by identifying optimal parameters for the NRR method. Additionally, we highlight challenges associated with its use in the operating room
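The distributed parameter search the review leverages can be illustrated with a toy registration problem: score candidate parameters (here just a 1-D translation) in parallel and keep the best. The function names and the thread-pool choice are illustrative, not the reviewed NRR pipeline:

```python
import numpy as np
from concurrent.futures import ThreadPoolExecutor

def similarity(fixed, moving, shift):
    """Negative sum of squared differences after applying a candidate
    shift to the moving image (higher is better)."""
    return -float(np.sum((fixed - np.roll(moving, shift)) ** 2))

def best_parameters(fixed, moving, candidates, workers=4):
    """Distribute candidate registration parameters across workers and
    return the one with the highest similarity score."""
    with ThreadPoolExecutor(max_workers=workers) as pool:
        scores = list(pool.map(lambda s: similarity(fixed, moving, s),
                               candidates))
    return candidates[int(np.argmax(scores))]
```

In the clinical setting, each "candidate" is a full NRR parameter set and each evaluation is a registration run, which is why distributing the search is what brings the result inside intra-operative time constraints.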