
    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the focus of most devices remains on improving end-effector dexterity and precision, as well as on improving access to minimally invasive surgeries. This paper provides a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced issues with low accuracy in tool placement around complex trajectories, pose estimation, and depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings of current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.
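
    One recurring gap the review highlights is internal tool-to-organ collision detection. As a rough, hedged illustration of the geometric test underlying such a safeguard (not any specific platform's implementation), the sketch below checks the clearance between sampled tool and organ surface points against a safety threshold; the point-set representation, 2 mm threshold, and function names are assumptions made for the example.

```python
# Hypothetical clearance check between a tracked tool tip and an organ surface,
# both represented as sampled 3D point sets (in millimetres). Illustrative only.
import numpy as np

def min_clearance(tool_points: np.ndarray, organ_points: np.ndarray) -> float:
    """Smallest Euclidean distance between tool samples and organ surface samples."""
    diffs = tool_points[:, None, :] - organ_points[None, :, :]   # (T, O, 3) pairwise offsets
    return float(np.linalg.norm(diffs, axis=-1).min())

def collision_warning(tool_points, organ_points, threshold_mm=2.0) -> bool:
    """Flag a potential tool-to-organ collision when clearance drops below the threshold."""
    return min_clearance(tool_points, organ_points) < threshold_mm
```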

    Medical Image Registration Using Deep Neural Networks

    Registration is a fundamental problem in medical image analysis wherein images are transformed spatially to align corresponding anatomical structures in each image. Recently, the development of learning-based methods, which exploit deep neural networks and can outperform classical iterative methods, has received considerable interest from the research community. This interest is due in part to the substantially reduced computational requirements that learning-based methods have during inference, which makes them particularly well-suited to real-time registration applications. Despite these successes, learning-based methods can perform poorly when applied to images from different modalities where intensity characteristics can vary greatly, such as in magnetic resonance and ultrasound imaging. Moreover, registration performance is often demonstrated on well-curated datasets, closely matching the distribution of the training data. This makes it difficult to determine whether demonstrated performance accurately represents the generalization and robustness required for clinical use. This thesis presents learning-based methods which address the aforementioned difficulties by utilizing intuitive point-set-based representations, user interaction and meta-learning-based training strategies. Primarily, this is demonstrated with a focus on the non-rigid registration of 3D magnetic resonance imaging to sparse 2D transrectal ultrasound images to assist in the delivery of targeted prostate biopsies. While conventional systematic prostate biopsy methods can require many samples to be taken to confidently produce a diagnosis, tumor-targeted approaches have shown improved patient, diagnostic, and disease management outcomes with fewer samples. However, the available intraoperative transrectal ultrasound imaging alone is insufficient for accurate targeted guidance. As such, this exemplar application is used to illustrate the effectiveness of sparse, interactively-acquired ultrasound imaging for real-time, interventional registration. The presented methods are found to improve registration accuracy, relative to the state of the art, with substantially lower computation time, and require only a fraction of the data at inference. As a result, these methods are particularly attractive given their potential for real-time registration in interventional applications.
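
    To make the inference step of learning-based registration concrete, the following minimal sketch shows the resampling operation that typically follows a network's dense displacement prediction; it assumes a 3D moving image and a displacement field as NumPy arrays and is illustrative only, not code from the thesis.

```python
# Minimal sketch: warp a moving image by a dense displacement field, the step that
# follows the network prediction in most learning-based registration pipelines.
import numpy as np
from scipy.ndimage import map_coordinates

def warp_image(moving: np.ndarray, displacement: np.ndarray) -> np.ndarray:
    """Warp a 3D moving image (D, H, W) with a displacement field of shape (3, D, H, W)."""
    grid = np.indices(moving.shape).astype(np.float64)      # identity sampling grid
    sample_coords = grid + displacement                      # shift each voxel by its displacement
    return map_coordinates(moving, sample_coords, order=1)   # trilinear interpolation

# In practice the field would come from a trained model, e.g.
#   displacement = model(fixed, moving)   # hypothetical network call
# here it is simply an input to the function.
```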

    Book of Abstracts 15th International Symposium on Computer Methods in Biomechanics and Biomedical Engineering and 3rd Conference on Imaging and Visualization

    In this edition, the two events will run together as a single conference, highlighting the strong connection with the Taylor & Francis journals: Computer Methods in Biomechanics and Biomedical Engineering (John Middleton and Christopher Jacobs, Eds.) and Computer Methods in Biomechanics and Biomedical Engineering: Imaging and Visualization (João Manuel R.S. Tavares, Ed.). The conference has become a major international meeting on computational biomechanics, imaging and visualization. In this edition, the main program includes 212 presentations. In addition, sixteen renowned researchers will give plenary keynotes, addressing current challenges in computational biomechanics and biomedical imaging. In Lisbon, for the first time, a session dedicated to awarding the Best Paper in the CMBBE Journal will take place. We believe that CMBBE2018 will have a strong impact on the development of computational biomechanics and biomedical imaging and visualization, identifying emerging areas of research and promoting collaboration and networking between participants. This impact is evidenced by the well-known research groups, commercial companies and scientific organizations who continue to support and sponsor the CMBBE meeting series. In fact, the conference is enriched with five workshops on specific scientific topics and commercial software.

    Patient-specific simulation environment for surgical planning and preoperative rehearsal

    Surgical simulation is common practice in the fields of surgical education and training. Numerous surgical simulators are available from commercial and academic organisations for the generic modelling of surgical tasks. However, a simulation platform has yet to be found that fulfils the key requirements expected for patient-specific surgical simulation of soft tissue, with an effective translation into clinical practice. Patient-specific modelling is possible, but to date has been time-consuming, and consequently costly, because data preparation can be technically demanding. This motivated the research developed herein, which addresses the main challenges of biomechanical modelling for patient-specific surgical simulation. A novel implementation of soft tissue deformation and estimation of the patient-specific intraoperative environment is achieved using a position-based dynamics approach. This modelling approach overcomes the limitations of traditional physically-based approaches by providing a simulation for patient-specific models with visual and physical accuracy, stability and real-time interaction. As it is a geometrically-based method, a calibration of the simulation parameters is performed, and the simulation framework is successfully validated through experimental studies. The capabilities of the simulation platform are demonstrated by the integration of different surgical planning applications that are relevant in the context of kidney cancer surgery. The simulation of pneumoperitoneum facilitates trocar placement planning and intraoperative surgical navigation. The implementation of deformable ultrasound simulation can assist surgeons in improving their scanning technique and defining an optimal procedural strategy. Furthermore, the simulation framework has the potential to support the development and assessment of hypotheses that cannot be tested in vivo. Specifically, the evaluation of feedback modalities, as a response to user-model interaction, demonstrates improved performance and justifies the need to integrate a feedback framework in the robot-assisted surgical setting.
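
    As a point of reference for the modelling approach named above, the sketch below shows the generic position-based dynamics projection of a single distance constraint, following the standard formulation of Müller et al.; it is a textbook illustration with assumed names, not the simulation framework developed in the thesis.

```python
# Generic position-based dynamics (PBD) distance-constraint projection: particles are
# moved directly so their separation approaches the rest length, weighted by inverse
# masses. Illustrative only.
import numpy as np

def project_distance_constraint(p1, p2, w1, w2, rest_length, stiffness=1.0):
    """Return corrected positions for two particles joined by a distance constraint.

    p1, p2 : particle positions, shape (3,)
    w1, w2 : inverse masses (0 for a fixed particle)
    """
    delta = p2 - p1
    dist = np.linalg.norm(delta)
    if dist < 1e-9 or (w1 + w2) == 0.0:
        return p1, p2                       # degenerate or fully fixed pair: no correction
    correction = stiffness * (dist - rest_length) / (w1 + w2) * (delta / dist)
    return p1 + w1 * correction, p2 - w2 * correction
```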

    Estimating and understanding motion: from diagnostic to robotic surgery

    Estimating and understanding motion from an image sequence is a central topic in computer vision. The high interest in this topic stems from the fact that many events in our environment are dynamic. This makes motion estimation and understanding a natural component and key factor in a wide range of applications, including object recognition, 3D shape reconstruction, autonomous navigation and medical diagnosis. In particular, we focus on the medical domain, in which understanding the human body for clinical purposes requires retrieving the organs' complex motion patterns, which is in general a hard problem when using only image data. In this thesis, we cope with this problem by posing the question: how can we achieve a realistic motion estimation to offer a better clinical understanding? We focus this thesis on answering this question by using a variational formulation as a basis to understand one of the most complex motions in the human body, the heart motion, through three different applications: (i) cardiac motion estimation for diagnosis, (ii) force estimation and (iii) motion prediction, the latter two for robotic surgery. Firstly, we focus on a central topic in cardiac imaging, the estimation of cardiac motion. The main aim is to offer objective and understandable measures to physicians to help them in the diagnosis of cardiovascular diseases. We employ ultrafast ultrasound data and tools for imaging motion drawn from diverse areas such as low-rank analysis and variational deformation to perform a realistic cardiac motion estimation. The significance is that by taking low-rank data with carefully chosen penalization, synergies in this complex variational problem can be created. We demonstrate how our proposed solution deals with complex deformations through careful numerical experiments using realistic and simulated data. We then move from diagnosis to robotic surgery, where surgeons perform delicate procedures remotely through robotic manipulators without directly interacting with the patients. As a result, they lack force feedback, which is an important primary sense for increasing surgeon-patient transparency and avoiding injuries and high mental workload. To solve this problem, we follow the conservation principles of continuum mechanics, in which the change in shape of an elastic object is directly proportional to the force applied. Thus, we create a variational framework to acquire the deformation that the tissues undergo due to an applied force. This information is then used in a learning system to find the nonlinear relationship between the given data and the applied force. We carried out experiments with in-vivo and ex-vivo data and combined statistical, graphical and perceptual analyses to demonstrate the strength of our solution. Finally, we explore robotic cardiac surgery, which allows carrying out complex procedures including off-pump coronary artery bypass grafting (OPCABG). This procedure avoids the complications associated with cardiopulmonary bypass (CPB), since the heart is not arrested while the surgery is performed on a beating heart. Thus, surgeons have to deal with a dynamic target that compromises their dexterity and the surgery's precision. To compensate for the heart motion, we propose a solution composed of three elements: an energy function to estimate the 3D heart motion, a specular highlight detection strategy and a prediction approach for increasing the robustness of the solution. We evaluate our solution using phantom and realistic datasets. We conclude the thesis by reporting our findings on these three applications and highlighting the dependency between motion estimation and motion understanding in any dynamic event, particularly in clinical scenarios.
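
    For context on the class of methods used throughout the thesis, the sketch below minimizes a classical variational motion energy (a brightness-constancy data term plus quadratic smoothness, in the Horn-Schunck style) between two frames; it is a generic stand-in, without the low-rank penalization or 3D extensions described above, and all names are illustrative.

```python
# Classical variational optical flow (Horn-Schunck style): iteratively minimize a
# brightness-constancy data term plus a quadratic smoothness term. Generic sketch.
import numpy as np
from scipy.ndimage import uniform_filter

def horn_schunck(frame1, frame2, alpha=10.0, iters=100):
    """Estimate a dense 2D motion field (u, v) between two grayscale frames."""
    f1 = frame1.astype(np.float64)
    f2 = frame2.astype(np.float64)
    Iy, Ix = np.gradient(f1)          # spatial image gradients
    It = f2 - f1                      # temporal derivative
    u = np.zeros_like(f1)
    v = np.zeros_like(f1)
    for _ in range(iters):
        u_avg = uniform_filter(u, size=3)   # local averages enforce smoothness
        v_avg = uniform_filter(v, size=3)
        common = (Ix * u_avg + Iy * v_avg + It) / (alpha**2 + Ix**2 + Iy**2)
        u = u_avg - Ix * common
        v = v_avg - Iy * common
    return u, v
```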

    Fluoroscopic Navigation for Robot-Assisted Orthopedic Surgery

    Robot-assisted orthopedic surgery has gained increasing attention due to its improved accuracy and stability in minimally-invasive interventions compared to a surgeon's manual operation. An effective navigation system is critical: it estimates the intra-operative tool-to-tissue pose relationship to guide the robotic surgical device. However, most existing navigation systems use fiducial markers, such as bone pin markers, to close the calibration loop, which requires a clear line of sight and is not ideal for patients. This dissertation presents fiducial-free, fluoroscopic image-based navigation pipelines for three robot-assisted orthopedic applications: femoroplasty, core decompression of the hip, and transforaminal lumbar epidural injections. We propose custom-designed, image intensity-based 2D/3D registration algorithms for pose estimation of bone anatomies, including the femur and spine, and for pose estimation of a rigid surgical tool and a flexible continuum manipulator. We performed system calibration and integration into a surgical robotic platform. We validated the navigation system's performance in comprehensive simulation and ex vivo cadaveric experiments. Our results suggest the feasibility of applying our proposed navigation methods in robot-assisted orthopedic applications. We also investigated machine learning approaches that can benefit medical image analysis, automate navigation components, or address registration challenges. We present a synthetic X-ray data generation pipeline called SyntheX, which enables large-scale machine learning model training. SyntheX was used to train feature detection tasks for the pelvis anatomy and the continuum manipulator, which were used to initialize the registration pipelines. Last but not least, we propose a projective spatial transformer module that learns a convex shape similarity function and extends the registration capture range. We believe that our image-based navigation solutions can benefit and inspire related orthopedic robot-assisted system designs and eventually be used in operating rooms to improve patient outcomes.
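
    To illustrate the shape of an intensity-based 2D/3D registration loop in general terms (not the dissertation's custom algorithms), the sketch below scores candidate poses by the normalized cross-correlation between a simulated projection and the fluoroscopic image; `render_drr` is a hypothetical projector supplied by the caller, and all names are assumptions for the example.

```python
# Generic intensity-based 2D/3D registration sketch: evaluate candidate poses by how
# well their simulated projections (DRRs) match the measured fluoroscopic image.
import numpy as np

def normalized_cross_correlation(a: np.ndarray, b: np.ndarray) -> float:
    """Similarity between two images after zero-mean, unit-variance normalization."""
    a = (a - a.mean()) / (a.std() + 1e-8)
    b = (b - b.mean()) / (b.std() + 1e-8)
    return float((a * b).mean())

def register_pose(volume, fluoro_image, candidate_poses, render_drr):
    """Return the candidate pose whose DRR best matches the fluoroscopic image.

    `render_drr(volume, pose)` is a hypothetical projection function provided by the
    caller; in practice, gradient-based optimization would replace this exhaustive
    scoring of a discrete pose set.
    """
    scores = [normalized_cross_correlation(render_drr(volume, pose), fluoro_image)
              for pose in candidate_poses]
    return candidate_poses[int(np.argmax(scores))]
```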

    Algorithm Selection in Multimodal Medical Image Registration

    Medical image acquisition technology has improved significantly over the last several decades, and clinicians now rely on medical images to diagnose illnesses, determine treatment protocols, and plan surgery. Researchers divide medical images into two types: functional and anatomical. Anatomical imaging, such as magnetic resonance imaging (MRI), computed tomography (CT), ultrasound, and other systems, enables medical personnel to examine the inside of the body with great accuracy, thereby avoiding the risks associated with exploratory surgery. Functional (or physiological) imaging systems include single-photon emission computed tomography (SPECT), positron emission tomography (PET), and other methods, which discover or evaluate variations in absorption, blood flow, metabolism, and regional chemical composition. Notably, one of these medical imaging modalities alone cannot usually supply doctors with adequate information; data obtained from several images of the same subject provide complementary information via a process called medical image registration. Image registration may be defined as the process of geometrically mapping one image's coordinate system to the coordinate system of another image acquired from a different perspective and with a different sensor. Registration plays a crucial role in medical image assessment because it helps clinicians observe the developing trend of a disease and take appropriate measures accordingly. Medical image registration (MIR) has several applications: radiation therapy, tumour diagnosis and recognition, template atlas application, and surgical guidance systems. There are two types of registration: manual registration and computer-based registration. In manual registration, the radiologist/physician completes all registration tasks interactively with visual feedback provided by the computer system, which can result in serious problems. For instance, investigations conducted by two experts are not identical, and registration correctness is determined by the user's assessment of the relationship between anatomical features. Furthermore, it may take a long time for the user to achieve proper alignment, and the outcomes vary according to the user. As a result, the outcomes of manual alignment are doubtful and unreliable. The second approach is computer-based multimodal medical image registration, which targets various medical images and an array of application types. Automatic registration of medical images matches standard recognized characteristics or voxels in pre- and intra-operative imaging without user input. Registration of multimodal images is the initial step in integrating data from several images. Automatic image registration has emerged to improve the reliability, robustness, accuracy, and processing time of manual image registration. While such registration algorithms offer advantages when applied to some medical images, their use with others is accompanied by disadvantages. No single registration technique can outperform all others across all input datasets, due to the variability of medical imaging and the diverse demands of applications; given the many available algorithms, the essential factor is therefore to choose the one that adapts best to the task at hand.
The Algorithm Selection Problem has emerged in numerous research disciplines, including medical diagnosis, machine learning, optimization, and computation, where selecting the most suitable strategy for a particular problem aims to minimize these issues. This study delivers a universal and practical framework for choosing a multimodal registration algorithm. Its primary goal is to introduce a generic structure for constructing a medical image registration system capable of selecting the best registration process from a range of registration algorithms for the various datasets used. Three strategies were constructed to examine the proposed framework. The first strategy is based on transforming the problem of algorithm selection into a classification problem. The second strategy investigates the effect of various parameters, such as optimization control points, on the optimal selection. The third strategy establishes a framework for choosing the optimal registration algorithm for a given dataset based on two primary criteria: registration algorithm applicability and performance measures. The approach relies on machine learning methods and artificial neural networks to determine which candidate is most promising. Several experiments and scenarios have been conducted, and the results reveal that the novel framework strategy achieves the best performance, with high accuracy, reliability, robustness and efficiency, and low processing time.
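
As a hedged illustration of the first strategy (casting algorithm selection as classification), the sketch below trains a classifier to predict, from per-image-pair features, which registration algorithm performed best in offline evaluation; here a random forest stands in for whichever classifier the study used, and the feature set, labels, and random data are placeholders rather than the study's actual pipeline.

```python
# Algorithm selection cast as classification: learn a mapping from image-pair features
# to the registration algorithm that performed best offline. Placeholder data only.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)
features = rng.random((200, 8))                 # e.g. modality codes, intensity statistics
best_algorithm = rng.integers(0, 3, size=200)   # 0: mutual information, 1: NCC, 2: learned model

selector = RandomForestClassifier(n_estimators=100, random_state=0)
selector.fit(features, best_algorithm)

# At run time, features of a new image pair are extracted and the predicted
# algorithm is dispatched to perform the actual registration.
new_pair_features = rng.random((1, 8))
chosen = int(selector.predict(new_pair_features)[0])
```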