710 research outputs found

    Validity and reliability of 3D marker-based scapular motion analysis: a systematic review

    Methods based on cutaneous markers are the most popular for recording three-dimensional scapular motion. Numerous methods have been evaluated, each showing different levels of accuracy and reliability. The aim of this review was to report the metrological properties of 3D scapular kinematic measurements using cutaneous markers and to make recommendations based on metrological evidence. A search was conducted in 5 databases using relevant keywords and inclusion/exclusion criteria. 19 articles were included and assessed using a quality score. Concurrent validity and reliability were analyzed for each method. Six different methods are reported in the literature, each based on different marker locations and post-collection computations. The acromion marker cluster (AMC) method, coupled with a calibration of the scapula with the arm at rest, is the most studied. Below 90–100° of humeral elevation, this method is accurate to about 5° during arm flexion and 7° during arm abduction compared to palpation (average of the 3 scapular rotation errors). Good to excellent within-session reliability and moderate to excellent between-session reliability have been reported. The AMC method can be improved using different or multiple calibrations. Other methods using different marker locations or more markers on the scapula blade have been described but are less accurate than AMC methods. Based on current metrological evidence, we recommend (1) the use of an AMC located at the junction of the scapular spine and the acromion, (2) the use of a single calibration at rest if the task does not reach 90° of humeral elevation, and (3) the use of a second calibration (at 90° or 120° of humeral elevation), or multiple calibrations, above 90° of humeral elevation.
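    The review's recommendations (2) and (3) amount to a simple decision rule on the task's maximum humeral elevation. The sketch below is only an illustrative restatement of that rule; the function name and return labels are our own, not from any reviewed study.

```python
# Illustrative restatement of the review's AMC calibration recommendations:
# a single at-rest calibration below 90 deg of humeral elevation, an added
# elevated calibration (or multiple calibrations) above it.

def recommended_calibrations(max_humeral_elevation_deg: float) -> list:
    """Return the scapular calibrations suggested for a given task."""
    if max_humeral_elevation_deg < 90:
        # A single static calibration with the arm at rest suffices.
        return ["rest"]
    # Above 90 deg, add a second calibration (at 90 or 120 deg of
    # elevation), or use multiple calibrations.
    return ["rest", "elevated (90 or 120 deg)"]
```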

    Robotic System Development for Precision MRI-Guided Needle-Based Interventions

    This dissertation describes the development of a methodology for implementing robotic systems for interventional procedures under intraoperative Magnetic Resonance Imaging (MRI) guidance. MRI is an ideal imaging modality for surgical guidance of diagnostic and therapeutic procedures, thanks to its ability to perform high-resolution, real-time, high soft-tissue-contrast imaging without ionizing radiation. However, the strong magnetic field, the sensitivity to radio-frequency signals, and the tightly confined scanner bore pose great challenges to developing robotic systems within the MRI environment. Discussed are potential solutions to engineering topics related to the development of MRI-compatible electro-mechanical systems and the modeling of steerable-needle interventions. A robotic framework is developed based on a modular design approach, supporting varying MRI-guided interventional procedures, with stereotactic neurosurgery and prostate cancer therapy as two driving exemplary applications. A piezoelectrically actuated electro-mechanical system is designed to provide precise needle placement in the bore of the scanner under interactive MRI guidance, while overcoming the challenges inherent to MRI-guided procedures. This work presents the development of the robotic system in the aspects of requirements definition, clinical workflow development, mechanism optimization, control system design, and experimental evaluation. A steerable needle is beneficial for interventional procedures thanks to its capability to produce curved paths, avoiding anatomical obstacles or compensating for needle placement errors. Two kinds of steerable needles are discussed: the asymmetric-tip needle and the concentric-tube cannula. A novel Gaussian-based ContinUous Rotation and Variable-curvature (CURV) model is proposed to steer the asymmetric-tip needle, enabling variable curvature of the needle trajectory with independent control of needle rotation and insertion. The concentric-tube cannula, in turn, is suitable for clinical applications where a curved trajectory is needed without relying on tissue-interaction forces. This dissertation addresses fundamental challenges in developing and deploying MRI-compatible robotic systems and establishes enabling technologies for MRI-guided needle-based interventions. This study applied and evaluated these techniques in a system for prostate biopsy that is currently in clinical trials, developed a neurosurgery robot prototype for interstitial thermal therapy of brain cancer under MRI guidance, and demonstrated needle steering using both asymmetric-tip and pre-bent concentric-tube cannula approaches on a testbed.
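    The abstract does not specify the CURV model itself, so the following is only a hedged sketch of the general idea behind rotation-based variable-curvature steering of a bevel-tip needle: if the bevel's dwell angle during continuous rotation is spread as a Gaussian with standard deviation σ around a desired direction, the averaged lateral deflection shrinks by E[cos θ] = exp(−σ²/2), so the effective curvature can be modulated continuously between κ_max and zero. The formula's use here and the function name are assumptions for illustration, not the dissertation's implementation.

```python
import math

def effective_curvature(kappa_max: float, sigma_rad: float) -> float:
    """Effective curvature of a bevel-tip needle whose bevel dwell angle is
    Gaussian-distributed (std. dev. sigma_rad) around the target direction.
    sigma_rad = 0 gives maximum curvature; large sigma_rad approximates
    uniform spinning, which drives curvature toward zero."""
    return kappa_max * math.exp(-sigma_rad ** 2 / 2.0)
```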

    Vision-based and marker-less surgical tool detection and tracking: a review of the literature

    In recent years, tremendous progress has been made in surgical practice, for example with Minimally Invasive Surgery (MIS). To overcome the challenges of remote eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Having real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature dealing with vision-based and marker-less surgical tool detection. This paper includes three primary contributions: (1) identification and analysis of data-sets used for developing and testing detection algorithms, (2) in-depth comparison of surgical tool detection methods, from the feature extraction process to the model learning strategy, highlighting existing shortcomings, and (3) analysis of the validation techniques employed to obtain detection performance results and establish comparisons between surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords “surgical tool detection”, “surgical tool tracking”, “surgical instrument detection” and “surgical instrument tracking”, limiting results to the years 2000–2015. Our study shows that, despite significant progress over the years, the lack of established surgical tool data-sets and of a reference format for performance assessment and method ranking is preventing faster improvement.
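    Detection performance in reviews like this is commonly scored by matching predicted bounding boxes to ground truth via intersection-over-union (IoU) before computing precision and recall. The sketch below is a generic version of that validation step, not code from any reviewed method; box format and thresholds are assumptions.

```python
# Generic IoU-based matching of predicted boxes to ground-truth boxes,
# as commonly used to validate tool detectors. Boxes are (x1, y1, x2, y2).

def iou(a, b):
    """Intersection-over-union of two axis-aligned boxes."""
    ix1, iy1 = max(a[0], b[0]), max(a[1], b[1])
    ix2, iy2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0, ix2 - ix1) * max(0, iy2 - iy1)

    def area(r):
        return (r[2] - r[0]) * (r[3] - r[1])

    union = area(a) + area(b) - inter
    return inter / union if union else 0.0

def match_detections(preds, gts, thresh=0.5):
    """Greedily match predictions to ground truth; return (TP, FP, FN)."""
    unmatched = list(gts)
    tp = 0
    for p in preds:
        best = max(unmatched, key=lambda g: iou(p, g), default=None)
        if best is not None and iou(p, best) >= thresh:
            unmatched.remove(best)
            tp += 1
    fp = len(preds) - tp
    fn = len(unmatched)
    return tp, fp, fn
```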

    New Technology and Techniques for Needle-Based Magnetic Resonance Image-Guided Prostate Focal Therapy

    The most common diagnosis of prostate cancer is that of localized disease, and unfortunately the optimal type of treatment for these men is not yet certain. Magnetic resonance image (MRI)-guided focal laser ablation (FLA) therapy is a promising potential treatment option for select men with localized prostate cancer, and may result in fewer side effects than whole-gland therapies, while still achieving oncologic control. The objective of this thesis was to develop methods of accurately guiding needles to the prostate within the bore of a clinical MRI scanner for MRI-guided FLA therapy. To achieve this goal, a mechatronic needle guidance system was developed. The system enables precise targeting of prostate tumours through angulated trajectories and insertion of needles with the patient in the bore of a clinical MRI scanner. After confirming sufficient accuracy in phantoms, and good MRI-compatibility, the system was used to guide needles for MRI-guided FLA therapy in eight patients. Results from this case series demonstrated an improvement in needle guidance time and ease of needle delivery compared to conventional approaches. Methods of more reliable treatment planning were sought, leading to the development of a systematic treatment planning method, and Monte Carlo simulations of needle placement uncertainty. The result was an estimate of the maximum size of focal target that can be confidently ablated using the mechatronic needle guidance system, leading to better guidelines for patient eligibility. These results also quantified the benefit that could be gained with improved techniques for needle guidance
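    The Monte Carlo analysis of needle placement uncertainty described above can be sketched as follows: sample tip placement errors, check whether the ablation zone still covers the target, and search for the largest target radius that is covered with high confidence. The isotropic Gaussian error model, the spherical ablation zone, and every numeric default are illustrative assumptions, not the thesis's measured values.

```python
import math
import random

def coverage_probability(target_radius_mm: float, ablation_radius_mm: float,
                         placement_sigma_mm: float, n: int = 5000,
                         seed: int = 0) -> float:
    """Fraction of simulated placements in which an ablation sphere centred
    at the (randomly perturbed) needle tip still covers the whole target."""
    rng = random.Random(seed)
    hits = 0
    for _ in range(n):
        # Magnitude of a 3D isotropic Gaussian tip placement error.
        err = math.sqrt(sum(rng.gauss(0.0, placement_sigma_mm) ** 2
                            for _ in range(3)))
        # The target is fully ablated iff offset + target radius fits
        # inside the ablation radius.
        if err + target_radius_mm <= ablation_radius_mm:
            hits += 1
    return hits / n

def max_confident_target_radius(ablation_radius_mm: float, sigma_mm: float,
                                confidence: float = 0.95,
                                step: float = 0.1) -> float:
    """Largest target radius still fully ablated at the given confidence."""
    r = 0.0
    while coverage_probability(r + step, ablation_radius_mm, sigma_mm) >= confidence:
        r += step
    return r
```

This is the shape of the estimate the thesis uses to derive patient-eligibility guidelines: better needle guidance (smaller sigma) directly enlarges the confidently ablatable target.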

    Medical SLAM in an autonomous robotic system

    One of the main challenges for computer-assisted surgery (CAS) is to determine the intra-operative morphology and motion of soft tissues. This information is a prerequisite for the registration of multi-modal patient-specific data, for enhancing the surgeon’s navigation capabilities by observing beyond exposed tissue surfaces, and for providing intelligent control of robotic-assisted instruments. In minimally invasive surgery (MIS), optical techniques are an increasingly attractive approach for in vivo 3D reconstruction of the soft-tissue surface geometry. This thesis addresses the ambitious goal of achieving surgical autonomy through the study of the anatomical environment, initially by examining the available technology and what is needed to analyse the scene: vision sensors. The first part of this thesis presents a novel endoscope for autonomous surgical task execution, which combines a standard stereo camera with a depth sensor. This solution introduces several key advantages, such as the possibility of reconstructing the 3D surface at a greater distance than traditional endoscopes. The problem of hand-eye calibration is then tackled, uniting the vision system and the robot in a single reference system and increasing the accuracy of the surgical work plan. The second part of the thesis addresses the problem of 3D reconstruction and the algorithms currently in use. In MIS, simultaneous localization and mapping (SLAM) can be used to localize the pose of the endoscopic camera and build a 3D model of the tissue surface. Another key element for MIS is to have real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy. Starting from the ORB-SLAM algorithm, we modified the architecture to make it usable in an anatomical environment by registering the pre-operative information of the intervention to the map obtained from SLAM.
    Once the SLAM algorithm was proven usable in an anatomical environment, it was improved by adding semantic segmentation to distinguish dynamic features from static ones. All the results in this thesis are validated on training setups that mimic some of the challenges of real surgery, and on setups that simulate the human body, within the Autonomous Robotic Surgery (ARS) and Smart Autonomous Robotic Assistant Surgeon (SARAS) projects.
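    The dynamic-feature filtering that semantic segmentation enables can be sketched very simply: keypoints that fall on pixels labelled as dynamic (e.g. instruments) are discarded before SLAM tracking, so only static anatomy constrains the map. The mask layout and function name below are assumptions for illustration, not the thesis's code.

```python
# Discard keypoints lying on dynamic objects before SLAM tracking.
# dynamic_mask is a 2D boolean grid (row = y, column = x), True where the
# pixel belongs to a dynamic object such as a surgical instrument.

def filter_static_keypoints(keypoints, dynamic_mask):
    """keypoints: iterable of (x, y) pixel coordinates.
    Returns only the keypoints that fall on static anatomy."""
    static = []
    for x, y in keypoints:
        if not dynamic_mask[int(y)][int(x)]:
            static.append((x, y))
    return static
```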


    Vision-based retargeting for endoscopic navigation

    Endoscopy is a standard procedure for visualising the human gastrointestinal tract. With the advances in biophotonics, imaging techniques such as narrow band imaging, confocal laser endomicroscopy, and optical coherence tomography can be combined with normal endoscopy to assist the early diagnosis of diseases such as cancer. In the past decade, optical biopsy has emerged as an effective tool for tissue analysis, allowing in vivo and in situ assessment of pathological sites with real-time feature-enhanced microscopic images. However, the non-invasive nature of optical biopsy leads to an intra-examination retargeting problem, which is associated with the difficulty of re-localising a biopsied site consistently throughout the whole examination. In addition to intra-examination retargeting, retargeting of a pathological site is even more challenging across examinations, due to tissue deformation and changing tissue morphologies and appearances. The purpose of this thesis is to address both the intra- and inter-examination retargeting problems associated with optical biopsy. We propose a novel vision-based framework for intra-examination retargeting. The proposed framework is based on combining visual tracking and detection with online learning of the appearance of the biopsied site. Furthermore, a novel cascaded detection approach based on random forests and structured support vector machines is developed to achieve efficient retargeting. To cater for reliable inter-examination retargeting, the solution provided in this thesis is achieved by solving an image retrieval problem, for which an online scene association approach is proposed to summarise an endoscopic video collected in the first examination into distinctive scenes. A hashing-based approach is then used to learn the intrinsic representations of these scenes, such that retargeting can be achieved in subsequent examinations by retrieving the relevant images using the learnt representations.
    For performance evaluation of the proposed frameworks, extensive phantom, ex vivo and in vivo experiments have been conducted, with results demonstrating the robustness and potential clinical value of the methods proposed.
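    At query time, hashing-based retrieval of the kind described reduces to ranking stored scenes by Hamming distance between binary codes. The toy sketch below uses short integer codes standing in for learnt scene hashes; the scene identifiers are invented for the example.

```python
# Toy Hamming-distance retrieval over binary scene codes, illustrating the
# query step of a hashing-based inter-examination retargeting scheme.

def hamming(a: int, b: int) -> int:
    """Number of differing bits between two binary codes."""
    return bin(a ^ b).count("1")

def retrieve(query_code: int, index: dict, k: int = 1) -> list:
    """Return the k scene ids whose codes are closest to the query."""
    ranked = sorted(index, key=lambda sid: hamming(query_code, index[sid]))
    return ranked[:k]
```

In practice the learnt representations are much longer binary codes, but the retrieval logic is the same: a linear scan (or multi-index lookup) in Hamming space.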

    Augmented reality (AR) for surgical robotic and autonomous systems: State of the art, challenges, and solutions

    Despite the substantial progress achieved in the development and integration of augmented reality (AR) in surgical robotic and autonomous systems (RAS), the center of focus in most devices remains on improving end-effector dexterity and precision, as well as on improving access for minimally invasive surgeries. This paper aims to provide a systematic review of different types of state-of-the-art surgical robotic platforms while identifying areas for technological improvement. We associate specific control features, such as haptic feedback, sensory stimuli, and human-robot collaboration, with AR technology to perform complex surgical interventions with increased user perception of the augmented world. Researchers in the field have long faced innumerable issues with low accuracy in tool placement around complex trajectories, pose estimation, and difficulty in depth perception during two-dimensional medical imaging. A number of robots described in this review, such as Novarad and SpineAssist, are analyzed in terms of their hardware features, computer vision systems (such as deep learning algorithms), and the clinical relevance of the literature. We outline the shortcomings in current optimization algorithms for surgical robots (such as YOLO and LSTM) while providing mitigating solutions for internal tool-to-organ collision detection and image reconstruction. The accuracy of results in robot end-effector collisions and reduced occlusion remains promising within the scope of our research, validating the propositions made for the surgical clearance of ever-expanding AR technology in the future.

    Towards a framework for multi-class statistical modelling of shape, intensity, and kinematics in medical images

    Statistical modelling has become a ubiquitous tool for analysing the morphological variation of bone structures in medical images. For radiological images, the shape, the relative pose between bone structures, and the intensity distribution are key features, often modelled separately. A wide range of research has reported methods that incorporate these features as priors for machine learning purposes. Statistical shape, appearance (intensity profile in images) and pose models are popular priors for explaining variability across a sample population of rigid structures. However, a principled and robust way to combine shape, pose and intensity features has been elusive, for four main reasons: 1) heterogeneity of the data (linear and non-linear natural variation across features); 2) sub-optimal representation of three-dimensional Euclidean motion; 3) artificial discretization of the models; and 4) lack of an efficient transfer-learning process to project observations into the latent space. This work proposes a novel statistical modelling framework for multiple bone structures. The framework provides a latent space embedding shape, pose and intensity in a continuous domain, allowing new approaches to skeletal joint analysis from medical images. First, a robust registration method for multi-volumetric shapes is described. Both sampling-based and parametric registration algorithms are proposed, which allow the establishment of dense correspondence across volumetric shapes (such as tetrahedral meshes) while preserving the spatial relationship between them. Next, the framework for developing statistical shape-kinematics models from in-correspondence multi-volumetric shapes, embedding the image intensity distribution, is presented. The framework incorporates principal geodesic analysis and a non-linear metric for modelling the spatial orientation of the structures.
    More importantly, because all the features lie in a joint statistical space and in a continuous domain, the framework permits on-demand marginalisation to a region or feature of interest without training separate models. Thereafter, automated prediction of the structures in images is facilitated by a model-fitting method leveraging the models as priors in a Markov chain Monte Carlo approach. The framework is validated using controlled experimental data, and the results demonstrate superior performance in comparison with state-of-the-art methods. Finally, the application of the framework to analysing computed tomography images is presented. The analyses include estimation of the shape, kinematics and intensity profiles of bone structures in the shoulder and hip joints. For both of these datasets, the framework is demonstrated for segmentation, registration and reconstruction, including the recovery of patient-specific intensity profiles. The presented framework realises a new paradigm in modelling multi-object shape structures, allowing for probabilistic modelling not only of shape, but also of relative pose and intensity, as well as the correlations that exist between them. Future work will aim to optimise the framework for clinical use in medical image analysis.
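    For a jointly Gaussian latent model, the "on-demand marginalisation" described above is a closed-form operation: keep the entries of the mean and the sub-block of the covariance corresponding to the feature block of interest (e.g. shape only, pose only). The minimal sketch below shows just that operation; the index convention and function name are assumptions for illustration.

```python
# Marginalise a joint Gaussian (mean vector, covariance as nested lists)
# onto a subset of indices -- e.g. keep only the shape block of a joint
# shape/pose/intensity model, without retraining anything.

def marginalise(mean, cov, keep):
    """Return (sub-mean, sub-covariance) over the indices in `keep`."""
    m = [mean[i] for i in keep]
    c = [[cov[i][j] for j in keep] for i in keep]
    return m, c
```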