135 research outputs found

    Concurrent Segmentation and Localization for Tracking of Surgical Instruments

    Real-time instrument tracking is a crucial requirement for various computer-assisted interventions. To overcome problems such as specular reflections and motion blur, we propose a novel method that takes advantage of the interdependency between localization and segmentation of the surgical tool. In particular, we reformulate 2D instrument pose estimation as heatmap regression and thereby enable a concurrent, robust and near real-time solution to both tasks via deep learning. As demonstrated by our experimental results, this modeling leads to significantly better performance than directly regressing the tool position and allows our method to outperform the state of the art on a Retinal Microsurgery benchmark and the MICCAI EndoVis Challenge 2015. Comment: I. Laina and N. Rieke contributed equally to this work. Accepted to MICCAI 201
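    The heatmap formulation described above can be illustrated with a toy encode/decode round trip in NumPy (the image size and Gaussian width below are illustrative choices, not values from the paper):

    ```python
    import numpy as np

    def gaussian_heatmap(shape, center, sigma=2.0):
        """Render a 2D Gaussian centred on a landmark as a regression target."""
        h, w = shape
        ys, xs = np.mgrid[0:h, 0:w]
        cy, cx = center
        return np.exp(-((ys - cy) ** 2 + (xs - cx) ** 2) / (2.0 * sigma ** 2))

    def heatmap_to_point(heatmap):
        """Recover the 2D landmark as the argmax of the predicted heatmap."""
        return np.unravel_index(np.argmax(heatmap), heatmap.shape)  # (row, col)

    # A network would be trained to regress such heatmaps (e.g. with an L2 loss);
    # here we only demonstrate that the target encoding decodes back exactly.
    target = gaussian_heatmap((64, 64), (20, 33))
    assert heatmap_to_point(target) == (20, 33)
    ```

    Regressing a spatial heatmap rather than raw coordinates keeps the output aligned with the image grid, which is one intuition for the improved robustness reported above.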

    Vision-based and marker-less surgical tool detection and tracking: a review of the literature

    In recent years, tremendous progress has been made in surgical practice, most notably with Minimally Invasive Surgery (MIS). To overcome challenges coming from deported eye-to-hand manipulation, robotic and computer-assisted systems have been developed. Having real-time knowledge of the pose of surgical tools with respect to the surgical camera and underlying anatomy is a key ingredient for such systems. In this paper, we present a review of the literature dealing with vision-based and marker-less surgical tool detection. This paper includes three primary contributions: (1) identification and analysis of data-sets used for developing and testing detection algorithms, (2) an in-depth comparison of surgical tool detection methods, from the feature extraction process to the model learning strategy, highlighting existing shortcomings, and (3) analysis of validation techniques employed to obtain detection performance results and establish comparisons between surgical tool detectors. The papers included in the review were selected through PubMed and Google Scholar searches using the keywords: “surgical tool detection”, “surgical tool tracking”, “surgical instrument detection” and “surgical instrument tracking”, limiting results to the years 2000 to 2015. Our study shows that, despite significant progress over the years, the lack of established surgical tool data-sets and of a reference format for performance assessment and method ranking is preventing faster improvement.

    Body Wall Force Sensor for Simulated Minimally Invasive Surgery: Application to Fetal Surgery

    Surgical interventions are increasingly executed minimally invasively. Surgeons insert instruments through tiny incisions in the body and pivot slender instruments to treat organs or tissue below the surface. While a blessing for patients, this approach demands extra attention from surgeons, who must overcome the fulcrum effect, cope with reduced haptic feedback and deal with lost hand-eye coordination. The mental load makes it difficult to pay sufficient attention to the forces that are exerted on the body wall. In delicate procedures such as fetal surgery, this might be problematic, as irreparable damage could cause premature delivery. As a first attempt to quantify the interaction forces applied to the patient's body wall, a novel 6-degrees-of-freedom force sensor was developed for an ex-vivo set-up. The performance of the sensor was characterised. User experiments were conducted by 3 clinicians on a set-up simulating a fetal surgical intervention. During these simulated interventions, the interaction forces were recorded and analysed when a normal instrument was employed. These results were compared with a session where a flexible instrument under haptic guidance was used. The conducted experiments yielded interesting insights into the interaction forces and stresses that develop during such difficult surgical interventions. The results also indicated that haptic guidance schemes and the use of flexible instruments rather than rigid ones could have a significant impact on the stresses that occur at the body wall.

    Synthetic and Real Inputs for Tool Segmentation in Robotic Surgery

    Semantic tool segmentation in surgical videos is important for surgical scene understanding and computer-assisted interventions, as well as for the development of robotic automation. The problem is challenging because different illumination conditions, bleeding, smoke and occlusions can reduce algorithm robustness. At present, labelled data for training deep learning models is still lacking for semantic surgical instrument segmentation, and in this paper we show that it may be possible to use robot kinematic data coupled with laparoscopic images to alleviate the labelling problem. We propose a new deep-learning-based model for parallel processing of both laparoscopic and simulation images for robust segmentation of surgical tools. Due to the lack of laparoscopic frames annotated with both segmentation ground truth and kinematic information, a new custom dataset was generated using the da Vinci Research Kit (dVRK) and is made available.

    Simultaneous recognition and pose estimation of instruments in minimally invasive surgery

    Detection of surgical instruments plays a key role in ensuring patient safety in minimally invasive surgery. In this paper, we present a novel method for 2D vision-based recognition and pose estimation of surgical instruments that generalizes to different surgical applications. At its core, we propose a novel scene model in order to simultaneously recognize multiple instruments as well as their parts. We use a Convolutional Neural Network architecture to embody our model and show that the cross-entropy loss is well suited to optimize its parameters, which can be trained in an end-to-end fashion. An additional advantage of our approach is that instrument detection at test time is achieved while avoiding the need for scale-dependent sliding window evaluation. This allows our approach to be relatively parameter-free at test time and shows good performance for both instrument detection and tracking. We show that our approach surpasses state-of-the-art results on in-vivo retinal microsurgery image data, as well as ex-vivo laparoscopic sequences.
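    The cross-entropy objective mentioned above amounts to the negative log-likelihood of the target class under a softmax; a minimal NumPy sketch (the logits and class indices are hypothetical, not taken from the paper):

    ```python
    import numpy as np

    def softmax(logits):
        """Numerically stable softmax over a 1D array of logits."""
        z = logits - logits.max()
        e = np.exp(z)
        return e / e.sum()

    def cross_entropy(logits, target_class):
        """Negative log-likelihood of the target class under the softmax."""
        return -np.log(softmax(logits)[target_class])

    # With uniform logits over 3 classes, the loss equals log(3).
    loss = cross_entropy(np.zeros(3), 0)
    ```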

    Robustness of circadian clocks to daylight fluctuations: hints from the picoeucaryote Ostreococcus tauri

    The development of systemic approaches in biology has put emphasis on identifying genetic modules whose behavior can be modeled accurately so as to gain insight into their structure and function. However, most gene circuits in a cell are under the control of external signals, and thus quantitative agreement between experimental data and a mathematical model is difficult. Circadian biology has been one notable exception: quantitative models of the internal clock that orchestrates biological processes over the 24-hour diurnal cycle have been constructed for a few organisms, from cyanobacteria to plants and mammals. In most cases, a complex architecture with interlocked feedback loops has been evidenced. Here we present first modeling results for the circadian clock of the green unicellular alga Ostreococcus tauri. Two plant-like clock genes have been shown to play a central role in the Ostreococcus clock. We find that their expression time profiles can be accurately reproduced by a minimal model of a two-gene transcriptional feedback loop. Remarkably, the best fit to data recorded under light/dark alternation is obtained when assuming that the oscillator is not coupled to the diurnal cycle. This suggests that coupling to light is confined to specific time intervals and has no dynamical effect when the oscillator is entrained by the diurnal cycle. This intriguing property may reflect a strategy to minimize the impact of fluctuations in daylight intensity on the core circadian oscillator, a type of perturbation that has been rarely considered when assessing the robustness of circadian clocks.
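    A minimal two-gene transcriptional feedback loop of the kind described above can be sketched with forward-Euler integration of Hill-type kinetics; all parameter values below are illustrative, not the fitted Ostreococcus values:

    ```python
    import numpy as np

    def simulate_clock(t_end=200.0, dt=0.01):
        """Forward-Euler integration of a minimal two-gene feedback loop:
        gene x activates gene y, and y represses x (Hill kinetics).
        Parameters are illustrative placeholders, not fitted data."""
        v1, v2, K, n, d1, d2 = 1.0, 1.0, 0.5, 4, 0.3, 0.3
        x, y = 0.1, 0.1
        traj = []
        for _ in range(int(t_end / dt)):
            dx = v1 / (1.0 + (y / K) ** n) - d1 * x          # repression of x by y
            dy = v2 * x ** n / (K ** n + x ** n) - d2 * y    # activation of y by x
            x, y = x + dt * dx, y + dt * dy
            traj.append((x, y))
        return np.array(traj)
    ```

    Fitting such a model to expression time profiles would then amount to adjusting the production, degradation and Hill parameters; the point here is only the structure of the loop.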

    Glioblastoma surgery imaging—reporting and data system: Standardized reporting of tumor volume, location, and resectability based on automated segmentations

    Treatment decisions for patients with presumed glioblastoma are based on tumor characteristics available from a preoperative MR scan. Tumor characteristics, including volume, location, and resectability, are often estimated or manually delineated. This process is time consuming and subjective. Hence, comparisons across cohorts, trials, or registries are subject to assessment bias. In this study, we propose a standardized Glioblastoma Surgery Imaging Reporting and Data System (GSI-RADS) based on an automated method of tumor segmentation that provides standard reports on tumor features that are potentially relevant for glioblastoma surgery. As clinical validation, we determine the agreement in extracted tumor features between the automated method and the current standard of manual segmentations from routine clinical MR scans before treatment. In an observational consecutive cohort of 1596 adult patients with a first-time surgery of a glioblastoma from 13 institutions, we segmented gadolinium-enhanced tumor parts both by a human rater and by an automated algorithm. Tumor features were extracted from segmentations of both methods and compared to assess differences, concordance, and equivalence. The laterality, contralateral infiltration, and the laterality indices were in excellent agreement. The native and normalized tumor volumes had excellent agreement, consistency, and equivalence. Multifocality, but not the number of foci, had good agreement and equivalence. The location profiles of cortical and subcortical structures were in excellent agreement. The expected residual tumor volumes and resectability indices had excellent agreement, consistency, and equivalence. Tumor probability maps were in good agreement. In conclusion, automated segmentations are in excellent agreement with manual segmentations and practically equivalent regarding tumor features that are potentially relevant for neurosurgical purposes. Standard GSI-RADS reports can be generated by open-access software.
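    Agreement between manual and automated volume measurements of this kind is often summarized with Lin's concordance correlation coefficient; a small NumPy sketch (the metric choice here is illustrative, and the study's exact statistics may differ):

    ```python
    import numpy as np

    def concordance_ccc(a, b):
        """Lin's concordance correlation coefficient between two raters'
        measurements (e.g. manual vs automated tumor volumes).
        Returns 1 for perfect agreement, -1 for perfect reversal."""
        a, b = np.asarray(a, float), np.asarray(b, float)
        ma, mb = a.mean(), b.mean()
        va, vb = a.var(), b.var()          # population variances
        cov = ((a - ma) * (b - mb)).mean()
        return 2.0 * cov / (va + vb + (ma - mb) ** 2)
    ```

    Unlike Pearson correlation, this penalizes systematic offsets between the two raters, which is why it is a common choice for method-agreement studies.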

    Glioblastoma Surgery Imaging-Reporting and Data System: Validation and Performance of the Automated Segmentation Task

    For patients with presumed glioblastoma, essential tumor characteristics are determined from preoperative MR images to optimize the treatment strategy. This procedure is time-consuming and subjective if performed by crude eyeballing or manually. The standardized GSI-RADS aims to provide neurosurgeons with automatic tumor segmentations to extract tumor features rapidly and objectively. In this study, we improved the automatic tumor segmentation and compared its agreement with manual raters, described the technical details of the different components of GSI-RADS, and determined their speed. Two recent neural network architectures were considered for the segmentation task: nnU-Net and AGU-Net. Two preprocessing schemes were introduced to investigate the tradeoff between performance and processing speed. A summarized description of the tumor feature extraction and standardized reporting process is included. The trained architectures for automatic segmentation and the code for computing the standardized report are distributed as open-source and as open-access software. Validation studies were performed on a dataset of 1594 gadolinium-enhanced T1-weighted MRI volumes from 13 hospitals and 293 T1-weighted MRI volumes from the BraTS challenge. The glioblastoma tumor core segmentation reached a Dice score slightly below 90%, a patientwise F1-score close to 99%, and a 95th percentile Hausdorff distance slightly below 4.0 mm on average with either architecture and the heavy preprocessing scheme. A patient MRI volume can be segmented in less than one minute, and a standardized report can be generated in up to five minutes. The proposed GSI-RADS software showed robust performance on a large collection of MRI volumes from various hospitals and generated results within a reasonable runtime.
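    The reported metrics, Dice overlap and the 95th-percentile Hausdorff distance, can be sketched for small binary masks with brute-force NumPy implementations (sufficient for toy masks; real 3D volumes would need a more efficient distance transform):

    ```python
    import numpy as np

    def dice_score(pred, gt):
        """Dice overlap between two binary masks."""
        pred, gt = pred.astype(bool), gt.astype(bool)
        inter = np.logical_and(pred, gt).sum()
        return 2.0 * inter / (pred.sum() + gt.sum())

    def hd95(pred, gt):
        """95th-percentile symmetric Hausdorff distance between the
        foreground voxels of two binary masks (brute force, small masks only)."""
        p = np.argwhere(pred)
        g = np.argwhere(gt)
        d = np.sqrt(((p[:, None, :] - g[None, :, :]) ** 2).sum(-1))
        forward = d.min(axis=1)   # each pred voxel -> nearest gt voxel
        backward = d.min(axis=0)  # each gt voxel -> nearest pred voxel
        return np.percentile(np.concatenate([forward, backward]), 95)
    ```

    Taking the 95th percentile rather than the maximum makes the boundary-distance metric robust to a few outlier voxels, which is why it is preferred over the plain Hausdorff distance in segmentation benchmarks.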

    Segmentation of glioblastomas in early post-operative multi-modal MRI with deep neural networks

    Extent of resection after surgery is one of the main prognostic factors for patients diagnosed with glioblastoma. To estimate it accurately, segmentation and classification of residual tumor from post-operative MR images is essential. The current standard method for estimating it is subject to high inter- and intra-rater variability, and an automated method for segmentation of residual tumor in early post-operative MRI could lead to a more accurate estimation of extent of resection. In this study, two state-of-the-art neural network architectures for pre-operative segmentation were trained for the task. The models were extensively validated on a multicenter dataset with nearly 1000 patients from 12 hospitals in Europe and the United States. The best segmentation performance achieved was a 61% Dice score, and the best classification performance was about 80% balanced accuracy, with a demonstrated ability to generalize across hospitals. In addition, the segmentation performance of the best models was on par with human expert raters. The predicted segmentations can be used to accurately classify the patients into those with residual tumor and those with gross total resection.
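    The final classification step, labelling a patient as residual tumor versus gross total resection from the predicted segmentation, can be sketched as volume thresholding followed by a balanced-accuracy evaluation; the threshold value below is hypothetical, not the one used in the study:

    ```python
    import numpy as np

    def classify_residual(seg_volumes_ml, threshold_ml=0.175):
        """Label each patient as residual tumor (True) or gross total
        resection (False) by thresholding the segmented residual volume.
        The threshold is an illustrative placeholder."""
        return np.asarray(seg_volumes_ml) > threshold_ml

    def balanced_accuracy(y_true, y_pred):
        """Mean of sensitivity and specificity, robust to class imbalance."""
        y_true, y_pred = np.asarray(y_true, bool), np.asarray(y_pred, bool)
        sens = (y_pred & y_true).sum() / y_true.sum()
        spec = (~y_pred & ~y_true).sum() / (~y_true).sum()
        return 0.5 * (sens + spec)
    ```

    Balanced accuracy is a natural choice here because gross total resections and residual tumors are rarely equally frequent in a surgical cohort.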