25 research outputs found

    EasyLabels: weak labels for scene segmentation in laparoscopic videos

    PURPOSE: We present a different approach for annotating laparoscopic images for segmentation in a weak fashion and experimentally show that its accuracy, when trained with partial cross-entropy, is close to that obtained with fully supervised approaches. METHODS: We propose an approach that relies on weak annotations provided as stripes over the different objects in the image and on partial cross-entropy as the loss function of a fully convolutional neural network to obtain a dense pixel-level prediction map. RESULTS: We validate our method on three different datasets, providing qualitative results for all of them and quantitative results for two of them. The experiments show that our approach obtains at least [Formula: see text] of the accuracy achieved with fully supervised methods on all the tested datasets, while requiring [Formula: see text][Formula: see text] less time to create the annotations compared to full supervision. CONCLUSIONS: With this work, we demonstrate that laparoscopic data can be segmented using very little annotated data while maintaining levels of accuracy comparable to those obtained with full supervision.
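    As a rough illustration of the loss described above, the following PyTorch sketch computes a partial cross-entropy by masking out unlabelled pixels; the ignore value 255 and the tensor shapes are assumptions for the example, not the authors' exact annotation encoding.

```python
import torch
import torch.nn.functional as F

def partial_cross_entropy(logits, weak_labels, ignore_index=255):
    """Cross-entropy computed only over weakly annotated pixels.

    logits:      (B, C, H, W) raw network outputs
    weak_labels: (B, H, W) class indices; unlabelled pixels carry `ignore_index`
    """
    # F.cross_entropy already supports masking via ignore_index, so the
    # "partial" loss reduces to averaging over the annotated pixels only.
    return F.cross_entropy(logits, weak_labels, ignore_index=ignore_index)

# Minimal usage example with random tensors (2 classes, 4x4 image).
logits = torch.randn(1, 2, 4, 4, requires_grad=True)
labels = torch.full((1, 4, 4), 255, dtype=torch.long)  # everything unlabelled...
labels[0, 1, :] = 1                                    # ...except one annotated stripe
loss = partial_cross_entropy(logits, labels)
loss.backward()
```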

    Feature Aggregation Decoder for Segmenting Laparoscopic Scenes

    Laparoscopic scene segmentation is one of the key building blocks required for developing advanced computer assisted interventions and robotic automation. Scene segmentation approaches often rely on encoder-decoder architectures that encode a representation of the input to be decoded into semantic pixel labels. In this paper, we propose to use the deep Xception model for the encoder and a simple yet effective decoder that relies on a feature aggregation module. Our feature aggregation module constructs a mapping function that reuses and transfers encoder features and combines information across all feature scales to build a richer representation that keeps both high-level context and low-level boundary information. We argue that this aggregation module enables us to simplify the decoder and reduce the number of parameters in it. We have evaluated our approach on two datasets, and our experimental results show that our model outperforms state-of-the-art models on the same experimental setup and significantly improves on the previous results, 98.44% vs. 89.00%, on the EndoVis’15 dataset.
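    The abstract does not give implementation details, but a feature aggregation module of the kind described, one that projects encoder features from several scales to a common width, upsamples them to the finest resolution, and fuses them before a light prediction head, might look roughly like the sketch below. The channel widths, number of scales and class count are assumptions, not the authors' configuration.

```python
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAggregationDecoder(nn.Module):
    """Hypothetical sketch: fuse multi-scale encoder features into dense logits."""

    def __init__(self, in_channels=(64, 128, 256, 728), mid_channels=48, num_classes=8):
        super().__init__()
        # 1x1 projections bring every scale to a common channel width.
        self.project = nn.ModuleList(
            [nn.Conv2d(c, mid_channels, kernel_size=1) for c in in_channels]
        )
        # Small head that mixes the concatenated scales and predicts classes.
        self.head = nn.Sequential(
            nn.Conv2d(mid_channels * len(in_channels), 256, kernel_size=3, padding=1),
            nn.ReLU(inplace=True),
            nn.Conv2d(256, num_classes, kernel_size=1),
        )

    def forward(self, features):
        # `features` is ordered finest to coarsest; upsample everything to the finest map.
        target_size = features[0].shape[-2:]
        fused = [
            F.interpolate(proj(f), size=target_size, mode="bilinear", align_corners=False)
            for proj, f in zip(self.project, features)
        ]
        return self.head(torch.cat(fused, dim=1))

# Example with dummy multi-scale features ordered finest to coarsest.
feats = [torch.randn(1, c, 64 // 2**i, 64 // 2**i) for i, c in enumerate((64, 128, 256, 728))]
print(FeatureAggregationDecoder()(feats).shape)  # -> torch.Size([1, 8, 64, 64])
```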

    Surreal: Enhancing Surgical simulation Realism using style transfer

    Surgical simulation is an increasingly important element of surgical education. Simulation can be a means to address some of the significant challenges in developing surgical skills with limited time and resources. The photo-realistic fidelity of simulations is a key feature that can improve the experience and transfer ratio of trainees. In this paper, we demonstrate how we can enhance the visual fidelity of existing surgical simulation by performing style transfer of multi-class labels from real surgical video onto synthetic content. We demonstrate our approach on simulations of cataract surgery using real data labels from an existing public dataset. Our results highlight the feasibility of the approach as well as the potential to extend this technique to incorporate additional temporal constraints and to apply it to different applications.
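    For orientation only, the sketch below shows a generic Gram-matrix style loss in the spirit of classical neural style transfer; the authors' label-conditioned, multi-class formulation is more involved, and the chosen VGG layers are arbitrary assumptions.

```python
import torch
import torchvision.models as models

# Frozen VGG19 feature extractor (downloads pretrained weights on first use).
vgg = models.vgg19(weights=models.VGG19_Weights.DEFAULT).features.eval()
for p in vgg.parameters():
    p.requires_grad_(False)

def gram(feat):
    # Channel-by-channel correlation matrix, normalised by feature size.
    b, c, h, w = feat.shape
    f = feat.view(b, c, h * w)
    return f @ f.transpose(1, 2) / (c * h * w)

def style_loss(synthetic, real, layers=(3, 8, 17, 26)):
    """Sum of Gram-matrix differences between VGG features of two images."""
    loss, x, y = 0.0, synthetic, real
    for i, layer in enumerate(vgg):
        x, y = layer(x), layer(y)
        if i in layers:
            loss = loss + torch.nn.functional.mse_loss(gram(x), gram(y))
        if i == max(layers):
            break
    return loss

# Example with random stand-in images (batch of 1, 3x224x224).
print(style_loss(torch.rand(1, 3, 224, 224), torch.rand(1, 3, 224, 224)))
```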

    Can surgical simulation be used to train detection and classification of neural networks?

    Computer-assisted interventions (CAI) aim to increase the effectiveness, precision and repeatability of procedures to improve surgical outcomes. The presence and motion of surgical tools is a key information input for CAI surgical phase recognition algorithms. Vision-based tool detection and recognition approaches are an attractive solution and can be designed to take advantage of the powerful deep learning paradigm that is rapidly advancing image recognition and classification. The challenge for such algorithms is the availability and quality of labelled data used for training. In this Letter, surgical simulation is used to train tool detection and segmentation based on deep convolutional neural networks and generative adversarial networks. The authors experiment with two network architectures for image segmentation of tool classes commonly encountered during cataract surgery. A commercially available simulator is used to create a simulated cataract dataset for training models prior to performing transfer learning on real surgical data. To the best of the authors' knowledge, this is the first attempt to train deep learning models for surgical instrument detection on simulated data while demonstrating promising generalisation to real data. Results indicate that simulated data does have some potential for training advanced classification methods for CAI systems.
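    A minimal sketch of the transfer-learning step described above is shown below: a segmentation network is first trained on (stand-in) simulated data and then fine-tuned at a lower learning rate on (stand-in) real frames. The backbone, class count, learning rates and random tensors are assumptions for illustration, not the Letter's setup.

```python
import torch
import torch.nn as nn
import torchvision
from torch.utils.data import DataLoader, TensorDataset

NUM_CLASSES = 4  # assumption: a handful of instrument/background classes

def make_loader(n):
    # Random stand-in data; in practice these would be simulated or real frames with masks.
    images = torch.randn(n, 3, 64, 64)
    masks = torch.randint(0, NUM_CLASSES, (n, 64, 64))
    return DataLoader(TensorDataset(images, masks), batch_size=4, shuffle=True)

sim_loader, real_loader = make_loader(8), make_loader(4)

# Off-the-shelf segmentation model standing in for the networks used in the Letter.
model = torchvision.models.segmentation.fcn_resnet50(weights=None, num_classes=NUM_CLASSES)
criterion = nn.CrossEntropyLoss()

def train(loader, epochs, lr):
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    model.train()
    for _ in range(epochs):
        for images, masks in loader:
            loss = criterion(model(images)["out"], masks)
            opt.zero_grad()
            loss.backward()
            opt.step()

train(sim_loader, epochs=1, lr=1e-4)   # pretrain on simulated frames
train(real_loader, epochs=1, lr=1e-5)  # fine-tune (transfer) on real frames
```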

    CaDIS: Cataract dataset for surgical RGB-image segmentation

    Video feedback provides a wealth of information about surgical procedures and is the main sensory cue for surgeons. Scene understanding is crucial to computer assisted interventions (CAI) and to post-operative analysis of the surgical procedure. A fundamental building block of such capabilities is the identification and localization of surgical instruments and anatomical structures through semantic segmentation. Deep learning has advanced semantic segmentation techniques in recent years but is inherently reliant on the availability of labelled datasets for model training. This paper introduces a dataset for semantic segmentation of cataract surgery videos that complements the publicly available CATARACTS challenge dataset. In addition, we benchmark the performance of several state-of-the-art deep learning models for semantic segmentation on the presented dataset. The dataset is publicly available at https://cataracts-semantic-segmentation2020.grand-challenge.org/
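    Benchmarks of this kind are typically summarised with per-class intersection-over-union; a minimal NumPy sketch of that metric is given below, assuming integer label maps and an agreed class count (the paper's exact evaluation protocol may differ).

```python
import numpy as np

def mean_iou(pred, target, num_classes):
    """Mean intersection-over-union between two integer label maps."""
    ious = []
    for c in range(num_classes):
        inter = np.logical_and(pred == c, target == c).sum()
        union = np.logical_or(pred == c, target == c).sum()
        if union > 0:                  # skip classes absent from both maps
            ious.append(inter / union)
    return float(np.mean(ious))

# Example with random 2-class label maps.
rng = np.random.default_rng(0)
print(mean_iou(rng.integers(0, 2, (64, 64)), rng.integers(0, 2, (64, 64)), num_classes=2))
```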

    Comparison of spinal cord stimulation profiles from intra- and extradural electrode arrangements by finite element modelling

    Spinal cord stimulation currently relies on extradural electrode arrays that are separated from the spinal cord surface by a highly conducting layer of cerebrospinal fluid. It has recently been suggested that intradural placement of the electrodes, in direct contact with the pial surface, could greatly enhance the specificity and efficiency of stimulation. The present computational study aims to quantify and compare the electrical current distributions as well as the spatial recruitment profiles resulting from extradural and intradural electrode arrangements. The electrical potential distribution is calculated using a 3D finite element model of the human thoracic spinal canal. The likely recruitment areas are then obtained using the potential as input to an equivalent circuit model of the pre-threshold axonal response. The results show that the current threshold for recruitment of axons in the dorsal column is more than an order of magnitude smaller for intradural than for extradural stimulation. Intradural placement of the electrodes also leads to much higher contrast between the stimulation thresholds for the dorsal root entry zone and the dorsal column, allowing better focusing of the stimulus.
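    As a rough illustration of how a finite element potential feeds an axonal excitation estimate, the sketch below computes the classical activating-function approximation (second spatial difference of the extracellular potential along the fibre). This is a textbook simplification, not the authors' full equivalent-circuit model, and the node spacing and potential profile are made-up example values.

```python
import numpy as np

def activating_function(phi_e, node_spacing_mm):
    """Second spatial difference of the extracellular potential along an axon.

    phi_e: extracellular potential (mV) sampled at successive nodes of Ranvier,
           e.g. interpolated from a finite element solution.
    Positive values indicate regions of likely depolarisation.
    """
    phi_e = np.asarray(phi_e, dtype=float)
    return np.diff(phi_e, n=2) / node_spacing_mm**2

# Example: a Gaussian cathodic potential peak over a fibre with 1 mm internodal spacing.
x = np.arange(-10, 11)                    # node positions (mm)
phi = -5.0 * np.exp(-(x / 3.0) ** 2)      # potential profile (mV)
print(activating_function(phi, node_spacing_mm=1.0))
```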

    2018 Robotic Scene Segmentation Challenge

    In 2015 we began a sub-challenge at the EndoVis workshop at MICCAI in Munich using endoscope images of ex-vivo tissue with automatically generated annotations from robot forward kinematics and instrument CAD models. However, the limited background variation and simple motion rendered the dataset uninformative in learning about which techniques would be suitable for segmentation in real surgery. In 2017, at the same workshop in Quebec, we introduced the robotic instrument segmentation dataset, with 10 teams participating in the challenge to perform binary, articulating-parts and type segmentation of da Vinci instruments. This challenge included realistic instrument motion and more complex porcine tissue as background, and was widely addressed with modifications of U-Nets and other popular CNN architectures. In 2018 we added to the complexity by introducing a set of anatomical objects and medical devices to the segmented classes. To avoid over-complicating the challenge, we continued with porcine data, which is dramatically simpler than human tissue due to the lack of fatty tissue occluding many organs.

    A Unique Case of Primary Ewing’s Sarcoma of the Cervical Spine in a 53-Year-Old Male: A Case Report and Review of the Literature

    Extraskeletal Ewing’s sarcoma (EES) is a rare presentation, representing only 15% of all primary Ewing’s sarcoma cases. Even more uncommon is EES presenting as a primary focus in the spinal canal. These rapidly growing tumors often present with focal neurological symptoms of myelopathy or radiculopathy. There are no classic characteristic imaging findings, and thus the physician must keep a high index of clinical suspicion. Diagnosis can only be definitively made by histopathological studies. In this report, we discuss a primary cervical spine EES in a 53-year-old man who presented with a two-month history of left upper extremity pain and acute onset of weakness. Imaging revealed a cervical spinal canal mass. After the patient underwent cervical decompression, histopathological examination confirmed a diagnosis of Ewing’s sarcoma. A literature search revealed fewer than 25 reported cases of primary cervical spine EES published in the past 15 years and only one report demonstrating this pathology in a patient older than 30 years of age (age 38). Given the low incidence of this pathology presenting in this age group and the lack of treatment guidelines, each patient’s plan should be considered on a case-by-case basis until further studies are performed to determine optimal evidence-based treatment.

    Temporally organized representations of reward and risk in the human brain

    The value and uncertainty associated with choice alternatives constitute critical features relevant for decisions. However, the manner in which reward and risk representations are temporally organized in the brain remains elusive. Here we leverage the spatiotemporal precision of intracranial electroencephalography, along with a simple card game designed to elicit the unfolding computation of a set of reward and risk variables, to uncover this temporal organization. Reward outcome representations across widespread regions follow a sequential order along the anteroposterior axis of the brain. In contrast, expected value can be decoded from multiple regions at the same time, and error signals in both reward and risk domains reflect a mixture of sequential and parallel encoding. We further highlight the role of the anterior insula in generalizing between reward prediction error and risk prediction error codes. Together, our results emphasize the importance of neural dynamics for understanding value-based decisions under uncertainty.
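    Decoding analyses of this kind are often run separately at each time bin to reveal when a signal carries task information; the sketch below illustrates that idea on random stand-in data with scikit-learn (the trial counts, channel counts and classifier are assumptions, not the study's pipeline).

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score

# Hypothetical data: trials x channels x time bins of iEEG features, plus a
# binary reward-outcome label per trial (random stand-in values).
rng = np.random.default_rng(0)
X = rng.standard_normal((200, 16, 50))   # 200 trials, 16 channels, 50 time bins
y = rng.integers(0, 2, 200)              # win / loss outcome per trial

# Decode the outcome separately at each time bin; the resulting accuracy curve
# shows when the recorded activity carries reward information.
accuracy = np.array([
    cross_val_score(LogisticRegression(max_iter=1000), X[:, :, t], y, cv=5).mean()
    for t in range(X.shape[2])
])
print(accuracy.round(2))
```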

    Sheep 1: Event-related cortical oscillations.

    A) Average evoked potential distribution for epidural (blue) and intradural (red) spinal cord stimulation at 0 mm with 5 volts. Major cortical sulci are denoted by dotted lines. B) Time-varying high-gamma band (75–150 Hz) envelope for epidural (blue) and intradural (red) spinal cord stimulation in selected channels. C) Time-frequency analysis of two representative channels during epidural stimulation. The y-axis denotes frequency in hertz and the x-axis denotes time in seconds, centered at stimulus onset. The color scale represents relative power change with respect to pre-stimulus values in decibels (dB).
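    The analyses named in the caption (high-gamma envelope, baseline-normalised power in dB) can be approximated with a short SciPy sketch like the one below; the filter order, sampling rate and baseline window are assumptions, not the study's parameters.

```python
import numpy as np
from scipy.signal import butter, filtfilt, hilbert

def high_gamma_envelope(signal, fs, band=(75.0, 150.0)):
    """Band-pass filter a cortical channel and return its Hilbert amplitude envelope."""
    b, a = butter(4, [band[0] / (fs / 2), band[1] / (fs / 2)], btype="band")
    return np.abs(hilbert(filtfilt(b, a, signal)))

def db_relative_to_baseline(power, baseline_idx):
    """Express power as change relative to the pre-stimulus baseline, in dB."""
    baseline = power[..., baseline_idx].mean(axis=-1, keepdims=True)
    return 10.0 * np.log10(power / baseline)

# Example: 1 s of synthetic data sampled at 1 kHz, baseline = first 200 ms.
fs = 1000
x = np.random.default_rng(0).standard_normal(fs)
env = high_gamma_envelope(x, fs)
print(db_relative_to_baseline(env**2, baseline_idx=slice(0, 200)).shape)
```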