
    Feature Aggregation Decoder for Segmenting Laparoscopic Scenes

    Laparoscopic scene segmentation is one of the key building blocks required for developing advanced computer-assisted interventions and robotic automation. Scene segmentation approaches often rely on encoder-decoder architectures that encode a representation of the input, which is then decoded into semantic pixel labels. In this paper, we propose to use the deep Xception model as the encoder, paired with a simple yet effective decoder built around a feature aggregation module. The feature aggregation module constructs a mapping function that reuses and transfers encoder features, combining information across all feature scales to build a richer representation that preserves both high-level context and low-level boundary information. We argue that this aggregation module enables us to simplify the decoder and reduce its number of parameters. We have evaluated our approach on two datasets; our experimental results show that our model outperforms state-of-the-art models under the same experimental setup and significantly improves on previous results (98.44% vs. 89.00% on the EndoVis’15 dataset).
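
    As an illustration of the aggregation idea described above, the following is a minimal sketch of one plausible way to fuse multi-scale encoder features into a single segmentation head, assuming a PyTorch implementation. The class name, channel widths, and fusion layers are illustrative assumptions, not the authors' exact architecture.

```python
# Hypothetical feature aggregation decoder head (PyTorch sketch).
# Channel widths and layer choices are assumptions, not the paper's exact design.
import torch
import torch.nn as nn
import torch.nn.functional as F

class FeatureAggregationDecoder(nn.Module):
    def __init__(self, encoder_channels=(64, 128, 256, 728), num_classes=8, fused_channels=256):
        super().__init__()
        # Project each encoder scale to a common channel width before fusion.
        self.projections = nn.ModuleList(
            nn.Conv2d(c, fused_channels, kernel_size=1) for c in encoder_channels
        )
        self.fuse = nn.Sequential(
            nn.Conv2d(fused_channels * len(encoder_channels), fused_channels, kernel_size=3, padding=1),
            nn.BatchNorm2d(fused_channels),
            nn.ReLU(inplace=True),
        )
        self.classifier = nn.Conv2d(fused_channels, num_classes, kernel_size=1)

    def forward(self, features):
        # `features`: encoder feature maps ordered from shallow (high-res) to deep (low-res).
        target_size = features[0].shape[-2:]  # reuse the highest-resolution scale
        upsampled = [
            F.interpolate(proj(f), size=target_size, mode="bilinear", align_corners=False)
            for proj, f in zip(self.projections, features)
        ]
        fused = self.fuse(torch.cat(upsampled, dim=1))  # combine all scales at once
        return self.classifier(fused)
```

    Fusing all scales with a single lightweight block, as sketched here, is one way a decoder can stay small in parameter count, which is the saving the abstract alludes to.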

    EasyLabels: weak labels for scene segmentation in laparoscopic videos

    PURPOSE: We present a different approach for annotating laparoscopic images for segmentation in a weak fashion and experimentally show that, when trained with partial cross-entropy, its accuracy is close to that obtained with fully supervised approaches. METHODS: We propose an approach that relies on weak annotations provided as stripes drawn over the different objects in the image, combined with partial cross-entropy as the loss function of a fully convolutional neural network, to obtain a dense pixel-level prediction map. RESULTS: We validate our method on three different datasets, providing qualitative results for all of them and quantitative results for two of them. The experiments show that our approach obtains at least [Formula: see text] of the accuracy achieved by fully supervised methods on all tested datasets, while requiring [Formula: see text][Formula: see text] less time to create the annotations compared to full supervision. CONCLUSIONS: With this work, we demonstrate that laparoscopic data can be segmented using very little annotated data while maintaining levels of accuracy comparable to those obtained with full supervision.
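
    The sketch below illustrates how a partial cross-entropy loss over sparse stripe annotations can be expressed in PyTorch, assuming unlabeled pixels are marked with an ignore value. The ignore index, class count, and toy example are assumptions for illustration, not details taken from the paper.

```python
# Partial cross-entropy over sparse "stripe" labels (PyTorch sketch).
# IGNORE_INDEX and the toy tensors below are illustrative assumptions.
import torch
import torch.nn.functional as F

IGNORE_INDEX = 255  # marks pixels that carry no stripe annotation

def partial_cross_entropy(logits, sparse_labels):
    """Cross-entropy evaluated only on annotated pixels.

    logits: (N, C, H, W) raw network outputs.
    sparse_labels: (N, H, W) class ids, with IGNORE_INDEX on unlabeled pixels.
    """
    return F.cross_entropy(logits, sparse_labels, ignore_index=IGNORE_INDEX)

# Usage: only the labeled stripe contributes to the gradient.
logits = torch.randn(2, 5, 64, 64, requires_grad=True)
labels = torch.full((2, 64, 64), IGNORE_INDEX, dtype=torch.long)
labels[:, 30:34, :] = 3  # a horizontal stripe annotated as class 3
loss = partial_cross_entropy(logits, labels)
loss.backward()
```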

    Effect of climate change on climatic suitability for sugarcane in the State of Goiás.

    This article assessed the impacts of rising temperature on the delimitation of areas that are climatically suitable or restricted for sugarcane in the State of Goiás, based on the Fourth Assessment Report of the IPCC with regard to air temperature variation, without considering changes in the rainfall regime.

    Automated operative workflow analysis of endoscopic pituitary surgery using machine learning: development and preclinical evaluation (IDEAL stage 0)

    OBJECTIVE: Surgical workflow analysis involves systematically breaking down operations into key phases and steps. Automatic analysis of this workflow has potential uses for surgical training, preoperative planning, and outcome prediction. Recent advances in machine learning (ML) and computer vision have allowed accurate automated workflow analysis of operative videos. In this Idea, Development, Exploration, Assessment, Long-term study (IDEAL) stage 0 study, the authors sought to use Touch Surgery for the development and validation of an ML-powered analysis of phases and steps in the endoscopic transsphenoidal approach (eTSA) for pituitary adenoma resection, a first for neurosurgery. METHODS: The surgical phases and steps of 50 anonymized eTSA operative videos were labeled by expert surgeons. Forty videos were used to train a combined convolutional and recurrent neural network model by Touch Surgery. Ten videos were used for model evaluation (accuracy, F1 score), comparing the phase and step recognition of surgeons with the automatic detection of the ML model. RESULTS: The longest phase was the sellar phase (median 28 minutes), followed by the nasal phase (median 22 minutes) and the closure phase (median 14 minutes). The longest steps were step 5 (tumor identification and excision, median 17 minutes); step 3 (posterior septectomy and removal of sphenoid septations, median 14 minutes); and step 4 (anterior sellar wall removal, median 10 minutes). There were substantial variations within the recorded procedures in terms of video appearance, step duration, and step order, with only 50% of videos containing all 7 steps performed sequentially in numerical order. Despite this, the model was able to output accurate recognition of surgical phases (91% accuracy, 90% F1 score) and steps (76% accuracy, 75% F1 score). CONCLUSIONS: In this IDEAL stage 0 study, ML techniques have been developed to automatically analyze operative videos of eTSA pituitary surgery. This technology has previously been shown to be acceptable to neurosurgical teams and patients. ML-based surgical workflow analysis has numerous potential uses, such as education (e.g., automatic indexing of contemporary operative videos for teaching), improved operative efficiency (e.g., orchestrating the entire surgical team around a common workflow), and improved patient outcomes (e.g., comparison of surgical techniques or early detection of adverse events). Future directions include the real-time integration of Touch Surgery into the live operative environment as an IDEAL stage 1 (first-in-human) study, and further development of the underpinning ML models using larger datasets.
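
    For readers unfamiliar with the combined convolutional and recurrent design mentioned above, the sketch below shows one generic way a per-frame phase and step recognizer could be wired up in PyTorch. It is a stand-in with an assumed ResNet-18 backbone and GRU, not the Touch Surgery model; only the counts of 3 phases and 7 steps come from the abstract.

```python
# Generic CNN + RNN video model for per-frame phase/step recognition (sketch).
# The ResNet-18 backbone, GRU, and hidden size are illustrative assumptions.
import torch
import torch.nn as nn
from torchvision.models import resnet18

class PhaseRecognitionNet(nn.Module):
    def __init__(self, num_phases=3, num_steps=7, hidden_size=256):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # keep 512-d per-frame embeddings
        self.cnn = backbone
        self.rnn = nn.GRU(512, hidden_size, batch_first=True)
        self.phase_head = nn.Linear(hidden_size, num_phases)
        self.step_head = nn.Linear(hidden_size, num_steps)

    def forward(self, frames):
        # frames: (batch, time, 3, H, W) video clips
        b, t = frames.shape[:2]
        feats = self.cnn(frames.flatten(0, 1)).view(b, t, -1)  # per-frame CNN features
        temporal, _ = self.rnn(feats)                          # temporal context
        return self.phase_head(temporal), self.step_head(temporal)
```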

    Effect of climate change on climatic suitability for sugarcane in the State of São Paulo.

    This article assessed the impacts of rising temperature on climatic suitability zoning for sugarcane in the State of São Paulo, based on the air temperature projections of the Fourth Assessment Report of the IPCC and assuming that the rainfall regime remained unchanged.

    Genome-Wide Progesterone Receptor Binding: Cell Type-Specific and Shared Mechanisms in T47D Breast Cancer Cells and Primary Leiomyoma Cells

    Progesterone, via its nuclear receptor (PR), exerts an overall tumorigenic effect on both uterine fibroid (leiomyoma) and breast cancer tissues, whereas the antiprogestin RU486 inhibits growth of these tissues through an unknown mechanism. Here, we determined the interaction between common or cell-specific genome-wide binding sites of PR and mRNA expression in RU486-treated uterine leiomyoma and breast cancer cells. ChIP-sequencing revealed 31,457 and 7,034 PR-binding sites in breast cancer and uterine leiomyoma cells, respectively; 1,035 sites overlapped in both cell types. Based on the chromatin-PR interaction in both cell types, we statistically refined the consensus progesterone response element to G•ACA•••TGT•C. We identified two striking differences between uterine leiomyoma and breast cancer cells. First, the cis-regulatory elements for HSF, TEF-1, and C/EBPα and β were statistically enriched at genomic RU486/PR targets in uterine leiomyoma cells, whereas E2F, FOXO1, FOXA1, and FOXF sites were preferentially enriched in breast cancer cells. Second, 51.5% of RU486-regulated genes in breast cancer cells, but only 6.6% of RU486-regulated genes in uterine leiomyoma cells, contained a PR-binding site within 5 kb of their transcription start sites (TSSs), whereas 75.4% of RU486-regulated genes in uterine leiomyoma cells contained a PR-binding site farther than 50 kb from their TSSs. RU486 regulated only seven mRNAs in both cell types. Among these, adipophilin (PLIN2), a pro-differentiation gene, was induced by RU486 and PR through the same regulatory region in both cell types. Our studies have identified molecular components in an RU486/PR-controlled gene network involved in the regulation of cell growth, cell migration, and extracellular matrix function. Tissue-specific and common patterns of genome-wide PR binding and gene regulation may determine the therapeutic effects of antiprogestins in uterine fibroids and breast cancer.
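
    As a small illustration of the consensus motif reported above, the sketch below scans a DNA string for G•ACA•••TGT•C (with • matching any base) on both strands. It is purely illustrative; the study derived the consensus statistically from ChIP-seq peaks rather than by pattern matching, and the test sequence is made up.

```python
# Scan a DNA sequence for the consensus PRE G.ACA...TGT.C on both strands (sketch).
import re

PRE_PATTERN = re.compile(r"G.ACA...TGT.C")   # "." stands for any base
COMPLEMENT = str.maketrans("ACGT", "TGCA")

def find_pre_sites(sequence):
    """Return (position, strand, matched subsequence) for every consensus hit."""
    sequence = sequence.upper()
    hits = [(m.start(), "+", m.group()) for m in PRE_PATTERN.finditer(sequence)]
    rc = sequence.translate(COMPLEMENT)[::-1]  # reverse complement
    for m in PRE_PATTERN.finditer(rc):
        hits.append((len(sequence) - m.end(), "-", m.group()))
    return sorted(hits)

print(find_pre_sites("TTGAACAGTTTGTTCAA"))  # two hits: one on each strand
```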

    Smart materials as scaffolds for tissue engineering

    In this review, we focused our attention on the most important natural extracellular matrix (ECM) molecules (collagen and fibrin) employed as cellular scaffolds for tissue engineering, and on a class of semi-synthetic materials made by fusing specific oligopeptide sequences with biological activity to synthetic materials. In particular, these new "intelligent" scaffolds may contain oligopeptide cleavage sequences specific for matrix metalloproteinases (MMPs), integrin-binding domains, growth factors, anti-thrombin sequences, plasmin degradation sites, and morphogenetic proteins. The aim was to confer on these new "intelligent" semi-synthetic biomaterials the advantages offered both by synthetic materials (processability, mechanical strength) and by natural materials (specific cell recognition, cellular invasion, and the ability to supply differentiation/proliferation signals). Owing to these characteristics, semi-synthetic biomaterials represent a new and versatile class of biomimetic hybrid materials that hold clinical promise as implants to promote wound healing and tissue regeneration.

    Self-knowledge distillation for surgical phase recognition

    Purpose: Advances in surgical phase recognition are generally driven by training deeper networks. Rather than pursuing an ever more complex solution, we believe that current models can be exploited better. We propose a self-knowledge distillation framework that can be integrated into current state-of-the-art (SOTA) models without adding any complexity to the models or requiring extra annotations. Methods: Knowledge distillation is a framework for network regularization in which knowledge is distilled from a teacher network to a student network. In self-knowledge distillation, the student model becomes its own teacher, so that the network learns from itself. Most phase recognition models follow an encoder-decoder framework. Our framework applies self-knowledge distillation in both stages: the teacher model guides the training of the student model to extract enhanced feature representations from the encoder and to build a more robust temporal decoder that tackles the over-segmentation problem. Results: We validate our proposed framework on the public Cholec80 dataset. Our framework is embedded on top of four popular SOTA approaches and consistently improves their performance. Specifically, our best GRU model improves accuracy by +3.33% and F1-score by +3.95% over the same baseline model. Conclusion: We embed a self-knowledge distillation framework, for the first time, in the surgical phase recognition training pipeline. Experimental results demonstrate that our simple yet powerful framework can improve the performance of existing phase recognition models. Moreover, our extensive experiments show that even with 75% of the training set we still achieve performance on par with the same baseline model trained on the full set.
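
    To make the self-distillation idea concrete, the sketch below shows one common formulation in PyTorch: a teacher that tracks the student by exponential moving average (EMA) and a loss combining hard-label cross-entropy with a KL term on temperature-softened outputs. The EMA teacher, temperature, and weighting are assumptions; the paper's exact scheme may differ.

```python
# Self-knowledge distillation loss with an EMA teacher (illustrative sketch).
import torch
import torch.nn.functional as F

def update_ema_teacher(student, teacher, momentum=0.999):
    # The teacher starts as a copy of the student and slowly tracks its weights,
    # providing soft targets without any extra annotations.
    with torch.no_grad():
        for p_t, p_s in zip(teacher.parameters(), student.parameters()):
            p_t.mul_(momentum).add_(p_s, alpha=1.0 - momentum)

def self_distillation_loss(student_logits, teacher_logits, labels,
                           temperature=2.0, alpha=0.5):
    # Hard-label cross-entropy plus KL divergence to the teacher's softened output.
    ce = F.cross_entropy(student_logits, labels)
    kd = F.kl_div(
        F.log_softmax(student_logits / temperature, dim=-1),
        F.softmax(teacher_logits / temperature, dim=-1),
        reduction="batchmean",
    ) * temperature ** 2
    return (1 - alpha) * ce + alpha * kd
```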

    Childhood hepatocellular tumors in FAP
