
    Deep Selection: A Fully Supervised Camera Selection Network for Surgery Recordings

    Recording surgery in operating rooms is an essential task for education and for the evaluation of medical treatment. However, recording the desired targets, such as the surgical field, surgical tools, or the doctor's hands, is difficult because the targets are heavily occluded during surgery. We use a recording system in which multiple cameras are embedded in the surgical lamp, and we assume that at least one camera is recording the target without occlusion at any given time. As the embedded cameras obtain multiple video sequences, we address the task of selecting the camera with the best view of the surgery. Unlike the conventional method, which selects the camera based on the area of the visible surgical field, we propose a deep neural network that predicts the camera selection probability from multiple video sequences by learning from expert annotations. We created a dataset in which six different types of plastic surgery are recorded, and we provided annotations of camera switching. Our experiments show that our approach successfully switched between cameras and outperformed three baseline methods. Comment: MICCAI 2020
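    The abstract does not specify the network architecture, so the following is only a minimal sketch of the general idea: a shared per-camera video encoder whose pooled clip features are scored and normalized with a softmax to give a selection probability per camera. The module name CameraSelector, the feature dimension, and the clip shape are hypothetical, not the authors' design.

```python
# Hypothetical sketch (not the paper's exact architecture): score each
# camera's clip with a shared encoder and softmax over cameras.
import torch
import torch.nn as nn

class CameraSelector(nn.Module):
    def __init__(self, feat_dim=256):
        super().__init__()
        # Lightweight frame encoder shared across all cameras.
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        self.score = nn.Linear(feat_dim, 1)  # one score per camera

    def forward(self, clips):
        # clips: (batch, num_cameras, frames, 3, H, W)
        b, c, t, ch, h, w = clips.shape
        x = clips.reshape(b * c * t, ch, h, w)
        feats = self.encoder(x).reshape(b, c, t, -1).mean(dim=2)  # temporal average
        logits = self.score(feats).squeeze(-1)                    # (batch, num_cameras)
        return logits.softmax(dim=-1)  # selection probability per camera

probs = CameraSelector()(torch.randn(2, 5, 8, 3, 64, 64))
print(probs.shape)  # torch.Size([2, 5])
```

    Under this reading, training on the expert annotation reduces to cross-entropy between the predicted distribution and the annotated best-camera index for each clip window.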

    Spatio-Temporal Expression Profile of Stem Cell-Associated Gene LGR5 in the Intestine during Thyroid Hormone-Dependent Metamorphosis in Xenopus laevis

    The intestinal epithelium undergoes constant self-renewal throughout adult life across vertebrates. This is accomplished through the proliferation and subsequent differentiation of the adult stem cells. This self-renewal system is established in the so-called postembryonic developmental period in mammals, when endogenous thyroid hormone (T3) levels are high. The T3-dependent metamorphosis in anurans such as Xenopus laevis resembles mammalian postembryonic development and offers a unique opportunity to study how the adult stem cells develop. The tadpole intestine is predominantly a monolayer of larval epithelial cells. During metamorphosis, the larval epithelial cells undergo apoptosis and, concurrently, adult epithelial stem/progenitor cells develop de novo, rapidly proliferate, and then differentiate to establish a trough-crest axis of the epithelial fold, resembling the crypt-villus axis in the adult mammalian intestine. The leucine-rich repeat-containing G protein-coupled receptor 5 (LGR5) is a well-established stem cell marker in the adult mouse intestinal crypt. Here we have cloned and analyzed the spatiotemporal expression profile of the LGR5 gene during frog metamorphosis. We show that the two duplicated LGR5 genes in Xenopus laevis and the LGR5 gene in Xenopus tropicalis are highly homologous to LGR5 in other vertebrates. The expression of LGR5 is induced in the limb, tail, and intestine by T3 during metamorphosis. More importantly, LGR5 mRNA is localized to the developing adult epithelial stem cells of the intestine. These results suggest that LGR5-expressing cells are the stem/progenitor cells of the adult intestine and that LGR5 plays a role in the development and/or maintenance of the adult intestinal stem cells during postembryonic development in vertebrates.

    Deep learning in diabetic foot ulcers detection: A comprehensive evaluation

    There has been a substantial amount of research involving computer methods and technology for the detection and recognition of diabetic foot ulcers (DFUs), but there is a lack of systematic comparisons of state-of-the-art deep learning object detection frameworks applied to this problem. DFUC2020 provided participants with a comprehensive dataset consisting of 2,000 images for training and 2,000 images for testing. This paper summarizes the results of DFUC2020 by comparing the deep learning-based algorithms proposed by the winning teams: Faster R-CNN, three variants of Faster R-CNN and an ensemble method; YOLOv3; YOLOv5; EfficientDet; and a new Cascade Attention Network. For each deep learning method, we provide a detailed description of model architecture, parameter settings for training and additional stages including pre-processing, data augmentation and post-processing. We provide a comprehensive evaluation for each method. All the methods required a data augmentation stage to increase the number of images available for training and a post-processing stage to remove false positives. The best performance was obtained from Deformable Convolution, a variant of Faster R-CNN, with a mean average precision (mAP) of 0.6940 and an F1-Score of 0.7434. Finally, we demonstrate that the ensemble method based on different deep learning methods can enhance the F1-Score but not the mAP.
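    As a rough illustration of the post-processing step mentioned above (removing false positives from pooled detections), here is a minimal sketch; the thresholds, the ensemble_nms name, and the greedy suppression scheme are assumptions, not the exact pipelines used by the DFUC2020 teams.

```python
# Illustrative sketch: pool detections from several models, drop
# low-confidence boxes, and suppress overlapping duplicates before
# computing mAP / F1.
import numpy as np

def iou(a, b):
    # a, b: [x1, y1, x2, y2]
    x1, y1 = max(a[0], b[0]), max(a[1], b[1])
    x2, y2 = min(a[2], b[2]), min(a[3], b[3])
    inter = max(0.0, x2 - x1) * max(0.0, y2 - y1)
    area_a = (a[2] - a[0]) * (a[3] - a[1])
    area_b = (b[2] - b[0]) * (b[3] - b[1])
    return inter / (area_a + area_b - inter + 1e-9)

def ensemble_nms(detections, score_thr=0.5, iou_thr=0.5):
    # detections: list of (box, score) pooled from all models
    kept = []
    for box, score in sorted(detections, key=lambda d: -d[1]):
        if score < score_thr:
            break  # sorted by score, so the rest are also below threshold
        if all(iou(box, k) < iou_thr for k, _ in kept):
            kept.append((box, score))
    return kept

dets = [([10, 10, 50, 50], 0.9), ([12, 11, 52, 49], 0.8), ([200, 200, 240, 260], 0.3)]
print(ensemble_nms(dets))  # keeps one of the overlapping boxes, drops the weak one
```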

    Hand Motion-Aware Surgical Tool Localization and Classification from an Egocentric Camera

    Detecting surgical tools is an essential task for the analysis and evaluation of surgical videos. However, in open surgery such as plastic surgery, it is difficult to detect them because some surgical tools, such as scissors and needle holders, have similar shapes. Unlike in endoscopic surgery, the tips of the tools are often hidden in the operating field and are not captured clearly due to low camera resolution, whereas the movements of the tools and hands can be captured. Because the different uses of each tool require different hand movements, hand movement data can be used to classify the two types of tools. We combined three modules, for localization, selection, and classification, to detect the two tools. In the localization module, we employed Faster R-CNN to detect surgical tools and target hands, and in the classification module, we extracted hand movement information by combining ResNet-18 and an LSTM to classify the two tools. We created a dataset in which seven different types of open surgery were recorded, and we provided annotations for surgical tool detection. Our experiments show that our approach successfully detected the two different tools and outperformed the two baseline methods.
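    A minimal sketch of the classification branch described above, assuming ResNet-18 features per hand-region crop followed by an LSTM over time and a two-way head for the two tools; the crop size, hidden size, and the HandMotionClassifier name are assumptions rather than the authors' exact configuration.

```python
# Hypothetical sketch: per-frame ResNet-18 features on hand-region
# crops, an LSTM over the sequence, and a two-class head
# (e.g. scissors vs. needle holder).
import torch
import torch.nn as nn
from torchvision.models import resnet18

class HandMotionClassifier(nn.Module):
    def __init__(self, hidden=128, num_classes=2):
        super().__init__()
        backbone = resnet18(weights=None)
        backbone.fc = nn.Identity()          # 512-d per-frame feature
        self.backbone = backbone
        self.lstm = nn.LSTM(512, hidden, batch_first=True)
        self.head = nn.Linear(hidden, num_classes)

    def forward(self, crops):
        # crops: (batch, frames, 3, 224, 224) hand-region crops from the detector
        b, t, c, h, w = crops.shape
        feats = self.backbone(crops.reshape(b * t, c, h, w)).reshape(b, t, -1)
        out, _ = self.lstm(feats)
        return self.head(out[:, -1])         # logits from the last time step

logits = HandMotionClassifier()(torch.randn(2, 16, 3, 224, 224))
print(logits.shape)  # torch.Size([2, 2])
```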

    Multi-Camera Multi-Person Tracking and Re-Identification in an Operating Room

    Multi-camera multi-person (MCMP) tracking and re-identification (ReID) are essential tasks in safety, pedestrian analysis, and related applications; however, most research focuses on outdoor scenarios, and it is much more complicated to deal with occlusions and misidentification in a crowded room with obstacles. Moreover, it is challenging to complete the two tasks in one framework. We present a trajectory-based method that integrates the tracking and ReID tasks. First, the poses of all surgical members captured by each camera are detected frame by frame; then, the detected poses are used to track the trajectories of all members for each camera; finally, these trajectories from different cameras are clustered to re-identify the members in the operating room across all cameras. Compared to other MCMP tracking and ReID methods, the proposed one mainly exploits trajectories, taking texture features, which are less distinguishable in the operating room scenario, as auxiliary cues. We also integrate temporal information during ReID, which is more reliable than the state-of-the-art framework in which ReID is conducted frame by frame. In addition, our framework requires no training before deployment in new scenarios. We also created an annotated MCMP dataset with actual operating room videos. Our experiments demonstrate the effectiveness of the proposed trajectory-based ReID algorithm. The proposed framework achieves 85.44% accuracy in the ReID task, outperforming the state-of-the-art framework on our operating room dataset.
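    The following is a simplified sketch of re-identification by clustering trajectories across cameras, under the assumption that trajectories are expressed in a common floor-plane coordinate system; the distance measure, threshold, and greedy agglomeration are illustrative choices, not the paper's exact formulation.

```python
# Hypothetical sketch: trajectories from different cameras that stay
# close over the frames they share are merged into one identity.
import numpy as np

def traj_distance(ta, tb):
    # ta, tb: dicts mapping frame index -> (x, y) in a common floor plane
    shared = sorted(set(ta) & set(tb))
    if not shared:
        return np.inf
    return float(np.mean([np.hypot(*np.subtract(ta[f], tb[f])) for f in shared]))

def cluster_trajectories(trajs, thr=0.5):
    # Greedy agglomeration: assign each trajectory to the first existing
    # identity whose members are all within `thr` on shared frames.
    identities = []
    for t in trajs:
        for group in identities:
            if all(traj_distance(t, g) < thr for g in group):
                group.append(t)
                break
        else:
            identities.append([t])
    return identities

cam1 = {0: (1.0, 1.0), 1: (1.1, 1.0)}
cam2 = {0: (1.05, 0.95), 1: (1.12, 1.02)}   # same person seen by another camera
cam3 = {0: (5.0, 5.0)}                       # a different person
print(len(cluster_trajectories([cam1, cam2, cam3])))  # 2 identities
```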