
    Whole-Body Lesion Segmentation in 18F-FDG PET/CT

    There has been growing research interest in using deep learning-based methods to achieve fully automated segmentation of lesions in positron emission tomography/computed tomography (PET/CT) scans for the prognosis of various cancers. Recent advances in medical image segmentation show that nnUNet is feasible for diverse tasks. However, lesion segmentation in PET images is not straightforward, because lesions and physiological uptake have similar distribution patterns; distinguishing them requires extra structural information from the CT images. The present paper introduces an nnUNet-based method for the lesion segmentation task. The proposed model is designed on the basis of a joint 2D and 3D nnUNet architecture to predict lesions across the whole body, allowing for automated segmentation of potential lesions. We evaluate the proposed method in the context of the AutoPET Challenge, which measures lesion segmentation performance with the metrics of Dice score, false-positive volume, and false-negative volume.
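
    The evaluation metrics named above can be computed directly from binary masks. Below is a minimal sketch, not the challenge's official implementation: the official false-positive and false-negative volumes are defined over connected components, whereas this version uses plain voxel counts, and the voxel volume is a placeholder parameter.

```python
import numpy as np

def segmentation_metrics(pred, gt, voxel_volume_ml=1.0):
    """Dice score plus (simplified) false-positive and false-negative volumes.

    pred, gt        -- binary (0/1) lesion masks of identical shape
    voxel_volume_ml -- volume of one voxel in millilitres (assumed known
                       from the scan spacing; 1.0 is a placeholder)
    """
    pred = pred.astype(bool)
    gt = gt.astype(bool)

    intersection = np.logical_and(pred, gt).sum()
    denom = pred.sum() + gt.sum()
    dice = 2.0 * intersection / denom if denom > 0 else 1.0

    # Voxels predicted as lesion that are background in the reference.
    fp_volume = np.logical_and(pred, ~gt).sum() * voxel_volume_ml
    # Reference lesion voxels that the prediction missed.
    fn_volume = np.logical_and(~pred, gt).sum() * voxel_volume_ml
    return dice, fp_volume, fn_volume

# Toy usage with random masks.
rng = np.random.default_rng(0)
pred = rng.integers(0, 2, size=(8, 8, 8))
gt = rng.integers(0, 2, size=(8, 8, 8))
print(segmentation_metrics(pred, gt, voxel_volume_ml=0.002))
```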

    Image Deblurring According to Facially Recognized Locations Within the Image

    This publication describes techniques for image deblurring according to facially recognized locations within the image. An algorithm may use facial detection and recognition to selectively sharpen aspects of faces within an image and the surrounding area associated with the facial detection. In one or more aspects, the selectivity of sharpening reduces the computational load and improves other aspects of image provision, benefiting overall computer function, power consumption, and user experience. Individual faces within the image may be cropped or thumbnailed, providing portions of the image that include the faces. Counterpart images associated with the individual faces may be found within a database having a repository of sharp features associated with the counterpart images. As such, the features may be integrated with the blurred faces of the original image to sharpen the image output.
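
    A rough sketch of the "sharpen only where faces are detected" idea, assuming OpenCV's Haar cascade face detector is available. The database-matching step described in the publication is not reproduced here; as a stand-in, an unsharp mask is applied only inside the detected face regions and a small surrounding margin.

```python
import cv2
import numpy as np

def sharpen_faces(image_bgr, strength=1.5, margin=0.2):
    """Apply an unsharp mask only inside detected face regions.

    Illustrative stand-in: the described technique instead retrieves sharp
    counterpart faces from a database and integrates their features.
    """
    cascade = cv2.CascadeClassifier(
        cv2.data.haarcascades + "haarcascade_frontalface_default.xml")
    gray = cv2.cvtColor(image_bgr, cv2.COLOR_BGR2GRAY)
    faces = cascade.detectMultiScale(gray, scaleFactor=1.1, minNeighbors=5)

    out = image_bgr.copy()
    h_img, w_img = gray.shape
    for (x, y, w, h) in faces:
        # Expand the crop slightly to cover the surrounding area.
        dx, dy = int(w * margin), int(h * margin)
        x0, y0 = max(x - dx, 0), max(y - dy, 0)
        x1, y1 = min(x + w + dx, w_img), min(y + h + dy, h_img)

        roi = out[y0:y1, x0:x1].astype(np.float32)
        blurred = cv2.GaussianBlur(roi, (0, 0), sigmaX=3)
        sharpened = cv2.addWeighted(roi, 1 + strength, blurred, -strength, 0)
        out[y0:y1, x0:x1] = np.clip(sharpened, 0, 255).astype(np.uint8)
    return out
```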

    Boosting Image-based Mutual Gaze Detection using Pseudo 3D Gaze

    Mutual gaze detection, i.e., predicting whether or not two people are looking at each other, plays an important role in understanding human interactions. In this work, we focus on the task of image-based mutual gaze detection and propose a simple and effective approach to boost performance by using an auxiliary 3D gaze estimation task during the training phase. We achieve the performance boost without additional labeling cost by training the 3D gaze estimation branch using pseudo 3D gaze labels deduced from the mutual gaze labels. By sharing the head image encoder between the 3D gaze estimation and mutual gaze detection branches, we obtain better head features than those learned by training the mutual gaze detection branch alone. Experimental results on three image datasets show that the proposed approach improves detection performance significantly without additional annotations. This work also introduces a new image dataset that consists of 33.1K pairs of humans annotated with mutual gaze labels in 29.2K images.
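
    A minimal PyTorch-style sketch of the shared-encoder idea: one head-image encoder feeds both a mutual-gaze classification branch and an auxiliary 3D-gaze regression branch trained on pseudo labels. The layer sizes, input resolution, and loss weighting are illustrative assumptions, not the paper's architecture.

```python
import torch
import torch.nn as nn

class MutualGazeModel(nn.Module):
    """Shared head encoder with two task branches (illustrative sizes)."""

    def __init__(self, feat_dim=128):
        super().__init__()
        # Shared encoder over a head crop (3x64x64 here for brevity).
        self.encoder = nn.Sequential(
            nn.Conv2d(3, 32, 3, stride=2, padding=1), nn.ReLU(),
            nn.Conv2d(32, 64, 3, stride=2, padding=1), nn.ReLU(),
            nn.AdaptiveAvgPool2d(1), nn.Flatten(),
            nn.Linear(64, feat_dim), nn.ReLU(),
        )
        # Mutual-gaze branch: consumes both heads' features, outputs a logit.
        self.mutual_head = nn.Linear(2 * feat_dim, 1)
        # Auxiliary branch: regresses a 3D gaze direction per head.
        self.gaze_head = nn.Linear(feat_dim, 3)

    def forward(self, head_a, head_b):
        fa, fb = self.encoder(head_a), self.encoder(head_b)
        mutual_logit = self.mutual_head(torch.cat([fa, fb], dim=1))
        return mutual_logit, self.gaze_head(fa), self.gaze_head(fb)

# One illustrative training step; pseudo_gaze_* stand in for the pseudo
# 3D gaze labels deduced from the mutual-gaze annotations.
model = MutualGazeModel()
head_a, head_b = torch.randn(4, 3, 64, 64), torch.randn(4, 3, 64, 64)
mutual_label = torch.randint(0, 2, (4, 1)).float()
pseudo_gaze_a, pseudo_gaze_b = torch.randn(4, 3), torch.randn(4, 3)

logit, ga, gb = model(head_a, head_b)
loss = (nn.functional.binary_cross_entropy_with_logits(logit, mutual_label)
        + 0.1 * (nn.functional.mse_loss(ga, pseudo_gaze_a)
                 + nn.functional.mse_loss(gb, pseudo_gaze_b)))
loss.backward()
```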

    Ranking Neural Checkpoints

    This paper is concerned with ranking many pre-trained deep neural networks (DNNs), called checkpoints, for transfer learning to a downstream task. Thanks to the broad use of DNNs, we may easily collect hundreds of checkpoints from various sources. Which of them transfers best to our downstream task of interest? Striving to answer this question thoroughly, we establish a neural checkpoint ranking benchmark (NeuCRaB) and study some intuitive ranking measures. These measures are generic, applying to checkpoints of different output types without knowing how, or on which dataset, the checkpoints were pre-trained. They also incur low computation cost, making them practically meaningful. Our results suggest that the linear separability of the features extracted by the checkpoints is a strong indicator of transferability. We also arrive at a new ranking measure, NLEEP, which gives rise to the best performance in the experiments. (Comment: Accepted to CVPR 2021.)
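
    The linear-separability indicator can be approximated in a few lines: extract features for the downstream data with each frozen checkpoint, fit a linear classifier, and rank checkpoints by held-out accuracy. This is a sketch of that general idea, not the paper's NLEEP measure itself, and the synthetic features below are only for demonstration.

```python
import numpy as np
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import train_test_split

def linear_separability_score(features, labels, seed=0):
    """Score a checkpoint by how linearly separable its features are.

    features -- array of shape (n_samples, feat_dim) extracted from the
                downstream data with the frozen checkpoint
    labels   -- downstream class labels, shape (n_samples,)
    """
    x_tr, x_te, y_tr, y_te = train_test_split(
        features, labels, test_size=0.2, random_state=seed, stratify=labels)
    clf = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)
    return clf.score(x_te, y_te)  # held-out accuracy as the ranking score

# Toy demo: a checkpoint yielding well-separated features ranks higher.
rng = np.random.default_rng(0)
labels = rng.integers(0, 5, size=500)
good_feats = np.eye(5)[labels] * 3 + rng.normal(size=(500, 5))
bad_feats = rng.normal(size=(500, 5))
print(linear_separability_score(good_feats, labels),
      linear_separability_score(bad_feats, labels))
```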

    Bankruptcy prediction with financial systemic risk

    Financial systemic risk – defined as the risk of collapse of an entire financial system, as opposed to the risk of any one individual financial institution – has been making inroads into academic research in the aftermath of the late-2000s Global Financial Crisis. We shed light on this concept by investigating the value of various systemic financial risk measures in the corporate failure predictions of listed non-financial firms. Our sample includes 225,813 firm-quarter observations covering 8,604 US firms from 2000 Q1 to 2016 Q4. We find that financial systemic risk is incrementally useful in forecasting corporate failure over and above the predictions of traditional accounting-based and market-based factors. Our results are stronger when the firm in question has higher equity volatility relative to financial-sector volatility, smaller size relative to the market, and more debt in current liabilities. The combined evidence suggests that systemic risk is a useful supplementary source of information in capital markets.
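
    The incremental-usefulness test described above amounts to adding a systemic-risk measure as an extra covariate in a standard failure-prediction model and comparing out-of-sample discrimination. A minimal sketch on synthetic firm-quarter data follows; the column names, the simulated data, and the logistic specification are assumptions for illustration, not the paper's variables or model.

```python
import numpy as np
import pandas as pd
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

# Synthetic firm-quarter panel; a real study would use accounting ratios,
# market variables, and an estimated systemic-risk measure.
rng = np.random.default_rng(1)
n = 5000
df = pd.DataFrame({
    "leverage": rng.normal(0.5, 0.2, n),
    "equity_volatility": rng.gamma(2.0, 0.1, n),
    "systemic_risk": rng.normal(0.0, 1.0, n),
})
logit = (-4 + 3 * df["leverage"] + 2 * df["equity_volatility"]
         + 1.0 * df["systemic_risk"])
df["failed"] = rng.binomial(1, 1 / (1 + np.exp(-logit)))

base_cols = ["leverage", "equity_volatility"]
full_cols = base_cols + ["systemic_risk"]
x_tr, x_te, y_tr, y_te = train_test_split(
    df[full_cols], df["failed"], test_size=0.3, random_state=0)

base = LogisticRegression(max_iter=1000).fit(x_tr[base_cols], y_tr)
full = LogisticRegression(max_iter=1000).fit(x_tr, y_tr)

# Compare out-of-sample AUC with and without the systemic-risk covariate.
print("base AUC:", roc_auc_score(y_te, base.predict_proba(x_te[base_cols])[:, 1]))
print("full AUC:", roc_auc_score(y_te, full.predict_proba(x_te)[:, 1]))
```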

    Active and inactive microaneurysms identified and characterized by structural and angiographic optical coherence tomography

    Purpose: To characterize flow status within microaneurysms (MAs) and quantitatively investigate their relationship with regional macular edema in diabetic retinopathy (DR). Design: Retrospective, cross-sectional study. Participants: A total of 99 participants, including 23 with mild nonproliferative DR (NPDR), 25 with moderate NPDR, 34 with severe NPDR, and 17 with proliferative DR. Methods: In this study, 3x3-mm optical coherence tomography (OCT) and OCT angiography (OCTA) scans with a 400x400 sampling density were obtained from one eye of each participant using a commercial OCT system. Trained graders manually identified MAs and their locations relative to the anatomic layers on cross-sectional OCT. Microaneurysms were first classified as active if a flow signal was present in the OCTA channel. Active MAs were then further classified into fully active and partially active MAs based on the flow perfusion status of the MA on en face OCTA. The presence of retinal fluid near MAs was compared between active and inactive types. We also compared OCT-based MA detection with fundus photography (FP)- and fluorescein angiography (FA)-based detection. Results: We identified 308 MAs (166 fully active, 88 partially active, 54 inactive) in 42 eyes using OCT and OCTA. Nearly half of the identified MAs straddled the inner nuclear layer and outer plexiform layer. Compared with partially active and inactive MAs, fully active MAs were more likely to be associated with local retinal fluid. The associated fluid volumes were larger with fully active MAs than with partially active and inactive MAs. OCT/OCTA detected all MAs found on FP. While not all MAs seen with FA were identified with OCT, some MAs seen with OCT were not visible with FA or FP. Conclusions: Co-registered OCT and OCTA can characterize MA activity, which could be a new means to study the pathophysiology of diabetic macular edema.
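
    The activity classification described above reduces to a simple decision rule over the OCTA flow signal within each MA. A hedged sketch follows, assuming each MA is represented by a boolean OCTA flow mask over its OCT-defined extent; the threshold separating "fully" from "partially" active is an illustrative assumption, not a value taken from the study.

```python
import numpy as np

def classify_microaneurysm(flow_mask, full_threshold=0.9):
    """Classify an MA as inactive, partially active, or fully active.

    flow_mask      -- boolean array, True where OCTA flow signal is present
                      inside the MA region delineated on structural OCT
    full_threshold -- fraction of the MA that must show flow to be called
                      "fully active" (0.9 is an illustrative choice)
    """
    perfused_fraction = flow_mask.mean() if flow_mask.size else 0.0
    if perfused_fraction == 0.0:
        return "inactive"            # no flow signal in the OCTA channel
    if perfused_fraction >= full_threshold:
        return "fully active"        # essentially the whole MA is perfused
    return "partially active"        # only part of the MA shows flow

# Toy usage with a random flow mask.
rng = np.random.default_rng(0)
print(classify_microaneurysm(rng.random((10, 10)) < 0.5))
```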