
    Recurrent Multimodal Interaction for Referring Image Segmentation

    In this paper we are interested in the problem of image segmentation given natural language descriptions, i.e. referring expressions. Existing works tackle this problem by first modeling images and sentences independently and then segmenting images by combining these two types of representations. We argue that learning word-to-image interaction is more natural in the sense of jointly modeling the two modalities for the image segmentation task, and we propose a convolutional multimodal LSTM to encode the sequential interactions between individual words, visual information, and spatial information. We show that our proposed model outperforms the baseline model on benchmark datasets. In addition, we analyze the intermediate output of the proposed multimodal LSTM approach and empirically explain how this approach enforces a more effective word-to-image interaction.
    Comment: To appear in ICCV 2017. See http://www.cs.jhu.edu/~cxliu/ for code and supplementary material.
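    The word-to-image interaction described above can be pictured as a convolutional LSTM whose input at each word step stacks the word embedding (tiled over the feature map) with visual and spatial features. The PyTorch sketch below is only an illustration of that idea; the layer sizes, the 8-channel spatial coordinate map, and the 1x1-convolution gates are assumptions, not the authors' exact configuration.

```python
import torch
import torch.nn as nn

class ConvMultimodalLSTMCell(nn.Module):
    """Illustrative ConvLSTM cell over word, visual, and spatial features (sizes assumed)."""
    def __init__(self, word_dim=1000, vis_dim=500, hid_dim=500):
        super().__init__()
        in_dim = word_dim + vis_dim + 8 + hid_dim   # 8 spatial-coordinate channels (assumed)
        # A 1x1 convolution computes the four LSTM gates at every spatial location.
        self.gates = nn.Conv2d(in_dim, 4 * hid_dim, kernel_size=1)

    def forward(self, word_emb, vis_feat, spatial, h, c):
        # word_emb: (B, word_dim); tile it over the H x W feature grid.
        _, _, H, W = vis_feat.shape
        word_map = word_emb[:, :, None, None].expand(-1, -1, H, W)
        x = torch.cat([word_map, vis_feat, spatial, h], dim=1)
        i, f, o, g = self.gates(x).chunk(4, dim=1)
        c = torch.sigmoid(f) * c + torch.sigmoid(i) * torch.tanh(g)
        h = torch.sigmoid(o) * torch.tanh(c)
        return h, c
```

    Running such a cell once per word and applying a final 1x1 convolution to the last hidden state would give per-pixel segmentation logits for the referred object.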

    Deep Image Harmonization

    Compositing is one of the most common operations in photo editing. To generate realistic composites, the appearances of the foreground and background need to be adjusted to make them compatible. Previous approaches to harmonizing composites have focused on learning statistical relationships between hand-crafted appearance features of the foreground and background, which is unreliable, especially when the contents of the two layers are vastly different. In this work, we propose an end-to-end deep convolutional neural network for image harmonization, which can capture both the context and the semantic information of the composite images during harmonization. We also introduce an efficient way to collect large-scale, high-quality training data that facilitates the training process. Experiments on the synthesized dataset and on real composite images show that the proposed network outperforms previous state-of-the-art methods.
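    The abstract describes an end-to-end network that adjusts the pasted foreground to match its surroundings. Below is a minimal encoder-decoder sketch in PyTorch, assuming the common formulation in which the composite RGB image and its foreground mask are stacked into a 4-channel input; the depth, channel counts, and loss are placeholders rather than the paper's architecture.

```python
import torch
import torch.nn as nn

class HarmonizationNet(nn.Module):
    """Toy encoder-decoder: composite RGB + foreground mask in, harmonized RGB out."""
    def __init__(self):
        super().__init__()
        self.encoder = nn.Sequential(
            nn.Conv2d(4, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.Conv2d(64, 128, 4, stride=2, padding=1), nn.ReLU(inplace=True),
        )
        self.decoder = nn.Sequential(
            nn.ConvTranspose2d(128, 64, 4, stride=2, padding=1), nn.ReLU(inplace=True),
            nn.ConvTranspose2d(64, 3, 4, stride=2, padding=1), nn.Sigmoid(),
        )

    def forward(self, composite, mask):
        x = torch.cat([composite, mask], dim=1)   # (B, 4, H, W)
        return self.decoder(self.encoder(x))
```

    In a synthesized-data setup of the kind the abstract mentions, the output would typically be regressed (e.g. with an L2 loss) toward the original, unedited photograph from which the composite was generated.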

    Association between vitamin D and systemic lupus erythematosus disease activity index in children and adolescents: A systematic review and meta-analysis

    Purpose: To undertake a systematic review and meta-analysis to determine whether vitamin D is relevant to systemic lupus erythematosus (SLE) in children and adolescents. Methods: PubMed, Embase, Medline, and the Cochrane Library were systematically searched from January 1, 1979 to December 30, 2018. Cross-sectional studies comparing vitamin D, the systemic lupus erythematosus disease activity index (SLEDAI), parathormone (PTH), and calcium between children and adolescents with SLE and healthy children and adolescents were included. The primary outcomes were vitamin D level and SLEDAI, whereas the secondary outcomes were vitamin D level, vitamin D deficiency, PTH, and calcium. Results: A total of 98 articles were retrieved, of which 7 studies met the inclusion criteria. Serum vitamin D level in the SLE group was lower than in the healthy group, and patients with SLE were more vulnerable to vitamin D deficiency than healthy controls. However, correlation analysis indicates that vitamin D level was only weakly correlated with SLEDAI (r = -0.04). Subgroup analyses by latitude and economic status likewise indicated no correlation. PTH level was higher (p = 0.45) and calcium level was lower (p = 0.003) in patients with SLE than in healthy controls. The correlation analysis indicated a weak negative correlation between vitamin D and calcium (r = -0.09, p = 0.90) and a negative correlation between vitamin D and PTH (r = -0.44, p = 0.26). Conclusion: The results of this meta-analysis suggest that serum vitamin D level does not exhibit a meaningful correlation with SLEDAI.
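    For readers unfamiliar with how per-study correlation coefficients such as the r values above are combined, the standard approach is Fisher's z transform with inverse-variance weighting. The sketch below shows a simple fixed-effect version of that pooling; the (r, n) pairs in the example call are hypothetical placeholders, not data from the reviewed articles.

```python
import math

def pool_correlations(studies):
    """Fixed-effect pooling of Pearson r values via Fisher's z transform.

    `studies` is a list of (r, n) pairs; the values used below are hypothetical.
    """
    num, den = 0.0, 0.0
    for r, n in studies:
        z = 0.5 * math.log((1 + r) / (1 - r))   # Fisher z transform of r
        w = n - 3                               # inverse-variance weight (var(z) = 1/(n - 3))
        num += w * z
        den += w
    z_bar = num / den                           # weighted mean in z space
    return math.tanh(z_bar)                     # back-transform to a pooled r

print(pool_correlations([(-0.10, 40), (0.05, 55), (-0.02, 30)]))
```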

    Optron: Better Medical Image Registration via Optimizing in the Loop

    Previously, the field of image registration has had mainly two paradigms: traditional optimization-based methods and deep-learning-based methods. We designed a robust training architecture that is simple and generalizable. We present Optron, a general training architecture incorporating the idea of optimizing-in-the-loop. By iteratively optimizing the prediction of a deep learning model with a plug-and-play optimizer module inside the training loop, Optron introduces pseudo ground truth into an unsupervised training process. This pseudo supervision provides more direct guidance for model training than purely unsupervised methods. Utilizing this advantage, Optron consistently improves the models' performance and convergence speed. We evaluated our method on various combinations of models and datasets, achieving state-of-the-art performance on the IXI dataset and improving on the previous state-of-the-art method TransMorph by a significant margin of +1.6% DSC. Moreover, Optron consistently achieves positive results with other models and datasets: it increases the validation DSC on IXI for VoxelMorph and ViT-V-Net by +2.3% and +2.2%, respectively, demonstrating our method's generalizability. Our implementation is publicly available at https://github.com/miraclefactory/optron
    Comment: 10 pages, 5 figures, 4 tables
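    The optimizing-in-the-loop idea can be summarized as: predict a deformation field, refine that field with a few steps of instance-specific optimization, and then use the refined field as pseudo ground truth for the network. The PyTorch sketch below illustrates this for 2D images with an MSE similarity term; the loss choices, iteration count, and warp helper are assumptions for illustration, not Optron's actual implementation (see the linked repository for that).

```python
import torch
import torch.nn.functional as F

def warp(image, flow):
    """Warp `image` (B, C, H, W) by a dense displacement field `flow` (B, 2, H, W)."""
    B, _, H, W = image.shape
    ys, xs = torch.meshgrid(torch.arange(H), torch.arange(W), indexing="ij")
    grid = torch.stack([xs, ys], dim=0).float().to(image.device)       # identity coordinates
    coords = grid[None] + flow                                         # displaced coordinates
    sample_grid = torch.stack([2 * coords[:, 0] / (W - 1) - 1,         # normalize to [-1, 1]
                               2 * coords[:, 1] / (H - 1) - 1], dim=-1)
    return F.grid_sample(image, sample_grid, align_corners=True)

def optron_step(model, moving, fixed, model_opt, refine_iters=10):
    """One illustrative optimizing-in-the-loop training step (losses/weights are assumed)."""
    # 1. Unsupervised prediction of a displacement field by the network.
    flow = model(moving, fixed)

    # 2. Instance-specific refinement of that field by a plug-in optimizer.
    refined = flow.detach().clone().requires_grad_(True)
    inner_opt = torch.optim.Adam([refined], lr=1e-2)
    for _ in range(refine_iters):
        inner_opt.zero_grad()
        F.mse_loss(warp(moving, refined), fixed).backward()
        inner_opt.step()

    # 3. The refined field serves as pseudo ground truth for the network,
    #    alongside the usual unsupervised similarity term.
    model_opt.zero_grad()
    loss = F.mse_loss(warp(moving, flow), fixed) + F.mse_loss(flow, refined.detach())
    loss.backward()
    model_opt.step()
    return loss.item()
```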