382 research outputs found

    Image Blending Algorithm with Automatic Mask Generation

    In recent years, image blending has gained popularity for its ability to create visually stunning content. However, current image blending algorithms suffer from two main problems: manually creating blending masks requires substantial human and material resources, and existing algorithms cannot effectively address brightness distortion and low resolution. To this end, we propose a new image blending method with automatic mask generation: it combines semantic object detection and segmentation with mask generation to produce deeply blended images, and it applies our proposed saturation loss and a two-stage iteration of the PAN algorithm to fix brightness distortion and low-resolution issues. Results on publicly available datasets show that our method outperforms other classical image blending algorithms on various performance metrics, including PSNR and SSIM.
    Comment: 14 pages, 8 figures
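
    As a rough illustration of the automatic-mask idea (a minimal sketch under assumed tooling, not the paper's pipeline with its saturation loss and two-stage PAN iteration), an off-the-shelf instance segmentation model can supply the blending mask and classical Poisson blending can stand in for the deep blending stage; the file names and score threshold below are hypothetical.

        # Sketch: automatic mask from a pretrained Mask R-CNN, then Poisson blending.
        import cv2
        import numpy as np
        import torch
        from torchvision.models.detection import maskrcnn_resnet50_fpn

        def auto_mask(src_bgr, score_thresh=0.7):
            """Return a binary mask (uint8, 0/255) of the most confident detected object."""
            model = maskrcnn_resnet50_fpn(weights="DEFAULT").eval()
            img = torch.from_numpy(src_bgr[:, :, ::-1].copy()).permute(2, 0, 1).float() / 255.0
            with torch.no_grad():
                out = model([img])[0]
            keep = out["scores"] > score_thresh
            if not keep.any():
                raise RuntimeError("no confident object found")
            mask = out["masks"][keep][0, 0].numpy() > 0.5   # top-scoring instance
            return mask.astype(np.uint8) * 255

        def blend(src_bgr, dst_bgr, center):
            """Blend the detected source object into the destination image at `center`."""
            mask = auto_mask(src_bgr)
            return cv2.seamlessClone(src_bgr, dst_bgr, mask, center, cv2.NORMAL_CLONE)

        # blended = blend(cv2.imread("object.jpg"), cv2.imread("scene.jpg"), (400, 300))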

    Digital archive for high-end craftsmanship processes. Comparing research paths in Italy and China

    This article approaches the digitisation of the Intangible Heritage preserved in the traditional craft processes of SME manufacturing districts through a comparison of two case studies. Two distinct manufacturing areas – furniture and accessories in Tuscany (Italy) and Miao silverware in Guizhou Province (China) – are investigated in terms of their characteristics, structures and capacity to access technological innovation. The proposed analysis highlights the privileged role of the digital archive as a catalyst of creative and organisational processes. On the one hand, it favours the construction of an organised and transmissible memory that supports SMEs in their market competitiveness. On the other, it serves as a basis for developing user-experience processes aimed at dissemination and reintegration in the context of contemporary design.

    Radiogenomics-Based Risk Prediction of Glioblastoma Multiforme with Clinical Relevance

    Glioblastoma multiforme (GBM) is the most common and aggressive primary brain tumor. Although temozolomide (TMZ)-based radiochemotherapy improves the overall survival of GBM patients, it also increases the frequency of false positive post-treatment magnetic resonance imaging (MRI) assessments of tumor progression. Pseudo-progression (PsP) is a treatment-related reaction with an increased contrast-enhancing lesion size at the tumor site or resection margins that mimics tumor recurrence on MRI. Accurate and reliable prognostication of GBM progression is urgently needed in the clinical management of GBM patients. Clinical data analysis indicates that patients with PsP had superior overall and progression-free survival rates. In this study, we aimed to develop a prognostic model to evaluate the tumor progression potential of GBM patients following standard therapies. We applied a dictionary learning scheme to obtain imaging features of GBM patients with PsP or true tumor progression (TTP) from the Wake dataset. Based on these radiographic features, we conducted a radiogenomics analysis to identify the significantly associated genes. These genes were then used as features to construct a 2YS (2-year survival rate) logistic regression model, and GBM patients were classified into low- and high-survival-risk groups based on the individual 2YS scores derived from this model. We tested our model using an independent dataset from The Cancer Genome Atlas (TCGA) and found that 2YS scores were significantly associated with patients' overall survival. Using two cohorts of the TCGA data to train and test our model, the 2YS score-based classification results from both the training and testing datasets were significantly associated with patients' overall survival. We also analyzed the survival prediction ability of other clinical factors (gender, age, KPS (Karnofsky performance status), normal cell ratio) and found that these factors were unrelated or only weakly correlated with patients' survival. Overall, our studies demonstrate the effectiveness and robustness of the 2YS model in predicting the clinical outcomes of GBM patients after standard therapies.
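
    The 2YS construction lends itself to a compact sketch: logistic regression over the radiogenomics-selected gene features yields a per-patient 2-year-survival score, and the score splits patients into low- and high-risk groups. The arrays and the median-split rule below are placeholders, not the authors' data or exact thresholding.

        # Sketch of a 2YS-style risk model (illustrative data, scikit-learn).
        import numpy as np
        from sklearn.linear_model import LogisticRegression
        from sklearn.metrics import roc_auc_score
        from sklearn.model_selection import train_test_split

        rng = np.random.default_rng(0)
        X = rng.normal(size=(200, 30))            # patients x selected-gene expression (placeholder)
        y = rng.integers(0, 2, size=200)          # 1 = survived at least 2 years (placeholder)

        X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0, stratify=y)
        model = LogisticRegression(max_iter=1000).fit(X_tr, y_tr)

        score_2ys = model.predict_proba(X_te)[:, 1]   # higher score = better predicted survival
        risk_group = np.where(score_2ys >= np.median(score_2ys), "low-risk", "high-risk")
        print("test AUC:", roc_auc_score(y_te, score_2ys))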

    Identification of potential shared gene signatures between gastric cancer and type 2 diabetes: a data-driven analysis

    Background: Gastric cancer (GC) and type 2 diabetes (T2D) contribute to each other's development, but the interaction mechanisms remain undiscovered. The goal of this research was to explore shared genes as well as crosstalk mechanisms between GC and T2D. Methods: The Gene Expression Omnibus (GEO) database served as the source of the GC and T2D datasets. Differentially expressed genes (DEGs) and weighted gene co-expression network analysis (WGCNA) were utilized to identify representative genes. In addition, overlapping genes between the representative genes of the two diseases were used for functional enrichment analysis and construction of a protein–protein interaction (PPI) network. Next, hub genes were filtered through two machine learning algorithms. Finally, external validation was undertaken with data from The Cancer Genome Atlas (TCGA) database. Results: A total of 292 and 541 DEGs were obtained from the GC (GSE29272) and T2D (GSE164416) datasets, respectively. In addition, 2,704 and 336 module genes were identified in GC and T2D. Following their intersection, 104 crosstalk genes were identified. Enrichment analysis indicated that "ECM-receptor interaction," "AGE-RAGE signaling pathway in diabetic complications," "aging," and "cellular response to copper ion" were mutual pathways. Through the PPI network, 10 genes were identified as candidate hub genes. Machine learning further selected BGN, VCAN, FN1, FBLN1, COL4A5, COL1A1, and COL6A3 as hub genes. Conclusion: "ECM-receptor interaction," "AGE-RAGE signaling pathway in diabetic complications," "aging," and "cellular response to copper ion" were revealed as possible crosstalk mechanisms. BGN, VCAN, FN1, FBLN1, COL4A5, COL1A1, and COL6A3 were identified as shared genes and potential therapeutic targets for people suffering from both GC and T2D.
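
    The crosstalk-gene workflow can be sketched in a few lines: intersect the representative gene sets of the two diseases, then run a feature selector over the overlapping genes. LASSO-penalized logistic regression is used below only as a stand-in for the study's two (unspecified here) machine learning algorithms, and the expression matrix and extra gene names are placeholders.

        # Sketch: gene-set intersection followed by sparse feature selection for hub genes.
        import numpy as np
        import pandas as pd
        from sklearn.linear_model import LogisticRegression

        gc_genes  = {"BGN", "VCAN", "FN1", "FBLN1", "COL4A5", "COL1A1", "COL6A3", "SPARC"}
        t2d_genes = {"BGN", "VCAN", "FN1", "FBLN1", "COL4A5", "COL1A1", "COL6A3", "IGFBP5"}
        crosstalk = sorted(gc_genes & t2d_genes)                     # overlapping candidate genes

        rng = np.random.default_rng(1)
        expr = pd.DataFrame(rng.normal(size=(120, len(crosstalk))), columns=crosstalk)  # placeholder
        label = rng.integers(0, 2, size=120)                         # disease vs. control (placeholder)

        lasso = LogisticRegression(penalty="l1", solver="liblinear", C=0.5).fit(expr, label)
        hub_genes = [g for g, w in zip(crosstalk, lasso.coef_[0]) if w != 0]
        print("candidate hub genes:", hub_genes)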

    TransVCL: Attention-enhanced Video Copy Localization Network with Flexible Supervision

    Video copy localization aims to precisely localize all copied segments within a pair of untrimmed videos in video retrieval applications. Previous methods typically start from a frame-to-frame similarity matrix generated by cosine similarity between frame-level features of the input video pair, and then detect and refine the boundaries of copied segments on this similarity matrix under temporal constraints. In this paper, we propose TransVCL: an attention-enhanced video copy localization network that is optimized directly from initial frame-level features and trained end-to-end with three main components: a customized Transformer for feature enhancement, a correlation and softmax layer for similarity matrix generation, and a temporal alignment module for copied segment localization. In contrast to previous methods that require a handcrafted similarity matrix, TransVCL incorporates long-range temporal information between the feature sequence pair using self- and cross-attention layers. With the joint design and optimization of the three components, the similarity matrix can be learned to present more discriminative copied patterns, leading to significant improvements over previous methods on segment-level labeled datasets (VCSL and VCDB). Besides its state-of-the-art performance in the fully supervised setting, the attention architecture allows TransVCL to further exploit unlabeled or merely video-level labeled data. Additional experiments supplementing training with video-level labeled datasets, including SVD and FIVR, reveal the high flexibility of TransVCL from full supervision to semi-supervision (with or without video-level annotation). Code is publicly available at https://github.com/transvcl/TransVCL.
    Comment: Accepted by the Thirty-Seventh AAAI Conference on Artificial Intelligence (AAAI 2023)
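
    One reading of the correlation-and-softmax component (a sketch, not the released TransVCL code; the layer sizes and the use of plain self-attention in place of the paper's self- and cross-attention design are assumptions) is a learned map from two frame-feature sequences to a normalized frame-to-frame similarity matrix.

        # Sketch: attention-enhanced features -> correlation -> softmax-normalized similarity matrix.
        import torch
        import torch.nn as nn
        import torch.nn.functional as F

        class CorrelationSoftmax(nn.Module):
            def __init__(self, dim=256, heads=4):
                super().__init__()
                enc = nn.TransformerEncoderLayer(d_model=dim, nhead=heads, batch_first=True)
                self.enhance = nn.TransformerEncoder(enc, num_layers=2)   # shared feature enhancer

            def forward(self, feats_a, feats_b):
                # feats_a: (B, Ta, D) query-video frame features; feats_b: (B, Tb, D) reference features
                a = F.normalize(self.enhance(feats_a), dim=-1)
                b = F.normalize(self.enhance(feats_b), dim=-1)
                corr = torch.einsum("btd,bsd->bts", a, b)                 # cosine-style correlation
                return corr.softmax(dim=-1)                               # row-wise normalized similarity

        sim = CorrelationSoftmax()(torch.randn(1, 120, 256), torch.randn(1, 90, 256))
        print(sim.shape)  # torch.Size([1, 120, 90])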