188 research outputs found

    S2WAT: Image Style Transfer via Hierarchical Vision Transformer using Strips Window Attention

    This paper presents a new hierarchical vision Transformer for image style transfer, called the Strips Window Attention Transformer (S2WAT), which serves as the encoder in an encoder-transfer-decoder architecture. Because it produces hierarchical features, S2WAT can, in future work, bring techniques proven in other areas of computer vision, such as feature pyramid networks (FPN) or U-Net, to image style transfer. Existing window-based Transformers, however, produce grid-like artifacts in the stylized images when applied directly to style transfer. To solve this problem, we propose S2WAT, whose representation is computed with Strips Window Attention (SpW Attention). SpW Attention integrates both local information and long-range dependencies in the horizontal and vertical directions through a novel feature-fusion scheme named Attn Merge. Qualitative and quantitative experiments demonstrate that S2WAT achieves performance comparable to state-of-the-art CNN-based, Flow-based, and Transformer-based approaches. The code and models are available at https://github.com/AlienZhang1996/S2WAT
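    The abstract describes attention computed within horizontal and vertical strip windows, with the two branches fused by Attn Merge. As a rough, untrained sketch (NumPy, no learned projections; the paper's strips are windows wider than one row or column, and the real Attn Merge is a learned module), single-row and single-column strip attention with a simplified softmax-weighted merge might look like:

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def strip_attention(feat, horizontal=True):
    """Self-attention restricted to strips of a feature map.

    feat: (H, W, C) array. Each row (horizontal strip) or column
    (vertical strip) attends only within itself, capturing long-range
    dependencies along that axis while staying local along the other.
    """
    H, W, C = feat.shape
    x = feat if horizontal else feat.transpose(1, 0, 2)  # strips on axis 0
    out = np.empty_like(x)
    for i in range(x.shape[0]):
        q = k = v = x[i]                      # (L, C); untrained: Q=K=V
        attn = softmax(q @ k.T / np.sqrt(C))  # (L, L) attention weights
        out[i] = attn @ v
    return out if horizontal else out.transpose(1, 0, 2)

def attn_merge(a, b):
    """Simplified stand-in for Attn Merge: per-position softmax weights
    over the two branch outputs, then a weighted sum."""
    w = softmax(np.stack([a, b]), axis=0)     # (2, H, W, C), sums to 1
    return w[0] * a + w[1] * b

feat = np.random.rand(8, 8, 16)
merged = attn_merge(strip_attention(feat, True), strip_attention(feat, False))
print(merged.shape)  # (8, 8, 16)
```

    The merge keeps the output the same shape as the input, so the fused horizontal/vertical result can feed the next hierarchical stage directly.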

    What Constitutes Good Contrastive Learning in Time-Series Forecasting?

    In recent years, the introduction of self-supervised contrastive learning (SSCL) has driven remarkable improvements in representation learning across various domains, including natural language processing and computer vision. By leveraging the inherent benefits of self-supervision, SSCL enables the pre-training of representation models on vast amounts of unlabeled data. Despite these advances, there remains a significant gap in understanding how different SSCL strategies affect time-series forecasting performance, and what specific benefits SSCL can bring. This paper addresses these gaps with a comprehensive analysis of the effectiveness of various training variables, including different SSCL algorithms, learning strategies, model architectures, and their interplay. Additionally, to gain deeper insight into the improvements SSCL brings to time-series forecasting, we perform a qualitative analysis of the empirical receptive field. Through our experiments, we demonstrate that end-to-end training of a Transformer model with both the Mean Squared Error (MSE) loss and SSCL is the most effective approach for time-series forecasting. Notably, incorporating the contrastive objective enables the model to prioritize information more pertinent to forecasting, such as scale and periodic relationships. These findings contribute to a better understanding of the benefits of SSCL in time-series forecasting and provide valuable insights for future research in this area. Our code is available at https://github.com/chiyuzhang94/contrastive_learning_time-series_e2e
    Comment: Accepted at the IJCAI'22 Workshop AI4TS: AI for Time Series Analysis

    How to report and make sense of a new HIV-1 circulating recombinant form?

    Co-circulation of multiple HIV-1 subtypes in the same high-risk groups leads to the ongoing generation of various inter-subtype recombinants, including unique (URFs) and circulating (CRFs) recombinant forms, which poses a new challenge for the prevention and eradication of HIV/AIDS. Identifying and promptly reporting new CRFs provides not only new insights into the genetic diversity and evolution of HIV-1 but also an early warning of the potential spread of these variants. To date, 140 HIV-1 CRFs have been described; however, their prevalence and clinical importance have received little attention. Apart from mosaic genomic maps, little other valuable information, including clinical and demographic data, genomic sequence characteristics, origin and evolutionary dynamics, and representative genomic fragments for identifying the variants, is available for most of these CRFs. With the growing number of HIV-1 full-length genomic sequences, more and more CRFs will be identified in the near future, owing to the high recombination potential of HIV-1. Here, we discuss the prevalence and clinical importance of various HIV-1 CRFs and propose how to report and make sense of a new HIV-1 CRF.

    The Skipped Beat: A Study of Sociopragmatic Understanding in LLMs for 64 Languages

    Instruction-tuned large language models (LLMs), such as ChatGPT, demonstrate remarkable performance on a wide range of tasks. Despite numerous recent studies examining the performance of instruction-tuned LLMs on various NLP benchmarks, there remains a lack of comprehensive investigation into their ability to understand cross-lingual sociopragmatic meaning (SM), i.e., meaning embedded within social and interactive contexts. This deficiency arises partly because SM is not adequately represented in any existing benchmark. To address this gap, we present SPARROW, an extensive multilingual benchmark specifically designed for SM understanding. SPARROW comprises 169 datasets covering 13 task types across six primary categories (e.g., anti-social language detection, emotion recognition). The SPARROW datasets encompass 64 different languages originating from 12 language families and written in 16 scripts. We evaluate the performance of various multilingual pretrained language models (e.g., mT5) and instruction-tuned LLMs (e.g., BLOOMZ, ChatGPT) on SPARROW through fine-tuning, zero-shot, and/or few-shot learning. Our comprehensive analysis reveals that existing open-source instruction-tuned LLMs still struggle to understand SM across various languages, in some cases performing close to a random baseline. We also find that although ChatGPT outperforms many LLMs, it still falls behind task-specific fine-tuned models, with a gap of 12.19 in SPARROW score. Our benchmark is available at: https://github.com/UBC-NLP/SPARROW
    Comment: Accepted at the EMNLP 2023 main conference
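    The abstract compares models by an aggregate "SPARROW score" and against a random baseline, but does not define the scoring. The sketch below only illustrates the kind of comparison involved, using a hypothetical macro-average over per-dataset scores and the accuracy of uniform random guessing; `sparrow_style_score` and `random_baseline` are illustrative names, not part of the benchmark.

```python
import random

def sparrow_style_score(per_dataset_scores):
    """Hypothetical aggregate: macro-average of per-dataset scores (0-100).
    The actual SPARROW scoring may differ."""
    return sum(per_dataset_scores) / len(per_dataset_scores)

def random_baseline(n_classes, n_examples, seed=0):
    """Accuracy (%) of uniform random guessing on a balanced classification set."""
    rng = random.Random(seed)
    labels = [i % n_classes for i in range(n_examples)]
    guesses = [rng.randrange(n_classes) for _ in range(n_examples)]
    return 100 * sum(g == y for g, y in zip(guesses, labels)) / n_examples

model = sparrow_style_score([62.0, 48.5, 71.2])               # made-up scores
base = sparrow_style_score([random_baseline(3, 300), random_baseline(2, 300)])
print(round(model - base, 1))  # gap of the model over random guessing
```

    Reporting gaps against such a baseline makes the "close to random" observation for some open-source LLMs directly quantifiable per language and task.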