
    Correction of "Cloud Removal By Fusing Multi-Source and Multi-Temporal Images"

    Remote sensing images often suffer from cloud cover, and cloud removal is required in many applications of remote sensing imagery. Multitemporal-based methods are popular and effective for coping with thick clouds. This paper contributes a summary and experimental comparison of the existing multitemporal-based methods. Furthermore, we propose a spatiotemporal-fusion with Poisson-adjustment method to fuse multi-sensor and multi-temporal images for cloud removal. The experimental results show that the proposed method has the potential to address the accuracy reduction of cloud removal in multi-temporal images with significant changes. Comment: This is a corrected version of the accepted IGARSS 2017 conference paper.
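    The abstract gives no implementation details, so the sketch below is only an illustrative aside on what a Poisson-style adjustment can look like when a cloud-masked region of a target image is filled using gradients from a cloud-free reference acquisition. The arrays `target`, `reference`, and `cloud_mask` are hypothetical inputs, and the plain Jacobi iteration is a minimal stand-in, not the authors' fusion method.

    ```python
    import numpy as np

    def poisson_fill(target, reference, cloud_mask, iters=500):
        """Fill cloud_mask pixels of `target` using the Laplacian of
        `reference` as guidance, with boundary values taken from `target`.
        Simple Jacobi iteration on the discrete Poisson equation.
        Assumes the mask does not touch the image border."""
        out = target.astype(np.float64).copy()
        ref = reference.astype(np.float64)
        ys, xs = np.where(cloud_mask)
        for _ in range(iters):
            new = out.copy()
            for y, x in zip(ys, xs):
                # 4-neighbour average of the current estimate ...
                nb = out[y - 1, x] + out[y + 1, x] + out[y, x - 1] + out[y, x + 1]
                # ... plus the reference Laplacian as the guidance field
                lap = 4 * ref[y, x] - (ref[y - 1, x] + ref[y + 1, x]
                                       + ref[y, x - 1] + ref[y, x + 1])
                new[y, x] = (nb + lap) / 4.0
            out = new
        return out
    ```

    The boundary pixels of the cloud-free area anchor the solution, so the filled region inherits the reference image's texture while matching the target's brightness at the seam.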

    Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition

    Emotion Recognition in Conversation (ERC) has been widely studied due to its importance in developing emotion-aware empathetic machines. The rise of pre-trained language models (PLMs) has further pushed the limits of ERC performance. However, most recent works on ERC using PLMs are heavily data-driven and require fine-tuning the entire PLM. To improve both sample and computational efficiency, we propose a derivative-free optimization method called Cross-Task Prompt Tuning (CTPT) for few-shot conversational emotion recognition. Unlike existing methods that learn independent knowledge from individual tasks, CTPT leverages sharable cross-task knowledge by exploiting external knowledge from other source tasks to improve learning performance under the few-shot setting. Moreover, CTPT only needs to optimize a vector of low intrinsic dimensionality without gradients, which is highly parameter-efficient compared with existing approaches. Experiments on five different contextual conversation datasets demonstrate that CTPT achieves superior results in both few-shot scenarios and zero-shot transfer. Comment: Findings of EMNLP 202
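    For readers unfamiliar with intrinsic-dimension, gradient-free prompt tuning, the sketch below illustrates the general recipe the abstract alludes to: a low-dimensional vector z is mapped through a fixed random projection into soft-prompt embeddings, and z is updated with a simple (1+1) evolution strategy using only loss evaluations. The `evaluate_loss` callback, all dimensions, and the search strategy are assumptions for illustration, not the CTPT implementation.

    ```python
    import numpy as np

    def make_projection(intrinsic_dim, prompt_len, embed_dim, seed=0):
        """Fixed random map from the low-dimensional search space to the
        (prompt_len x embed_dim) soft-prompt embedding space."""
        rng = np.random.default_rng(seed)
        return rng.normal(0.0, 0.02, size=(intrinsic_dim, prompt_len * embed_dim))

    def tune_prompt(evaluate_loss, intrinsic_dim=500, prompt_len=20,
                    embed_dim=768, steps=200, sigma=0.1, seed=0):
        """(1+1) evolution strategy over the low-dimensional vector z.
        `evaluate_loss(prompt_embeddings)` is assumed to run the frozen PLM
        on the few-shot set and return a scalar loss (black box, no gradients)."""
        rng = np.random.default_rng(seed)
        proj = make_projection(intrinsic_dim, prompt_len, embed_dim, seed)
        z = np.zeros(intrinsic_dim)
        best = evaluate_loss((z @ proj).reshape(prompt_len, embed_dim))
        for _ in range(steps):
            cand = z + sigma * rng.normal(size=intrinsic_dim)
            loss = evaluate_loss((cand @ proj).reshape(prompt_len, embed_dim))
            if loss < best:  # keep the candidate only if it improves the loss
                z, best = cand, loss
        return (z @ proj).reshape(prompt_len, embed_dim), best
    ```

    Because only the low-dimensional vector is searched and no gradients flow through the PLM, the trainable parameter count and memory footprint stay small, which is the parameter-efficiency argument the abstract makes.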

    The swimming behavior of the aquatic larva of Neoneuromus ignobilis (Megaloptera: Corydalidae: Corydalinae).

    To explore the pattern and significance of swimming in Megaloptera, we observed and recorded, through photos and videos, the swimming behavior of aquatic larvae in detail for the first time, using the endemic Chinese species Neoneuromus ignobilis Navas, 1932 as the test insect; specimens were collected from the Dadu River and reared in nature-simulated environments. Four swimming postures are recognized and described herein in detail, i.e., vertical, parallel, back, and side swimming; these postures were used by the larvae disproportionately, with frequencies of 89.08%, 5.49%, 4.40%, and 0.61%, respectively. Swimming larvae tend to hold their body in an S-shape with varying degrees of sinuation. By changing the directions of the head and tail, they can easily rise or sink and change swimming postures. Propulsion is generated by the wriggling of the body, while the legs are mostly held close to the body. Larvae of different instars varied greatly in swimming ability, the 6th-instar larvae being the best and most active swimmers compared with the 2nd and final instars. The larvae may also employ complex defense behaviors not often known from relatively ancient insect groups, such as chemical defense via secretions from the end of the abdomen.

    Image Aesthetics Assessment via Learnable Queries

    Image aesthetics assessment (IAA) aims to estimate the aesthetics of images. Depending on the content of an image, diverse criteria need to be selected to assess its aesthetics. Existing works utilize pre-trained vision backbones based on content knowledge to learn image aesthetics. However, training those backbones is time-consuming and suffers from attention dispersion. Inspired by learnable queries in vision-language alignment, we propose the Image Aesthetics Assessment via Learnable Queries (IAA-LQ) approach. It adapts learnable queries to extract aesthetic features from pre-trained image features obtained from a frozen image encoder. Extensive experiments on real-world data demonstrate the advantages of IAA-LQ, beating the best state-of-the-art method by 2.2% and 2.1% in terms of SRCC and PLCC, respectively. Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
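    The abstract does not include code; the module below is only a minimal sketch of the general learnable-query pattern it describes: a small set of trainable query vectors cross-attends to frozen image-encoder features and regresses a score. The class name, dimensions, pooling, and regression head are assumptions, not the IAA-LQ architecture.

    ```python
    import torch
    import torch.nn as nn

    class LearnableQueryHead(nn.Module):
        """Learnable queries attend to frozen image features and produce an
        aesthetic score; only this head is trained, the encoder stays frozen."""
        def __init__(self, feat_dim=768, num_queries=8, num_heads=8):
            super().__init__()
            self.queries = nn.Parameter(torch.randn(num_queries, feat_dim) * 0.02)
            self.attn = nn.MultiheadAttention(feat_dim, num_heads, batch_first=True)
            self.score = nn.Linear(feat_dim, 1)

        def forward(self, image_feats):
            # image_feats: (batch, num_patches, feat_dim) from a frozen encoder
            q = self.queries.unsqueeze(0).expand(image_feats.size(0), -1, -1)
            attended, _ = self.attn(q, image_feats, image_feats)
            return self.score(attended.mean(dim=1)).squeeze(-1)  # (batch,)

    # Example: stand-in for frozen ViT patch features -> predicted scores
    feats = torch.randn(4, 196, 768)
    head = LearnableQueryHead()
    scores = head(feats)  # shape: (4,)
    ```

    Keeping the encoder frozen and training only the queries and score head is what avoids re-training the backbone, which is the efficiency argument the abstract makes.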