Correction of "Cloud Removal By Fusing Multi-Source and Multi-Temporal Images"
Remote sensing images often suffer from cloud cover, and many applications of
remote sensing images require cloud removal. Multitemporal-based methods are
popular and effective for coping with thick clouds. This paper contributes a
summary and experimental comparison of the existing multitemporal-based
methods. Furthermore, we propose a spatiotemporal-fusion with
Poisson-adjustment method to fuse multi-sensor and multi-temporal images for
cloud removal. The experimental results show that the proposed method has the
potential to address the problem of accuracy reduction of cloud removal in
multi-temporal images with significant changes.
Comment: This is a corrected version of the accepted IGARSS 2017 conference paper
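The Poisson-adjustment idea can be illustrated with a gradient-domain blending sketch: pixels under the cloud mask are reconstructed so that their gradients follow a cloud-free reference image, while boundary values stay anchored to the target. This is a minimal toy (single band, plain Jacobi iteration, no radiometric correction between sensors), not the paper's actual method; `poisson_blend` and all parameters are illustrative.

```python
import numpy as np

def poisson_blend(target, reference, mask, iters=2000):
    """Reconstruct masked pixels of `target` so that their Laplacian
    matches `reference`, with unmasked pixels acting as the boundary
    condition. Toy stand-in for a Poisson-adjustment fusion step."""
    result = target.astype(float).copy()
    guide = reference.astype(float)
    # Discrete Laplacian of the reference is the guidance field.
    lap = np.zeros_like(result)
    lap[1:-1, 1:-1] = (guide[:-2, 1:-1] + guide[2:, 1:-1]
                       + guide[1:-1, :-2] + guide[1:-1, 2:]
                       - 4.0 * guide[1:-1, 1:-1])
    # Only strictly interior masked pixels are unknowns.
    inner = mask.copy()
    inner[0, :] = inner[-1, :] = False
    inner[:, 0] = inner[:, -1] = False
    for _ in range(iters):
        # Jacobi update of the discrete Poisson equation
        # 4*f_ij - sum(neighbors) = -lap_ij.
        neigh = (result[:-2, 1:-1] + result[2:, 1:-1]
                 + result[1:-1, :-2] + result[1:-1, 2:])
        upd = (neigh - lap[1:-1, 1:-1]) / 4.0
        sel = inner[1:-1, 1:-1]
        sub = result[1:-1, 1:-1]   # view into result
        sub[sel] = upd[sel]
    return result
```

Because the unmasked pixels are never touched, the cloud-free part of the target is preserved exactly; only the masked region is re-solved against the reference's gradients.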
Efficient Cross-Task Prompt Tuning for Few-Shot Conversational Emotion Recognition
Emotion Recognition in Conversation (ERC) has been widely studied due to its
importance in developing emotion-aware empathetic machines. The rise of
pre-trained language models (PLMs) has further pushed the limit of ERC
performance. However, most recent works on ERC using PLMs are heavily
data-driven and require fine-tuning the entire PLM. To improve both sample
and computational efficiency, we propose a derivative-free optimization method
called Cross-Task Prompt Tuning (CTPT) for few-shot conversational emotion
recognition. Unlike existing methods that learn independent knowledge from
individual tasks, CTPT leverages sharable cross-task knowledge by exploiting
external knowledge from other source tasks to improve learning performance
under the few-shot setting. Moreover, CTPT only needs to optimize a vector
under the low intrinsic dimensionality without gradient, which is highly
parameter-efficient compared with existing approaches. Experiments on five
different contextual conversation datasets demonstrate that our CTPT method
achieves superior results in both few-shot scenarios and zero-shot transfer.
Comment: Findings of EMNLP 202
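The core trick of optimizing a vector in a low intrinsic dimensionality without gradients can be sketched with a fixed random projection and a (1+1) evolution strategy. The objective below is a toy quadratic standing in for the black-box few-shot task loss; `es_optimize`, `D_INTRINSIC`, and the projection `A` are illustrative assumptions, not CTPT's actual algorithm.

```python
import numpy as np

rng = np.random.default_rng(0)

D_PROMPT, D_INTRINSIC = 256, 8            # prompt dim >> intrinsic dim
# Fixed random projection from the small search space to prompt space.
A = rng.standard_normal((D_PROMPT, D_INTRINSIC)) / np.sqrt(D_INTRINSIC)
z_true = rng.standard_normal(D_INTRINSIC)  # "ideal" low-dim solution
target = A @ z_true                        # stand-in for a good prompt

def loss(z):
    """Black-box objective: in prompt tuning this would be the few-shot
    task loss from forward passes only; here, distance to `target`."""
    return float(np.sum((A @ z - target) ** 2))

def es_optimize(steps=500, sigma=0.3):
    """(1+1) evolution strategy: mutate z, keep the candidate if it
    improves the loss. Uses only function evaluations, no gradients."""
    z = np.zeros(D_INTRINSIC)
    best = loss(z)
    for _ in range(steps):
        cand = z + sigma * rng.standard_normal(D_INTRINSIC)
        c = loss(cand)
        if c < best:
            z, best = cand, c
    return z, best
```

The search happens entirely in the 8-dimensional space, so the number of tunable values is tiny compared with fine-tuning a full PLM, which is the parameter-efficiency argument the abstract makes.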
N-Acetyl and Glutamatergic Neurometabolites in Perisylvian Brain Regions of Methamphetamine Users.
Background: Methamphetamine induces neuronal N-acetyl-aspartate synthesis in preclinical studies. In a preliminary human proton magnetic resonance spectroscopic imaging investigation, we also observed that N-acetyl-aspartate+N-acetyl-aspartyl-glutamate in right inferior frontal cortex correlated with years of heavy methamphetamine abuse. In the same brain region, glutamate+glutamine is lower in methamphetamine users than in controls and is negatively correlated with depression. N-acetyl and glutamatergic neurochemistries therefore merit further investigation in methamphetamine abuse and the associated mood symptoms.
Methods: Magnetic resonance spectroscopic imaging was used to measure N-acetyl-aspartate+N-acetyl-aspartyl-glutamate and glutamate+glutamine in bilateral inferior frontal cortex and insula, a neighboring perisylvian region affected by methamphetamine, of 45 abstinent methamphetamine-dependent and 45 healthy control participants. Regional neurometabolite levels were tested for group differences and associations with duration of heavy methamphetamine use, depressive symptoms, and state anxiety.
Results: In right inferior frontal cortex, N-acetyl-aspartate+N-acetyl-aspartyl-glutamate correlated with years of heavy methamphetamine use (r = +0.45); glutamate+glutamine was 9.3% lower in methamphetamine users than in controls and correlated negatively with depressive symptoms (r = -0.44). In left insula, N-acetyl-aspartate+N-acetyl-aspartyl-glutamate was 9.1% higher in methamphetamine users than controls. In right insula, glutamate+glutamine was 12.3% lower in methamphetamine users than controls and correlated negatively with depressive symptoms (r = -0.51) and state anxiety (r = -0.47).
Conclusions: The inferior frontal cortex and insula show methamphetamine-related abnormalities, consistent with prior observations of increased cortical N-acetyl-aspartate in methamphetamine-exposed animal models and associations between cortical glutamate and mood in human methamphetamine users.
The swimming behavior of the aquatic larva of Neoneuromus ignobilis (Megaloptera: Corydalidae: Corydalinae).
To explore the pattern and significance of swimming, we observed and recorded through photos and videos, in detail and for the first time, the swimming behavior of aquatic Megaloptera larvae, using the endemic Chinese species Neoneuromus ignobilis Navas, 1932 as the test insect; the larvae were collected from the Dadu River and reared in nature-simulated environments. Four swimming postures are recognized and described herein in detail, i.e., vertical, parallel, back, and side swimming; the larvae used these postures disproportionately, with frequencies of 89.08%, 5.49%, 4.40%, and 0.61%, respectively. Swimming larvae tend to hold the body in an S-shape, with varying degrees of sinuation. By changing the directions of the head and tail, they can easily rise, sink, or change swimming posture. Propulsion was generated by the wriggling of the body, while the legs were mostly held close to the body. Larvae of different instars varied greatly in swimming ability, the 6th instar larvae being the best and most active swimmers compared with the 2nd and final instars. The larvae may also employ complex defense behaviors rarely known from relatively ancient insect groups, such as chemical defense via secretion from the end of the abdomen.
Image Aesthetics Assessment via Learnable Queries
Image aesthetics assessment (IAA) aims to estimate the aesthetics of images.
Depending on the content of an image, diverse criteria need to be selected to
assess its aesthetics. Existing works utilize pre-trained vision backbones
based on content knowledge to learn image aesthetics. However, training those
backbones is time-consuming and suffers from attention dispersion. Inspired by
learnable queries in vision-language alignment, we propose the Image Aesthetics
Assessment via Learnable Queries (IAA-LQ) approach. It adapts learnable queries
to extract aesthetic features from pre-trained image features obtained from a
frozen image encoder. Extensive experiments on real-world data demonstrate the
advantages of IAA-LQ, beating the best state-of-the-art method by 2.2% and 2.1%
in terms of SRCC and PLCC, respectively.
Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.
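SRCC and PLCC, the two metrics the abstract reports gains on, compare predicted aesthetic scores against human ground truth. A minimal numpy computation of both (the Spearman variant here assumes distinct scores and skips tie correction):

```python
import numpy as np

def plcc(x, y):
    """Pearson linear correlation coefficient between two score vectors."""
    x, y = np.asarray(x, float), np.asarray(y, float)
    xc, yc = x - x.mean(), y - y.mean()
    return float(xc @ yc / np.sqrt((xc @ xc) * (yc @ yc)))

def srcc(x, y):
    """Spearman rank correlation: Pearson correlation of the ranks.
    No tie correction; with tied scores use scipy.stats.spearmanr."""
    rank = lambda v: np.argsort(np.argsort(v)).astype(float)
    return plcc(rank(x), rank(y))
```

PLCC rewards linear agreement with the ground-truth scores, while SRCC only rewards getting the ranking right, which is why IAA papers typically report both.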