Impacts of natural factors and farming practices on greenhouse gas emissions in the North China Plain : A meta-analysis
This work received support from the National Science and Technology Support Program (No. 2012BAD14B01), the National 948 Project (No. 2011-G30), and the Non-profit Research Foundation for Agriculture (201103039). We thank the anonymous reviewers for their helpful comments and suggestions, which greatly improved the manuscript. The authors declare that they have no competing interests.
Universal Adversarial Perturbations for CNN Classifiers in EEG-Based BCIs
Multiple convolutional neural network (CNN) classifiers have been proposed
for electroencephalogram (EEG) based brain-computer interfaces (BCIs). However,
CNN models have been found vulnerable to universal adversarial perturbations
(UAPs), which are small and example-independent yet powerful enough to degrade
the performance of a CNN model when added to a benign example. This paper
proposes a novel total loss minimization (TLM) approach to generate UAPs for
EEG-based BCIs. Experimental results demonstrated the effectiveness of TLM on
three popular CNN classifiers for both target and non-target attacks. We also
verified the transferability of UAPs in EEG-based BCI systems. To our
knowledge, this is the first study on UAPs of CNN classifiers in EEG-based
BCIs. UAPs are easy to construct and can attack BCIs in real time, exposing a
potentially critical security concern of BCIs.
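As a hedged illustration of the attack pattern described above (not the paper's TLM algorithm or its CNN/EEG setup), the following toy sketch finds one norm-bounded perturbation that, added to every benign example, raises a simple linear classifier's loss; the model, data, and all names here are our own assumptions:

```python
import numpy as np

# Toy sketch of a universal adversarial perturbation (UAP): one
# norm-bounded vector v is optimized by gradient ascent so that,
# added to *every* benign example, it increases the victim
# classifier's total loss. Linear logistic model and synthetic
# data are illustrative stand-ins for the paper's CNN/EEG setting.

rng = np.random.default_rng(0)
n, d = 200, 16
X = rng.normal(size=(n, d))          # "benign examples"
w = rng.normal(size=d)               # victim model weights
y = (X @ w > 1.0).astype(float)      # labels (imbalanced on purpose)

def sigmoid(z):
    return 1.0 / (1.0 + np.exp(-z))

def total_loss(v):
    p = np.clip(sigmoid((X + v) @ w), 1e-9, 1 - 1e-9)
    return -np.mean(y * np.log(p) + (1 - y) * np.log(1 - p))

eps, lr = 0.5, 0.1                   # perturbation budget, step size
v = np.zeros(d)
for _ in range(200):
    p = sigmoid((X + v) @ w)
    v += lr * np.mean(p - y) * w     # ascend the mean loss w.r.t. v
    norm = np.linalg.norm(v)
    if norm > eps:                   # project back onto the eps-ball
        v *= eps / norm
```

Projecting onto the eps-ball after each ascent step keeps the perturbation small while the single shared direction still damages many examples at once, which is the defining property of a universal perturbation.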
Detaching and Boosting: Dual Engine for Scale-Invariant Self-Supervised Monocular Depth Estimation
Monocular depth estimation (MDE) in the self-supervised scenario has emerged
as a promising method, as it removes the requirement for ground-truth depth.
Despite continuous efforts, MDE remains sensitive to scale changes, especially
when all training samples come from a single camera, and it deteriorates
further because camera movement heavily couples the predicted depth with the
scale change. In this paper, we present a
scale-invariant approach for self-supervised MDE, in which scale-sensitive
features (SSFs) are detached away while scale-invariant features (SIFs) are
boosted further. To be specific, a simple but effective data augmentation by
imitating the camera zooming process is proposed to detach SSFs, making the
model robust to scale changes. Besides, a dynamic cross-attention module is
designed to boost SIFs by fusing multi-scale cross-attention features
adaptively. Extensive experiments on the KITTI dataset demonstrate that the
detaching and boosting strategies are mutually complementary in MDE and our
approach achieves new state-of-the-art performance against existing works,
improving the absolute relative error from 0.097 to 0.090. The code will be
made public soon. Comment: Accepted by IEEE Robotics and Automation Letters (RAL).
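The zoom-imitating augmentation described above might look like the following sketch, assuming a pinhole camera model; the function name and the nearest-neighbour resize are our own simplifications, not the paper's code:

```python
import numpy as np

# Illustrative sketch of a zoom-imitating augmentation for
# self-supervised monocular depth training: center-crop by a zoom
# factor and resample back to the original resolution, updating the
# pinhole intrinsics to match (focal lengths scale by the zoom).

def zoom_augment(img, K, zoom):
    """Imitate a camera zoom-in by `zoom` (>1): keep the central
    1/zoom region and resize it to the original (h, w)."""
    h, w = img.shape[:2]
    ch, cw = int(round(h / zoom)), int(round(w / zoom))
    top, left = (h - ch) // 2, (w - cw) // 2
    crop = img[top:top + ch, left:left + cw]
    # nearest-neighbour resize back to (h, w)
    rows = (np.arange(h) * ch / h).astype(int)
    cols = (np.arange(w) * cw / w).astype(int)
    out = crop[rows][:, cols]
    K_new = K.copy()
    K_new[0, 0] *= zoom   # fx
    K_new[1, 1] *= zoom   # fy
    return out, K_new

img = np.arange(64 * 64 * 3, dtype=np.float32).reshape(64, 64, 3)
K = np.array([[50.0, 0.0, 32.0], [0.0, 50.0, 32.0], [0.0, 0.0, 1.0]])
aug, K2 = zoom_augment(img, K, zoom=1.6)
```

Because the scene geometry is unchanged while the apparent image scale varies, training on such pairs can penalize features whose depth prediction tracks image scale, which is the intuition behind detaching scale-sensitive features.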
Relative entropy of entanglement of a kind of two qubit entangled states
We rigorously prove that a class of block-diagonalizable two-qubit entangled
states with six nonzero elements attains its relative entropy of entanglement
at a separable state having the same matrix structure. This class of entangled
states includes the local-filtering result state as a special case. Comment: 5 pages
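For reference, the quantity named in the title is the minimum quantum relative entropy between the state and the set of separable states; a standard statement is:

```latex
% Relative entropy of entanglement of a state \rho,
% minimized over the set SEP of separable states \sigma:
E_R(\rho) = \min_{\sigma \in \mathrm{SEP}} S(\rho \,\|\, \sigma),
\qquad
S(\rho \,\|\, \sigma)
  = \operatorname{Tr}\!\left[\rho \left(\log\rho - \log\sigma\right)\right].
```

The difficulty the abstract addresses is that the minimizing separable state is generally unknown; the claimed result identifies it exactly for the stated class of states.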
QS-TTS: Towards Semi-Supervised Text-to-Speech Synthesis via Vector-Quantized Self-Supervised Speech Representation Learning
This paper proposes a novel semi-supervised TTS framework, QS-TTS, to improve
TTS quality with lower supervised data requirements via Vector-Quantized
Self-Supervised Speech Representation Learning (VQ-S3RL) utilizing more
unlabeled speech audio. This framework comprises two VQ-S3R learners: first,
the principal learner aims to provide a generative Multi-Stage Multi-Codebook
(MSMC) VQ-S3R via an MSMC-VQ-GAN combined with contrastive S3RL, while
decoding it back to high-quality audio; then, the associate learner further
abstracts the MSMC representation into a highly-compact VQ representation
through a VQ-VAE. These two generative VQ-S3R learners provide beneficial
speech representations and pre-trained models for TTS, significantly improving
synthesis quality with a lower requirement for supervised data. QS-TTS is
evaluated comprehensively under various scenarios via subjective and objective
tests. The results demonstrate the superior
performance of QS-TTS, winning the highest MOS over supervised or
semi-supervised baseline TTS approaches, especially in low-resource scenarios.
Moreover, comparing various speech representations and transfer learning
methods in TTS further validates the notable improvement of the proposed
VQ-S3RL for TTS, showing the best audio quality and intelligibility metrics.
The slower decay in the synthesis quality of QS-TTS as supervised data
decreases further highlights its lower requirement for supervised data,
indicating its great potential in low-resource scenarios.
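The vector-quantization step underlying the VQ-S3R representations above can be sketched as a nearest-codebook lookup; the shapes and names below are illustrative assumptions, not QS-TTS's actual configuration:

```python
import numpy as np

# Minimal sketch of vector quantization as used in VQ-VAE-style
# representation learning: each continuous frame embedding is replaced
# by its nearest codebook vector, yielding a compact discrete code
# sequence that a downstream model (here, a TTS system) can consume.

rng = np.random.default_rng(1)
codebook = rng.normal(size=(32, 8))     # 32 codes, 8-dim each
frames = rng.normal(size=(100, 8))      # 100 encoder output frames

def quantize(z, codebook):
    # squared Euclidean distance from every frame to every code,
    # then argmin per frame
    d2 = ((z[:, None, :] - codebook[None, :, :]) ** 2).sum(-1)
    idx = d2.argmin(axis=1)
    return idx, codebook[idx]

idx, zq = quantize(frames, codebook)    # codes and quantized frames
```

Replacing each 8-dim float vector with one of 32 code indices is what makes the representation "highly compact": the sequence can be stored or modeled as small integers rather than continuous vectors.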
Reinforcement-Enhanced Autoregressive Feature Transformation: Gradient-steered Search in Continuous Space for Postfix Expressions
Feature transformation aims to generate new pattern-discriminative feature
space from original features to improve downstream machine learning (ML) task
performances. However, the discrete search space for the optimal feature grows
explosively with combinations of features and operations from low-order to
high-order forms. Existing methods, such as exhaustive search, expansion
reduction, evolutionary algorithms, reinforcement learning, and iterative
greedy search, suffer from this large search space, and overly emphasizing
efficiency in algorithm design usually sacrifices stability or robustness. To
fundamentally fill this gap, we reformulate discrete feature transformation as
a continuous space optimization task and develop an
embedding-optimization-reconstruction framework. This framework includes four
steps: 1) reinforcement-enhanced data preparation, aiming to prepare
high-quality transformation-accuracy training data; 2) feature transformation
operation sequence embedding, intending to encapsulate the knowledge of
prepared training data within a continuous space; 3) gradient-steered optimal
embedding search, dedicated to uncovering potentially superior embeddings within
the learned space; 4) transformation operation sequence reconstruction,
striving to reproduce the feature transformation solution to pinpoint the
optimal feature space. Comment: Accepted by NeurIPS 202
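Steps 3 and 4 above can be sketched with a toy continuous-space search: start from the embedding of the best known sequence, take gradient-ascent steps on a differentiable surrogate of downstream accuracy, then map the improved embedding back to a discrete solution. The quadratic surrogate and the nearest-neighbour "reconstruction" are stand-ins for the paper's learned evaluator and sequence decoder:

```python
import numpy as np

# Gradient-steered search in an embedding space (toy version):
# a concave quadratic stands in for the learned surrogate that
# predicts downstream accuracy from a sequence embedding.

rng = np.random.default_rng(2)
E = rng.normal(size=(50, 6))     # embeddings of 50 known sequences
opt = rng.normal(size=6)         # hidden optimum of the surrogate

def score(e):                    # surrogate "accuracy", peaks at opt
    return -np.sum((e - opt) ** 2)

def grad(e):                     # its analytic gradient
    return -2.0 * (e - opt)

# step 3: gradient-steered search from the best known embedding
start = E[np.argmax([score(row) for row in E])].copy()
e = start.copy()
for _ in range(200):
    e += 0.05 * grad(e)

# step 4: "reconstruct" by snapping back to the nearest known sequence
best_idx = int(np.argmin(((E - e) ** 2).sum(1)))
```

In the real framework the reconstruction is a sequence decoder that emits a transformation-operation sequence from the embedding; the key idea retained here is that optimization happens in the continuous space, not over the combinatorial space of sequences.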
- …