
    Weighted-Sampling Audio Adversarial Example Attack

    Recent studies have highlighted audio adversarial examples as a ubiquitous threat to state-of-the-art automatic speech recognition systems. Thorough studies of how to generate adversarial examples effectively are essential for preventing potential attacks. Despite much research on this topic, the efficiency and robustness of existing methods are not yet satisfactory. In this paper, we propose \textit{weighted-sampling audio adversarial examples}, which control the number and the weights of distorted samples to reinforce the attack. Further, we apply a denoising term in the loss function to make the adversarial perturbation less perceptible. Experiments show that our method is the first in the field to generate audio adversarial examples with low noise and high robustness at minute-level time cost.
    Comment: https://aaai.org/Papers/AAAI/2020GB/AAAI-LiuXL.9260.pd
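    For intuition only, the sketch below is a hypothetical illustration (not the paper's implementation) of the general idea of restricting an additive audio perturbation to a sampled subset of waveform positions and weighting the distortion penalty per sample. The ASR model, target label, mask ratio, and weights are all placeholder assumptions.

```python
# Minimal sketch: optimize an additive perturbation on a waveform with
# (a) a sampled mask limiting how many samples are perturbed and
# (b) per-sample weights on the distortion term.
import torch

torch.manual_seed(0)
waveform = torch.randn(16000)            # 1 s of 16 kHz audio (placeholder)
asr_model = torch.nn.Linear(16000, 10)   # stand-in for a real ASR model
target = torch.tensor([3])               # hypothetical target label

perturb_mask = (torch.rand(16000) < 0.3).float()  # ~30% of samples may be perturbed
weights = torch.rand(16000)                       # per-sample distortion weights

delta = torch.zeros(16000, requires_grad=True)
opt = torch.optim.Adam([delta], lr=1e-3)
alpha = 0.1                                       # distortion/attack trade-off

for step in range(100):
    adv = waveform + perturb_mask * delta
    attack_loss = torch.nn.functional.cross_entropy(
        asr_model(adv).unsqueeze(0), target)      # placeholder for a CTC-style loss
    distortion = (weights * (perturb_mask * delta) ** 2).mean()
    loss = attack_loss + alpha * distortion
    opt.zero_grad()
    loss.backward()
    opt.step()
```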

    Learning Audio Sequence Representations for Acoustic Event Classification

    Acoustic Event Classification (AEC) has become a significant task for machines perceiving the surrounding auditory scene. However, extracting effective representations that capture the underlying characteristics of acoustic events remains challenging. Previous methods mainly focused on designing audio features in a 'hand-crafted' manner. Interestingly, data-learnt features have recently been reported to show better performance, but so far only at the frame level. In this paper, we propose an unsupervised learning framework that learns a vector representation of an audio sequence for AEC. The framework consists of a Recurrent Neural Network (RNN) encoder and an RNN decoder: the encoder transforms the variable-length audio sequence into a fixed-length vector, and the decoder reconstructs the input sequence from that vector. After training the encoder-decoder, we feed audio sequences to the encoder and take the learnt vectors as the audio sequence representations. Compared with previous methods, the proposed approach not only handles audio streams of arbitrary length but also learns the salient information of the sequence. Extensive evaluation on a large acoustic event database shows that the learnt audio sequence representation outperforms state-of-the-art hand-crafted sequence features for AEC by a large margin.
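    As a rough illustration of the described encoder-decoder framework, here is a minimal sketch assuming a GRU-based sequence autoencoder; the feature dimension, hidden size, and decoder conditioning are illustrative assumptions, not details taken from the paper.

```python
# Minimal sketch: encode a variable-length feature sequence into a fixed-length
# vector and reconstruct the sequence from that vector.
import torch
import torch.nn as nn

class SeqAutoencoder(nn.Module):
    def __init__(self, feat_dim=40, hidden_dim=128):
        super().__init__()
        self.encoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.decoder = nn.GRU(feat_dim, hidden_dim, batch_first=True)
        self.out = nn.Linear(hidden_dim, feat_dim)

    def encode(self, x):                 # x: (batch, time, feat_dim)
        _, h = self.encoder(x)           # h: (1, batch, hidden_dim)
        return h                         # fixed-length sequence representation

    def forward(self, x):
        h = self.encode(x)
        # Condition the decoder on the encoded vector and reconstruct the input.
        dec_out, _ = self.decoder(torch.zeros_like(x), h)
        return self.out(dec_out)

model = SeqAutoencoder()
x = torch.randn(8, 120, 40)              # batch of 120-frame, 40-dim sequences
recon = model(x)
loss = nn.functional.mse_loss(recon, x)  # reconstruction objective
```

    After training, `model.encode(x)` would serve as the fixed-length representation fed to a downstream classifier.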

    Full-range Gate-controlled Terahertz Phase Modulations with Graphene Metasurfaces

    Local phase control of electromagnetic waves, the basis of a diverse set of applications such as hologram imaging, polarization control, and wave-front manipulation, is of fundamental importance in photonics research. However, the bulky, passive phase modulators currently available remain a hurdle for photonic integration. Here we demonstrate full-range active phase modulation in the terahertz (THz) regime, realized by gate-tuned, ultra-thin reflective metasurfaces based on graphene. A one-port resonator model, backed by our full-wave simulations, reveals the underlying mechanism of the extreme phase modulation and points to general strategies for the design of tunable photonic devices. As a particular example, we demonstrate a gate-tunable THz polarization modulator based on our graphene metasurface. Our findings pave the way towards exciting photonic applications based on active phase manipulation.
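    For reference, a common coupled-mode-theory form of a one-port resonator's reflection coefficient (one possible sign convention, not necessarily the exact expression used in the paper) is

\[
r(\omega) \;=\; \frac{\gamma_{\mathrm r} - \gamma_{\mathrm{nr}} - i(\omega-\omega_0)}{\gamma_{\mathrm r} + \gamma_{\mathrm{nr}} + i(\omega-\omega_0)},
\]

    where \(\omega_0\) is the resonance frequency, \(\gamma_{\mathrm r}\) the radiative decay rate, and \(\gamma_{\mathrm{nr}}\) the non-radiative decay rate. At resonance the reflection amplitude passes through zero when \(\gamma_{\mathrm r} = \gamma_{\mathrm{nr}}\) (critical coupling), so tuning the resonator across this point flips the reflected phase by \(\pi\); combined with the dispersive phase accumulated around resonance, this is one way such a model can yield full-range phase coverage.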