Robust Raw Waveform Speech Recognition Using Relevance Weighted Representations
Speech recognition in noisy and channel-distorted scenarios is often challenging because current acoustic modeling schemes do not adapt to changes in the signal distribution in the presence of noise. In this work, we develop a novel acoustic modeling framework for noise-robust speech recognition based on a relevance weighting mechanism. The relevance weighting is achieved using a sub-network approach that performs feature selection. One relevance sub-network is applied to the output of the first layer of a convolutional network model operating on raw speech signals, while a second relevance sub-network is applied to the output of the second convolutional layer. The relevance weights for the first layer correspond to an acoustic filterbank selection, while the relevance weights in the second layer perform modulation filter selection. The model is trained for a speech recognition task on noisy and reverberant speech. Speech recognition experiments on multiple datasets (Aurora-4, CHiME-3, VOiCES) reveal that incorporating relevance weighting into the neural network architecture significantly improves speech recognition word error rates (average relative improvement of 10% over the baseline systems).
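As a rough illustration of the idea (not the authors' implementation, whose exact architecture the abstract does not specify), the sketch below shows one plausible reading of a relevance sub-network in PyTorch: it scores the filters of a convolutional layer and re-weights that layer's output, acting as a soft filterbank selection. All names and dimensions here are assumptions.

```python
import torch
import torch.nn as nn

class RelevanceWeighting(nn.Module):
    """Hypothetical relevance sub-network: scores each of the C
    convolutional filters and re-weights the layer output."""
    def __init__(self, num_filters: int, hidden: int = 32):
        super().__init__()
        self.scorer = nn.Sequential(
            nn.Linear(num_filters, hidden),
            nn.ReLU(),
            nn.Linear(hidden, num_filters),
            nn.Softmax(dim=-1),  # relevance weights sum to 1 across filters
        )

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # x: (batch, C, T), the output of a conv layer on raw speech
        pooled = x.mean(dim=-1)            # (batch, C) summary per filter
        weights = self.scorer(pooled)      # (batch, C) soft filter selection
        return x * weights.unsqueeze(-1)   # emphasize relevant filters

x = torch.randn(4, 80, 500)               # e.g. 80 learned acoustic filters
weighted = RelevanceWeighting(80)(x)
```

The same module could in principle be reused on a second convolutional layer, where the weights would instead select among modulation filters.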
Band-pass filtering of the time sequences of spectral parameters for robust wireless speech recognition
In this paper we address the problem of automatic speech recognition when wireless speech communication systems are involved. In this context, three main sources of distortion should be considered: acoustic environment, speech coding, and transmission errors. While the first has already received a lot of attention, the last two deserve further investigation in our opinion. We have found that band-pass filtering of the recognition features improves ASR performance when distortions due to these particular communication systems are present. Furthermore, we have evaluated two alternative configurations at different bit error rates (BER) typical of these channels: either band-pass filtering the LP-MFCC parameters or modifying RASTA-PLP with a sharper low-pass section performs consistently better than LP-MFCC and RASTA-PLP, respectively.
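A minimal sketch of the general technique (band-pass filtering each feature trajectory over time, in the spirit of RASTA-style modulation filtering) is given below. The cutoff frequencies, filter order, and frame rate are illustrative assumptions, not the values tuned in the paper.

```python
import numpy as np
from scipy.signal import butter, filtfilt

def bandpass_features(feats: np.ndarray, frame_rate: float = 100.0,
                      lo: float = 1.0, hi: float = 12.0,
                      order: int = 2) -> np.ndarray:
    """Band-pass filter each coefficient's time sequence.
    feats: (num_frames, num_coeffs); cutoffs are illustrative only."""
    nyq = frame_rate / 2.0
    b, a = butter(order, [lo / nyq, hi / nyq], btype="band")
    return filtfilt(b, a, feats, axis=0)  # zero-phase filtering along frames

mfcc = np.random.randn(300, 13)  # stand-in for LP-MFCC at 100 frames/s
filtered = bandpass_features(mfcc)
```

The intuition is that slow modulations (channel/coding artifacts) and very fast modulations (frame-level noise) fall outside the passband, while speech-relevant modulation rates are preserved.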
DolphinAttack: Inaudible Voice Commands
Speech recognition (SR) systems such as Siri or Google Now have become an increasingly popular human-computer interaction method, and have turned various systems into voice-controllable systems (VCS). Prior work on attacking VCS shows that hidden voice commands, incomprehensible to people, can control such systems. Hidden voice commands, though hidden, are nonetheless audible. In this work, we design a completely inaudible attack, DolphinAttack, that modulates voice commands on ultrasonic carriers (e.g., f > 20 kHz) to achieve inaudibility. By leveraging the nonlinearity of the microphone circuits, the modulated low-frequency audio commands can be successfully demodulated, recovered, and, more importantly, interpreted by the speech recognition systems. We validate DolphinAttack on popular speech recognition systems, including Siri, Google Now, Samsung S Voice, Huawei HiVoice, Cortana, and Alexa. By injecting a sequence of inaudible voice commands, we show a few proof-of-concept attacks, which include activating Siri to initiate a FaceTime call on an iPhone, activating Google Now to switch the phone to airplane mode, and even manipulating the navigation system in an Audi automobile. We propose hardware and software defense solutions. We validate that it is feasible to detect DolphinAttack by classifying the audio with a support vector machine (SVM), and suggest re-designing voice-controllable systems to be resilient to inaudible voice command attacks.
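The core signal-processing step described in the abstract is standard amplitude modulation onto an ultrasonic carrier. The sketch below illustrates that principle only; the sample rate, carrier frequency, and modulation depth are assumed values, and actually reproducing the attack requires ultrasonic-capable transducers and a recorded voice command in place of the stand-in tone.

```python
import numpy as np

def am_ultrasonic(command: np.ndarray, fs: int = 192_000,
                  fc: float = 25_000.0, depth: float = 1.0) -> np.ndarray:
    """Amplitude-modulate a baseband voice command onto an ultrasonic
    carrier (fc > 20 kHz). Illustrative parameters, not the paper's."""
    t = np.arange(len(command)) / fs
    baseband = command / np.max(np.abs(command))  # normalize to [-1, 1]
    carrier = np.cos(2 * np.pi * fc * t)
    # Double-sideband AM with carrier; a microphone's nonlinearity
    # produces a term proportional to the baseband, demodulating it.
    return (1 + depth * baseband) * carrier / (1 + depth)

fs = 192_000
t = np.arange(fs) / fs
command = np.sin(2 * np.pi * 440 * t)  # stand-in for a voice command at fs
signal = am_ultrasonic(command, fs)
```

Because the carrier lies above the human hearing range, the transmitted signal is inaudible, while the microphone's nonlinear response recreates the audible baseband that the SR system then transcribes.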
Does training with amplitude-modulated tones affect tone-vocoded speech perception?
Temporal-envelope cues are essential for successful speech perception. Here we asked whether training on stimuli that contain temporal-envelope cues but no speech content can improve the perception of spectrally degraded (vocoded) speech, in which the temporal envelope (but not the temporal fine structure) is largely preserved. Two groups of listeners were trained on different amplitude-modulation (AM) tasks, either AM detection or AM-rate discrimination (21 blocks of 60 trials over two days, 1260 trials in total; AM rates of 4, 8, and 16 Hz), while an additional control group undertook no training. Consonant identification in vocoded vowel-consonant-vowel stimuli was tested before and after training on the AM tasks (or after an equivalent interval for the control group). Following training, only the trained groups showed a significant improvement in the perception of vocoded speech, but the improvement did not differ significantly from that observed for the controls. Thus, we find no convincing evidence that this amount of training with temporal-envelope cues lacking speech content provides a significant benefit for vocoded speech intelligibility. Alternative training regimens using vocoded speech along the linguistic hierarchy should be explored.
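For concreteness, a sinusoidally amplitude-modulated tone of the kind used in such AM-detection and AM-rate-discrimination training can be generated as below. The carrier frequency and duration are illustrative assumptions; only the 4, 8, and 16 Hz modulation rates come from the abstract.

```python
import numpy as np

def am_tone(fm: float, depth: float = 1.0, fc: float = 1000.0,
            dur: float = 1.0, fs: int = 44_100) -> np.ndarray:
    """Sinusoidally amplitude-modulated pure tone. The 1 kHz carrier
    and 1 s duration are illustrative, not the study's parameters."""
    t = np.arange(int(dur * fs)) / fs
    envelope = 1 + depth * np.sin(2 * np.pi * fm * t)
    return envelope * np.sin(2 * np.pi * fc * t) / (1 + depth)

stimuli = {fm: am_tone(fm) for fm in (4.0, 8.0, 16.0)}  # rates from the study
```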