82 research outputs found
The Microsoft 2017 Conversational Speech Recognition System
We describe the 2017 version of Microsoft's conversational speech recognition
system, in which we update our 2016 system with recent developments in
neural-network-based acoustic and language modeling to further advance the
state of the art on the Switchboard speech recognition task. The system adds a
CNN-BLSTM acoustic model to the set of model architectures we combined
previously, and includes character-based and dialog session aware LSTM language
whereby subsets of acoustic models are first combined at the senone/frame
level, followed by a word-level voting via confusion networks. We also added a
confusion network rescoring step after system combination. The resulting system
yields a 5.1% word error rate on the 2000 Switchboard evaluation set.
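The two-stage combination described above can be sketched as follows. This is a minimal illustration, not the paper's actual configuration: the number of models, the senone dimension, and the confusion-network word slots below are all made-up toy values.

```python
import numpy as np
from collections import Counter

# Toy senone posteriors from three acoustic models:
# each array has shape (num_frames, num_senones); values are illustrative.
rng = np.random.default_rng(0)
posteriors = [rng.dirichlet(np.ones(4), size=5) for _ in range(3)]

# Stage 1: frame-level combination -- average senone posteriors across
# the subset of acoustic models.
combined = np.mean(posteriors, axis=0)  # shape (5, 4)

# Stage 2 (simplified): word-level voting via a confusion network. Each
# slot holds competing word hypotheses from the combined subsystems; the
# most frequent word per slot wins.
slots = [["the", "the", "a"], ["cat", "cat", "cat"], ["sat", "sad", "sat"]]
hypothesis = [Counter(slot).most_common(1)[0][0] for slot in slots]
print(hypothesis)  # ['the', 'cat', 'sat']
```

A real system would combine log-likelihoods inside the decoder and build confusion networks from lattices, but the averaging-then-voting structure is the same.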
Two-Dimensional Convolutional Recurrent Neural Networks for Speech Activity Detection
Speech Activity Detection (SAD) plays an important role in mobile communications and automatic speech recognition (ASR). Developing efficient SAD systems for real-world applications is challenging due to the presence of noise. We propose a new approach to SAD in which we treat it as a two-dimensional multi-label image classification problem. To classify the audio segments, we compute their Short-Time Fourier Transform spectrograms and classify them with a Convolutional Recurrent Neural Network (CRNN), an architecture traditionally used in image recognition. Our CRNN uses a sigmoid activation function, max-pooling in the frequency domain, and a convolutional operation as a moving-average filter to remove misclassified spikes. On the development set of Task 1 of the 2019 Fearless Steps Challenge, our system achieved a decision cost function (DCF) of 2.89%, a 66.4% improvement over the baseline. It also achieved a DCF of 3.318% on the evaluation set of the challenge, ranking first among all submissions.
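Two pieces of this pipeline are easy to show in code: computing an STFT magnitude spectrogram, and smoothing per-frame speech probabilities with a moving-average filter to remove isolated misclassified spikes. This is a sketch with assumed parameters (`n_fft=256`, `hop=128`, window width 3); the CRNN itself is omitted and replaced by toy per-frame probabilities, and the function names are hypothetical.

```python
import numpy as np

def stft_spectrogram(signal, n_fft=256, hop=128):
    """Magnitude STFT spectrogram via a sliding Hann window (a minimal
    sketch; production code would typically use scipy.signal.stft)."""
    window = np.hanning(n_fft)
    frames = [signal[i:i + n_fft] * window
              for i in range(0, len(signal) - n_fft + 1, hop)]
    return np.abs(np.fft.rfft(np.array(frames), axis=1))  # (frames, bins)

def smooth_predictions(probs, width=5):
    """Moving-average filter over per-frame speech probabilities,
    suppressing one-frame spikes in the classifier output."""
    kernel = np.ones(width) / width
    return np.convolve(probs, kernel, mode="same")

# Toy usage: one second of noise at 8 kHz, plus spiky frame predictions.
sig = np.random.default_rng(1).standard_normal(8000)
spec = stft_spectrogram(sig)
print(spec.shape)

probs = np.array([0, 0, 1, 0, 0, 1, 1, 1, 1, 0], dtype=float)
print(smooth_predictions(probs, width=3))
```

Thresholding the smoothed output (e.g. at 0.5) then yields speech/non-speech decisions without the single-frame flicker a raw classifier produces.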
Convolutional neural network for breathing phase detection in lung sounds
We applied deep learning to create an algorithm for breathing phase detection
in lung sound recordings, and we compared the breathing phases detected by the
algorithm and manually annotated by two experienced lung sound researchers. Our
algorithm uses a convolutional neural network with spectrograms as the
features, removing the need to specify features explicitly. We trained and
evaluated the algorithm on three data subsets larger than any previously
reported in the literature. We evaluated performance in two ways. First, a
discrete count of agreed breathing phases (counting a match when a pair of
annotation boxes overlaps by at least 50%) shows a mean agreement with the
lung sound experts of 97% for inspiration and 87% for expiration. Second,
the fraction of time in agreement (in seconds) gives higher pseudo-kappa
values for inspiration (0.73-0.88) than for expiration (0.63-0.84), with an
average sensitivity of 97% and an average specificity of 84%. By both
evaluation methods, the agreement between the annotators and the algorithm
indicates human-level performance. The developed algorithm is thus valid
for detecting breathing phases in lung sound recordings.
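The 50%-overlap matching criterion used in the first evaluation can be sketched directly. The helper names and the interval data below are hypothetical; the paper's exact matching procedure may differ in detail.

```python
def overlap_fraction(a, b):
    """Fraction of the shorter of two (start, end) time intervals, in
    seconds, that is covered by their intersection."""
    start = max(a[0], b[0])
    end = min(a[1], b[1])
    overlap = max(0.0, end - start)
    return overlap / min(a[1] - a[0], b[1] - b[0])

def count_agreements(detected, annotated, threshold=0.5):
    """Count detected breathing phases that match some annotated phase
    under the 50%-overlap criterion (a hypothetical helper)."""
    return sum(
        any(overlap_fraction(d, a) >= threshold for a in annotated)
        for d in detected
    )

# Toy example: three detected phases vs. two expert annotations.
detected = [(0.0, 1.2), (2.0, 3.0), (4.5, 5.0)]
annotated = [(0.1, 1.0), (2.6, 3.4), (6.0, 7.0)]
print(count_agreements(detected, annotated))  # 2
```

Dividing the agreement count by the number of annotated phases then gives the per-class agreement rates the abstract reports.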