Clustering based Multiple Anchors High-Dimensional Model Representation
In this work, a cut high-dimensional model representation (cut-HDMR) expansion based on multiple anchors is constructed via a clustering method. Specifically, a set of random input realizations is drawn from the parameter space and grouped by the centroidal Voronoi tessellation (CVT) method. For each cluster, the centroid is taken as the reference (anchor) point, so the corresponding zeroth-order term can be determined directly. For the non-zero-order terms of each cut-HDMR, a set of discrete points is selected for each input component and the Lagrange interpolation method is applied. For a new input, the cut-HDMR corresponding to the nearest centroid is used to compute its response. Numerical experiments on a high-dimensional integration problem and an elliptic stochastic partial differential equation show that the CVT-based multiple-anchor cut-HDMR alleviates the negative impact of a single inappropriate anchor point and achieves higher accuracy than the average of several expansions.
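A minimal sketch of the multiple-anchor construction described above, truncated at first order; here k-means stands in for the CVT grouping, and the function and variable names (build_cut_hdmr, grids, etc.) are illustrative assumptions rather than the paper's implementation:

```python
import numpy as np
from scipy.interpolate import lagrange
from sklearn.cluster import KMeans

def build_cut_hdmr(f, anchor, grids):
    """First-order cut-HDMR expansion around a single anchor point.
    grids[i] holds the 1-D interpolation nodes for input component i."""
    f0 = f(anchor)                              # zeroth-order term at the anchor
    comps = []
    for i, nodes in enumerate(grids):
        vals = []
        for xi in nodes:
            x = anchor.copy()
            x[i] = xi                           # vary one component along the cut
            vals.append(f(x) - f0)              # first-order correction f_i(x_i)
        comps.append(lagrange(nodes, vals))     # 1-D Lagrange interpolant
    return f0, comps

def eval_cut_hdmr(f0, comps, x):
    return f0 + sum(p(x[i]) for i, p in enumerate(comps))

def multi_anchor_surrogate(f, samples, grids, n_clusters=4, seed=0):
    """Cluster the samples, build one expansion per centroid, and evaluate a
    new input with the expansion attached to its nearest centroid."""
    km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(samples)
    models = [build_cut_hdmr(f, c.copy(), grids) for c in km.cluster_centers_]
    def predict(x):
        k = int(km.predict(np.atleast_2d(x))[0])
        return eval_cut_hdmr(*models[k], x)
    return predict
```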
LightGrad: Lightweight Diffusion Probabilistic Model for Text-to-Speech
Recent advances in neural text-to-speech (TTS) models have brought thousands of TTS applications into daily life, where models are deployed in the cloud to provide services for customers. Among these models are diffusion probabilistic models (DPMs), which can be stably trained and are more parameter-efficient than other generative models. Because transmitting data between customers and the cloud introduces high latency and the risk of exposing private data, deploying TTS models on edge devices is preferred. When deploying DPMs on edge devices, two practical problems arise. First, current DPMs are not lightweight enough for resource-constrained devices. Second, DPMs require many denoising steps during inference, which increases latency. In this work, we present LightGrad, a lightweight DPM for TTS. LightGrad is equipped with a lightweight U-Net diffusion decoder and a training-free fast sampling technique, reducing both model parameters and inference latency. Streaming inference is also implemented in LightGrad to reduce latency further. Compared with Grad-TTS, LightGrad achieves a 62.2% reduction in parameters and a 65.7% reduction in latency, while preserving comparable speech quality on both Chinese Mandarin and English with 4 denoising steps.
Comment: Accepted by ICASSP 202
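To illustrate why cutting the number of denoising steps reduces inference latency, here is a generic few-step deterministic DDIM-style sampler; this is an assumption-laden sketch, not LightGrad's actual training-free fast sampling scheme, and eps_model(x, t) is a placeholder noise-prediction network:

```python
import torch

# Generic few-step DDIM-style sampling (eta = 0): walk a short, evenly spaced
# sub-schedule of the training noise schedule instead of all T steps.
@torch.no_grad()
def few_step_sample(eps_model, shape, alphas_cumprod, n_steps=4):
    device = alphas_cumprod.device
    T = alphas_cumprod.numel()
    steps = torch.linspace(T - 1, 0, n_steps).long().tolist()  # e.g. 4 of T steps
    x = torch.randn(shape, device=device)
    for i, t in enumerate(steps):
        a_t = alphas_cumprod[t]
        a_prev = alphas_cumprod[steps[i + 1]] if i + 1 < n_steps else torch.ones((), device=device)
        t_batch = torch.full((shape[0],), t, device=device, dtype=torch.long)
        eps = eps_model(x, t_batch)                           # predicted noise
        x0 = (x - (1.0 - a_t).sqrt() * eps) / a_t.sqrt()      # predicted clean sample
        x = a_prev.sqrt() * x0 + (1.0 - a_prev).sqrt() * eps  # deterministic update
    return x
```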
ZeroPrompt: Streaming Acoustic Encoders are Zero-Shot Masked LMs
In this paper, we present ZeroPrompt (Figure 1-(a)) and the corresponding Prompt-and-Refine strategy (Figure 3), two simple but effective training-free methods to decrease the Token Display Time (TDT) of streaming ASR models without any accuracy loss. The core idea of ZeroPrompt is to append zeroed content to each chunk during inference, which acts like a prompt that encourages the model to predict future tokens even before they are spoken. We argue that streaming acoustic encoders naturally have the modeling ability of Masked Language Models, and our experiments demonstrate that ZeroPrompt is cheap to engineer and can be applied to streaming acoustic encoders on any dataset without any accuracy loss. Specifically, compared with our baseline models, we achieve a 350-700ms reduction in First Token Display Time (TDT-F) and a 100-400ms reduction in Last Token Display Time (TDT-L), with theoretically and experimentally equal WER on both the Aishell-1 and Librispeech datasets.
Comment: accepted by interspeech 202
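A minimal sketch of the core idea stated above (appending zeroed content to each chunk at inference time); the tensor layout, parameter names, and the fixed number of prompt frames are assumptions, not the paper's exact configuration:

```python
import torch

# Pad a streaming chunk with zeroed "future" frames so the encoder can
# speculate on upcoming tokens. Tokens predicted over the zeroed region would
# be provisional and later refined (the Prompt-and-Refine idea) once the real
# audio for those frames arrives.
def zero_prompt_chunk(chunk: torch.Tensor, n_prompt_frames: int) -> torch.Tensor:
    """chunk: (batch, frames, feat_dim) acoustic features for one streaming chunk."""
    batch, _, feat_dim = chunk.shape
    zeros = chunk.new_zeros(batch, n_prompt_frames, feat_dim)  # zeroed future content
    return torch.cat([chunk, zeros], dim=1)                    # chunk + zero "prompt"
```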
Fast-U2++: Fast and Accurate End-to-End Speech Recognition in Joint CTC/Attention Frames
Recently, the unified streaming and non-streaming two-pass (U2/U2++) end-to-end model for speech recognition has shown great performance in terms of streaming capability, accuracy, and latency. In this paper, we present Fast-U2++, an enhanced version of U2++ that further reduces partial latency. The core idea of Fast-U2++ is to output partial results from the bottom layers of its encoder with a small chunk, while using a large chunk in the top layers of the encoder to compensate for the performance degradation caused by the small chunk. Moreover, we use a knowledge distillation method to reduce the token emission latency. We present extensive experiments on the Aishell-1 dataset. Experiments and ablation studies show that, compared to U2++, Fast-U2++ reduces model latency from 320ms to 80ms and achieves a character error rate (CER) of 5.06% with a streaming setup.
Comment: 5 pages, 3 figures
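An illustrative sketch of the small-chunk / large-chunk split described above: bottom encoder layers run on small chunks so partial results can be emitted early, while top layers run on accumulated large chunks to recover accuracy. The layer modules, chunk sizes, and interfaces are assumptions for the sketch, not the paper's architecture:

```python
import torch
import torch.nn as nn

class TwoScaleStreamingEncoder(nn.Module):
    def __init__(self, bottom_layers, top_layers, small_chunk=4, large_chunk=16):
        super().__init__()
        self.bottom = nn.ModuleList(bottom_layers)   # e.g. lower encoder blocks
        self.top = nn.ModuleList(top_layers)         # e.g. upper encoder blocks
        self.small_chunk = small_chunk
        self.large_chunk = large_chunk

    def _run(self, layers, x, chunk):
        outs = []
        for s in range(0, x.size(1), chunk):         # x: (batch, frames, dim)
            h = x[:, s:s + chunk]
            for layer in layers:
                h = layer(h)
            outs.append(h)
        return torch.cat(outs, dim=1)

    def forward(self, feats):
        # Bottom pass with small chunks: its outputs can drive early partial results.
        h = self._run(self.bottom, feats, self.small_chunk)
        # Top pass with large chunks: wider context to recover accuracy.
        return self._run(self.top, h, self.large_chunk)
```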
TrimTail: Low-Latency Streaming ASR with Simple but Effective Spectrogram-Level Length Penalty
In this paper, we present TrimTail, a simple but effective emission regularization method to improve the latency of streaming ASR models. The core idea of TrimTail is to apply a length penalty (i.e., by trimming trailing frames, see Fig. 1-(b)) directly to the spectrogram of the input utterances, which does not require any alignment. We demonstrate that TrimTail is computationally cheap and can be applied online and optimized with any training loss or any model architecture on any dataset without any extra effort, by applying it to various end-to-end streaming ASR networks trained with either CTC loss [1] or Transducer loss [2]. We achieve a 100-200ms latency reduction with equal or even better accuracy on both Aishell-1 and Librispeech. Moreover, by using TrimTail, we achieve a 400ms algorithmic improvement in User Sensitive Delay (USD) with an accuracy loss of less than 0.2.
Comment: submitted to ICASSP 202
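A minimal sketch of the spectrogram-level trailing-frame trimming described above; the maximum trim length and uniform random sampling are illustrative assumptions, not the paper's schedule:

```python
import random
import torch

# TrimTail-style length penalty: drop a random number of trailing frames from
# the input spectrogram during training, with no alignment required.
def trim_tail(spectrogram: torch.Tensor, max_trim_frames: int = 20) -> torch.Tensor:
    """spectrogram: (frames, n_mels). Randomly trim up to max_trim_frames trailing frames."""
    n = random.randint(0, max_trim_frames)
    return spectrogram[: spectrogram.size(0) - n] if n > 0 else spectrogram
```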
fNIRS-based study of prefrontal cortex activation during pelvic floor muscle contraction in women under different bladder states
Objective: To provide a neuroimaging basis for exploring the role of the prefrontal cortex in human urinary control. Methods: Using functional near-infrared spectroscopy (fNIRS), hemodynamic data from the prefrontal cortex were collected from 20 healthy female volunteers during a pelvic floor muscle contraction task under two bladder states, filling and emptying. The data were processed to compare activation across prefrontal subregions by analyzing the Beta values corresponding to the relative changes in oxyhemoglobin concentration extracted from each channel. Results: A total of 30 channels were activated during bladder filling, whereas 8 channels were activated during bladder emptying (all P < 0.05), including 7 co-activated channels. Prefrontal cortex activation was more pronounced during bladder filling than during bladder emptying and was predominantly right-sided, with the differences concentrated in the right dorsolateral prefrontal cortex and frontopolar cortex (all P < 0.05). Conclusions: The prefrontal cortex can be activated by pelvic floor muscle contraction. Under bladder filling, the prefrontal cortex may perceive changes in bladder pressure through neural reflex activity and thus participate in regulating voluntary pelvic floor muscle contraction, thereby playing a role in human urinary control. The right dorsolateral prefrontal cortex appears to play a more significant role in this process.
Lessons learned from the 2019-nCoV epidemic on prevention of future infectious diseases.
Only a month after the outbreak of pneumonia caused by 2019-nCoV, more than forty thousand people had been infected. This put enormous pressure on the Chinese government, medical healthcare providers, and the general public, and also made the international community deeply nervous. On the 25th day after the outbreak, the Chinese government implemented strict traffic restrictions in the area where 2019-nCoV had originated, Hubei province, whose capital city is Wuhan. Ten days later, the rate of increase of cases in Hubei showed a significant difference (p = 0.0001) compared with the total rate of increase in the other provinces of China. These preliminary data suggest the effectiveness of the traffic restriction policy for this epidemic thus far. At the same time, solid financial support and improved research capacity, along with network communication technology, also greatly facilitated the application of epidemic prevention measures. These measures were motivated by the need to provide effective treatment of patients and involved consultation with three major groups in policy formulation: public health experts, the government, and the general public. They were also aided by media and information technology, as well as international cooperation. This experience will provide China and other countries with valuable lessons for quickly coordinating and coping with future public health emergencies.