168 research outputs found
Unsupervised Active Learning: Optimizing Labeling Cost-Effectiveness for Automatic Speech Recognition
In recent years, speech-based self-supervised learning (SSL) has made
significant progress in various tasks, including automatic speech recognition
(ASR). An ASR model with decent performance can be realized by fine-tuning an
SSL model with a small fraction of labeled data. Reducing the demand for
labeled data is always of great practical value. In this paper, we further
extend the use of SSL to cut down labeling costs with active learning. Three
types of units on different granularities are derived from speech signals in an
unsupervised way, and their effects are compared by applying a contrastive data
selection method. The experimental results show that our proposed data
selection framework can reduce the word error rate (WER) by more
than 11% with the same amount of labeled data, or halve the labeling cost while
maintaining the same WER, compared to random selection.
Comment: 5 pages, 3 figures. Accepted to Interspeech 202
Distilling Knowledge from Resource Management Algorithms to Neural Networks: A Unified Training Assistance Approach
Optimizing the signal-to-interference-plus-noise ratio (SINR) in multi-user
settings is a fundamental problem to which numerous methods have been dedicated.
Although traditional model-based optimization methods achieve strong
performance, their high complexity has motivated research into neural network
(NN) based approaches that trade off performance against complexity. To fully leverage
the high performance of traditional model-based methods and the low complexity
of the NN-based method, a knowledge distillation (KD) based algorithm
distillation (AD) method is proposed in this paper to improve the performance
and convergence speed of the NN-based method, where traditional SINR
optimization methods are employed as "teachers" to assist the training of NNs,
which are "students", thus enhancing the performance of unsupervised and
reinforcement learning techniques. This approach aims to alleviate common
issues encountered in each of these training paradigms, including the
infeasibility of obtaining optimal solutions as labels and overfitting in
supervised learning, ensuring higher convergence performance in unsupervised
learning, and improving training efficiency in reinforcement learning.
Simulation results demonstrate the enhanced performance of the proposed
AD-based methods compared to traditional learning methods. Remarkably, this
research paves the way for the integration of traditional optimization insights
and emerging NN techniques in wireless communication system optimization.
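The teacher-student objective described above can be sketched as a weighted sum of a distillation term (imitate the traditional optimizer's power allocation) and the unsupervised task term (maximize sum rate over SINRs). Everything below is an illustrative assumption, not the paper's actual loss: the SINR model, the MSE distillation term, and the annealing weight `alpha` are all hypothetical choices.

```python
import numpy as np

def sinr(p, g, noise=1.0):
    # Per-user SINR for power vector p and channel-gain matrix g:
    # desired signal g[k,k]*p[k] over interference sum_{j!=k} g[k,j]*p[j] + noise.
    sig = np.diag(g) * p
    interf = g @ p - sig
    return sig / (interf + noise)

def ad_loss(student_p, teacher_p, g, alpha):
    """Algorithm-distillation style objective: imitate the teacher's power
    allocation (distillation term) while still optimizing the unsupervised
    negative sum-rate; alpha would be annealed from 1 (pure imitation)
    toward 0 (pure task loss) during training."""
    distill = np.mean((student_p - teacher_p) ** 2)
    task = -np.sum(np.log2(1.0 + sinr(student_p, g)))  # negative sum rate
    return alpha * distill + (1 - alpha) * task
```

With `alpha = 1` the student purely imitates the teacher, which gives it a feasible warm start before the task term takes over.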
Digital Twin-Assisted Efficient Reinforcement Learning for Edge Task Scheduling
Task scheduling is a critical problem when a user offloads multiple
different tasks to the edge server. When the user has multiple tasks to offload
and only one task can be transmitted to the server at a time, while the server
processes tasks according to the transmission order, the problem is NP-hard.
It is therefore difficult for traditional optimization methods to quickly obtain
the optimal solution, while approaches based on reinforcement learning face
the challenge of an excessively large action space and slow convergence. In
this paper, we propose a Digital Twin (DT)-assisted RL-based task scheduling
method to improve the performance and convergence of RL. We use the DT
to simulate the outcomes of different decisions made by the agent, so that one
agent can try multiple actions at a time or, equivalently, multiple agents can
interact with the environment in parallel in the DT. In this way, the exploration
efficiency of RL can be significantly improved, so RL
converges faster and is less likely to get stuck in local optima. In particular,
two algorithms are designed to make task scheduling decisions, i.e.,
DT-assisted asynchronous Q-learning (DTAQL) and DT-assisted exploring
Q-learning (DTEQL). Simulation results show that both algorithms significantly
improve the convergence speed of Q-learning by increasing the exploration
efficiency.
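The core idea — letting one agent try every action at a state inside the DT rather than a single action in the real environment — can be sketched as a modified Q-learning step. This is a minimal illustration assuming a tabular Q and a DT exposed as a callable simulator; `dteql_step` and `dt_sim` are hypothetical names, not the paper's DTAQL/DTEQL implementations.

```python
def dteql_step(Q, state, actions, dt_sim, alpha=0.1, gamma=0.9):
    """One DT-assisted exploring update: the digital twin simulates every
    action from the current state, so the agent refreshes Q for all of them
    in a single step instead of one action per real interaction.
    dt_sim(state, a) -> (reward, next_state) is the (possibly imperfect) DT."""
    for a in actions:
        reward, nxt = dt_sim(state, a)
        best_next = max(Q[nxt][b] for b in actions)
        # Standard Q-learning target, but applied to every action via the DT.
        Q[state][a] += alpha * (reward + gamma * best_next - Q[state][a])
    # Act greedily in the real environment using the refreshed estimates.
    return max(actions, key=lambda a: Q[state][a])
```

Because all actions are evaluated per visit, exploration cost shifts from the real environment to the cheap simulated one, which is exactly where the claimed convergence speedup comes from.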
Fast-HuBERT: An Efficient Training Framework for Self-Supervised Speech Representation Learning
Recent years have witnessed significant advancements in self-supervised
learning (SSL) methods for speech-processing tasks. Various speech-based SSL
models have been developed and present promising performance on a range of
downstream tasks including speech recognition. However, existing speech-based
SSL models face a common dilemma in terms of computational cost, which might
hinder their potential application and in-depth academic research. To address
this issue, we first analyze the computational cost of different modules during
HuBERT pre-training and then introduce a stack of efficiency optimizations,
which is named Fast-HuBERT in this paper. The proposed Fast-HuBERT can be
trained in 1.1 days with 8 V100 GPUs on the Librispeech 960h benchmark, without
performance degradation, resulting in a 5.2x speedup, compared to the original
implementation. Moreover, we explore two well-studied techniques in
Fast-HuBERT and demonstrate improvements consistent with those reported in
previous work.
Imperfect Digital Twin Assisted Low Cost Reinforcement Training for Multi-UAV Networks
Deep Reinforcement Learning (DRL) is widely used to optimize the performance
of multi-UAV networks. However, the training of DRL relies on the frequent
interactions between the UAVs and the environment, which consumes lots of
energy due to the flying and communication of UAVs in practical experiments.
Inspired by the growing digital twin (DT) technology, which can simulate the
performance of algorithms in a digital space constructed by copying features
of the physical space, the DT is introduced to reduce the costs of practical
training, e.g., energy and hardware purchases. Different from previous
DT-assisted works, which assume that the virtual digital space perfectly
reflects real physics, we consider an imperfect DT model with deviations for
assisting the training of multi-UAV networks. Remarkably, to trade off the
training cost, the DT construction cost, and the impact of DT deviations on
training, a mixed deployment of natural and virtually generated UAVs is
proposed. Two cascaded neural networks (NNs) are used to jointly optimize the
number of virtually generated UAVs, the DT construction cost, and the
performance of multi-UAV networks. These two NNs are trained by unsupervised
and reinforcement learning, both low-cost label-free training methods.
Simulation results show that the training cost can be significantly decreased
while guaranteeing the training performance. This implies that efficient
decisions can be made with imperfect DTs in multi-UAV networks.
Exploring Effective Distillation of Self-Supervised Speech Models for Automatic Speech Recognition
Recent years have witnessed great strides in self-supervised learning (SSL)
for speech processing. SSL models are normally pre-trained on a great
variety of unlabelled data, and a large model size is preferred to increase
modeling capacity. However, this might limit their potential applications due to
the expensive computation and memory costs introduced by oversized models.
Miniaturization for SSL models has become an important research direction of
practical value. To this end, we explore the effective distillation of
HuBERT-based SSL models for automatic speech recognition (ASR). First, in order
to establish a strong baseline, a comprehensive study on different student
model structures is conducted. On top of this, as a supplement to the
regression loss widely adopted in previous works, a discriminative loss is
introduced for HuBERT to enhance the distillation performance, especially in
low-resource scenarios. In addition, we design a simple and effective algorithm
to distill the front-end input from waveform to Fbank feature, resulting in 17%
parameter reduction and doubling inference speed, at marginal performance
degradation.Comment: Submitted to ICASSP 202
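The combined objective the abstract mentions — a regression term pulling student features toward the teacher's, plus a discriminative term — can be sketched as below. This is a hypothetical formulation, not the paper's exact loss: the MSE regression term, the frame-level cross-entropy over the teacher's discrete pseudo-units (in the spirit of HuBERT's clustering targets), and the weight `beta` are all illustrative assumptions.

```python
import numpy as np

def softmax(x, axis=-1):
    e = np.exp(x - x.max(axis=axis, keepdims=True))
    return e / e.sum(axis=axis, keepdims=True)

def distill_loss(student_feat, teacher_feat, student_logits, teacher_units, beta):
    """Combined distillation objective: a regression term matches student frame
    features to the teacher's, and a discriminative cross-entropy term asks the
    student to classify each frame into the teacher's discrete pseudo-units."""
    reg = np.mean((student_feat - teacher_feat) ** 2)      # regression term
    probs = softmax(student_logits)                        # (frames, units)
    n = len(teacher_units)
    ce = -np.mean(np.log(probs[np.arange(n), teacher_units] + 1e-12))
    return reg + beta * ce
```

The discriminative term supplies a learning signal even when feature regression saturates, which is consistent with the abstract's claim that it helps most in low-resource scenarios.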
Online near-infrared analysis coupled with MWPLS and SiPLS models for the multi-ingredient and multi-phase extraction of licorice (Gancao)
Additional file 1. Table S1. The sampling intervals in different extraction phases. Table S2. The HPLC results of different indicators. Table S3. The evaluation parameters of PLS and SiPLS models
Determination of erlotinib in rabbit plasma by liquid chromatography mass spectrometry
A sensitive and selective liquid chromatography mass spectrometry (LC–MS) method for the determination of erlotinib in rabbit plasma was developed. After addition of midazolam as the internal standard (IS), protein precipitation with acetonitrile was used for sample preparation. Chromatographic separation was achieved on a Zorbax SB-C18 (2.1 × 150 mm, 5 μm) column with acetonitrile-0.1 % formic acid as the mobile phase under gradient elution. An electrospray ionization (ESI) source was operated in positive ion mode; multiple reaction monitoring (MRM) mode was used for quantification with the target fragment ions m/z 394→336 for erlotinib and m/z 326→291 for the IS. Calibration plots were linear over the range of 5-2000 ng/mL for erlotinib in plasma. The lower limit of quantification (LLOQ) for erlotinib was 5 ng/mL. The mean recovery of erlotinib from plasma was in the range 84.5-95.7 %. The CVs of intra-day and inter-day precision were both less than 12 %. This method is simple and sensitive enough to be used in pharmacokinetic research for the determination of erlotinib in rabbit plasma.
Colegio de Farmacéuticos de la Provincia de Buenos Aire
DUSP6: Potential interactions with FXR1P in the nervous system
Fragile X syndrome (FXS) is a leading genetic cause of intellectual disability
and autism spectrum disorder (ASD), with limited treatment options and no cure.
Fragile X-related gene 1 (FXR1) is a homolog of the Fragile X mental
retardation gene 1 (FMR1), the causative gene of FXS; the two are highly
homologous and functionally identical.
In FXS, expression levels of both PI3K (AKT/mTOR signaling pathway) and ERK1/2
(MAPK signaling pathway) are abnormal. Dual specificity phosphatase 6 (DUSP6)
is a phosphatase acting on the mitogen-activated protein kinases (MAPKs) that
participates in the crosstalk between the MEK/ERK and mTOR signaling systems.
By interacting with multiple nodes of the MAPK and PI3K/AKT signaling pathways
(including the mTOR complex), DUSP6 regulates cellular growth,
proliferation, and metabolism, and participates in pathological processes of
cancer and cognitive impairment. However, whether there is an interaction
between FXR1P and DUSP6, and what effects DUSP6 has on the growth of SK-N-SH
cells, remain elusive. Our results show that FXR1P is present in the cytoplasm
and nucleus of SK-N-SH cells, where it co-localizes with DUSP6 and may
regulate ERK1/2 signaling. To a certain extent, FXR1P may reverse the negative
regulation of ERK1/2 by DUSP6. Moreover, we discovered that DUSP6 not only
inhibits proliferation but also promotes apoptosis of SK-N-SH cells.