    LncRNA GAS5 Knockdown Mitigates Hepatic Lipid Accumulation via Regulating MiR-26a-5p/PDE4B to Activate cAMP/CREB Pathway

    Objective: Non-alcoholic fatty liver disease (NAFLD) can be attributed to the dysregulation of hepatic lipid metabolism; however, its cellular and molecular mechanisms remain unclear. This study aims to explore the effect of the long non-coding RNA growth arrest specific 5 (GAS5) on hepatic lipid metabolism in fatty liver models. Methods: Obese mice, high-fat-diet-fed mice and free fatty acid-stimulated cells were used for GAS5 expression detection. GAS5 overexpression or knockdown models were established to elucidate the regulatory function of GAS5 in de novo lipogenesis (DNL) and mitochondrial function. Bioinformatic analyses and dual luciferase assays were used to investigate the interaction between GAS5, miR-26a-5p and phosphodiesterase (PDE) 4B. The involvement of the cyclic adenosine monophosphate (cAMP)/cAMP-response element-binding protein (CREB) pathway was evaluated using H89 and forskolin treatment. Results: GAS5 was activated in both in vitro and in vivo fatty liver models. Knockdown of GAS5 reduced lipid droplet accumulation and DNL-associated enzymes and preserved mitochondrial function, while GAS5 overexpression exacerbated hepatic lipid accumulation. Mechanistically, GAS5 sponged miR-26a-5p to increase PDE4B expression and subsequently modulated DNL and mitochondrial function via the cAMP/CREB pathway. Conclusion: Downregulation of GAS5 can activate the cAMP/CREB pathway through the miR-26a-5p/PDE4B axis to mitigate hepatic lipid accumulation. This study provides evidence that downregulation of GAS5 may be a potential therapeutic option for the treatment of NAFLD.

    Postoperative acute kidney injury after on-pump cardiac surgery in patients with connective tissue disease

    Objective: Patients with connective tissue disease have a poor prognosis after receiving cardiac surgery. This study described the clinical scenarios and investigated factors correlated with acute kidney injury (AKI) after on-pump cardiac surgery in patients with systemic lupus erythematosus (SLE) or vasculitis. Methods: Patients with SLE or vasculitis who underwent on-pump cardiac surgery from March 2002 to March 2022 were enrolled, while patients with preoperative renal dysfunction were excluded. AKI was defined according to the Kidney Disease: Improving Global Outcomes (KDIGO) criteria. Uni- and multivariable analyses were performed to identify potential factors associated with postoperative AKI. Results: Among the 123 patients enrolled, 39 (31.7%) developed AKI within seven days after on-pump cardiac surgery. Four patients died in the hospital, giving an overall in-hospital mortality of 3.3%; all deaths occurred in the AKI group. Patients in the AKI group also had longer ICU stays (median difference 3.0 days, 95% CI: 1.0–4.0, P < 0.001) and longer extubation times (median difference 1.0 days, 95% CI: 0–2.0, P < 0.001) than those in the non-AKI group. Multivariable logistic regression revealed that BMI over 24 kg/m2 (OR: 3.00, 95% CI: 1.24–7.28) and comorbid SLE (OR: 4.73, 95% CI: 1.73–12.93) were independently correlated with postoperative AKI. Conclusion: Factors potentially correlated with AKI following on-pump cardiac surgery in patients with connective tissue disease were identified. Clinicians should pay more attention to preoperative evaluation and intraoperative management in patients with these risk factors.
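
    A minimal sketch of the multivariable logistic regression used above, with statsmodels on simulated data: the column names (bmi_over_24, sle, aki), the simulated cohort and the coefficients are illustrative assumptions, not the study's dataset; the exponentiated coefficients play the role of the adjusted odds ratios reported in the Results.

        # Hypothetical illustration only: simulate a cohort, fit a multivariable
        # logistic regression, and read off adjusted odds ratios with 95% CIs.
        import numpy as np
        import pandas as pd
        import statsmodels.api as sm

        rng = np.random.default_rng(0)
        n = 123
        df = pd.DataFrame({
            "bmi_over_24": rng.integers(0, 2, n),  # 1 if BMI > 24 kg/m2 (assumed coding)
            "sle": rng.integers(0, 2, n),          # 1 if comorbid SLE
        })
        # Simulated outcome: AKI within seven days (purely illustrative coefficients)
        linpred = -1.5 + 1.1 * df["bmi_over_24"] + 1.55 * df["sle"]
        df["aki"] = rng.binomial(1, 1.0 / (1.0 + np.exp(-linpred)))

        X = sm.add_constant(df[["bmi_over_24", "sle"]])
        fit = sm.Logit(df["aki"], X).fit(disp=False)

        print(np.exp(fit.params))      # adjusted odds ratios
        print(np.exp(fit.conf_int()))  # 95% confidence intervals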

    Incorporating Context Information into Deep Neural Network Acoustic Models

    The introduction of deep neural networks (DNNs) has advanced the performance of automatic speech recognition (ASR) tremendously. On a wide range of ASR tasks, DNN models outperform traditional Gaussian mixture models (GMMs). Despite these advances, DNN models still suffer from data scarcity, speaker mismatch and environment variability. This thesis addresses these challenges by fully exploiting DNNs' ability to integrate heterogeneous features under the same optimization objective. We propose to improve DNN models under these challenging conditions by incorporating context information into DNN training.  On a new language, the amount of training data may be highly limited. This data scarcity degrades the recognition accuracy of DNN models. A solution is to transfer knowledge from other languages to the low-resource condition. This thesis proposes a framework to build cross-language DNNs via language-universal feature extractors (LUFEs). Convolutional neural networks (CNNs) and deep maxout networks (DMNs) are employed to improve the quality of LUFEs, which enables the generation of invariant and sparse feature representations. This framework notably improves recognition accuracy on a wide range of low-resource languages.  The performance of DNNs degrades when there is a mismatch between the acoustic models and the test speakers. One form of context information that encapsulates speaker characteristics is the i-vector. This thesis proposes a novel framework to perform feature-space speaker adaptive training (SAT) for DNN models. A key component of this approach is an adaptation network which takes i-vectors as inputs and projects DNN inputs into a normalized feature space. The DNN model fine-tuned in this new feature space factors out speaker variability and becomes more independent of specific speakers. This SAT method is applicable to different feature types and model architectures.  The proposed adaptive training framework is further extended to incorporate distance- and video-related context information. The distance descriptors are extracted from deep learning models trained to distinguish distance types at the frame level. Distance adaptive training (DAT) using these descriptors captures speaker-microphone distance dynamically on the frame level. When performing ASR on video data, we naturally have access to both the speech and the video modality. Video- and segment-level visual features are extracted from the video stream. Video adaptive training (VAT) with these visual features results in more robust acoustic models that are agnostic to environment variability. Moreover, the proposed VAT approach removes the need for frame-level visual features and thus achieves audio-visual ASR on truly open-domain videos.
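
    A minimal numpy sketch of the feature-space SAT idea described above: an adaptation network maps a speaker i-vector to a correction of the acoustic features, and the acoustic DNN is then fine-tuned on the normalized features. The dimensions, the single ReLU hidden layer and the additive-shift form of the correction are illustrative assumptions, not the thesis's exact architecture.

        # Sketch of i-vector-based feature-space SAT (assumed additive-shift form).
        import numpy as np

        rng = np.random.default_rng(0)
        feat_dim, ivec_dim, hidden = 40, 100, 512

        # Adaptation network parameters (one ReLU hidden layer, illustrative sizes)
        W1 = rng.standard_normal((hidden, ivec_dim)) * 0.01
        b1 = np.zeros(hidden)
        W2 = rng.standard_normal((feat_dim, hidden)) * 0.01
        b2 = np.zeros(feat_dim)

        def adaptation_shift(ivector):
            """Map a speaker i-vector to a per-speaker shift of the acoustic features."""
            h = np.maximum(0.0, W1 @ ivector + b1)
            return W2 @ h + b2

        def normalize_features(frames, ivector):
            """Project frames into the speaker-normalized space fed to the acoustic DNN."""
            return frames + adaptation_shift(ivector)

        # One speaker's i-vector is applied to all of that speaker's frames.
        ivector = rng.standard_normal(ivec_dim)
        frames = rng.standard_normal((300, feat_dim))      # 300 frames of 40-dim features
        print(normalize_features(frames, ivector).shape)   # (300, 40)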

    Improving Low-Resource CD-DNN-HMM Using Dropout and Multilingual DNN Training

    We investigate two strategies to improve the context-dependent deep neural network hidden Markov model (CD-DNN-HMM) in low-resource speech recognition. Although it outperforms the conventional Gaussian mixture model (GMM) HMM on various tasks, CD-DNN-HMM acoustic modeling becomes challenging with limited transcribed speech, e.g., less than 10 hours. To resolve this issue, we first exploit dropout, which prevents overfitting during DNN fine-tuning and improves model robustness under data sparseness. Then, the effectiveness of multilingual DNN training is evaluated when additional auxiliary languages are available. The hidden-layer parameters of the target-language network are shared across and learned over multiple languages. Experiments show that both strategies boost recognition performance significantly. Combining them results in a further reduction in word error rate, achieving 11.6% and 6.2% relative improvement under two limited-data conditions.
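
    A minimal numpy sketch of the two strategies above: inverted dropout applied to hidden activations during fine-tuning, and hidden layers shared across languages with a separate softmax layer per language. The layer sizes, the two-layer stack, the dropout rate and the senone counts are illustrative assumptions.

        # Sketch: inverted dropout plus hidden layers shared across languages,
        # with a per-language softmax layer. All sizes are illustrative.
        import numpy as np

        rng = np.random.default_rng(0)

        def dropout(x, rate=0.2, train=True):
            """Inverted dropout: zero random units and rescale the survivors."""
            if not train or rate == 0.0:
                return x
            mask = rng.random(x.shape) >= rate
            return x * mask / (1.0 - rate)

        feat_dim, hidden = 440, 1024
        senones = {"lang_A": 1000, "lang_B": 900}   # hypothetical senone counts

        # Hidden layers are shared across languages ...
        shared = [(rng.standard_normal((hidden, feat_dim)) * 0.01, np.zeros(hidden)),
                  (rng.standard_normal((hidden, hidden)) * 0.01, np.zeros(hidden))]
        # ... while each language keeps its own softmax layer over its senone set.
        heads = {lang: (rng.standard_normal((n, hidden)) * 0.01, np.zeros(n))
                 for lang, n in senones.items()}

        def forward(x, lang, train=True):
            for W, b in shared:
                x = dropout(np.maximum(0.0, W @ x + b), train=train)
            W, b = heads[lang]
            z = W @ x + b
            return np.exp(z - z.max()) / np.exp(z - z.max()).sum()   # softmax

        print(forward(rng.standard_normal(feat_dim), "lang_A").shape)  # (1000,)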

    Deep Maxout Networks for Low-Resource Speech Recognition

    As a feed-forward architecture, the recently proposed maxout networks integrate dropout naturally and show state-of-the-art results on various computer vision datasets. This paper investigates the application of deep maxout networks (DMNs) to large vocabulary continuous speech recognition (LVCSR) tasks. Our focus is on the particular advantage of DMNs under low-resource conditions with limited transcribed speech. We extend DMNs to hybrid and bottleneck feature systems, and explore optimal network structures (number of maxout layers, pooling strategy, etc.) for both setups. On the newly released Babel corpus, the behavior of DMNs is studied extensively under different levels of data availability. Experiments show that DMNs improve low-resource speech recognition significantly. Moreover, DMNs introduce sparsity to their hidden activations and thus can act as sparse feature extractors.
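
    A minimal sketch of a maxout hidden layer as used in the DMNs above: each output unit is the maximum over a small pool of linear projections, giving a piecewise-linear activation without a fixed nonlinearity. The layer sizes and pool size are illustrative assumptions.

        # Sketch of one maxout layer: each output unit is the max over a pool of
        # linear pieces. Sizes and pool size are illustrative assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        in_dim, out_units, pool_size = 360, 400, 2

        W = rng.standard_normal((out_units * pool_size, in_dim)) * 0.01
        b = np.zeros(out_units * pool_size)

        def maxout(x):
            """Return out_units activations, each the max of pool_size linear pieces."""
            z = W @ x + b                                 # (out_units * pool_size,)
            return z.reshape(out_units, pool_size).max(axis=1)

        x = rng.standard_normal(in_dim)
        print(maxout(x).shape)                            # (400,)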

    Improving Language-Universal Feature Extraction with Deep Maxout and Convolutional Neural Networks

    When deployed in automatic speech recognition (ASR), deep neural networks (DNNs) can be treated as a complex feature extractor plus a simple linear classifier. Previous work has investigated the utility of multilingual DNNs acting as language-universal feature extractors (LUFEs). In this paper, we explore different strategies to further improve LUFEs. First, we replace the standard sigmoid nonlinearity with the recently proposed maxout units. The resulting maxout LUFEs have the nice property of generating sparse feature representations. Second, the convolutional neural network (CNN) architecture is applied to obtain a more invariant feature space. We evaluate the performance of LUFEs on a cross-language ASR task. Each of the proposed techniques reduces the word error rate compared with the existing DNN-based LUFEs. Combining the two methods brings additional improvement on the target language.
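
    A minimal sketch of the convolutional idea mentioned above: a 1-D convolution along the frequency axis of a filterbank frame followed by max-pooling, which makes the extracted representation more invariant to small spectral shifts. The filter count, filter width and pooling size are illustrative assumptions, not the paper's configuration.

        # Sketch of a 1-D convolution along frequency plus max-pooling over a
        # single filterbank frame. Filter count, width and pool size are assumptions.
        import numpy as np

        rng = np.random.default_rng(0)
        n_bands, n_filters, width, pool = 40, 8, 9, 3

        filters = rng.standard_normal((n_filters, width)) * 0.1

        def conv_pool(frame):
            """Convolve along frequency, apply ReLU, then max-pool each feature map."""
            n_pos = n_bands - width + 1
            maps = np.array([[f @ frame[i:i + width] for i in range(n_pos)]
                             for f in filters])
            maps = np.maximum(0.0, maps)
            maps = maps[:, :n_pos - n_pos % pool]
            return maps.reshape(n_filters, -1, pool).max(axis=2).reshape(-1)

        frame = rng.standard_normal(n_bands)    # one 40-band filterbank frame
        print(conv_pool(frame).shape)           # (80,) = 8 filters x 10 pooled positions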

    Distributed Learning of Multilingual DNN Feature Extractors using GPUs

    Multilingual deep neural networks (DNNs) can act as deep feature extractors and have been applied successfully to cross-language acoustic modeling. Learning these feature extractors becomes an expensive task because of the enlarged multilingual training data and the sequential nature of stochastic gradient descent (SGD). This paper investigates strategies to accelerate the learning process over multiple GPU cards. We propose the DistModel and DistLang frameworks, which distribute feature-extractor learning by models and by languages respectively. The time-synchronous DistModel has the nice property of tolerating infrequent model averaging. With 3 GPUs, DistModel achieves a 2.6× speed-up and causes no loss in word error rate. When using DistLang, we observe better acceleration but worse recognition performance. Further evaluations are conducted to scale DistModel to more languages and GPU cards.
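
    A minimal sketch of the time-synchronous model-averaging idea behind DistModel: each worker (GPU) keeps its own copy of the parameters, runs SGD on its own data shard, and the copies are averaged only every few steps. The toy least-squares objective, shard layout and averaging interval are illustrative assumptions.

        # Sketch of time-synchronous model averaging on a toy least-squares task:
        # each worker runs SGD on its own shard; copies are averaged every few steps.
        import numpy as np

        rng = np.random.default_rng(0)
        dim, n_workers, lr, avg_every = 10, 3, 0.05, 5

        true_w = rng.standard_normal(dim)
        # Each "GPU" holds its own data shard (inputs X, targets y)
        shards = []
        for _ in range(n_workers):
            X = rng.standard_normal((200, dim))
            y = X @ true_w + 0.1 * rng.standard_normal(200)
            shards.append((X, y))

        w = [np.zeros(dim) for _ in range(n_workers)]   # per-worker parameter copies

        for step in range(50):
            for k, (X, y) in enumerate(shards):
                i = rng.integers(0, len(X), size=16)    # mini-batch on worker k
                grad = X[i].T @ (X[i] @ w[k] - y[i]) / len(i)
                w[k] -= lr * grad
            if (step + 1) % avg_every == 0:             # infrequent synchronous averaging
                mean_w = np.mean(w, axis=0)
                w = [mean_w.copy() for _ in range(n_workers)]

        # Error of the averaged model shrinks toward the noise floor.
        print(np.linalg.norm(np.mean(w, axis=0) - true_w))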

    Examining the Impact of Different Periodic Functions on Short-Term Freeway Travel Time Prediction Approaches

    Freeway travel time prediction is a key technology of Intelligent Transportation Systems (ITS). Many scholars have found that periodic functions play a positive role in improving the accuracy of travel time prediction models. However, very few studies have comprehensively evaluated the impact of different periodic functions on statistical and machine learning models. In this paper, our primary objective is to evaluate the performance of six commonly used multistep-ahead travel time prediction models (three statistical models and three machine learning models). In addition, we compare the impact of three periodic functions on multistep-ahead travel time prediction at different temporal scales (5-minute, 10-minute, and 15-minute aggregation). The results indicate that the periodic functions improve the prediction performance of the machine learning models for horizons of more than 60 minutes ahead and improve the accuracy of the statistical models for horizons of more than 30 minutes ahead. The three periodic functions differ only slightly in how much they improve the accuracy of the six prediction models. For the same prediction step, the effect of the periodic function is more pronounced at higher levels of aggregation.
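
    A minimal sketch of one common way a periodic function is incorporated into a travel time predictor: the time of day is encoded as sine/cosine terms with a daily period and appended to lagged travel times before the model (statistical or machine learning) is fit. The 5-minute aggregation, lag count and sinusoidal encoding are illustrative assumptions, not the paper's exact periodic functions.

        # Sketch: lagged travel times plus a daily sine/cosine encoding of time of day
        # as one possible periodic feature. All settings are illustrative assumptions.
        import numpy as np

        def periodic_features(minute_of_day, period=1440.0):
            """Daily sine/cosine encoding of time of day (period in minutes)."""
            angle = 2.0 * np.pi * minute_of_day / period
            return np.array([np.sin(angle), np.cos(angle)])

        def build_sample(travel_times, t, n_lags=6, step_minutes=5):
            """Lagged travel times over the last n_lags intervals plus periodic terms."""
            lags = travel_times[t - n_lags:t]
            return np.concatenate([lags, periodic_features((t * step_minutes) % 1440)])

        # Toy 5-minute travel-time series with a daily cycle (288 intervals per day)
        travel_times = 10 + 2 * np.sin(2 * np.pi * np.arange(2000) / 288)
        x = build_sample(travel_times, t=300)
        print(x.shape)   # (8,): 6 lags + 2 periodic terms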
