1,362 research outputs found

    Management of mother-to-child transmission of hepatitis B virus: Propositions and challenges

    Chronic hepatitis B virus (HBV) infection due to mother-to-child transmission (MTCT) during the perinatal period remains an important global health problem. Despite standard passive–active immunoprophylaxis with hepatitis B immunoglobulin (HBIG) and hepatitis B vaccine in neonates, up to 9% of newborns still acquire HBV infection, especially those born to hepatitis B e antigen (HBeAg)-positive mothers. Management of HBV infection in pregnancy still requires careful attention because of several controversial aspects, including the failure of passive–active immunoprophylaxis in a fraction of newborns, the effect and necessity of periodic HBIG injections for mothers, the safety of antiviral prophylaxis with nucleoside/nucleotide analogs, the relative benefits of different modes of delivery, and the safety of breastfeeding. In this review, we highlight these unsettled issues in perinatal preventive strategies and aim to provide an optimal approach to preventing MTCT of HBV infection.

    One Fits All: Power General Time Series Analysis by Pretrained LM

    Although we have witnessed great success of pre-trained models in natural language processing (NLP) and computer vision (CV), limited progress has been made for general time series analysis. Unlike NLP and CV, where a unified model can be used to perform different tasks, specially designed approaches still dominate each time series analysis task, such as classification, anomaly detection, forecasting, and few-shot learning. The main challenge blocking the development of pre-trained models for time series analysis is the lack of a large amount of training data. In this work, we address this challenge by leveraging language or CV models, pre-trained on billions of tokens, for time series analysis. Specifically, we refrain from altering the self-attention and feedforward layers of the residual blocks in the pre-trained language or image model. This model, known as the Frozen Pretrained Transformer (FPT), is evaluated through fine-tuning on all major types of time series tasks. Our results demonstrate that models pre-trained on natural language or images can deliver comparable or state-of-the-art performance in all main time series analysis tasks, as illustrated in Figure 1. We also find, both theoretically and empirically, that the self-attention module behaves similarly to principal component analysis (PCA), an observation that helps explain how the transformer bridges the domain gap and is a crucial step towards understanding the universality of a pre-trained transformer. The code is publicly available at https://github.com/DAMO-DI-ML/One_Fits_All. Comment: NeurIPS 2023 Spotlight.
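    A minimal sketch of the frozen-backbone idea described above, assuming a Hugging Face GPT-2 backbone: the pre-trained self-attention and feed-forward weights stay fixed, and only small input/output projections (plus the layer norms and positional embeddings, a common choice) are trained. The module names, patch length, and the decision to leave layer norms trainable are illustrative assumptions, not the authors' released code.

```python
# Hedged sketch of a Frozen Pretrained Transformer (FPT)-style model:
# the pre-trained self-attention and feed-forward weights are frozen,
# while input/output projections (and layer norms) are trained.
import torch
import torch.nn as nn
from transformers import GPT2Model


class FrozenPretrainedTransformer(nn.Module):
    def __init__(self, patch_len: int = 16, horizon: int = 96, d_model: int = 768):
        super().__init__()
        # d_model must match the backbone hidden size (768 for "gpt2").
        self.backbone = GPT2Model.from_pretrained("gpt2")
        # Freeze attention and feed-forward blocks; keep layer norms and
        # positional embeddings trainable (an assumed, commonly used choice).
        for name, param in self.backbone.named_parameters():
            param.requires_grad = ("ln" in name) or ("wpe" in name)
        self.in_proj = nn.Linear(patch_len, d_model)   # time-series patch -> embedding
        self.out_proj = nn.Linear(d_model, horizon)    # last hidden state -> forecast

    def forward(self, patches: torch.Tensor) -> torch.Tensor:
        # patches: (batch, num_patches, patch_len)
        hidden = self.backbone(inputs_embeds=self.in_proj(patches)).last_hidden_state
        return self.out_proj(hidden[:, -1])            # (batch, horizon)
```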

    One Fits All: Universal Time Series Analysis by Pretrained LM and Specially Designed Adaptors

    Despite the impressive achievements of pre-trained models in the fields of natural language processing (NLP) and computer vision (CV), progress in the domain of time series analysis has been limited. In contrast to NLP and CV, where a single model can handle various tasks, time series analysis still relies heavily on task-specific methods for activities such as classification, anomaly detection, forecasting, and few-shot learning. The primary obstacle to developing a pre-trained model for time series analysis is the scarcity of sufficient training data. In our research, we overcome this obstacle by utilizing pre-trained models from language or CV, which have been trained on billions of data points, and applying them to time series analysis. We assess the effectiveness of the pre-trained transformer model in two ways. First, we maintain the original structure of the self-attention and feedforward layers in the residual blocks of the pre-trained language or image model, using the Frozen Pre-trained Transformer (FPT) for time series analysis with the addition of projection matrices for input and output. Additionally, we introduce four unique adapters, designed specifically for downstream tasks based on the pre-trained model, including forecasting and anomaly detection. These adapters are further enhanced with efficient parameter tuning, resulting in superior performance compared to all state-of-the-art methods. Our comprehensive experimental studies reveal that (a) the simple FPT achieves top-tier performance across various time series analysis tasks; and (b) fine-tuning the FPT with the custom-designed adapters can further elevate its performance, outshining specialized task-specific models. Comment: this article draws heavily from arXiv:2302.1193
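    The abstract does not spell out the adapter designs, so the following is only a hedged sketch of one common adapter pattern (a residual bottleneck module) that could be attached to a frozen backbone and tuned with few parameters; the paper's four task-specific adapters may differ substantially.

```python
# Illustrative bottleneck adapter: a small trainable module added around a
# frozen transformer block, so only a few parameters are tuned per task.
import torch
import torch.nn as nn


class BottleneckAdapter(nn.Module):
    def __init__(self, d_model: int = 768, bottleneck: int = 32):
        super().__init__()
        self.down = nn.Linear(d_model, bottleneck)  # compress
        self.up = nn.Linear(bottleneck, d_model)    # expand back
        self.act = nn.GELU()

    def forward(self, hidden: torch.Tensor) -> torch.Tensor:
        # Residual connection keeps the frozen representation intact;
        # the adapter only learns a small task-specific correction.
        return hidden + self.up(self.act(self.down(hidden)))
```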

    Sex-differential effects of olanzapine vs. aripiprazole on glucose and lipid metabolism in first-episode schizophrenia

    Objective: To compare sex differences in the metabolic effects of olanzapine versus aripiprazole in schizophrenia. Methods: A twelve-week prospective open-label cohort study compared four subgroups defined by first-episode schizophrenia patients' drug and sex: female aripiprazole (n = 11), male aripiprazole (n = 11), female olanzapine (n = 10), and male olanzapine (n = 11), on body mass index, fasting serum triglycerides, total cholesterol, high-density lipoprotein cholesterol, low-density lipoprotein cholesterol, and fasting glucose. Results: Aripiprazole may be associated with weight gain in female patients with low baseline weight. Aripiprazole may have an adverse effect on weight and favorable effects on circulating glucose and lipids in female relative to male schizophrenia patients. The aripiprazole-induced changes in glucose and lipids may be independent of body fat storage, especially in female schizophrenia patients. Olanzapine may have adverse effects on weight, glucose, and lipid profiles in female relative to male schizophrenia patients. Discussion: Our findings fill a gap in knowledge, provide sex-specific guidance to help psychiatrists better tailor treatment to individual sex-differential characteristics, and offer a key clue to understanding the sex-differential mechanisms of antipsychotic-induced metabolic dysfunction.

    Make Transformer Great Again for Time Series Forecasting: Channel Aligned Robust Dual Transformer

    Recent studies have demonstrated the great power of deep learning methods, particularly Transformers and MLPs, for time series forecasting. Despite the Transformer's success in NLP and CV, many studies have found it less effective than MLPs for time series forecasting. In this work, we design a special Transformer, the channel-aligned robust dual Transformer (CARD for short), that addresses key shortcomings of the Transformer in time series forecasting. First, CARD introduces a dual Transformer structure that allows it to capture both temporal correlations among signals and dynamic dependence among multiple variables over time. Second, we introduce a robust loss function for time series forecasting to alleviate the potential overfitting issue. This new loss function weights the importance of forecasting over a finite horizon based on prediction uncertainties. Our evaluation on multiple long-term and short-term forecasting datasets demonstrates that CARD significantly outperforms state-of-the-art time series forecasting methods, including both Transformer- and MLP-based models.
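    The abstract describes the robust loss only at a high level, so the snippet below is a hedged sketch of one way such a loss could look: per-step absolute errors over the forecast horizon are re-weighted by a per-step uncertainty estimate, down-weighting steps the model is less sure about. The specific weighting scheme and the uncertainty proxy are assumptions, not CARD's actual formulation.

```python
# Hedged sketch of an uncertainty-weighted forecasting loss: steps with a
# larger predicted uncertainty contribute less to the training objective.
import torch


def uncertainty_weighted_mae(pred: torch.Tensor,
                             target: torch.Tensor,
                             scale: torch.Tensor,
                             eps: float = 1e-6) -> torch.Tensor:
    """pred, target, scale: (batch, horizon); scale is a per-step uncertainty estimate."""
    weights = 1.0 / (scale + eps)          # lower weight for more uncertain horizon steps
    weights = weights / weights.mean()     # normalize so the magnitude stays comparable to plain MAE
    return (weights * (pred - target).abs()).mean()
```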

    Challenges and Countermeasures of Teachers' Professional Development from the Perspective of Globalization

    Under the influence of globalization, intensifying international competition is, in the final analysis, competition over education and talent. Teachers are the main actors in teaching activities, and teachers' professionalization is an important indicator of a country's educational level. At present, the professional development of teachers in China faces enormous challenges, with serious problems in concepts, systems, and the allocation and utilization of educational resources. To meet the needs of teachers' professional development from the perspective of globalization, the new era should promote the upgrading of teachers' professional level through the renewal of ideas, the improvement of institutional safeguards, diversified development, and the integration and distribution of resources.

    Comparison of the efficacy of tenofovir and adefovir in the treatment of chronic hepatitis B: A Systematic Review

    Chronic viral hepatitis B remains a global public health concern. Currently, several drugs, such as tenofovir and adefovir, are recommended for the treatment of patients with chronic hepatitis B. Tenofovir is a nucleotide analog with selective activity against hepatitis B virus and has been shown to be more potent in vitro than adefovir. However, the results of trials comparing tenofovir and adefovir in the treatment of chronic hepatitis B were inconsistent, and no systematic review had compared the efficacy of the two drugs. To compare the efficacy of tenofovir and adefovir in the treatment of chronic hepatitis B, we conducted a systematic review and meta-analysis of clinical trials. We searched PubMed, Web of Science, EMBASE, CNKI, the VIP database, the WANFANG database, the Cochrane Central Register of Controlled Trials, and the Cochrane Database of Systematic Reviews. Six studies involving 910 patients in total were included in the analysis, of whom 576 were in tenofovir groups and 334 in adefovir groups. At the end of 48 weeks of treatment, tenofovir was superior to adefovir in HBV DNA suppression [RR = 2.59; 95% CI 1.01–6.67; P = 0.05], while there was no significant difference in ALT normalization [RR = 1.15; 95% CI 0.96–1.37; P = 0.14], HBeAg seroconversion [RR = 1.32; 95% CI 1.00–1.75; P = 0.05], or HBsAg loss rate [RR = 1.19; 95% CI 0.74–1.91; P = 0.48]. More high-quality, well-designed, randomized controlled, multi-center trials are clearly needed to guide evolving standards of care for chronic hepatitis B.
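    For readers unfamiliar with the reported statistics, the short example below shows how a risk ratio (RR) and its 95% confidence interval are computed from a single 2x2 table before study results are pooled; the counts are hypothetical and unrelated to the review's data.

```python
# Illustrative calculation (hypothetical counts, not the review's data):
# risk ratio and 95% confidence interval from a 2x2 table.
import math

a, n1 = 150, 200   # responders / patients in one arm (hypothetical)
c, n2 = 90, 200    # responders / patients in the other arm (hypothetical)

rr = (a / n1) / (c / n2)
# Standard error of log(RR) for a risk ratio.
se_log_rr = math.sqrt(1/a - 1/n1 + 1/c - 1/n2)
low, high = (math.exp(math.log(rr) + z * se_log_rr) for z in (-1.96, 1.96))
print(f"RR = {rr:.2f}, 95% CI {low:.2f}-{high:.2f}")
```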

    FiLM: Frequency improved Legendre Memory Model for Long-term Time Series Forecasting

    Recent studies have shown that deep learning models such as RNNs and Transformers have brought significant performance gains for long-term forecasting of time series because they effectively utilize historical information. We found, however, that there is still great room for improvement in how to preserve historical information in neural networks while avoiding overfitting to the noise present in the history. Addressing this allows better utilization of the capabilities of deep learning models. To this end, we design a Frequency improved Legendre Memory model, or FiLM: it applies Legendre polynomial projections to approximate historical information, uses Fourier projection to remove noise, and adds a low-rank approximation to speed up computation. Our empirical studies show that the proposed FiLM significantly improves the accuracy of state-of-the-art models in multivariate and univariate long-term forecasting by 20.3% and 22.6%, respectively. We also demonstrate that the representation module developed in this work can be used as a general plug-in to improve the long-term prediction performance of other deep learning modules. Code is available at https://github.com/tianzhou2011/FiLM/. Comment: Accepted by The Thirty-Sixth Annual Conference on Neural Information Processing Systems (NeurIPS 2022).
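    As a rough illustration of the Legendre-projection component (not the paper's HiPPO-style recurrent memory or its Fourier denoising), the sketch below compresses a noisy history window into a few Legendre coefficients using NumPy's polynomial utilities; the degree and window length are arbitrary choices.

```python
# Sketch: summarize a long history window by its projection onto the first
# few Legendre polynomials, giving a compact, smoothed representation.
import numpy as np
from numpy.polynomial import legendre as L


def legendre_compress(history: np.ndarray, degree: int = 8):
    """Fit Legendre coefficients to a 1-D history and return (coeffs, reconstruction)."""
    t = np.linspace(-1.0, 1.0, len(history))   # Legendre basis lives on [-1, 1]
    coeffs = L.legfit(t, history, degree)      # least-squares projection onto the basis
    recon = L.legval(t, coeffs)                # smooth approximation of the history
    return coeffs, recon


# Example: a noisy sine wave compressed to 9 coefficients.
x = np.sin(np.linspace(0, 6, 256)) + 0.1 * np.random.randn(256)
coeffs, approx = legendre_compress(x)
```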

    Model-free Test Time Adaptation for Out-Of-Distribution Detection

    Out-of-distribution (OOD) detection is essential for the reliability of ML models. Most existing methods for OOD detection learn a fixed decision criterion from a given in-distribution dataset and apply it universally to decide if a data point is OOD. Recent work (Fang et al., 2022) shows that given only in-distribution data, it is impossible to reliably detect OOD data without extra assumptions. Motivated by this theoretical result and recent exploration of test-time adaptation methods, we propose a Non-Parametric Test Time Adaptation framework for Out-Of-Distribution Detection (AdaODD). Unlike conventional methods, AdaODD utilizes online test samples for model adaptation during testing, enhancing adaptability to changing data distributions. The framework incorporates detected OOD instances into decision-making, reducing false positive rates, particularly when ID and OOD distributions overlap significantly. We demonstrate the effectiveness of AdaODD through comprehensive experiments on multiple OOD detection benchmarks; extensive empirical studies show that AdaODD significantly improves the performance of OOD detection over state-of-the-art methods. Specifically, AdaODD reduces the false positive rate (FPR95) by 23.23% on the CIFAR-10 benchmarks and 38% on the ImageNet-1k benchmarks compared to advanced methods. Lastly, we theoretically verify the effectiveness of AdaODD. Comment: 12 pages, 10 figures.
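    A hedged sketch of the non-parametric, test-time mechanism described above: test features are scored by their k-nearest-neighbor distance to an in-distribution memory, and samples flagged as OOD are kept in a second memory that raises the score of similar future inputs. The class name, thresholding rule, and update policy are assumptions for illustration, not the paper's released implementation.

```python
# Illustrative online kNN-based OOD detector that adapts its memories at test time.
import numpy as np


class OnlineKNNOODDetector:
    def __init__(self, id_features: np.ndarray, k: int = 5, ood_threshold: float = 1.0):
        self.id_bank = [f for f in id_features]  # features from the in-distribution set
        self.ood_bank = []                       # grown online from detected OOD samples
        self.k, self.threshold = k, ood_threshold

    @staticmethod
    def _knn_dist(x: np.ndarray, bank: list, k: int) -> float:
        d = np.sort(np.linalg.norm(np.stack(bank) - x, axis=1))
        return float(d[: min(k, len(d))].mean())

    def score_and_update(self, x: np.ndarray) -> float:
        score = self._knn_dist(x, self.id_bank, self.k)   # larger = more OOD-like
        if self.ood_bank:
            # Being close to previously detected OOD samples pushes the score up.
            score += max(0.0, self.threshold - self._knn_dist(x, self.ood_bank, self.k))
        if score > self.threshold:
            self.ood_bank.append(x)                       # remember detected OOD sample
        else:
            self.id_bank.append(x)                        # optionally adapt the ID memory too
        return score
```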