
    Novel antiviral effect of a small molecule ONC201 and its potential application in HIV-1 eradication

    Despite the success of antiretroviral therapy (ART), eradication of HIV-1 from brain reservoirs remains elusive. HIV-1 brain reservoirs include perivascular macrophages, which lie behind the blood-brain barrier and are difficult for ART to reach. Macrophages express the transcription factor FOXO3a and the TNF superfamily cytokine TRAIL, which are known to target HIV-1-infected macrophages for viral suppression. ONC201 is a novel and potent FOXO3a activator capable of inducing TRAIL; it can cross the blood-brain barrier and has shown an antitumor effect in clinical trials. We hypothesized that activation of FOXO3a/TRAIL by ONC201 would reduce the size of HIV-1 brain reservoirs. Using primary human monocyte-derived macrophages, we demonstrated that ONC201 dose-dependently decreased HIV-1 replication, as determined by an HIV-1 reverse transcriptase activity assay and Western blots for p24. Consistent with these data, ONC201 also reduced integrated HIV-1 DNA in infected macrophages, as measured by two-step Alu-based nested PCR. In addition, the antiviral effect of ONC201 extends to different natural HIV strains and to lymphocytes, microglia, and a latently infected cell line. A combination of ONC201 and AZT achieved longer, synergistic viral suppression in HIV-1-infected macrophages. The anti-HIV-1 effect of ONC201 was further validated in vivo in NOD/scid IL-2Rγc-null mice. After intracranial injection of HIV-1-infected macrophages into the basal ganglia, we treated the mice daily with ONC201 by intraperitoneal injection for 6 days. ONC201 significantly decreased p24 levels in both the macrophages and the brain tissues, suggesting that ONC201 suppresses HIV-1 in vivo. Mechanistically, ONC201 treatment activated FOXO3a and induced TRAIL expression in macrophages, while blocking TRAIL or knocking down FOXO3a with siRNA reversed ONC201-mediated HIV-1 suppression, indicating that ONC201 inhibits HIV-1 through FOXO3a and TRAIL. Furthermore, the antiviral effect of ONC201 is associated with apoptosis and autophagy. Based on these in vitro and in vivo studies, ONC201 is a promising drug candidate to combat persistent HIV-1 infection in brain reservoirs.

    Linear hypothesis testing for high dimensional generalized linear models

    This paper is concerned with testing linear hypotheses in high-dimensional generalized linear models. To deal with linear hypotheses, we first propose the constrained partial regularization method and study its statistical properties. We further introduce an algorithm for solving regularization problems with folded-concave penalty functions and linear constraints. To test linear hypotheses, we propose a partial penalized likelihood ratio test, a partial penalized score test, and a partial penalized Wald test. We show that the limiting null distributions of these three test statistics are χ² distributions with the same degrees of freedom and that, under local alternatives, they asymptotically follow noncentral χ² distributions with the same degrees of freedom and noncentrality parameter, provided the number of parameters involved in the tested hypothesis grows to ∞ at a certain rate. Simulation studies are conducted to examine the finite-sample performance of the proposed tests. Empirical analysis of a real data example is used to illustrate the proposed testing procedures.
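    To make the limiting-distribution claim concrete, here is a minimal low-dimensional sketch (not the paper's penalized procedure): a likelihood ratio test of one linear constraint in a logistic regression, whose statistic is asymptotically χ² with degrees of freedom equal to the number of constraints. All data and settings below are illustrative.

```python
import numpy as np
from scipy.optimize import minimize
from scipy.stats import chi2

rng = np.random.default_rng(0)

# Toy data: logistic model with three covariates; the third has no effect.
n, p = 500, 3
X = rng.normal(size=(n, p))
beta_true = np.array([1.0, -0.5, 0.0])
prob = 1.0 / (1.0 + np.exp(-X @ beta_true))
y = rng.binomial(1, prob)

def neg_loglik(beta, X, y):
    eta = X @ beta
    # Negative log-likelihood of the logistic model, written stably.
    return np.sum(np.logaddexp(0.0, eta) - y * eta)

# Full model: all p coefficients free.
full = minimize(neg_loglik, np.zeros(p), args=(X, y))
# Null model: test H0: beta_3 = 0 by dropping the third column.
null = minimize(neg_loglik, np.zeros(p - 1), args=(X[:, :2], y))

lrt = 2.0 * (null.fun - full.fun)   # likelihood ratio statistic
pval = chi2.sf(lrt, df=1)           # df = number of constraints
print(f"LRT = {lrt:.3f}, p-value = {pval:.3f}")
```

    The paper's tests replace this classical setup with partial penalization so the same χ² calibration survives when the ambient dimension is large.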

    OccupancyDETR: Making Semantic Scene Completion as Straightforward as Object Detection

    Visual 3D semantic occupancy perception (also known as 3D semantic scene completion) is a new perception paradigm for robotic applications such as autonomous driving. Compared with Bird's Eye View (BEV) perception, it adds the vertical dimension, significantly enhancing a robot's ability to understand its surroundings. For this very reason, however, the computational demand of current 3D semantic occupancy perception methods generally surpasses that of BEV and 2D perception methods. We propose a novel 3D semantic occupancy perception method, OccupancyDETR, which consists of a DETR-like object detection module and a 3D occupancy decoder module. The integration of object detection simplifies our method structurally: instead of predicting the semantics of each voxel, it identifies the objects in the scene and their respective 3D occupancy grids. This speeds up our method, reduces the required resources, and leverages the strengths of object detection, giving our approach notable performance on small objects. We demonstrate the effectiveness of the proposed method on the SemanticKITTI dataset, showing an mIoU of 23 and a processing speed of 6 frames per second, thereby presenting a promising solution for real-time 3D semantic scene completion.
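    The object-centric decoding idea can be illustrated with a toy sketch: each detected object contributes a small local occupancy grid that is written into a global voxel grid at its box location. The detector outputs below are hypothetical stand-ins, not the model's actual predictions.

```python
import numpy as np

# Global voxel grid for the scene (64 x 64 x 16 voxels), 0 = free space.
scene = np.zeros((64, 64, 16), dtype=np.int8)

# Hypothetical detector output: per object, a semantic class id, the
# origin of its 3D box in voxel coordinates, and a local occupancy grid
# predicted only inside that box.
detections = [
    {"cls": 1, "origin": (10, 20, 0), "occ": np.ones((4, 8, 3), dtype=bool)},
    {"cls": 2, "origin": (40, 5, 0),  "occ": np.ones((6, 3, 4), dtype=bool)},
]

for det in detections:
    x, y, z = det["origin"]
    dx, dy, dz = det["occ"].shape
    region = scene[x:x + dx, y:y + dy, z:z + dz]
    # Write the class id only where the local grid says "occupied".
    region[det["occ"]] = det["cls"]

print("occupied voxels:", int((scene > 0).sum()))
```

    Predicting occupancy per detected box rather than per voxel is what keeps the decoder's workload proportional to the number of objects instead of the full grid size.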

    AutoMLP: Automated MLP for Sequential Recommendations

    Sequential recommender systems aim to predict the next item a user will be interested in, given their historical interactions. A long-standing issue, however, is how to distinguish between users' long-term and short-term interests, which may be heterogeneous and contribute differently to the next recommendation. Existing approaches usually set a pre-defined short-term interest length by exhaustive search or empirical experience, which is either highly inefficient or yields subpar results. Recent transformer-based models can achieve state-of-the-art performance despite this issue, but their computational complexity is quadratic in the length of the input sequence. To this end, this paper proposes a novel sequential recommender system, AutoMLP, aimed at better modeling users' long-term and short-term interests from their historical interactions. In addition, we design an automated and adaptive search algorithm that finds a preferable short-term interest length via end-to-end optimization. Through extensive experiments, we show that AutoMLP is competitive with state-of-the-art methods while maintaining linear computational complexity. Comment: Accepted by WWW'2
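    A rough sketch of the long/short-term split the abstract describes, using only MLPs (all shapes, weights, and the window length k are illustrative; the actual model learns its parameters and searches k end-to-end):

```python
import numpy as np

rng = np.random.default_rng(0)
L, d, k = 50, 16, 5        # history length, embedding size, short-term window

x = rng.normal(size=(L, d))   # item embeddings of one user's history

def mlp(v, W1, W2):
    """Two-layer MLP with ReLU, applied to a single vector."""
    return np.maximum(0.0, v @ W1) @ W2

# Hypothetical parameters (random here; learned in practice).
W1_long, W2_long = rng.normal(size=(d, 32)), rng.normal(size=(32, d))
W1_short, W2_short = rng.normal(size=(k * d, 32)), rng.normal(size=(32, d))

# Long-term interest: pool over the full history -- cost linear in L.
long_term = mlp(x.mean(axis=0), W1_long, W2_long)

# Short-term interest: MLP over the last k items only; k is the window
# the paper's search procedure would tune automatically.
short_term = mlp(x[-k:].reshape(-1), W1_short, W2_short)

user_vec = long_term + short_term   # fused user representation
print(user_vec.shape)
```

    The point of the sketch is the complexity argument: pooling plus fixed-width MLPs scale linearly with the history length, whereas pairwise attention scales quadratically.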

    Revenue-Sharing Contract Models for Logistics Service Supply Chains with Mass Customization Service

    The revenue-sharing contract is one of the most important supply chain coordination contracts and has been applied in various supply chains. However, studies of service supply chains with mass customization (MC) are lacking. Considering the equity of benefit distribution between the members of service supply chains, we design two revenue-sharing contracts in this paper. The first contract, which maximizes the equity between a single logistics service integrator (LSI) and a single functional logistics service provider (FLSP) in a two-echelon logistics service supply chain, is designed by introducing a fair entropy function (the "one to one" model). The method is then extended to a more complex supply chain consisting of a single LSI and multiple FLSPs; the new contract considers not only the equity between the LSI and each FLSP but also the equity among the FLSPs (the "one to N" model). The "one to one" model in a three-echelon logistics service supply chain (LSSC) is also provided. The results show that, in both the "one to one" and "one to N" models, there exists an optimal interval of the customization level at which the revenue-sharing coefficient reaches its maximum.
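    As an illustration of the fairness-entropy idea (the numbers and functional forms below are hypothetical, not the paper's model), one can scan the revenue-sharing coefficient and pick the value that maximizes the entropy of the two members' profit shares, which is largest when profits are equal:

```python
import numpy as np

# Hypothetical two-echelon chain: total channel revenue R, costs of the
# logistics service integrator (LSI) and the functional provider (FLSP),
# and a revenue-sharing coefficient phi kept by the LSI.
R, c_lsi, c_flsp = 100.0, 20.0, 30.0

def profits(phi):
    return phi * R - c_lsi, (1.0 - phi) * R - c_flsp

def fairness_entropy(pi):
    """Shannon entropy of the profit shares; maximal when shares are equal."""
    p = np.asarray(pi) / np.sum(pi)
    return float(-np.sum(p * np.log(p)))

# Scan phi over the range where both members stay profitable and pick
# the split with the most equitable profit distribution.
grid = np.linspace(0.25, 0.65, 401)
best_phi = max(grid, key=lambda phi: fairness_entropy(profits(phi)))
print(f"most equitable sharing coefficient: {best_phi:.3f}")
```

    With these toy numbers, profits equalize at phi = 0.45, which is exactly where the entropy peaks; the paper's contracts embed this kind of equity criterion into the contract design itself.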

    Examining the Effect of Pre-training on Time Series Classification

    Although the pre-training followed by fine-tuning paradigm is used extensively in many fields, there is still some controversy surrounding the impact of pre-training on the fine-tuning process. Currently, experimental findings based on text and image data lack consensus. To delve deeper into the unsupervised pre-training followed by fine-tuning paradigm, we extend previous research to a new modality: time series. In this study, we conduct a thorough examination of 150 classification datasets derived from the Univariate Time Series (UTS) and Multivariate Time Series (MTS) benchmarks. Our analysis reveals several key conclusions. (i) Pre-training helps improve the optimization process only for models that fit the data poorly, not for those that fit the data well. (ii) Pre-training does not exhibit a regularization effect when sufficient training time is given. (iii) Pre-training speeds up convergence only if the model has sufficient capacity to fit the data. (iv) Adding more pre-training data does not improve generalization, but it can strengthen the advantage of pre-training on the original data volume, such as faster convergence. (v) While both the pre-training task and the model structure determine the effectiveness of the paradigm on a given dataset, the model structure plays the more significant role.

    FreeAL: Towards Human-Free Active Learning in the Era of Large Language Models

    Collecting high-quality labeled data for model training is notoriously time-consuming and labor-intensive for many NLP tasks. While numerous solutions, such as active learning for small language models (SLMs) and the in-context learning now prevalent in the era of large language models (LLMs), have been proposed and alleviate the labeling burden to some extent, their performance still depends on human intervention, and how to reduce annotation cost in the LLM era remains underexplored. To bridge this gap, we revolutionize traditional active learning and propose FreeAL, an innovative collaborative learning framework that interactively distills and filters task-specific knowledge from LLMs. During collaborative training, an LLM serves as an active annotator that imparts its coarse-grained knowledge, while a downstream SLM acts as a student that filters out high-quality in-context samples and feeds them back to the LLM for subsequent label refinement. Extensive experiments on eight benchmark datasets demonstrate that FreeAL largely enhances zero-shot performance for both the SLM and the LLM without any human supervision. The code is available at https://github.com/Justherozen/FreeAL . Comment: Accepted to EMNLP 2023 (Main conference)
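    The collaborative loop can be caricatured in a few lines (everything here is a toy stand-in: a noisy labeler plays the LLM, a nearest-centroid classifier plays the SLM, and agreement filtering plays the sample-selection step):

```python
import numpy as np

rng = np.random.default_rng(0)

# Two well-separated Gaussian clusters as an unlabeled "dataset".
X = rng.normal(size=(200, 2)) + np.repeat([[0, 0], [4, 4]], 100, axis=0)
true_y = np.repeat([0, 1], 100)

def llm_annotate(y, noise=0.2):
    """Hypothetical LLM annotator: correct labels flipped with prob. `noise`."""
    flip = rng.random(len(y)) < noise
    return np.where(flip, 1 - y, y)

noisy_y = llm_annotate(true_y)

# "SLM" step: fit class centroids on the noisy labels, then keep only
# samples whose noisy label the classifier agrees with (a crude stand-in
# for the paper's high-quality sample selection).
centroids = np.stack([X[noisy_y == c].mean(axis=0) for c in (0, 1)])
pred = np.argmin(((X[:, None, :] - centroids) ** 2).sum(-1), axis=1)
keep = pred == noisy_y

acc_before = float((noisy_y == true_y).mean())
acc_after = float((noisy_y[keep] == true_y[keep]).mean())
print(f"label accuracy: {acc_before:.2f} -> {acc_after:.2f} on kept samples")
```

    In FreeAL proper, the kept samples would be returned to the LLM as in-context demonstrations, and the annotate-filter cycle would repeat until the labels stabilize.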