    APOE-ε4 genes may accelerate the activation of the latent form of HSV-1, which would lead to a faster progression of AD

    This study investigates the impact of APOE alleles and the reactivation of latent herpes simplex virus type 1 (HSV-1) on Alzheimer’s disease (AD) progression using the 5xFAD mouse model. APOE ε4 is recognized as a substantial genetic risk factor for sporadic AD, while HSV-1 has been linked to AD pathogenesis through inflammation and plaque formation. The experimental approach involves introducing human neurons carrying latent HSV-1 into 5xFAD mice harboring different APOE alleles (APOE2, APOE3, APOE4), along with stress induction and pharmacological interventions. The study aims to elucidate the combined impact of these variables on AD progression and the formation of Aβ plaques. We anticipate that APOE ε4 will accelerate AD development, especially in conjunction with HSV-1 reactivation, while APOE ε2 may exert a mitigating influence. These findings have the potential to advance our understanding of the intricate mechanisms underpinning AD and to inform potential therapeutic approaches. Further exploration of these interactions could offer critical insights in the pursuit of effective AD treatments.

    Improved production of chlorogenic acid from cell suspension cultures of Lonicera macranthoides

    Purpose: To evaluate the potential of a Lonicera macranthoides Hand.-Mazz. “Yulei1” suspension culture system for enhanced production of the main secondary metabolite, chlorogenic acid.

    Methods: Callus of L. macranthoides Hand.-Mazz. “Yulei1” was suspension-cultured in B5 liquid medium supplemented with different plant growth regulators. Biomass accumulation was determined gravimetrically, and chlorogenic acid production was measured by high-performance liquid chromatography (HPLC) on a C18 analytical column at 35 °C with the detection wavelength set at 324 nm.

    Results: Maximum accumulation of biomass and chlorogenic acid was achieved 15 days after culture initiation. The optimized conditions for biomass accumulation and chlorogenic acid production were 50 g/L of inoculum on a fresh-weight basis, B5 medium supplemented with plant growth regulators, 30 - 40 g/L sucrose, and an initial medium pH of 5.5. Maximum accumulation of both chlorogenic acid and biomass was observed when the culture medium was supplemented with 2.0 mg/L 6-BA. Among hormone combinations, chlorogenic acid accumulation was highest with 2.0 mg/L 6-benzyladenine (6-BA) + 0.5 mg/L 2,4-dichlorophenoxyacetic acid (2,4-D), while biomass accumulation was highest with 2.0 mg/L 6-BA + 2.0 mg/L 2,4-D. In addition, phenylalanine at concentrations > 50 mg/L also contributed to the synthesis of chlorogenic acid.

    Conclusion: Cell suspension cultures of L. macranthoides Hand.-Mazz. “Yulei1” have been successfully established. The findings provide a potential basis for large-scale production of chlorogenic acid using cell suspension cultures of L. macranthoides.

    Keywords: Lonicera macranthoides, Cell suspension culture, Chlorogenic acid, Phenylalanine, Optimization

    Jointly Modeling Heterogeneous Student Behaviors and Interactions Among Multiple Prediction Tasks

    Prediction tasks about students have practical significance for both students and colleges. Making multiple predictions about students is an important part of a smart campus. For instance, predicting whether a student will fail to graduate can alert the student affairs office to take preventive measures to help the student improve his/her academic performance. With the development of information technology in colleges, we can continuously collect digital footprints that encode heterogeneous behaviors. In this paper, we focus on modeling heterogeneous behaviors and making multiple predictions together, since some prediction tasks are related and learning a model for a single task may suffer from data sparsity. To this end, we propose a variant of LSTM and a soft-attention mechanism. The proposed LSTM is able to learn student profile-aware representations from heterogeneous behavior sequences. The proposed soft-attention mechanism can dynamically learn the importance of different days for every student. In this way, heterogeneous behaviors can be well modeled. To model interactions among multiple prediction tasks, we propose a co-attention-based unit. With the help of stacked units, we can explicitly control knowledge transfer among multiple tasks. We design three motivating behavior prediction tasks based on a real-world dataset collected from a college. Qualitative and quantitative experiments on the three prediction tasks demonstrate the effectiveness of our model.
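    The day-weighting idea can be sketched as a soft-attention step. This is a minimal illustration, not the paper's architecture: per-day behavior vectors are scored against a student profile vector, and softmax weights give each day's importance. All shapes and the dot-product scoring function are illustrative assumptions.

```python
import numpy as np

def soft_attention(day_states, profile):
    """day_states: (T, d) daily behavior representations; profile: (d,) query.
    Returns per-day attention weights (T,) and the weighted summary (d,)."""
    scores = day_states @ profile            # relevance score for each day
    scores -= scores.max()                   # shift for numerical stability
    weights = np.exp(scores)
    weights /= weights.sum()                 # softmax over the T days
    return weights, weights @ day_states

rng = np.random.default_rng(0)
days = rng.normal(size=(7, 4))               # one week of daily behavior vectors
profile = rng.normal(size=4)                 # student-profile vector as the query
w, summary = soft_attention(days, profile)   # w sums to 1; summary has shape (4,)
```

    In a trained model the profile query and day states would come from the proposed LSTM; here they are random placeholders that only demonstrate the weighting mechanics.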

    Model Predictive Control of NCS with Data Quantization and Bounded Arbitrary Time Delays

    Model predictive control of constrained discrete-time linear systems in a networked environment is considered. Bounded time delays and data quantization are assumed to coexist in the data transmission link from the sensor to the controller. A novel NCS model tailored to the model predictive control method is established, which casts the time delay and data quantization into a unified framework. A stability result for the resulting closed-loop model is derived via the Lyapunov method, which plays a key role in synthesizing the model predictive controller. The controller, which parameterizes the infinite-horizon control moves into a single state-feedback law, explicitly accounts for the satisfaction of input and state constraints. Two numerical examples illustrate the effectiveness of the proposed method.
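    The setting can be simulated in a few lines. This is an illustrative sketch, not the paper's synthesis procedure: sensor measurements pass through a uniform quantizer and a (here simplified to constant, one-step) transmission delay before reaching a saturated state-feedback law. The matrices A and B, the gain K, the quantizer step, and the input bound are all assumed values for demonstration.

```python
import numpy as np

def quantize(x, step=0.1):
    """Uniform mid-tread quantizer applied elementwise."""
    return step * np.round(np.asarray(x) / step)

A = np.array([[1.0, 0.1],
              [0.0, 1.0]])                  # discretized double integrator
B = np.array([[0.005],
              [0.1]])
K = np.array([[-1.0, -1.0]])                # assumed stabilizing feedback gain
u_max = 2.0                                 # input constraint |u| <= u_max

x = np.array([1.0, 0.0])
link = [x.copy(), x.copy()]                 # FIFO buffer models a one-step delay
for _ in range(300):
    received = link.pop(0)                  # delayed measurement arrives
    u = np.clip(K @ quantize(received), -u_max, u_max)
    x = A @ x + (B @ u).ravel()
    link.append(x.copy())
# With a stabilizing K, the state converges to a small neighborhood of the
# origin whose size is set by the quantization step.
```

    An online MPC scheme would recompute a feedback law at each step subject to the constraints; the fixed K above only stands in for that computed law.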

    H-VFI: Hierarchical Frame Interpolation for Videos with Large Motions

    Capitalizing on the rapid development of neural networks, recent video frame interpolation (VFI) methods have achieved notable improvements. However, they still fall short for real-world videos containing large motions. The complex deformation and/or occlusion caused by large motions make this an extremely difficult problem in video frame interpolation. In this paper, we propose a simple yet effective solution, H-VFI, to deal with large motions in video frame interpolation. H-VFI contributes a hierarchical video interpolation transformer (HVIT) that learns a deformable kernel in a coarse-to-fine strategy across multiple scales. The learnt deformable kernel is then used to convolve the input frames for predicting the interpolated frame. Starting from the smallest scale, H-VFI updates the deformable kernel by successive residuals based on the previously predicted kernels, intermediate interpolated results, and hierarchical features from the transformer. Biases and masks to refine the final outputs are then predicted by a transformer block based on the interpolated results. The advantage of such a progressive approximation is that the large-motion frame interpolation problem can be decomposed into several relatively simpler sub-tasks, enabling very accurate final predictions. Another noteworthy contribution of our paper is a large-scale, high-quality dataset, YouTube200K, which contains videos depicting a great variety of scenarios captured at high resolution and high frame rate. Extensive experiments on multiple frame interpolation benchmarks validate that H-VFI outperforms existing state-of-the-art methods, especially for videos with large motions.
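    The coarse-to-fine residual idea can be shown with a toy example (not the HVIT transformer itself): the estimate from a coarser scale is upsampled and corrected by a residual at the next scale, so one large correction is decomposed into several smaller, easier ones. The "predicted" residual here is simply the remaining error, a stand-in for a learned predictor; the target array, scales, and nearest-neighbour upsampling are illustrative assumptions.

```python
import numpy as np

def upsample(x, factor):
    return np.repeat(x, factor)              # nearest-neighbour upsampling

target = np.linspace(0.0, 10.0, 16)          # stands in for the true kernel
estimate = np.zeros(1)                       # coarsest-scale initial guess
for size in (1, 4, 16):                      # scales, coarse to fine
    estimate = upsample(estimate, size // len(estimate))
    residual = target[:: 16 // size] - estimate   # residual at this scale
    estimate = estimate + residual
# After the final scale, `estimate` matches `target` exactly in this toy;
# each stage only corrected detail not visible one scale coarser.
```

    In H-VFI the residual at each scale would instead be predicted by the transformer from intermediate interpolation results and hierarchical features.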

    GPT Understands, Too

    While GPTs with traditional fine-tuning fail to achieve strong results on natural language understanding (NLU), we show that GPTs can be better than or comparable to similar-sized BERTs on NLU tasks with a novel method, P-tuning, which employs trainable continuous prompt embeddings. On the knowledge probing (LAMA) benchmark, the best GPT recovers 64% (P@1) of world knowledge without any additional text provided during test time, which substantially improves on the previous best by more than 20 percentage points. On the SuperGlue benchmark, GPTs achieve comparable and sometimes better performance than similar-sized BERTs in supervised learning. Importantly, we find that P-tuning also improves BERTs' performance in both few-shot and supervised settings while largely reducing the need for prompt engineering. Consequently, P-tuning outperforms the state-of-the-art approaches on the few-shot SuperGlue benchmark.
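    The input construction behind P-tuning can be sketched minimally: a small matrix of continuous "pseudo-token" prompt embeddings is prepended to the ordinary token embeddings, and only that matrix would be trained while the language model stays frozen. The vocabulary size, embedding width, prompt length, and token ids below are illustrative assumptions, not values from the paper.

```python
import numpy as np

rng = np.random.default_rng(0)
vocab_size, d_model, prompt_len = 100, 8, 3

embedding_table = rng.normal(size=(vocab_size, d_model))    # frozen token embeddings
prompt_embeddings = rng.normal(size=(prompt_len, d_model))  # trainable, continuous

token_ids = np.array([5, 17, 42])            # a short input sequence
token_embs = embedding_table[token_ids]      # look up the frozen embeddings

# Sequence fed to the frozen model: [p_1 .. p_k, e(x_1) .. e(x_n)]
model_input = np.concatenate([prompt_embeddings, token_embs], axis=0)
```

    Because the prompt vectors live in the continuous embedding space rather than the discrete vocabulary, they can be optimized directly by gradient descent, which is what removes much of the manual prompt engineering.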