170 research outputs found

    Nonoscillatory solutions for super-linear Emden-Fowler type dynamic equations on time scales

    In this paper, we consider the following Emden-Fowler type dynamic equation on time scales \begin{equation*} \big(a(t)|x^\Delta(t)|^\alpha \operatorname{sgn} x^\Delta(t)\big)^\Delta+b(t)|x(t)|^\beta \operatorname{sgn}x(t)=0, \end{equation*} when $\alpha<\beta$. The classification of the nonoscillatory solutions is investigated, and necessary and sufficient conditions for the existence of oscillatory and nonoscillatory solutions are given by means of the Schauder-Tychonoff fixed point theorem. Three possibilities for two classes of double integrals are put forward; these integrals are not only related to the coefficients of the equation but also linked with the classification of the nonoscillatory solutions and the oscillation of solutions. Moreover, an important property of the intermediate solutions on time scales is indicated. Finally, an example is given to illustrate the main results.
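
    For orientation, a hedged sketch of the kind of double integrals such classifications are usually phrased in terms of; the exact pair used in the paper, the lower limit $t_0$, and the exponents are assumptions here and may differ:

    \begin{equation*}
    Y_1=\int_{t_0}^{\infty}\Big(\frac{1}{a(t)}\int_{t}^{\infty} b(s)\,\Delta s\Big)^{1/\alpha}\Delta t,
    \qquad
    Y_2=\int_{t_0}^{\infty} b(t)\Big(\int_{t_0}^{t}\Big(\frac{1}{a(s)}\Big)^{1/\alpha}\Delta s\Big)^{\beta}\Delta t.
    \end{equation*}

    Whether each of these integrals converges or diverges is the type of coefficient condition that typically separates the different classes of nonoscillatory solutions (e.g., subdominant, intermediate, and dominant ones) in the super-linear case.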

    On the Mathematics of RNA Velocity II: Algorithmic Aspects

    In a previous paper [CSIAM Trans. Appl. Math. 2 (2021), 1-55], the authors proposed a theoretical framework for the analysis of RNA velocity, which is a promising concept in scRNA-seq data analysis to reveal the cell state-transition dynamical processes underlying snapshot data. The current paper is devoted to the algorithmic study of some key components in the RNA velocity workflow. Four important points are addressed in this paper: (1) We construct a rational time-scale fixation method which can determine the global gene-shared latent time for cells. (2) We present an uncertainty quantification strategy for the inferred parameters obtained through the EM algorithm. (3) We establish the optimal criterion for the choice of velocity kernel bandwidth with respect to the sample size in the downstream analysis and discuss its implications. (4) We propose a temporal distance estimation approach between two cell clusters along the cellular development path. Some illustrative numerical tests are also carried out to verify our analysis. These results are intended to provide tools and insights for further development of RNA velocity type methods in the future. Comment: 32 pages, 5 figures
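
    To make point (3) concrete, a minimal sketch of a velocity kernel with an explicit bandwidth parameter, assuming cells are embedded in a low-dimensional space with matching velocity vectors; the function name, the cosine-alignment form, and the exponential kernel are illustrative assumptions, not the paper's estimator:

    import numpy as np

    def velocity_transition_kernel(X, V, bandwidth=0.1):
        """Hypothetical sketch: cell-to-cell transition probabilities from velocity.

        X : (n_cells, n_dims) low-dimensional cell coordinates
        V : (n_cells, n_dims) velocity vectors in the same space
        bandwidth : kernel temperature; smaller values concentrate transitions
                    on the neighbours best aligned with the velocity direction
        """
        n = X.shape[0]
        P = np.zeros((n, n))
        for i in range(n):
            delta = X - X[i]                                  # displacement to every cell
            norms = np.linalg.norm(delta, axis=1) * np.linalg.norm(V[i]) + 1e-12
            cos = (delta @ V[i]) / norms                      # alignment with the velocity
            w = np.exp(cos / bandwidth)                       # exponential kernel
            w[i] = 0.0                                        # no self-transition
            P[i] = w / w.sum()
        return P

    Sweeping the bandwidth shows how the downstream transition matrix sharpens or smooths, which is the trade-off whose sample-size-dependent optimum the paper analyzes.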

    A novel Toxoplasma gondii TGGT1_316290 mRNA-LNP vaccine elicits protective immune response against toxoplasmosis in mice

    Toxoplasma gondii (T. gondii) can infect almost all warm-blooded animals and is a major threat to global public health. Currently, there is no effective drug or vaccine for T. gondii. In this study, bioinformatics analysis of B and T cell epitopes revealed that TGGT1_316290 (TG290) had superior effects compared with the surface antigen 1 (SAG1). The TG290 mRNA-LNP vaccine was constructed using lipid nanoparticle (LNP) technology and injected intramuscularly into BALB/c mice, and its immunogenicity and efficacy were explored. Analysis of antibodies, cytokines (IFN-γ, IL-12, IL-4, and IL-10), lymphocyte proliferation, cytotoxic T lymphocyte activity, dendritic cell (DC) maturation, as well as CD4+ and CD8+ T lymphocytes revealed that TG290 mRNA-LNP induced humoral and cellular immune responses in vaccinated mice. Furthermore, T-Box 21 (T-bet), nuclear factor kappa B (NF-κB) p65, and the interferon regulatory factor 8 (IRF8) subunit were over-expressed in the TG290 mRNA-LNP-immunized group. The survival time of mice injected with TG290 mRNA-LNP (18.7 ± 3 days) was significantly longer than that of mice in the control groups (p < 0.0001). In addition, adoptive immunization using 300 μl of serum and 5×10^7 lymphocytes from mice immunized with TG290 mRNA-LNP significantly prolonged the survival time of the recipient mice. This study demonstrates that TG290 mRNA-LNP induces a specific immune response against T. gondii and may be a potential vaccine candidate against toxoplasmosis.
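
    As a minimal sketch of how such a survival comparison can be computed (assuming the lifelines package; the helper name and the fully-observed-event assumption are illustrative, and no study data are reproduced here):

    import numpy as np
    from lifelines.statistics import logrank_test

    def compare_survival(days_vaccinated, days_control):
        """Hypothetical helper: log-rank comparison of two groups' survival times.

        Both groups are assumed fully observed (every mouse reaches the endpoint),
        so the event indicators are all ones.
        """
        a = np.asarray(days_vaccinated, dtype=float)
        b = np.asarray(days_control, dtype=float)
        result = logrank_test(a, b,
                              event_observed_A=np.ones_like(a),
                              event_observed_B=np.ones_like(b))
        return result.p_value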

    Unmasked Teacher: Towards Training-Efficient Video Foundation Models

    Video Foundation Models (VFMs) have received limited exploration due to high computational costs and data scarcity. Previous VFMs rely on Image Foundation Models (IFMs), which face challenges in transferring to the video domain. Although VideoMAE has trained a robust ViT from limited data, its low-level reconstruction poses convergence difficulties and conflicts with high-level cross-modal alignment. This paper proposes a training-efficient method for temporal-sensitive VFMs that integrates the benefits of existing methods. To increase data efficiency, we mask out most of the low-semantics video tokens, but selectively align the unmasked tokens with the IFM, which serves as the UnMasked Teacher (UMT). By providing semantic guidance, our method enables faster convergence and multimodal friendliness. With a progressive pre-training framework, our model can handle various tasks including scene-related, temporal-related, and complex video-language understanding. Using only public sources for pre-training in 6 days on 32 A100 GPUs, our scratch-built ViT-L/16 achieves state-of-the-art performance on various video tasks. The code and models will be released at https://github.com/OpenGVLab/unmasked_teacher. Comment: 16 pages, 5 figures, 28 tables
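
    A minimal sketch of the masking-and-alignment idea described above, assuming a frozen image teacher and a student backbone with the shapes noted in the docstring; the uniform masking rule and the normalized MSE loss are simplifications, not the exact UMT recipe:

    import torch
    import torch.nn.functional as F

    def unmasked_alignment_loss(video_tokens, student, teacher, keep_ratio=0.2):
        """Hypothetical sketch: align unmasked student tokens with a frozen teacher.

        video_tokens : (B, N, D) patch/token embeddings of a video clip
        student      : module mapping (B, K, D) -> (B, K, D) features
        teacher      : frozen module mapping (B, N, D) -> (B, N, D) features
        """
        B, N, D = video_tokens.shape
        k = max(1, int(N * keep_ratio))

        # Keep a small random subset of tokens per sample (the real method would
        # drop low-semantics tokens by an importance score; uniform here for brevity).
        idx = torch.rand(B, N, device=video_tokens.device).argsort(dim=1)[:, :k]
        gather = idx.unsqueeze(-1).expand(-1, -1, D)
        kept = video_tokens.gather(1, gather)                  # (B, k, D)

        with torch.no_grad():
            target = teacher(video_tokens).gather(1, gather)   # teacher sees all tokens

        pred = student(kept)                                   # student sees only kept tokens
        # Align the two views in a normalized feature space.
        return F.mse_loss(F.normalize(pred, dim=-1), F.normalize(target, dim=-1))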

    Harvest Video Foundation Models via Efficient Post-Pretraining

    Building video-language foundation models is costly and difficult due to the redundant nature of video data and the lack of high-quality video-language datasets. In this paper, we propose an efficient framework to harvest video foundation models from image ones. Our method is intuitively simple: we randomly drop input video patches and mask out input text during the post-pretraining procedure. Patch dropping significantly boosts training efficiency, and text masking enforces the learning of cross-modal fusion. We conduct extensive experiments to validate the effectiveness of our method on a wide range of video-language downstream tasks, including various zero-shot tasks, video question answering, and video-text retrieval. Despite its simplicity, our method achieves state-of-the-art performance, comparable to some heavily pretrained video foundation models. Our method is extremely efficient and can be trained in less than one day on 8 GPUs, requiring only WebVid-10M as pretraining data. We hope our method can serve as a simple yet strong counterpart for prevalent video foundation models, provide useful insights when building them, and make large pretrained models more accessible and sustainable. This is part of the InternVideo project \url{https://github.com/OpenGVLab/InternVideo}.
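
    A minimal sketch of the two ingredients just described, assuming patch embeddings and text token ids as inputs; the helper name, the ratios, and the uniform masking are illustrative, not the paper's settings:

    import torch

    def drop_patches_and_mask_text(video_patches, text_ids, mask_token_id,
                                   patch_keep_ratio=0.3, text_mask_ratio=0.15):
        """Hypothetical sketch: random patch dropping plus text token masking.

        video_patches : (B, N, D) patch embeddings
        text_ids      : (B, L) token ids
        """
        B, N, D = video_patches.shape
        k = max(1, int(N * patch_keep_ratio))

        # Keep a random subset of patches per video -> shorter sequences, cheaper attention.
        keep_idx = torch.rand(B, N, device=video_patches.device).argsort(dim=1)[:, :k]
        kept_patches = video_patches.gather(
            1, keep_idx.unsqueeze(-1).expand(-1, -1, D))

        # Replace a fraction of text tokens with the mask token so that the
        # cross-modal fusion layers must fill them in from the video side.
        masked_ids = text_ids.clone()
        mask = torch.rand_like(text_ids, dtype=torch.float) < text_mask_ratio
        masked_ids[mask] = mask_token_id

        return kept_patches, masked_ids, mask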

    InternVid: A Large-scale Video-Text Dataset for Multimodal Understanding and Generation

    This paper introduces InternVid, a large-scale video-centric multimodal dataset that enables learning powerful and transferable video-text representations for multimodal understanding and generation. The InternVid dataset contains over 7 million videos lasting nearly 760K hours, yielding 234M video clips accompanied by detailed descriptions totaling 4.1B words. Our core contribution is to develop a scalable approach to autonomously build a high-quality video-text dataset with large language models (LLMs), thereby showcasing its efficacy in learning video-language representation at scale. Specifically, we utilize a multi-scale approach to generate video-related descriptions. Furthermore, we introduce ViCLIP, a video-text representation learning model based on ViT-L. Trained on InternVid via contrastive learning, this model demonstrates leading zero-shot action recognition and competitive video retrieval performance. Beyond basic video understanding tasks like recognition and retrieval, our dataset and model have broad applications. They are particularly beneficial for generating interleaved video-text data for learning a video-centric dialogue system, advancing video-to-text and text-to-video generation research. These proposed resources provide a tool for researchers and practitioners interested in multimodal video understanding and generation. Comment: Data and Code: https://github.com/OpenGVLab/InternVideo/tree/main/Data/InternVi
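
    For context, a minimal sketch of the symmetric contrastive objective a CLIP-style video-text model such as ViCLIP is typically trained with; the fixed temperature and the absence of cross-GPU gathering are simplifications of a real training setup:

    import torch
    import torch.nn.functional as F

    def video_text_contrastive_loss(video_emb, text_emb, temperature=0.07):
        """Hypothetical sketch: symmetric InfoNCE loss over matched video/text pairs.

        video_emb, text_emb : (B, D) pooled embeddings of paired clips and captions
        """
        v = F.normalize(video_emb, dim=-1)
        t = F.normalize(text_emb, dim=-1)
        logits = v @ t.T / temperature              # (B, B) similarity matrix
        labels = torch.arange(v.shape[0], device=v.device)
        # Matched pairs sit on the diagonal; contrast video->text and text->video.
        return 0.5 * (F.cross_entropy(logits, labels) +
                      F.cross_entropy(logits.T, labels))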