18 research outputs found

    Test of Time: Instilling Video-Language Models with a Sense of Time

    Modelling and understanding time remains a challenge in contemporary video understanding models. With language emerging as a key driver towards powerful generalization, it is imperative for foundational video-language models to have a sense of time. In this paper, we consider a specific aspect of temporal understanding: consistency of time order as elicited by before/after relations. We establish that seven existing video-language models struggle to understand even such simple temporal relations. We then question whether it is feasible to equip these foundational models with temporal awareness without re-training them from scratch. Towards this, we propose a temporal adaptation recipe on top of one such model, VideoCLIP, based on post-pretraining on a small amount of video-text data. We conduct a zero-shot evaluation of the adapted models on six datasets for three downstream tasks which require varying degrees of time awareness. We observe encouraging performance gains, especially when the task needs higher time awareness. Our work serves as a first step towards probing and instilling a sense of time in existing video-language models without the need for data- and compute-intensive training from scratch. Comment: Accepted for publication at CVPR 2023. Project page: https://bpiyush.github.io/testoftime-website/index.htm
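
    To make the before/after probe concrete, here is a minimal sketch of how time-order consistency could be checked with a contrastive video-language model: score a video against a caption and its event-swapped counterpart and ask whether the correctly ordered caption wins. The encode_video/encode_text interface, the caption templates, and the model object are assumptions for illustration only, not the paper's actual VideoCLIP code.

    # Minimal sketch of a before/after time-order consistency probe.
    # Assumes a generic contrastive video-language model exposing
    # encode_video() and encode_text(); NOT the paper's VideoCLIP API.
    import numpy as np

    def cosine(a, b):
        return float(np.dot(a, b) / (np.linalg.norm(a) * np.linalg.norm(b)))

    def time_order_consistent(model, video, event_a, event_b):
        """Return True if the model prefers the caption with the correct event order."""
        correct = f"{event_a} before {event_b}"
        swapped = f"{event_b} before {event_a}"
        v = model.encode_video(video)
        score_correct = cosine(v, model.encode_text(correct))
        score_swapped = cosine(v, model.encode_text(swapped))
        return score_correct > score_swapped

    Averaging this binary outcome over a set of videos with annotated event pairs gives the kind of time-order consistency measure on which the seven models are reported to struggle.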

    Poly(n-butylcyanoacrylate) nanoparticles for oral delivery of quercetin: preparation, characterization, and pharmacokinetics and biodistribution studies in Wistar rats

    Mayur Bagad, Zaved Ahmed Khan
    Medical Biotechnology Division, School of Biosciences and Technology, VIT University, Vellore, Tamil Nadu, India
    Background: Quercetin (QT) is a potent bioflavonol and antioxidant with poor bioavailability and very low distribution in the brain. A new oral delivery system comprising poly(n-butylcyanoacrylate) nanoparticles (PBCA NPs) was introduced to improve the oral bioavailability of QT and to increase its distribution in the brain.
    Objective: This study aimed to investigate the physicochemical characteristics, in vitro release, stability in simulated gastric and intestinal fluids, and pharmacokinetics and biodistribution of QT-loaded PBCA NPs coated with polysorbate-80 (P-80).
    Results: QT-PBCA NPs and QT-PBCA NPs coated with P-80 (QT-PBCA+P-80) had mean particle sizes of 161.1±0.44 nm and 166.6±0.33 nm, respectively, and appeared spherical under transmission electron microscopy. The mean entrapment efficiency was 79.86%±0.45% for QT-PBCA NPs and 74.58%±1.44% for QT-PBCA+P-80. In vitro, both formulations showed an initial burst release followed by sustained release compared to free QT. QT-PBCA NPs and QT-PBCA+P-80 enhanced the relative bioavailability of QT by 2.38- and 4.93-fold, respectively, compared to free QT. The biodistribution study in rats showed that a higher concentration of QT was detected in the brain after the NPs were coated with P-80.
    Conclusion: PBCA NPs coated with P-80 are potential carriers for poorly water-soluble drugs; they improve oral bioavailability and enhance transport to the brain.
    Keywords: bioavailability, biodistribution, nanoparticles, pharmacokinetics, poly(n-butylcyanoacrylate), quercetin
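
    For context on the 2.38- and 4.93-fold figures: relative bioavailability is conventionally defined as a dose-normalized ratio of areas under the plasma concentration-time curve (AUC). The expression below is the standard pharmacokinetic definition, not a formula quoted from this paper's methods.

    \[
      F_{\mathrm{rel}}
        = \frac{\mathrm{AUC}_{\mathrm{test}} \cdot \mathrm{Dose}_{\mathrm{ref}}}
               {\mathrm{AUC}_{\mathrm{ref}} \cdot \mathrm{Dose}_{\mathrm{test}}}
    \]

    At equal doses this reduces to the plain AUC ratio, so a relative bioavailability of 4.93 for QT-PBCA+P-80 means roughly five times the plasma exposure obtained with free QT.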

    How Severe is Benchmark-Sensitivity in Video Self-Supervised Learning?

    Despite the recent success of video self-supervised learning models, there is much still to be understood about their generalization capability. In this paper, we investigate how sensitive video self-supervised learning is to the current conventional benchmark and whether methods generalize beyond the canonical evaluation setting. We do this across four different factors of sensitivity: domain, samples, actions and task. Our study, which encompasses over 500 experiments on 7 video datasets, 9 self-supervised methods and 6 video understanding tasks, reveals that current benchmarks in video self-supervised learning are not good indicators of generalization along these sensitivity factors. Further, we find that self-supervised methods considerably lag behind vanilla supervised pre-training, especially when the domain shift is large and the number of available downstream samples is low. From our analysis, we distill the SEVERE-benchmark, a subset of our experiments, and discuss its implications for evaluating the generalizability of representations obtained by existing and future self-supervised video learning methods. Code is available at https://github.com/fmthoker/SEVERE-BENCHMARK
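
    As a rough illustration of how such a sensitivity study is organized, the sketch below sweeps pretrained methods over downstream settings grouped by the four sensitivity factors named above and collects one score per combination. All dataset, task, and method names are placeholders, and finetune_and_evaluate is a user-supplied stub; the actual evaluation code lives in the linked SEVERE-BENCHMARK repository.

    # Sketch of a benchmark sweep grouped by sensitivity factor.
    # Placeholder names throughout; NOT the actual SEVERE-BENCHMARK code.
    from itertools import product

    PRETRAINED_METHODS = ["supervised_baseline", "ssl_method_a", "ssl_method_b"]
    # Each sensitivity factor maps to downstream (dataset, task) settings that probe it.
    SENSITIVITY_FACTORS = {
        "domain":  [("shifted_domain_dataset", "action_recognition")],
        "samples": [("low_sample_subset", "action_recognition")],
        "actions": [("fine_grained_action_dataset", "action_recognition")],
        "task":    [("downstream_dataset", "task_beyond_recognition")],
    }

    def run_benchmark(finetune_and_evaluate):
        """Sweep every method over every setting.

        finetune_and_evaluate(method, dataset, task) is user-supplied and
        returns a scalar downstream score."""
        results = []
        for factor, settings in SENSITIVITY_FACTORS.items():
            for method, (dataset, task) in product(PRETRAINED_METHODS, settings):
                score = finetune_and_evaluate(method, dataset, task)
                results.append({"factor": factor, "method": method,
                                "dataset": dataset, "task": task, "score": score})
        return results

    Comparing the self-supervised rows against the supervised baseline within each factor is what reveals the generalization gaps the abstract describes.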

    Assessment of IS Integration Efforts to Implement the Internet of Production Reference Architecture

    Part 9: Information Systems Integration. As part of a collaborative network, manufacturing companies are required to be agile and to accelerate their decision making. To do so, a large amount of data is available and needs to be utilized. To enable this from a company-internal information system perspective, the Internet of Production (IoP) describes a future information system (IS) architecture. A core element of the IoP is a digital platform that forms the basis for a network of cognitive systems. To implement the IoP and develop it further, manufacturing companies need to make architecture-related decisions concerning the accessibility of data, the processing of the data, and the visualization of the resulting information. The goal of this research is the development of a decision-support methodology for making those decisions while taking the evaluated IS integration effort into consideration. To this end, this paper describes the allocation of IS functions and identifies the effort drivers for the respective IS integration by analyzing the integration possibilities. Finally, the approach is validated in a case study.