104 research outputs found

    Generalized and Incremental Few-Shot Learning by Explicit Learning and Calibration without Forgetting


    Leveraging Self-Supervised Training for Unintentional Action Recognition

    Unintentional actions are rare occurrences that are difficult to define precisely and that are highly dependent on the temporal context of the action. In this work, we explore such actions and seek to identify the points in videos where the actions transition from intentional to unintentional. We propose a multi-stage framework that exploits inherent biases such as motion speed, motion direction, and order to recognize unintentional actions. To enhance representations via self-supervised training for the task of unintentional action recognition, we propose temporal transformations, called Temporal Transformations of Inherent Biases of Unintentional Actions (T2IBUA). The multi-stage approach models the temporal information on both the level of individual frames and full clips. These enhanced representations show strong performance for unintentional action recognition tasks. We provide an extensive ablation study of our framework and report results that significantly improve over the state-of-the-art.
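The abstract does not define the T2IBUA transformations concretely; a minimal sketch of the three inherent biases it names (motion speed, motion direction, and order), applied to a sequence of frame indices, might look like the following. All function names and parameter values here are illustrative assumptions, not the paper's implementation.

```python
import random

def speed_transform(frames, rate=2):
    """Simulate faster motion by keeping every `rate`-th frame."""
    return frames[::rate]

def reverse_transform(frames):
    """Simulate reversed motion direction by flipping the frame order."""
    return frames[::-1]

def shuffle_transform(frames, clip_len=4, seed=0):
    """Break temporal order by permuting fixed-length clips."""
    clips = [frames[i:i + clip_len] for i in range(0, len(frames), clip_len)]
    rng = random.Random(seed)
    rng.shuffle(clips)
    return [f for clip in clips for f in clip]

frames = list(range(12))              # stand-in for 12 video frames
fast = speed_transform(frames)        # [0, 2, 4, 6, 8, 10]
backward = reverse_transform(frames)  # [11, 10, ..., 0]
shuffled = shuffle_transform(frames)  # same frames, clip order permuted
```

A self-supervised model would then be trained to recognize which transformation was applied, forcing it to learn the temporal cues that also distinguish intentional from unintentional motion.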

    SSB: Simple but Strong Baseline for Boosting Performance of Open-Set Semi-Supervised Learning

    Semi-supervised learning (SSL) methods effectively leverage unlabeled data to improve model generalization. However, SSL models often underperform in open-set scenarios, where unlabeled data contain outliers from novel categories that do not appear in the labeled set. In this paper, we study the challenging and realistic open-set SSL setting, where the goal is both to correctly classify inliers and to detect outliers. Intuitively, the inlier classifier should be trained on inlier data only. However, we find that inlier classification performance can be largely improved by incorporating high-confidence pseudo-labeled data, regardless of whether they are inliers or outliers. Also, we propose to utilize non-linear transformations to separate the features used for inlier classification and outlier detection in the multi-task learning framework, preventing adverse effects between them. Additionally, we introduce pseudo-negative mining, which further boosts outlier detection performance. The three ingredients lead to what we call Simple but Strong Baseline (SSB) for open-set SSL. In experiments, SSB greatly improves both inlier classification and outlier detection performance, outperforming existing methods by a large margin. Our code will be released at https://github.com/YUE-FAN/SSB.
    Comment: Paper accepted in ICCV 202
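The abstract's key empirical finding, that high-confidence pseudo-labels help inlier classification whether they come from inliers or outliers, can be sketched as a plain confidence-threshold filter. The function name and the 0.95 threshold are illustrative assumptions, not the paper's values.

```python
import numpy as np

def select_pseudo_labels(probs, threshold=0.95):
    """Keep any unlabeled sample whose max inlier-class probability exceeds
    the threshold, without attempting to reject outliers first."""
    conf = probs.max(axis=1)
    keep = np.flatnonzero(conf >= threshold)
    return keep, probs.argmax(axis=1)[keep]

probs = np.array([[0.97, 0.02, 0.01],   # confident: kept
                  [0.40, 0.35, 0.25],   # uncertain: dropped
                  [0.05, 0.96, 0.04]])  # confident (possibly an outlier): kept
idx, labels = select_pseudo_labels(probs)
```

The counter-intuitive point the paper reports is that such a filter needs no separate inlier/outlier gate; confidence alone suffices for selecting useful pseudo-labels.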

    EFFECT OF KAOLIN ADDITION INTO METAKAOLIN GEOPOLYMER COMPOSITE

    Industrially produced metakaolin may contain raw kaolin residues. Therefore, the aim of this work was to determine the impact of kaolin remains on the metakaolin and the final geopolymer quality. A series of mixtures based on metakaolin (Mefisto L05 by CLUZ Nove Straseci, Czech Republic) was prepared with the gradual addition of 0-60 wt% of raw kaolin, and the mechanical strength of the final geopolymer products was tested. It was found that up to 20 wt% of kaolin in metakaolin does not weaken the geopolymer's performance. Moreover, a geopolymer made of metakaolin with 2-4 wt% of kaolin showed slightly better mechanical properties than the geopolymers made from pure metakaolin.

    Adhesins of the Plague Agent

    The plague agent possesses a complex of adhesins that anchor the pathogen to target cells in the host organism and in many ways define the onset, character, and development of the disease. Adhesins also enable the translocation of effector proteins into mammalian target cells. The review covers the literature data both on the most studied Yersinia pestis adhesins (Ail proteins and the pH6 antigen) and on recently identified autotransporter proteins of various classes involved in adhesion (YadBC, Yaps, IlpP). Their significance for plague pathogenesis, genetic determination, structure, and cellular localization are also described. It is noted that plague agent adhesins act at different phases of the infection process, are multifunctional, and not only anchor the pathogen to host cells but also confer resistance to host immune mechanisms.

    In-Style: Bridging Text and Uncurated Videos with Style Transfer for Text-Video Retrieval

    Large-scale noisy web image-text datasets have been proven to be efficient for learning robust vision-language models. However, when transferring them to the task of video retrieval, models still need to be fine-tuned on hand-curated paired text-video data to adapt to the diverse styles of video descriptions. To address this problem without the need for hand-annotated pairs, we propose a new setting, text-video retrieval with uncurated & unpaired data, that during training utilizes only text queries together with uncurated web videos without any paired text-video data. To this end, we propose an approach, In-Style, that learns the style of the text queries and transfers it to uncurated web videos. Moreover, to improve generalization, we show that one model can be trained with multiple text styles. To this end, we introduce a multi-style contrastive training procedure that improves the generalizability over several datasets simultaneously. We evaluate our model on retrieval performance over multiple datasets to demonstrate the advantages of our style transfer framework on the new task of uncurated & unpaired text-video retrieval and improve state-of-the-art performance on zero-shot text-video retrieval.
    Comment: Published at ICCV 2023, code: https://github.com/ninatu/in_styl
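The abstract does not spell out the contrastive objective used to pair pseudo-styled queries with web videos. A generic symmetric InfoNCE loss, a common choice in text-video retrieval, could be sketched as below; the function name and temperature value are assumptions, not the paper's settings.

```python
import numpy as np

def info_nce(text_emb, video_emb, tau=0.07):
    """Symmetric contrastive loss over a batch of matched text/video pairs.
    Row i of text_emb is assumed to match row i of video_emb."""
    t = text_emb / np.linalg.norm(text_emb, axis=1, keepdims=True)
    v = video_emb / np.linalg.norm(video_emb, axis=1, keepdims=True)
    logits = t @ v.T / tau
    # log-softmax in both directions; positives lie on the diagonal
    log_p_tv = logits - np.log(np.exp(logits).sum(axis=1, keepdims=True))
    log_p_vt = logits - np.log(np.exp(logits).sum(axis=0, keepdims=True))
    return -0.5 * (np.mean(np.diag(log_p_tv)) + np.mean(np.diag(log_p_vt)))

# Perfectly aligned embeddings give a loss near zero.
loss = info_nce(np.eye(4), np.eye(4))
```

In the multi-style setting described above, the text side of each pair would carry one of several learned query styles, so a single model sees all styles during contrastive training.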

    Temperature Schedules for Self-Supervised Contrastive Methods on Long-Tail Data

    Most approaches for self-supervised learning (SSL) are optimised on curated balanced datasets, e.g. ImageNet, despite the fact that natural data usually exhibits long-tail distributions. In this paper, we analyse the behaviour of one of the most popular variants of SSL, i.e. contrastive methods, on long-tail data. In particular, we investigate the role of the temperature parameter τ in the contrastive loss, by analysing the loss through the lens of average distance maximisation, and find that a large τ emphasises group-wise discrimination, whereas a small τ leads to a higher degree of instance discrimination. While τ has thus far been treated exclusively as a constant hyperparameter, in this work, we propose to employ a dynamic τ and show that a simple cosine schedule can yield significant improvements in the learnt representations. Such a schedule results in a constant 'task switching' between an emphasis on instance discrimination and group-wise discrimination and thereby ensures that the model learns both group-wise features, as well as instance-specific details. Since frequent classes benefit from the former, while infrequent classes require the latter, we find this method to consistently improve separation between the classes in long-tail data without any additional computational cost.
    Comment: ICLR 202
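A minimal sketch of such a dynamic τ, assuming a plain cosine oscillation between a small temperature (instance discrimination) and a large one (group-wise discrimination); the period and endpoint values here are illustrative, not the paper's.

```python
import math

def cosine_tau(step, period=1000, tau_min=0.1, tau_max=0.9):
    """Oscillate the contrastive temperature between tau_max (emphasising
    group-wise discrimination) and tau_min (emphasising instance
    discrimination) with a cosine of the given period in training steps."""
    phase = 2 * math.pi * step / period
    return tau_min + 0.5 * (tau_max - tau_min) * (1 + math.cos(phase))

# step 0 -> tau_max, step period/2 -> tau_min, then back again
tau = cosine_tau(0)    # 0.9
tau = cosine_tau(500)  # 0.1
```

The value returned each step would simply replace the constant τ in the contrastive loss, which is why the schedule adds no computational cost.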