178 research outputs found

    The chemokine and chemokine receptor superfamilies and their molecular evolution

    The human chemokine superfamily currently includes at least 46 ligands, which bind to 18 functionally signaling G-protein-coupled receptors and two decoy or scavenger receptors. The chemokine ligands probably comprise one of the first completely known molecular superfamilies. The genomic organization of the chemokine ligand genes and a comparison of their sequences between species show that tandem gene duplication has taken place independently in the mouse and human lineages of some chemokine families. This means that care needs to be taken when extrapolating experimental results on some chemokines from mouse to human.

    PillarNeSt: Embracing Backbone Scaling and Pretraining for Pillar-based 3D Object Detection

    This paper shows the effectiveness of 2D backbone scaling and pretraining for pillar-based 3D object detectors. Pillar-based methods mainly employ a randomly initialized 2D convolutional neural network (ConvNet) for feature extraction and therefore fail to benefit from backbone scaling and pretraining in the image domain. To show the scaling-up capacity on point clouds, we introduce dense ConvNets pretrained on large-scale image datasets (e.g., ImageNet) as the 2D backbone of pillar-based detectors. The ConvNets are adaptively designed based on the model size, according to the specific characteristics of point clouds such as sparsity and irregularity. Equipped with the pretrained ConvNets, our proposed pillar-based detector, termed PillarNeSt, outperforms existing 3D object detectors by a large margin on the nuScenes and Argoverse 2 datasets. Our code will be released upon acceptance.
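    As a rough illustration of the idea, the sketch below drops an ImageNet-pretrained torchvision ConvNeXt into the slot that a randomly initialized 2D backbone would normally occupy in a pillar-based detector. The PillarBEVBackbone wrapper, the 64-channel pillar pseudo-image, and the stem replacement are illustrative assumptions, not the actual PillarNeSt design.

```python
# Hypothetical sketch: reusing an ImageNet-pretrained 2D ConvNet as the
# backbone of a pillar-based detector. Only the stem is re-initialized to
# accept the BEV pseudo-image channels; the remaining stages keep their
# pretrained weights.
import torch
import torch.nn as nn
from torchvision.models import convnext_tiny, ConvNeXt_Tiny_Weights


class PillarBEVBackbone(nn.Module):
    """Wraps a pretrained image ConvNet so it accepts a C-channel BEV pseudo-image."""

    def __init__(self, bev_channels: int = 64):
        super().__init__()
        # Load an ImageNet-pretrained dense ConvNet as the 2D feature extractor.
        convnext = convnext_tiny(weights=ConvNeXt_Tiny_Weights.IMAGENET1K_V1)
        self.features = convnext.features
        # Swap the 3-channel RGB stem for one matching the pillar pseudo-image
        # (this new conv is randomly initialized, unlike the rest of the net).
        old_stem = self.features[0][0]  # first Conv2d of the stem block
        self.features[0][0] = nn.Conv2d(
            bev_channels,
            old_stem.out_channels,
            kernel_size=old_stem.kernel_size,
            stride=old_stem.stride,
        )

    def forward(self, bev_pseudo_image: torch.Tensor) -> torch.Tensor:
        # bev_pseudo_image: (batch, bev_channels, H, W) scattered pillar features.
        return self.features(bev_pseudo_image)


if __name__ == "__main__":
    backbone = PillarBEVBackbone(bev_channels=64)
    bev = torch.randn(1, 64, 256, 256)  # dummy pillar pseudo-image
    print(backbone(bev).shape)          # downsampled BEV feature map
```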

    Malignant transformation of diffuse infiltrating glial neoplasm after prolonged stable period initially discovered with hypothalamic hamartoma

    We present a case of malignant transformation of a diffuse infiltrating glial neoplasm after a prolonged stable period on magnetic resonance imaging (MRI) and spectroscopy (MRS), initially discovered together with a hypothalamic hamartoma. Although MRI and MRS suggest the possibility of malignant transformation in the future, they cannot precisely predict the timing of rapid growth.

    Vision Learners Meet Web Image-Text Pairs

    Most recent self-supervised learning methods are pre-trained on the well-curated ImageNet-1K dataset. In this work, given the excellent scalability of web data, we consider self-supervised pre-training on noisy web-sourced image-text paired data. First, we conduct a benchmark study of representative self-supervised pre-training methods on large-scale web data in a like-for-like setting. We compare a range of methods, including single-modal ones that use masked training objectives and multi-modal ones that use image-text contrastive training. We observe that existing multi-modal methods do not outperform their single-modal counterparts on vision transfer learning tasks. We derive an information-theoretic view to explain these benchmark results, which provides insight into how to design a novel vision learner. Inspired by this insight, we present a new visual representation pre-training method, MUlti-modal Generator (MUG), that learns from scalable web-sourced image-text data. MUG achieves state-of-the-art transfer performance on a variety of tasks and demonstrates promising scaling properties. Pre-trained models and code will be made public upon acceptance. Project page: https://bzhao.me/MUG
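    For context, the sketch below shows the standard image-text contrastive (InfoNCE) objective that the abstract's multi-modal baselines refer to; it is a generic CLIP-style loss under assumed embedding shapes, not the MUG method itself.

```python
# Minimal sketch of a CLIP-style image-text contrastive objective: matching
# image/caption pairs sit on the diagonal of the similarity matrix and are
# treated as the positive class in both directions.
import torch
import torch.nn.functional as F


def image_text_contrastive_loss(
    image_embeds: torch.Tensor,  # (batch, dim) image features
    text_embeds: torch.Tensor,   # (batch, dim) text features
    temperature: float = 0.07,
) -> torch.Tensor:
    # Normalize so the dot product becomes a cosine similarity.
    image_embeds = F.normalize(image_embeds, dim=-1)
    text_embeds = F.normalize(text_embeds, dim=-1)
    # Pairwise similarity between every image and every caption in the batch.
    logits = image_embeds @ text_embeds.t() / temperature
    targets = torch.arange(logits.size(0), device=logits.device)
    loss_i2t = F.cross_entropy(logits, targets)      # image -> text direction
    loss_t2i = F.cross_entropy(logits.t(), targets)  # text -> image direction
    return 0.5 * (loss_i2t + loss_t2i)


if __name__ == "__main__":
    img = torch.randn(8, 512)  # dummy image embeddings
    txt = torch.randn(8, 512)  # dummy text embeddings
    print(image_text_contrastive_loss(img, txt).item())
```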
    • …