
    Sialoglycan – Siglec axis in the modulation of dendritic cells functions

    Interactions between sialylated glycans and Siglec receptors have recently been described as a potential new immune checkpoint that can be targeted to improve anticancer immunity. Myeloid cells have been reported to express a wide range of Siglecs; however, their expression and functions on cancer-associated dendritic cells (DCs) have not been fully characterized. We found that classical conventional DCs (cDCs) from cancer patient samples express high levels of several inhibitory Siglecs, including Siglec-7, Siglec-9, and Siglec-10. In subcutaneous murine tumor models, we also found an upregulation of the inhibitory Siglec-E receptor on cancer-associated cDCs. DC cell lines and bone-marrow-derived DCs (BMDCs) expressing these inhibitory Siglecs showed impaired maturation states at both the transcriptome and protein levels. Furthermore, ablating these inhibitory Siglecs in DCs enhanced their capability to prime antigen-specific T cells and induce proliferation. Our work provides a deeper understanding of the influence of inhibitory Siglecs on DCs and reveals a potential new target to improve cancer immunotherapy.

    ResidualTransformer: Residual Low-Rank Learning with Weight-Sharing for Transformer Layers

    The memory constraint of always-on devices is one of the major concerns when deploying speech processing models on such devices. While larger models trained with a sufficiently large amount of data generally perform better, making them fit in device memory is a demanding challenge. In this paper, we aim to reduce model size by reparameterizing model weights across Transformer encoder layers and assuming a special weight composition and structure. More specifically, inspired by ResNet and the more recent LoRA work, we propose an approach named ResidualTransformer, in which each weight matrix in a Transformer layer comprises 1) a full-rank component shared with its adjacent layers, and 2) a low-rank component unique to itself. The low-rank matrices account for only a small increase in model size. In addition, we add diagonal weight matrices to improve the modeling capacity of the low-rank matrices. Experiments on our 10k-hour speech recognition and speech translation tasks show that the Transformer encoder size can be reduced by ~3x with very slight performance degradation.

    Comment: Accepted at IEEE ICASSP 2024. 5 pages, 1 figure.
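
    To make the weight composition concrete, here is a minimal PyTorch sketch of the decomposition the abstract describes (shared full-rank matrix + per-layer low-rank residual + diagonal term). The class name `ResidualLinear`, the initialization, and the way the shared component is passed in are illustrative assumptions, not the authors' implementation.

```python
import torch
import torch.nn as nn

class ResidualLinear(nn.Module):
    """Linear layer whose weight is a full-rank component shared with
    adjacent layers, plus a per-layer low-rank residual and a diagonal term."""

    def __init__(self, shared_weight: nn.Parameter, dim: int, rank: int):
        super().__init__()
        self.shared_weight = shared_weight  # full-rank, shared across adjacent layers
        self.lora_a = nn.Parameter(torch.randn(dim, rank) * 0.01)  # unique low-rank factor
        self.lora_b = nn.Parameter(torch.zeros(rank, dim))         # zero init: starts as a no-op
        self.diag = nn.Parameter(torch.zeros(dim))                 # diagonal term for extra capacity

    def forward(self, x: torch.Tensor) -> torch.Tensor:
        # Effective weight = shared + low-rank residual + diagonal
        w = self.shared_weight + self.lora_a @ self.lora_b + torch.diag(self.diag)
        return x @ w.T

# Two adjacent layers reuse one full-rank matrix; only the low-rank and
# diagonal parameters differ, so the per-layer size increase is small.
dim, rank = 512, 8
shared = nn.Parameter(torch.randn(dim, dim) / dim ** 0.5)
layer_a = ResidualLinear(shared, dim, rank)
layer_b = ResidualLinear(shared, dim, rank)
out = layer_b(layer_a(torch.randn(4, dim)))
```

    With weight sharing, the full-rank matrix is stored once for a group of layers, while each layer keeps only its rank-`r` factors and a diagonal vector, which is where the ~3x encoder size reduction would come from.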

    Improved sensitivity in ellipsometry of thin biochemical films by employing sublayers

    Ellipsometry is widely used for investigating the optical properties of thin films on planar substrates, including films of adsorbed proteins or polymers. The average thickness and effective refractive index of the adsorbed layer are calculated by measuring the Δ and Ψ ellipsometry parameters. Unfortunately, adsorbed protein layers are often too thin to significantly affect Δ and Ψ. However, by using a substructure consisting of an additional sublayer placed between the substrate and the adsorbed layer, we can improve the sensitivity of both Δ and Ψ to changes in the adsorbed layer, provided that the thickness of the sublayer is optimized. We show that for an SiO2 layer on a Si wafer, the optimum SiO2 thickness is about 1350 Å when the incident angle is 70 degrees and the wavelength is 6328 Å. The sublayer material can be a metal, a semiconductor, and/or a dielectric.
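
    As a rough illustration of how the sublayer enters the calculation, below is a minimal Python sketch of a standard stratified-film Fresnel model that evaluates Δ and Ψ for an ambient/SiO2/Si stack at the conditions quoted above. The function name `rp_rs` and the optical constants are assumptions for illustration, not values taken from the paper.

```python
import numpy as np

def rp_rs(n, d, lam, theta0):
    """Complex p- and s-polarized reflection coefficients of a layer stack.
    n: refractive indices [ambient, layer_1, ..., substrate]
    d: thicknesses of the inner layers (len(n) - 2 entries), same units as lam
    theta0: angle of incidence in the ambient, in radians"""
    n = np.asarray(n, dtype=complex)
    # Snell's law: n0*sin(theta0) is conserved across all layers
    cos = np.sqrt(1 - (n[0] * np.sin(theta0) / n) ** 2)

    def reflect(pol):
        def r_ij(i, j):  # Fresnel coefficient at the i/j interface
            if pol == "p":
                return (n[j] * cos[i] - n[i] * cos[j]) / (n[j] * cos[i] + n[i] * cos[j])
            return (n[i] * cos[i] - n[j] * cos[j]) / (n[i] * cos[i] + n[j] * cos[j])

        # Fold the stack from the substrate upward
        r = r_ij(len(n) - 2, len(n) - 1)
        for i in range(len(n) - 3, -1, -1):
            beta = 2 * np.pi * d[i] * n[i + 1] * cos[i + 1] / lam  # film phase thickness
            phase = np.exp(-2j * beta)
            r = (r_ij(i, i + 1) + r * phase) / (1 + r_ij(i, i + 1) * r * phase)
        return r

    return reflect("p"), reflect("s")

# Ambient / 1350 Å SiO2 / Si at 6328 Å, 70-degree incidence.
# Optical constants are approximate literature values, assumed for illustration.
rp, rs = rp_rs([1.0, 1.46, 3.88 - 0.02j], [1350.0], 6328.0, np.radians(70))
rho = rp / rs  # rho = tan(Psi) * exp(i * Delta)
psi = np.degrees(np.arctan(np.abs(rho)))
delta = np.degrees(np.angle(rho))
print(f"Psi = {psi:.2f} deg, Delta = {delta:.2f} deg")
```

    Scanning the SiO2 thickness, and repeating the calculation with a thin protein film inserted as an extra layer on top, would reproduce the kind of sensitivity analysis the abstract describes.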