
    Association Between Sars-Cov-2 Reinfections And Anti-S Antibody Levels At The First Omicron Wave In Salvador, Brazil

    Background: SARS-CoV-2, with its high transmissibility and rapid dissemination, has caused a global public health emergency. The emergence of new variants and mutations of SARS-CoV-2 spike protein antigens has led to concerns about immune escape and the potential for reinfection, even in individuals who have been previously infected or vaccinated. Brazil has been severely affected by the pandemic, especially in its densely populated slum areas. Our study aimed to evaluate the association between anti-S IgG antibody levels and subsequent SARS-CoV-2 infection during the Omicron wave in a susceptible community in Salvador, Brazil, to provide insight into the antibody level necessary for effective protection against infection with heterologous variants in similar settings. Methods and findings: We conducted this study in a cohort of 1827 residents of Pau da Lima, Salvador, Brazil. We measured serum levels of IgG against the SARS-CoV-2 spike protein between July and November 2021. From November 2021 to February 2022, during the first Omicron wave, we performed symptom-based screening and PCR testing to identify new infections. We used logistic regression to estimate the association between antibody levels and subsequent PCR-confirmed infection. Among 210 individuals in the cohort who underwent PCR testing, we did not identify any association between antibody levels and PCR-confirmed infection. Among a subset of 84 individuals who did not receive vaccination between the time of antibody measurement and the time of PCR testing, higher antibody levels were associated with increased odds of PCR-confirmed infection. Conclusion: We did not identify a protective effect of serum anti-S IgG levels on subsequent risk of infection during the Omicron wave. Further studies could address limitations of our study (sample size, confounding) and evaluate the effect of variant-specific antibodies.
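    As a rough illustration of the analysis described above, the following is a minimal Python sketch of a logistic regression of PCR-confirmed infection on anti-S IgG levels. It is not the authors' code; the input file and column names (cohort.csv, anti_s_igg, pcr_positive) are hypothetical placeholders.

```python
# Minimal sketch (not the authors' code): logistic regression of PCR-confirmed
# infection on anti-S IgG antibody levels, as described in the abstract.
# File name and column names are hypothetical.
import numpy as np
import pandas as pd
import statsmodels.api as sm

df = pd.read_csv("cohort.csv")  # hypothetical input file

# Log-transform antibody titres, a common choice for right-skewed serology data.
X = sm.add_constant(np.log10(df["anti_s_igg"]))
y = df["pcr_positive"]

model = sm.Logit(y, X).fit()
print(model.summary())

# Odds ratio per 10-fold increase in anti-S IgG, with 95% confidence interval.
print(np.exp(model.params))
print(np.exp(model.conf_int()))
```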

    Accelerating Toeplitz Neural Network with Constant-time Inference Complexity

    Toeplitz Neural Networks (TNNs) have exhibited outstanding performance in various sequence modeling tasks. They outperform commonly used Transformer-based models while benefiting from log-linear space-time complexities. On the other hand, State Space Models (SSMs) achieve lower performance than TNNs in language modeling but offer the advantage of constant inference complexity. In this paper, we aim to combine the strengths of TNNs and SSMs by converting TNNs to SSMs during inference, thereby enabling TNNs to achieve the same constant inference complexities as SSMs. To accomplish this, we formulate the conversion process as an optimization problem and provide a closed-form solution. We demonstrate how to transform the target equation into a Vandermonde linear system problem, which can be efficiently solved using the Discrete Fourier Transform (DFT). Notably, our method requires no training and maintains numerical stability. It can also be applied to any LongConv-based model. To assess its effectiveness, we conduct extensive experiments on language modeling tasks across various settings. Additionally, we compare our method to other gradient-descent solutions, highlighting the superior numerical stability of our approach. The source code is available at https://github.com/OpenNLPLab/ETSC-Exact-Toeplitz-to-SSM-Conversion. Comment: Accepted to EMNLP 2023. Yiran Zhong is the corresponding author. The source code is available at https://github.com/OpenNLPLab/ETSC-Exact-Toeplitz-to-SSM-Conversion.
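    The closed-form conversion above hinges on solving a Vandermonde linear system with the DFT. The snippet below is a minimal sketch of that idea in the standard special case where the Vandermonde nodes are the roots of unity, so the system matrix coincides with the DFT matrix; it is an assumption for illustration, not the paper's actual formulation.

```python
# Minimal sketch (assumption, not the paper's code): when the Vandermonde nodes are
# the N-th roots of unity, the Vandermonde matrix equals the DFT matrix, so the
# linear system V x = b can be solved with an inverse FFT instead of O(N^3) elimination.
import numpy as np

N = 8
b = np.random.randn(N)

# Explicit Vandermonde system with nodes z_k = exp(-2*pi*i*k/N).
nodes = np.exp(-2j * np.pi * np.arange(N) / N)
V = np.vander(nodes, N, increasing=True)  # V[k, j] = z_k**j, i.e. the (symmetric) DFT matrix
x_direct = np.linalg.solve(V, b)

# Same solution via the FFT: V is the DFT matrix, so V^{-1} b = ifft(b).
x_fft = np.fft.ifft(b)

print(np.allclose(x_direct, x_fft))  # True, up to floating-point error
```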

    Bibliometric and visualization analysis of research trend in mental health problems of children and adolescents during the COVID-19 pandemic

    Objectives: To analyze the evolution of research on children's and adolescents' mental health problems during the COVID-19 pandemic and to discuss research hotspots and cutting-edge developments. Methods: The literature retrieved from the Web of Science Core Collection as of June 28, 2022, was analyzed with the CiteSpace and VOSviewer bibliometric visualization mapping software. Results: A total of 6,039 relevant papers were found, of which 5,594 were included in the study. The number of publications has been growing since 2020; publications were analyzed by country, institution, and journal. The co-citation analysis shows that research articles predominate among the highly cited papers, while systematic reviews that apply critical appraisal are lacking. In the cluster analysis, mental health and life change were the most representative clusters. The timeline view of the keywords shows that Online learning (#0), Public health (#1), and Mental health (#2) are the three largest clusters and illustrates how they change over time. Conclusion: This study analyzed the mental health of children and adolescents during the COVID-19 pandemic and identified hot trends and shortcomings, providing important references for the theoretical basis of future research and decision making, as well as technical guidance for systematic reviews.

    SupFusion: Supervised LiDAR-Camera Fusion for 3D Object Detection

    In this paper, we propose a novel training strategy called SupFusion, which provides auxiliary feature-level supervision for effective LiDAR-Camera fusion and significantly boosts detection performance. Our strategy involves a data enhancement method named Polar Sampling, which densifies sparse objects and trains an assistant model to generate high-quality features as the supervision. These features are then used to train the LiDAR-Camera fusion model, where the fusion feature is optimized to simulate the generated high-quality features. Furthermore, we propose a simple yet effective deep fusion module, which continuously gains superior performance compared with previous fusion methods under the SupFusion strategy. In this manner, our proposal offers the following advantages. First, SupFusion introduces auxiliary feature-level supervision that can boost LiDAR-Camera detection performance without introducing extra inference cost. Second, the proposed deep fusion can continuously improve the detector's abilities. Our proposed SupFusion and deep fusion module are plug-and-play, and we conduct extensive experiments to demonstrate their effectiveness. Specifically, we gain around 2% 3D mAP improvements on the KITTI benchmark based on multiple LiDAR-Camera 3D detectors. Comment: Accepted to ICCV 2023.
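    The feature-level supervision described above pushes the LiDAR-Camera fusion feature toward the high-quality feature produced by the assistant model. The snippet below is a minimal sketch of one plausible form of such a loss (an MSE imitation term added to the detection loss); it is an assumption for illustration, not the paper's exact objective.

```python
# Minimal sketch (assumption, not the paper's exact loss): auxiliary feature-level
# supervision in which the LiDAR-Camera fusion feature is pushed toward a
# high-quality feature produced by a frozen assistant model.
import torch
import torch.nn.functional as F

def supervised_fusion_loss(fusion_feat: torch.Tensor,
                           assistant_feat: torch.Tensor,
                           det_loss: torch.Tensor,
                           weight: float = 1.0) -> torch.Tensor:
    """Total loss = standard detection loss + weighted feature-imitation term."""
    # The assistant feature acts as a fixed target, so gradients do not flow into it.
    feat_loss = F.mse_loss(fusion_feat, assistant_feat.detach())
    return det_loss + weight * feat_loss

# Usage with dummy tensors (shapes are illustrative only).
fusion_feat = torch.randn(2, 256, 180, 180, requires_grad=True)
assistant_feat = torch.randn(2, 256, 180, 180)
det_loss = torch.tensor(1.5)
loss = supervised_fusion_loss(fusion_feat, assistant_feat, det_loss)
loss.backward()
```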

    All-pairs Consistency Learning for Weakly Supervised Semantic Segmentation

    In this work, we propose a new transformer-based regularization to better localize objects for weakly supervised semantic segmentation (WSSS). In image-level WSSS, the Class Activation Map (CAM) is adopted to generate object localization as pseudo segmentation labels. To address the partial activation issue of CAMs, consistency regularization is employed to maintain activation intensity invariance across various image augmentations. However, such methods ignore the pair-wise relations among regions within each CAM, which capture context and should also be invariant across image views. To this end, we propose a new all-pairs consistency regularization (ACR). Given a pair of augmented views, our approach regularizes the activation intensities between the two views, while also ensuring that the affinity across regions within each view remains consistent. We adopt vision transformers, as their self-attention mechanism naturally embeds pair-wise affinity. This enables us to simply regularize the distance between the attention matrices of augmented image pairs. Additionally, we introduce a novel class-wise localization method that leverages the gradients of the class token. Our method can be seamlessly integrated into existing WSSS methods using transformers without modifying the architectures. We evaluate our method on the PASCAL VOC and MS COCO datasets. Our method produces noticeably better class localization maps (67.3% mIoU on PASCAL VOC train), resulting in superior WSSS performance. Comment: ICCV 2023 workshop.
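    Since the attention matrix of a vision transformer already encodes pair-wise affinities, the regularization described above can be sketched as a simple distance between the attention maps of two augmented views. The snippet below is an illustrative assumption, not the authors' implementation, and it ignores the geometric alignment needed when the augmentations shift tokens spatially.

```python
# Minimal sketch (assumption, not the authors' implementation): all-pairs consistency
# between the self-attention maps of two augmented views of the same image.
import torch
import torch.nn.functional as F

def attention_consistency_loss(attn_view1: torch.Tensor,
                               attn_view2: torch.Tensor) -> torch.Tensor:
    """attn_view*: (batch, heads, tokens, tokens) attention matrices from a ViT.

    Encourages the pair-wise token affinities to stay invariant across views.
    """
    return F.l1_loss(attn_view1, attn_view2)

# Usage with dummy attention maps (2 images, 6 heads, 197 tokens).
a1 = torch.softmax(torch.randn(2, 6, 197, 197), dim=-1)
a2 = torch.softmax(torch.randn(2, 6, 197, 197), dim=-1)
print(attention_consistency_loss(a1, a2))
```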

    Linearized Relative Positional Encoding

    Relative positional encoding is widely used in vanilla and linear transformers to represent positional information. However, existing encoding methods of a vanilla transformer are not always directly applicable to a linear transformer, because the latter requires a decomposition of the query and key representations into separate kernel functions. Nevertheless, principles for designing encoding methods suitable for linear transformers remain understudied. In this work, we put together a variety of existing linear relative positional encoding approaches under a canonical form and further propose a family of linear relative positional encoding algorithms via unitary transformation. Our formulation leads to a principled framework that can be used to develop new relative positional encoding methods that preserve linear space-time complexity. Equipped with different models, the proposed linearized relative positional encoding (LRPE) family derives effective encodings for various applications. Experiments show that, compared with existing methods, LRPE achieves state-of-the-art performance in language modeling, text classification, and image classification. Meanwhile, it offers a general paradigm for designing a broader range of relative positional encoding methods applicable to linear transformers. The code is available at https://github.com/OpenNLPLab/Lrpe. Comment: Reviewed by TMLR, decision pending. Yiran Zhong is the corresponding author. Code is available at https://github.com/OpenNLPLab/Lrpe.
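    As one concrete instance of the unitary-transformation idea above, the sketch below applies a rotation (a rotary-style unitary map) to the query and key feature maps so that their inner products depend only on relative position. It is an assumption meant to illustrate one member of such a family, not the paper's full LRPE formulation.

```python
# Minimal sketch (assumption): a rotary-style unitary transform applied to query/key
# feature maps of a linear-attention model, so that q_m . k_n depends only on (m - n).
import torch

def rotary_unitary_pe(x: torch.Tensor, base: float = 10000.0) -> torch.Tensor:
    """x: (seq_len, dim) with even dim. Rotates consecutive feature pairs by an
    angle proportional to the position index (a unitary map per position)."""
    seq_len, dim = x.shape
    pos = torch.arange(seq_len, dtype=torch.float32).unsqueeze(1)            # (L, 1)
    freqs = base ** (-torch.arange(0, dim, 2, dtype=torch.float32) / dim)    # (dim/2,)
    angles = pos * freqs                                                     # (L, dim/2)
    cos, sin = angles.cos(), angles.sin()
    x1, x2 = x[:, 0::2], x[:, 1::2]
    out = torch.empty_like(x)
    out[:, 0::2] = x1 * cos - x2 * sin
    out[:, 1::2] = x1 * sin + x2 * cos
    return out

# Usage: encode positions into (hypothetical) query/key feature maps.
q = rotary_unitary_pe(torch.randn(16, 64))
k = rotary_unitary_pe(torch.randn(16, 64))
```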

    Fine-grained Audible Video Description

    We explore a new task for audio-visual-language modeling called fine-grained audible video description (FAVD). It aims to provide detailed textual descriptions for given audible videos, including the appearance and spatial locations of each object, the actions of moving objects, and the sounds in videos. Existing visual-language modeling tasks often concentrate on visual cues in videos while undervaluing the language and audio modalities. On the other hand, FAVD requires not only audio-visual-language modeling skills but also paragraph-level language generation abilities. We construct the first fine-grained audible video description benchmark (FAVDBench) to facilitate this research. For each video clip, we first provide a one-sentence summary of the video, i.e., the caption, followed by 4-6 sentences describing the visual details and 1-2 audio-related descriptions at the end. The descriptions are provided in both English and Chinese. We create two new metrics for this task: an EntityScore to gauge the completeness of entities in the visual descriptions, and an AudioScore to assess the audio descriptions. As a preliminary approach to this task, we propose an audio-visual-language transformer that extends an existing video captioning model with an additional audio branch. We combine the masked language modeling and auto-regressive language modeling losses to optimize our model so that it can produce paragraph-level descriptions. We illustrate the efficiency of our model in audio-visual-language modeling by evaluating it against the proposed benchmark using both conventional captioning metrics and our proposed metrics. We further put our benchmark to the test in video generation models, demonstrating that employing fine-grained video descriptions can create more intricate videos than using captions. Comment: Accepted to CVPR 2023. Xuyang Shen, Dong Li and Jinxing Zhou contribute equally. Code link: github.com/OpenNLPLab/FAVDBench, dataset link: www.avlbench.opennlplab.c
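    The training objective above combines masked and auto-regressive language modeling. The snippet below is a minimal sketch of how such a combined loss might be formed; the equal weighting, helper name, and tensor shapes are assumptions, not the authors' actual training code.

```python
# Minimal sketch (assumption, not the authors' training code): combining a masked
# language modeling loss and an auto-regressive language modeling loss so the
# captioner learns paragraph-level generation.
import torch
import torch.nn.functional as F

def combined_lm_loss(mlm_logits, mlm_labels, ar_logits, ar_labels, alpha=0.5):
    """Both label tensors use -100 to mark positions that should be ignored."""
    mlm = F.cross_entropy(mlm_logits.flatten(0, 1), mlm_labels.flatten(), ignore_index=-100)
    ar = F.cross_entropy(ar_logits.flatten(0, 1), ar_labels.flatten(), ignore_index=-100)
    return alpha * mlm + (1.0 - alpha) * ar

# Dummy usage: batch of 2, sequence length 10, vocabulary of 1000 tokens.
logits = torch.randn(2, 10, 1000)
labels = torch.randint(0, 1000, (2, 10))
print(combined_lm_loss(logits, labels, logits, labels))
```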

    The PIAS-like Coactivator Zmiz1 Is a Direct and Selective Cofactor of Notch1 in T Cell Development and Leukemia

    Summary: Pan-NOTCH inhibitors are poorly tolerated in clinical trials because NOTCH signals are crucial for intestinal homeostasis. These inhibitors might also promote cancer because NOTCH can act as a tumor suppressor. We previously reported that the PIAS-like coactivator ZMIZ1 is frequently co-expressed with activated NOTCH1 in T cell acute lymphoblastic leukemia (T-ALL). Here, we show that, similar to Notch1, Zmiz1 was important for T cell development and controlled the expression of certain Notch target genes, such as Myc. However, unlike Notch, Zmiz1 had no major role in intestinal homeostasis or myeloid suppression. Deletion of Zmiz1 impaired the initiation and maintenance of Notch-induced T-ALL. Zmiz1 directly interacted with Notch1 via a tetratricopeptide repeat domain at a special class of Notch-regulatory sites. In contrast to the Notch cofactor Maml, which is nonselective, Zmiz1 was selective. Thus, targeting the NOTCH1-ZMIZ1 interaction might combat leukemic growth while avoiding the intolerable toxicities of NOTCH inhibitors.