
    Sparse Complementary Pairs with Additional Aperiodic ZCZ Property

    This paper presents a novel class of complex-valued sparse complementary pairs (SCPs), each containing a number of zero entries and possessing an additional zero-correlation zone (ZCZ) property for the aperiodic autocorrelations and cross-correlations of its two constituent sequences. Direct constructions of SCPs and their mutually orthogonal mates based on restricted generalized Boolean functions are proposed. It is shown that such SCPs exist for arbitrary lengths and controllable sparsity levels, making them a disruptive sequence candidate for modern low-complexity, low-latency, and low-storage signal processing applications.
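
    As a minimal numerical sketch of the aperiodic quantities mentioned above, the Python snippet below verifies the defining complementary-pair condition (the aperiodic autocorrelations of the two sequences sum to zero at every nonzero shift) and prints the aperiodic cross-correlation for a classic dense Golay pair. It only illustrates the correlations that the paper's sparse ZCZ pairs control; the pair used here and the helper function are assumptions for demonstration, not the paper's construction.

        import numpy as np

        def aperiodic_corr(a, b):
            """Aperiodic cross-correlation of a with b at shifts 0..len(a)-1."""
            n = len(a)
            return np.array([np.sum(a[u:] * np.conj(b[:n - u])) for u in range(n)])

        # Classic length-8 binary Golay complementary pair (dense; used only to
        # illustrate the aperiodic sums that sparse ZCZ pairs also control).
        a = np.array([1, 1, 1, -1, 1, 1, -1, 1], dtype=complex)
        b = np.array([1, 1, 1, -1, -1, -1, 1, -1], dtype=complex)

        auto_sum = aperiodic_corr(a, a) + aperiodic_corr(b, b)
        print("sum of autocorrelations at shifts 0..7:", np.round(auto_sum.real, 6))
        # Complementary property: zero at every nonzero shift (2 * length at shift 0).

        cross = aperiodic_corr(a, b)
        print("aperiodic cross-correlation at shifts 0..7:", np.round(cross.real, 6))
        # A pair with the additional ZCZ property would have these values (and each
        # individual autocorrelation) also vanish for all shifts inside the zone.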

    OV-VG: A Benchmark for Open-Vocabulary Visual Grounding

    Open-vocabulary learning has emerged as a cutting-edge research area, particularly in light of the widespread adoption of vision-based foundational models. Its primary objective is to comprehend novel concepts that are not encompassed within a predefined vocabulary. One key facet of this endeavor is visual grounding (VG), which entails locating a specific region within an image based on a corresponding language description. While current foundational models excel at various vision-language tasks, there is a noticeable absence of models specifically tailored for open-vocabulary visual grounding. This work introduces two novel and challenging open-vocabulary (OV) tasks, namely Open-Vocabulary Visual Grounding (OV-VG) and Open-Vocabulary Phrase Localization (OV-PL). The overarching aim is to establish connections between language descriptions and the localization of novel objects. To facilitate this, we have curated a comprehensive annotated benchmark encompassing 7,272 OV-VG images and 1,000 OV-PL images. In our pursuit of addressing these challenges, we investigated various baseline methodologies rooted in existing open-vocabulary object detection, VG, and phrase localization frameworks. Surprisingly, we discovered that state-of-the-art (SOTA) methods often falter in diverse scenarios. Consequently, we developed a novel framework that integrates two critical components: Text-Image Query Selection and Language-Guided Feature Attention. These modules are designed to bolster the recognition of novel categories and enhance the alignment between visual and linguistic information. Extensive experiments demonstrate the efficacy of our proposed framework, which consistently attains SOTA performance on the OV-VG task. Additionally, ablation studies provide further evidence of the effectiveness of our models. Codes and datasets will be made publicly available at https://github.com/cv516Buaa/OV-VG.
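
    The two components named in the abstract are described only at a high level, so the sketch below is a hypothetical interpretation in PyTorch: Language-Guided Feature Attention is rendered as standard cross-attention from image regions to text tokens, and Text-Image Query Selection as a similarity-based top-k filter. Module names match the abstract, but the dimensions, the residual fusion, and the top-k heuristic are assumptions; the authors' released code in the linked repository may differ.

        import torch
        import torch.nn as nn

        class LanguageGuidedFeatureAttention(nn.Module):
            """Image regions attend to the language description (illustrative)."""
            def __init__(self, dim=256, heads=8):
                super().__init__()
                self.cross_attn = nn.MultiheadAttention(dim, heads, batch_first=True)
                self.norm = nn.LayerNorm(dim)

            def forward(self, img_feats, txt_feats, txt_pad_mask=None):
                # img_feats: (B, N_regions, dim); txt_feats: (B, N_tokens, dim)
                # txt_pad_mask: (B, N_tokens) bool, True where tokens are padding
                attended, _ = self.cross_attn(query=img_feats, key=txt_feats,
                                              value=txt_feats,
                                              key_padding_mask=txt_pad_mask)
                return self.norm(img_feats + attended)  # residual fusion

        def text_image_query_selection(img_feats, txt_feats, k=100):
            """Keep the k regions most similar to any text token as decoder queries."""
            sim = torch.einsum('bnd,bmd->bnm', img_feats, txt_feats)  # (B, N, M)
            scores = sim.max(dim=-1).values                           # (B, N)
            idx = scores.topk(k, dim=-1).indices                      # (B, k)
            idx = idx.unsqueeze(-1).expand(-1, -1, img_feats.size(-1))
            return torch.gather(img_feats, 1, idx)                    # (B, k, dim)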

    Iterative Robust Visual Grounding with Masked Reference based Centerpoint Supervision

    Visual Grounding (VG) aims to localize target objects in an image based on given expressions and has made significant progress with the development of detection and vision transformers. However, existing VG methods tend to generate false-alarm objects when presented with inaccurate or irrelevant descriptions, which commonly occur in practical applications. Moreover, existing methods struggle to capture fine-grained features, achieve accurate localization, and comprehend sufficient context from the whole image and the textual description. To address both issues, we propose an Iterative Robust Visual Grounding (IR-VG) framework with Masked Reference based Centerpoint Supervision (MRCS). The framework introduces iterative multi-level vision-language fusion (IMVF) for better alignment. We use MRCS to achieve more accurate localization with point-wise feature supervision. Then, to improve the robustness of VG, we also present a multi-stage false-alarm sensitive decoder (MFSD) to prevent the generation of false-alarm objects when presented with inaccurate expressions. The proposed framework is evaluated on five regular VG datasets and two newly constructed robust VG datasets. Extensive experiments demonstrate that IR-VG achieves new state-of-the-art (SOTA) results, with improvements of 25% and 10% over existing SOTA approaches on the two newly proposed robust VG datasets. Moreover, the proposed framework is also shown to be effective on the five regular VG datasets. Codes and models will be made publicly available at https://github.com/cv516Buaa/IR-VG.
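
    The abstract does not give the exact form of the centerpoint supervision or the false-alarm handling, so the snippet below is a hypothetical sketch of how point-wise center supervision could be combined with a "target present" term that discourages false-alarm boxes for inaccurate expressions. The tensor shapes, the L1/BCE choices, and the equal weighting are assumptions, not the authors' formulation.

        import torch
        import torch.nn.functional as F

        def grounding_loss(pred_box, pred_center, pred_presence, gt_box, target_present):
            """
            pred_box:       (B, 4) predicted box (cx, cy, w, h), normalized to [0, 1]
            pred_center:    (B, 2) auxiliary centerpoint prediction
            pred_presence:  (B,)   logit that the referred object exists in the image
            gt_box:         (B, 4) ground-truth box (cx, cy, w, h)
            target_present: (B,)   1.0 if the expression matches a real object, else 0.0
            """
            # False-alarm term: learn to predict "no target" for inaccurate expressions.
            presence_loss = F.binary_cross_entropy_with_logits(pred_presence, target_present)
            # Box and centerpoint terms only count when a real target exists.
            mask = target_present.unsqueeze(-1)
            box_loss = (F.l1_loss(pred_box, gt_box, reduction="none") * mask).mean()
            center_loss = (F.l1_loss(pred_center, gt_box[:, :2], reduction="none") * mask).mean()
            return box_loss + center_loss + presence_loss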

    Enhancing traditional Chinese medicine diagnostics: Integrating ontological knowledge for multi-label symptom entity classification

    In traditional Chinese medicine (TCM), artificial intelligence (AI)-assisted syndrome differentiation and disease diagnosis primarily confront the challenges of accurate symptom identification and classification. This study introduces a multi-label entity extraction model grounded in a TCM symptom ontology, specifically designed to address the limitations of existing entity recognition models, namely limited label spaces and insufficient integration of domain knowledge. The model synergizes a knowledge graph with the TCM symptom ontology framework to provide a standardized symptom classification system enriched with domain-specific knowledge. It merges the conventional bidirectional encoder representations from transformers (BERT) + bidirectional long short-term memory (Bi-LSTM) + conditional random fields (CRF) entity recognition methodology with a multi-label classification strategy, thereby handling the intricate label interdependencies in the textual data. A multi-associative feature fusion module is further introduced to extract pivotal entity features while discerning the interrelations among diverse categorical labels. The experimental outcomes affirm the model's superior performance in multi-label symptom extraction and show that it substantially improves efficiency and accuracy. This advancement robustly underpins research in TCM syndrome differentiation and disease diagnosis.
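
    As a hypothetical sketch of the encoder stack described above, the snippet below combines a BERT encoder and a Bi-LSTM with two heads: a token-level tag head for entity extraction and a sequence-level sigmoid head for multi-label symptom classification. The CRF layer and the ontology/knowledge-graph fusion mentioned in the abstract are omitted for brevity, and the model name, hidden sizes, and label counts are assumptions.

        import torch
        import torch.nn as nn
        from transformers import AutoModel

        class SymptomExtractor(nn.Module):
            def __init__(self, encoder_name="bert-base-chinese",
                         num_tags=9, num_symptom_labels=150, hidden=256):
                super().__init__()
                self.encoder = AutoModel.from_pretrained(encoder_name)
                dim = self.encoder.config.hidden_size
                self.bilstm = nn.LSTM(dim, hidden, batch_first=True, bidirectional=True)
                self.tag_head = nn.Linear(2 * hidden, num_tags)              # per-token BIO tags
                self.label_head = nn.Linear(2 * hidden, num_symptom_labels)  # multi-label symptoms

            def forward(self, input_ids, attention_mask):
                h = self.encoder(input_ids=input_ids,
                                 attention_mask=attention_mask).last_hidden_state
                h, _ = self.bilstm(h)                      # (B, T, 2 * hidden)
                tag_logits = self.tag_head(h)              # (B, T, num_tags)
                # Masked mean pooling over tokens for the sequence-level labels.
                mask = attention_mask.unsqueeze(-1).float()
                pooled = (h * mask).sum(1) / mask.sum(1).clamp(min=1.0)
                label_logits = self.label_head(pooled)     # apply sigmoid + BCE downstream
                return tag_logits, label_logits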

    Chiral Antioxidant-based Gold Nanoclusters Reprogram DNA Epigenetic Patterns

    Epigenetic modifications sit ‘on top of’ the genome and influence DNA transcription, which can exert a significant impact on cellular behavior and phenotype and, consequently, on human development and disease. Conventional methods for evaluating epigenetic modifications have inherent limitations and, hence, new methods based on nanoscale devices are needed. Here, we found that antioxidant (glutathione) chiral gold nanoclusters induce a decrease in 5-hydroxymethylcytosine (5hmC), an important epigenetic marker associated with the regulation of gene transcription. This epigenetic change was triggered partially through reactive oxygen species (ROS) activation and oxidation generated by treatment with glutathione chiral gold nanoclusters, which may inhibit the activity of the TET proteins that catalyze the conversion of 5-methylcytosine (5mC) to 5hmC. In addition, these chiral gold nanoclusters can downregulate TET1 and TET2 mRNA expression. Alteration of TET-5hmC signaling will then affect several downstream targets and be involved in many aspects of cell behavior. We demonstrate for the first time that antioxidant-based chiral gold nanomaterials have a direct effect on the epigenetic processes of TET-5hmC pathways and reveal critical DNA demethylation patterns.