181 research outputs found

    Quantum circuit complexity of one-dimensional topological phases

    Topological quantum states cannot be created from product states with local quantum circuits of constant depth and are in this sense more entangled than topologically trivial states, but how entangled are they? Here we quantify the entanglement in one-dimensional topological states by showing that local quantum circuits of linear depth are necessary to generate them from product states. We establish this linear lower bound for both bosonic and fermionic one-dimensional topological phases and use symmetric circuits for phases with symmetry. We also show that the linear lower bound can be saturated by explicitly constructing circuits that generate these topological states. The same results hold for local quantum circuits connecting topological states in different phases. Comment: published version
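    As a hedged reading aid, the abstract's central claim can be restated as a scaling bound; the symbols D (circuit depth), L (chain length), and c are our own labels, not notation from the paper.

```latex
% Claim from the abstract, written as a scaling bound: any local quantum circuit U
% preparing a 1D topological state |psi> from a product state must have depth
% growing at least linearly with the system size L, and this is saturable.
\[
  |\psi\rangle = U\,|0\rangle^{\otimes L}
  \quad\Longrightarrow\quad
  \operatorname{depth}(U) \;\geq\; c\,L \quad\text{for some constant } c > 0,
\]
\[
  \text{with a matching upper bound } \operatorname{depth}(U) = O(L)
  \text{ achieved by an explicit construction.}
\]
```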

    Out-of-time-ordered correlators in many-body localized systems

    In many-body localized systems, propagation of information forms a light cone that grows logarithmically with time. However, local changes in energy or other conserved quantities typically spread only within a finite distance. Is it possible to detect the logarithmic light cone generated by a local perturbation from the response of a local operator at a later time? We numerically calculate various correlators in the random-field Heisenberg chain. While the equilibrium retarded correlator A(t = 0)B(t > 0) is not sensitive to the unbounded information propagation, the out-of-time-ordered correlator A(t = 0)B(t > 0)A(t = 0)B(t > 0) can detect the logarithmic light cone. We relate out-of-time-ordered correlators to the Lieb-Robinson bound in many-body localized systems, and show how to detect the logarithmic light cone with retarded correlators in specially designed states. Furthermore, we study the temperature dependence of the logarithmic light cone using out-of-time-ordered correlators.
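    For readers who want to see what such a quantity looks like numerically, here is a minimal exact-diagonalization sketch (not code from the paper): it evaluates an infinite-temperature out-of-time-ordered correlator F(t) = Tr[A B(t) A B(t)] / 2^N for a small random-field Heisenberg chain. The chain length, disorder strength, and the operator choices A and B are illustrative assumptions.

```python
# Infinite-temperature OTOC for a tiny random-field Heisenberg chain,
# computed by dense exact diagonalization. Illustrative sketch only.
import numpy as np
from scipy.linalg import expm

N = 6                      # chain length (kept tiny so dense matrices are fine)
W = 8.0                    # disorder strength, deep in the strongly disordered regime
rng = np.random.default_rng(0)

sx = np.array([[0, 1], [1, 0]], dtype=complex) / 2
sy = np.array([[0, -1j], [1j, 0]], dtype=complex) / 2
sz = np.array([[1, 0], [0, -1]], dtype=complex) / 2
I2 = np.eye(2, dtype=complex)

def site_op(op, i):
    """Embed a single-site operator `op` at site i of the N-site chain."""
    mats = [I2] * N
    mats[i] = op
    out = mats[0]
    for m in mats[1:]:
        out = np.kron(out, m)
    return out

# H = sum_i S_i . S_{i+1} + sum_i h_i S^z_i with random fields h_i in [-W, W]
H = np.zeros((2**N, 2**N), dtype=complex)
for i in range(N - 1):
    for s in (sx, sy, sz):
        H += site_op(s, i) @ site_op(s, i + 1)
for i in range(N):
    H += rng.uniform(-W, W) * site_op(sz, i)

A = site_op(sz, 0)          # local probe at one end of the chain
B = site_op(sz, N - 1)      # local probe at the other end

for t in [1.0, 10.0, 100.0]:
    U = expm(-1j * H * t)
    Bt = U.conj().T @ B @ U                     # Heisenberg-picture B(t)
    F = np.trace(A @ Bt @ A @ Bt).real / 2**N   # infinite-temperature OTOC
    print(f"t = {t:>6.1f}   F(t) = {F:+.4f}")
```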

    Towards Free Data Selection with General-Purpose Models

    A desirable data selection algorithm can efficiently choose the most informative samples to maximize the utility of limited annotation budgets. However, current approaches, represented by active learning methods, typically follow a cumbersome pipeline that repeatedly iterates between time-consuming model training and batch data selection. In this paper, we challenge this status quo by designing a distinct data selection pipeline that utilizes existing general-purpose models to select data from various datasets with a single-pass inference, without the need for additional training or supervision. A novel free data selection (FreeSel) method is proposed following this new pipeline. Specifically, we define semantic patterns extracted from intermediate features of the general-purpose model to capture subtle local information in each image. We then enable the selection of all data samples in a single pass through distance-based sampling at the fine-grained semantic pattern level. FreeSel bypasses the heavy batch selection process, achieving a significant improvement in efficiency and running 530x faster than existing active learning methods. Extensive experiments verify the effectiveness of FreeSel on various computer vision tasks. Our code is available at https://github.com/yichen928/FreeSel. Comment: accepted by NeurIPS 202
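    The single-pass, distance-based selection step can be pictured with a short sketch. The snippet below is not the FreeSel implementation (see the linked repository); it only illustrates greedy farthest-point selection over pre-extracted per-image feature vectors, with the feature shape and budget chosen arbitrarily.

```python
# Greedy farthest-point selection over feature vectors: a hedged stand-in for
# "distance-based sampling" on features from a frozen general-purpose model.
import numpy as np

def distance_based_selection(features: np.ndarray, budget: int, seed: int = 0):
    """Greedily pick `budget` samples that are far apart in feature space."""
    n = features.shape[0]
    rng = np.random.default_rng(seed)
    selected = [int(rng.integers(n))]              # random seed point
    # distance of every sample to its nearest already-selected sample
    d = np.linalg.norm(features - features[selected[0]], axis=1)
    for _ in range(budget - 1):
        nxt = int(np.argmax(d))                    # farthest from the current pool
        selected.append(nxt)
        d = np.minimum(d, np.linalg.norm(features - features[nxt], axis=1))
    return selected

# Usage: `feats` would come from a single forward pass of a frozen backbone
# (one feature vector per image); here it is random data for illustration.
feats = np.random.default_rng(1).normal(size=(1000, 256)).astype(np.float32)
picked = distance_based_selection(feats, budget=50)
print(len(picked), "samples selected, e.g.", picked[:5])
```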

    DIRV: Dense Interaction Region Voting for End-to-End Human-Object Interaction Detection

    In recent years, human-object interaction (HOI) detection has achieved impressive advances. However, conventional two-stage methods are usually slow in inference. On the other hand, existing one-stage methods mainly focus on the union regions of interactions, which introduces unnecessary visual information that disturbs HOI detection. To tackle the problems above, we propose DIRV, a novel one-stage HOI detection approach based on a new concept called the interaction region. Unlike previous methods, our approach concentrates on densely sampled interaction regions across different scales for each human-object pair, so as to capture the subtle visual features that are most essential to the interaction. Moreover, to compensate for the detection flaws of a single interaction region, we introduce a novel voting strategy that makes full use of the overlapping interaction regions in place of conventional Non-Maximal Suppression (NMS). Extensive experiments on two popular benchmarks, V-COCO and HICO-DET, show that our approach outperforms existing state-of-the-art methods by a large margin, with the highest inference speed and the lightest network architecture. We achieved 56.1 mAP on V-COCO without additional input. Our code is publicly available at: https://github.com/MVIG-SJTU/DIRV Comment: Paper is accepted. Code available at: https://github.com/MVIG-SJTU/DIRV
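    To make the voting idea concrete, the sketch below fuses overlapping detections by score-weighted averaging instead of discarding them with hard NMS. It is a loose illustration under our own assumptions (IoU threshold, weighting scheme), not the DIRV voting strategy itself.

```python
# Score-weighted box voting over clusters of overlapping detections,
# shown here as a generic alternative to hard NMS. Illustrative sketch only.
import numpy as np

def iou(box, boxes):
    """IoU of one box [x1, y1, x2, y2] against an array of boxes."""
    x1 = np.maximum(box[0], boxes[:, 0]); y1 = np.maximum(box[1], boxes[:, 1])
    x2 = np.minimum(box[2], boxes[:, 2]); y2 = np.minimum(box[3], boxes[:, 3])
    inter = np.clip(x2 - x1, 0, None) * np.clip(y2 - y1, 0, None)
    area_a = (box[2] - box[0]) * (box[3] - box[1])
    area_b = (boxes[:, 2] - boxes[:, 0]) * (boxes[:, 3] - boxes[:, 1])
    return inter / (area_a + area_b - inter + 1e-9)

def box_voting(boxes, scores, iou_thr=0.5):
    """Fuse each cluster of overlapping boxes by score-weighted averaging."""
    order = np.argsort(scores)[::-1]
    boxes, scores = boxes[order], scores[order]
    fused, used = [], np.zeros(len(boxes), dtype=bool)
    for i in range(len(boxes)):
        if used[i]:
            continue
        group = (iou(boxes[i], boxes) >= iou_thr) & ~used
        w = scores[group][:, None]
        fused.append((boxes[group] * w).sum(0) / w.sum())   # weighted-average box
        used |= group
    return np.stack(fused)

# Usage with a few synthetic, heavily overlapping detections:
boxes = np.array([[10, 10, 50, 50], [12, 11, 52, 49], [200, 200, 240, 260]], float)
scores = np.array([0.9, 0.7, 0.8])
print(box_voting(boxes, scores))
```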

    Learning to Purify Noisy Labels via Meta Soft Label Corrector

    Recent deep neural networks (DNNs) can easily overfit to biased training data with noisy labels. A label correction strategy is commonly used to alleviate this issue by identifying suspected noisy labels and then correcting them. Current approaches to correcting corrupted labels usually need certain pre-defined label correction rules or manually preset hyper-parameters. These fixed settings make them hard to apply in practice, since accurate label correction usually depends on the concrete problem, the training data, and the temporal information hidden in the dynamic iterations of the training process. To address this issue, we propose a meta-learning model which estimates soft labels through a meta-gradient descent step under the guidance of noise-free meta data. By viewing the label correction procedure as a meta-process and using a meta-learner to automatically correct labels, we can adaptively obtain rectified soft labels iteratively according to the current training problem without manually preset hyper-parameters. Besides, our method is model-agnostic and can be combined with any existing model with ease. Comprehensive experiments substantiate the superiority of our method on both synthetic and real-world problems with noisy labels, compared with current SOTA label correction strategies. Comment: 12 pages, 6 figures
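    The flavor of such a meta-gradient step can be conveyed with a toy sketch: a scalar corrector parameter mixes the given noisy one-hot label with the model's own prediction, a virtual SGD step is taken with the corrected soft labels, and the corrector is updated by the gradient of the loss on a small clean meta set. Everything here (linear model, single mixing parameter, synthetic data) is an illustrative assumption, not the paper's architecture.

```python
# Toy bilevel update: learn a soft-label mixing coefficient by backpropagating
# through one virtual SGD step, using a small clean ("meta") batch. Sketch only.
import torch
import torch.nn.functional as F

torch.manual_seed(0)
D, C = 20, 3
inner_lr, outer_lr, meta_lr = 0.5, 0.5, 0.1

# Synthetic data: a noisy training batch and a small clean meta batch.
x_tr = torch.randn(256, D)
y_true = (x_tr[:, 0] > 0).long() + (x_tr[:, 1] > 0).long()         # classes 0..2
flip = torch.rand(256) < 0.4
y_noisy = torch.where(flip, torch.randint(0, C, (256,)), y_true)    # 40% label noise
x_meta = torch.randn(64, D)
y_meta = (x_meta[:, 0] > 0).long() + (x_meta[:, 1] > 0).long()      # clean labels

W = torch.zeros(D, C, requires_grad=True)          # linear classifier parameters
alpha_logit = torch.zeros(1, requires_grad=True)   # soft-label mixing parameter

for step in range(200):
    alpha = torch.sigmoid(alpha_logit)
    logits = x_tr @ W
    # Corrected soft label: mix the given one-hot with the model's own prediction.
    y_soft = (1 - alpha) * F.one_hot(y_noisy, C).float() + alpha * logits.softmax(-1).detach()

    # Virtual SGD step on the soft-label loss, keeping the graph for the meta step.
    train_loss = -(y_soft * logits.log_softmax(-1)).sum(-1).mean()
    gW = torch.autograd.grad(train_loss, W, create_graph=True)[0]
    W_virtual = W - inner_lr * gW

    # Meta loss on clean data drives the update of the corrector parameter.
    meta_loss = F.cross_entropy(x_meta @ W_virtual, y_meta)
    g_alpha = torch.autograd.grad(meta_loss, alpha_logit)[0]

    with torch.no_grad():                  # ordinary parameter updates
        W -= outer_lr * gW
        alpha_logit -= meta_lr * g_alpha

    if step % 50 == 0:
        print(f"step {step:3d}  meta_loss {meta_loss.item():.3f}  alpha {alpha.item():.2f}")
```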

    Exploring and Exploiting Uncertainty for Incomplete Multi-View Classification

    Classifying incomplete multi-view data is inevitable, since arbitrary view missing widely exists in real-world applications. Although great progress has been achieved, existing incomplete multi-view methods still struggle to obtain trustworthy predictions due to the inherently high uncertainty of missing views. First, a missing view is highly uncertain, so it is not reasonable to provide a single deterministic imputation. Second, the quality of the imputed data itself is highly uncertain. To explore and exploit this uncertainty, we propose an Uncertainty-induced Incomplete Multi-View Data Classification (UIMC) model to classify incomplete multi-view data within a stable and reliable framework. We construct a distribution and sample from it multiple times to characterize the uncertainty of missing views, and adaptively utilize the samples according to their quality. Accordingly, the proposed method realizes more perceivable imputation and controllable fusion. Specifically, we model each missing view with a distribution conditioned on the available views, thereby introducing uncertainty. Then an evidence-based fusion strategy is employed to guarantee the trustworthy integration of the imputed views. Extensive experiments are conducted on multiple benchmark datasets, and our method establishes state-of-the-art results in terms of both classification performance and trustworthiness. Comment: CVP
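    The two ingredients named in the abstract, sampling several imputations for a missing view and fusing per-view evidence, can be sketched as follows. The toy evidence model, the imputation distribution, and the Dempster-Shafer-style combination rule (borrowed from evidential multi-view fusion work) are illustrative assumptions, not the exact UIMC formulation.

```python
# Sample several imputations for a missing view, down-weight low-quality draws,
# and fuse per-view Dirichlet evidence with a Dempster-Shafer-style rule. Sketch only.
import numpy as np

C = 3                                   # number of classes
rng = np.random.default_rng(0)

def opinion(evidence):
    """Dirichlet evidence -> (belief masses, uncertainty mass)."""
    alpha = evidence + 1.0
    S = alpha.sum()
    return evidence / S, C / S

def combine(b1, u1, b2, u2):
    """Reduced Dempster-style combination of two subjective opinions."""
    conflict = b1.sum() * b2.sum() - (b1 * b2).sum()   # sum_{i != j} b1_i * b2_j
    k = 1.0 - conflict
    b = (b1 * b2 + b1 * u2 + b2 * u1) / k
    u = u1 * u2 / k
    return b, u

# View 1 is observed; its evidence would come from a per-view evidential head.
e_obs = np.array([8.0, 1.0, 0.5])
b_fused, u_fused = opinion(e_obs)

# View 2 is missing: draw several imputations from an assumed conditional
# distribution, and shrink the evidence of high-uncertainty (low-quality) draws.
for _ in range(5):
    e_imp = rng.gamma(shape=2.0, scale=1.5, size=C)    # stand-in evidential output
    b_i, u_i = opinion(e_imp)
    b_i, u_i = opinion(e_imp * (1.0 - u_i))            # trust confident draws more
    b_fused, u_fused = combine(b_fused, u_fused, b_i, u_i)

print("fused beliefs:", np.round(b_fused, 3), " uncertainty:", round(float(u_fused), 3))
```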