173 research outputs found
Simulating Large Quantum Circuits on a Small Quantum Computer
Limited quantum memory is one of the most important constraints for near-term
quantum devices. Understanding whether a small quantum computer can simulate a
larger quantum system, or execute an algorithm requiring more qubits than
available, is of both theoretical and practical importance. In this Letter, we
introduce cluster parameters $K$ and $d$ of a quantum circuit. The tensor
network of such a circuit can be decomposed into clusters of size at most $d$
with at most $K$ qubits of inter-cluster quantum communication. We propose a
cluster simulation scheme that can simulate any $(K, d)$-clustered quantum
circuit on a $d$-qubit machine in time roughly $2^{O(K)}$, with further
speedups possible when taking more fine-grained circuit structure into account.
We show how our scheme can be used to simulate clustered quantum systems --
such as large molecules -- that can be partitioned into multiple significantly
smaller clusters with weak interactions among them. By using a suitable
clustered ansatz, we also experimentally demonstrate that a quantum variational
eigensolver can still achieve the desired performance for estimating the energy
of the BeH$_2$ molecule while running on a physical quantum device with half
the number of required qubits.
Comment: Codes are available at https://github.com/TianyiPeng/Partiton_VQ
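As a rough illustration of the clustered tensor-network view (not the authors' code, and sidestepping the channel decomposition that gives the $2^{O(K)}$ cost), the following NumPy sketch splits a toy 3-qubit circuit into two 2-qubit clusters sharing a single cut wire ($K = 1$, $d = 2$), simulates each cluster separately, and recombines the results by summing over the cut-wire basis. The gates and helper function are invented for the demo.
```python
# Hypothetical toy sketch: a 3-qubit circuit split into two 2-qubit clusters
# that share one cut wire, so each cluster alone fits on a 2-qubit simulator.
import numpy as np

def random_unitary(n, seed):
    """Random n x n unitary (demo gates only), via QR of a complex Gaussian."""
    rng = np.random.default_rng(seed)
    q, r = np.linalg.qr(rng.normal(size=(n, n)) + 1j * rng.normal(size=(n, n)))
    return q * (np.diag(r) / np.abs(np.diag(r)))

U_A = random_unitary(4, 1)   # cluster A acts on qubits (0, 1)
U_B = random_unitary(4, 2)   # cluster B acts on qubits (1, 2); qubit 1 is the cut wire

# Reference: brute-force 3-qubit simulation of U_B (on q1, q2) after U_A (on q0, q1).
psi = np.zeros((2, 2, 2), dtype=complex); psi[0, 0, 0] = 1.0
A = U_A.reshape(2, 2, 2, 2)                       # indices (out0, out1, in0, in1)
B = U_B.reshape(2, 2, 2, 2)                       # indices (out1, out2, in1, in2)
psi = np.einsum('abij,ijk->abk', A, psi)
psi = np.einsum('bcjk,ajk->abc', B, psi)

# Clustered: run each 2-qubit cluster separately, then contract over the cut wire.
a_out = U_A[:, 0].reshape(2, 2)                   # cluster A on |00>, indexed (q0, cut)
b_out = np.stack([U_B[:, 2 * e].reshape(2, 2)     # cluster B on |e, 0>, indexed (q1, q2)
                  for e in (0, 1)])
clustered = np.einsum('ae,ebc->abc', a_out, b_out)

assert np.allclose(psi, clustered)                # the two simulations agree
```
In the full scheme, recombining the results across $K$ cut qubits is what drives the roughly $2^{O(K)}$ classical overhead.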
Contrastive Clustering
In this paper, we propose a one-stage online clustering method called
Contrastive Clustering (CC), which explicitly performs instance- and
cluster-level contrastive learning. To be specific, for a given dataset, the
positive and negative instance pairs are constructed through data augmentations
and then projected into a feature space. Therein, instance- and
cluster-level contrastive learning are conducted in the row and column space,
respectively, by maximizing the similarities of positive pairs while minimizing
those of negative ones. Our key observation is that the rows of the feature
matrix could be regarded as soft labels of instances, and accordingly the
columns could be further regarded as cluster representations. By simultaneously
optimizing the instance- and cluster-level contrastive loss, the model jointly
learns representations and cluster assignments in an end-to-end manner.
Extensive experimental results show that CC remarkably outperforms 17
competitive clustering methods on six challenging image benchmarks. In
particular, CC achieves an NMI of 0.705 (0.431) on the CIFAR-10 (CIFAR-100)
dataset, which is up to a 19% (39%) performance improvement over the best
baseline.
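A minimal PyTorch-style sketch of the column-space (cluster-level) half of this idea, not the official CC implementation; the function name, temperature, and batch sizes are illustrative assumptions.
```python
# Hypothetical sketch: contrast the columns of two soft cluster-assignment
# matrices so that each cluster in one augmented view is pulled toward the
# same cluster in the other view.
import torch
import torch.nn.functional as F

def column_contrastive_loss(p1, p2, temperature=0.5):
    """p1, p2: (N, C) softmax outputs for two augmentations of the same batch.
    Each column is treated as a cluster representation to be matched."""
    c1 = F.normalize(p1.t(), dim=1)          # (C, N) cluster representations, view 1
    c2 = F.normalize(p2.t(), dim=1)          # (C, N) cluster representations, view 2
    reps = torch.cat([c1, c2], dim=0)        # (2C, N)
    sim = reps @ reps.t() / temperature      # pairwise cosine similarities
    sim.fill_diagonal_(float('-inf'))        # a column is not its own positive
    # The positive for cluster k in view 1 is cluster k in view 2, and vice versa.
    targets = torch.cat([torch.arange(c1.shape[0]) + c1.shape[0],
                         torch.arange(c1.shape[0])])
    return F.cross_entropy(sim, targets)

# The instance-level loss is the same construction applied to the rows of the
# feature matrix instead of its columns.
p1 = torch.softmax(torch.randn(128, 10), dim=1)
p2 = torch.softmax(torch.randn(128, 10), dim=1)
loss = column_contrastive_loss(p1, p2)
```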
Cross-modal Active Complementary Learning with Self-refining Correspondence
Image-text matching, which is fundamental to understanding the latent
correspondence between visual and textual modalities, has recently attracted
increasing attention from academia and industry. However, most existing
methods implicitly assume the training pairs are well-aligned while ignoring
the ubiquitous annotation noise, a.k.a. noisy correspondence (NC), thereby
inevitably leading to a performance drop. Although some methods attempt to
address such noise, they still face two challenging problems: excessive
memorizing/overfitting and unreliable correction for NC, especially under high
noise. To address the two problems, we propose a generalized Cross-modal Robust
Complementary Learning framework (CRCL), which benefits from a novel Active
Complementary Loss (ACL) and an efficient Self-refining Correspondence
Correction (SCC) to improve the robustness of existing methods. Specifically,
ACL exploits active and complementary learning losses to reduce the risk of
providing erroneous supervision, leading to theoretically and experimentally
demonstrated robustness against NC. SCC utilizes multiple self-refining
processes with momentum correction to enlarge the receptive field for
correcting correspondences, thereby alleviating error accumulation and
achieving accurate and stable corrections. We carry out extensive experiments
on three image-text benchmarks, i.e., Flickr30K, MS-COCO, and CC152K, to verify
the superior robustness of our CRCL against synthetic and real-world noisy
correspondences.
Comment: This paper is accepted by NeurIPS 202
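One plausible reading of the momentum-based self-refinement, shown as a hypothetical NumPy sketch (not the authors' implementation): the soft correspondence label of each pair is moved only partway toward the model's current prediction at every pass, so a single bad prediction cannot flip a label outright.
```python
# Hypothetical sketch of momentum-style label refinement for noisy pairs.
import numpy as np

def refine_labels(soft_labels, predicted_match_prob, momentum=0.9):
    """soft_labels, predicted_match_prob: arrays in [0, 1], one entry per pair."""
    return momentum * soft_labels + (1.0 - momentum) * predicted_match_prob

labels = np.ones(4)                          # initially trust every annotated pair
preds = np.array([0.95, 0.10, 0.85, 0.20])   # model's current match probabilities
for _ in range(10):                          # repeated self-refining passes
    labels = refine_labels(labels, preds)
print(labels.round(2))                       # mismatched pairs drift toward 0
```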
A quantum-inspired tensor network method for constrained combinatorial optimization problems
Combinatorial optimization is of general interest for both theoretical study
and real-world applications. Fast-developing quantum algorithms provide a
different perspective on solving combinatorial optimization problems. In this
paper, we propose a quantum-inspired algorithm for general locally constrained
combinatorial optimization problems by encoding the constraints directly into a
tensor network state. The optimal solution can then be found efficiently by
borrowing imaginary time evolution from quantum many-body physics. We
demonstrate our algorithm with the open-pit mining problem numerically. Our
computational results show the effectiveness of this construction and potential
applications in further studies of general combinatorial optimization
problems.
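To make the imaginary-time-evolution idea concrete, here is a hypothetical brute-force NumPy sketch on a toy constrained problem; it enumerates all bitstrings instead of using a tensor-network ansatz, and the objective, constraint, and step size are invented for the demo.
```python
# Hypothetical sketch: imaginary-time evolution of a state vector over all
# bitstrings of a toy constrained problem. Constraints are imposed by zeroing
# infeasible amplitudes, loosely mimicking constraint encoding in the ansatz.
import itertools
import numpy as np

n = 6
configs = np.array(list(itertools.product([0, 1], repeat=n)))
cost = -configs.sum(axis=1) + 3.0 * configs[:, 0] * configs[:, 1]   # toy objective
feasible = configs.sum(axis=1) <= 3                                 # toy local constraint

psi = feasible.astype(float)            # start uniform over the feasible set only
psi /= np.linalg.norm(psi)
tau = 0.5
for _ in range(100):                    # imaginary-time evolution exp(-tau * H)
    psi *= np.exp(-tau * cost)
    psi *= feasible                     # project back onto the constraint
    psi /= np.linalg.norm(psi)

best = np.argmax(psi ** 2)              # amplitude concentrates on a minimizer
print(configs[best], cost[best])
```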
Near-Optimal Entrywise Anomaly Detection for Low-Rank Matrices with Sub-Exponential Noise
We study the problem of identifying anomalies in a low-rank matrix observed
with sub-exponential noise, motivated by applications in retail and inventory
management. State-of-the-art approaches to anomaly detection in low-rank
matrices apparently fall short, since they require that non-anomalous entries
be observed with vanishingly small noise (which is not the case in our problem,
and indeed in many applications). So motivated, we propose a conceptually
simple entrywise approach to anomaly detection in low-rank matrices. Our
approach accommodates a general class of probabilistic anomaly models. We
extend recent work on entrywise error guarantees for matrix completion,
establishing such guarantees for sub-exponential matrices, where in addition to
missing entries, a fraction of entries are corrupted by an (also unknown)
anomaly model. Viewing anomaly detection as a classification task, to the
best of our knowledge, we are the first to achieve the minimax optimal
detection rate (up to log factors). Using data from a massive consumer goods
retailer, we show that our approach provides significant improvements over
incumbent approaches to anomaly detection.
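A toy sketch of the entrywise decision rule (not the paper's estimator, which also handles missing entries and sub-exponential noise with provable guarantees): fit a rank-r approximation, then flag entries whose residual is large relative to an assumed noise scale. The sizes, corruption model, and 5-sigma threshold below are illustrative assumptions.
```python
# Hypothetical sketch of entrywise anomaly flagging on a fully observed,
# Gaussian-noise low-rank matrix.
import numpy as np

rng = np.random.default_rng(0)
n, r, sigma = 200, 3, 0.1
M = rng.normal(size=(n, r)) @ rng.normal(size=(r, n)) / np.sqrt(r)   # low-rank signal
X = M + sigma * rng.normal(size=(n, n))                              # noisy observation
anomalies = rng.random((n, n)) < 0.01                                # 1% corrupted entries
X[anomalies] += rng.choice([-1, 1], size=anomalies.sum()) * 2.0

U, s, Vt = np.linalg.svd(X, full_matrices=False)
X_hat = (U[:, :r] * s[:r]) @ Vt[:r]                 # rank-r denoised estimate
residual = np.abs(X - X_hat)
flagged = residual > 5 * sigma                      # entrywise threshold (noise scale assumed known)

precision = (flagged & anomalies).sum() / max(flagged.sum(), 1)
recall = (flagged & anomalies).sum() / anomalies.sum()
print(f"precision={precision:.2f} recall={recall:.2f}")
```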
Adaptive Meta-learner via Gradient Similarity for Few-shot Text Classification
Few-shot text classification aims to classify text given only a few labeled
examples. Most previous methods adopt optimization-based meta-learning to
adapt across the task distribution. However, because they neglect the mismatch
between the small number of samples and complex models, as well as the
distinction between useful and useless task features, these methods suffer
from overfitting. To address this issue, we propose a novel Adaptive
Meta-learner via Gradient Similarity (AMGS) method to improve the model
generalization ability to a new task. Specifically, the proposed AMGS
alleviates the overfitting based on two aspects: (i) acquiring the potential
semantic representation of samples and improving model generalization through
the self-supervised auxiliary task in the inner loop, and (ii) leveraging the
adaptive meta-learner via gradient similarity to add constraints on the
gradient obtained by the base-learner in the outer loop. Moreover, we make a
systematic analysis of the influence of regularization on the entire framework.
Experimental results on several benchmarks demonstrate that the proposed AMGS
consistently improves few-shot text classification performance compared with
the state-of-the-art optimization-based meta-learning approaches.
Comment: COLING 202
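One simple way such a gradient-similarity constraint could look, written as a hypothetical NumPy sketch rather than the authors' code: the auxiliary (self-supervised) gradient is gated by its cosine similarity with the base-learner's task gradient, so conflicting directions are suppressed.
```python
# Hypothetical sketch: gate an auxiliary gradient by its agreement with the
# main-task gradient before combining them into one update direction.
import numpy as np

def combine_gradients(g_main, g_aux):
    """Flattened gradient vectors; returns the combined update direction."""
    cos = g_main @ g_aux / (np.linalg.norm(g_main) * np.linalg.norm(g_aux) + 1e-12)
    weight = max(cos, 0.0)               # drop the auxiliary gradient when it conflicts
    return g_main + weight * g_aux

g_main = np.array([1.0, 0.5, -0.2])
g_aux_agree = np.array([0.8, 0.4, -0.1])
g_aux_conflict = -g_aux_agree
print(combine_gradients(g_main, g_aux_agree))     # auxiliary signal is kept
print(combine_gradients(g_main, g_aux_conflict))  # auxiliary signal is suppressed
```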