Implementing universal nonadiabatic holonomic quantum gates with transmons
Geometric phases are well known to be resilient to noise in quantum
evolution. Holonomic quantum gates therefore offer a robust route toward
universal quantum computation, since these gates are induced by non-Abelian
geometric phases. Here we propose and elaborate how to
efficiently implement universal nonadiabatic holonomic quantum gates on simpler
superconducting circuits, with a single transmon serving as a qubit. In our
proposal, an arbitrary single-qubit holonomic gate can be realized in a
single-loop scenario, by varying the amplitudes and phase difference of two
microwave fields resonantly coupled to a transmon, while nontrivial two-qubit
holonomic gates may be generated with a transmission-line resonator being
simultaneously coupled to the two target transmons in an effective resonant
way. Moreover, our scenario may readily be scaled up to a two-dimensional
lattice configuration, which is able to support large scalable quantum
computation, paving the way for practically implementing universal nonadiabatic
holonomic quantum computation with superconducting circuits.
Comment: v3 Appendix added; v4 published version; v5 published version with correction
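As a schematic illustration of the single-loop scenario (our own simplified parametrization, not the paper's derivation): in standard nonadiabatic holonomic schemes, the single-qubit gate on the computational subspace can be written as a Hermitian unitary whose angle theta is set by the ratio of the two microwave amplitudes and whose phase phi is their relative phase.

```python
import numpy as np

def holonomic_gate(theta, phi):
    """Illustrative single-loop nonadiabatic holonomic single-qubit gate.

    theta is set by the ratio of the two microwave-drive amplitudes,
    phi by their phase difference (hypothetical parametrization for
    illustration only).
    """
    return np.array([
        [np.cos(theta),                     np.exp(-1j * phi) * np.sin(theta)],
        [np.exp(1j * phi) * np.sin(theta), -np.cos(theta)],
    ])

# theta = pi/4, phi = 0 yields the Hadamard gate.
U = holonomic_gate(np.pi / 4, 0.0)
assert np.allclose(U @ U.conj().T, np.eye(2))  # unitary
assert np.allclose(U, U.conj().T)              # Hermitian, so U @ U = I
```

Because the gate is both Hermitian and unitary, a single loop is an involution; arbitrary single-qubit rotations follow by composing loops with different (theta, phi).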
Learn to Unlearn: A Survey on Machine Unlearning
Machine Learning (ML) models have been shown to potentially leak sensitive
information, thus raising privacy concerns in ML-driven applications. This
inspired recent research on removing the influence of specific data samples
from a trained ML model. Such efficient removal would enable ML systems to
comply with the "right to be forgotten" enshrined in much recent legislation,
and could also mitigate performance degradation caused by low-quality or
poisoned samples. In that context,
machine unlearning methods have been proposed to erase the contributions of
designated data samples on models, as an alternative to the often impracticable
approach of retraining models from scratch. This article presents a
comprehensive review of recent machine unlearning techniques, verification
mechanisms, and potential attacks. We further highlight emerging challenges and
prospective research directions (e.g. resilience and fairness concerns). We aim
for this paper to provide valuable resources for integrating privacy, equity,
and resilience into ML systems and help them "learn to unlearn".
Comment: 10 pages, 5 figures, 1 table
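The core idea surveyed here can be illustrated with a toy example (our own, not from the paper): for models whose parameters are additive statistics of the training data, a designated sample's contribution can be removed exactly in O(1), matching retraining from scratch without the deleted point.

```python
class MeanModel:
    """Toy model whose only parameter is the mean of its training data.

    Illustrates exact machine unlearning: because the parameter is an
    additive statistic, a sample's influence can be inverted in O(1)
    instead of retraining from scratch.
    """

    def __init__(self):
        self.n = 0
        self.mean = 0.0

    def learn(self, x):
        self.n += 1
        self.mean += (x - self.mean) / self.n

    def unlearn(self, x):
        # Invert the incremental update for a previously seen sample x.
        if self.n == 1:
            self.n, self.mean = 0, 0.0
            return
        self.mean = (self.mean * self.n - x) / (self.n - 1)
        self.n -= 1

m = MeanModel()
for x in [1.0, 2.0, 6.0]:
    m.learn(x)
m.unlearn(6.0)
assert abs(m.mean - 1.5) < 1e-12  # identical to retraining on [1.0, 2.0]
```

Real unlearning methods tackle the hard case where parameters are *not* additive (e.g. deep networks), which is why approximate removal, verification mechanisms, and attacks on them are the survey's focus.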
Hierarchical Integration Diffusion Model for Realistic Image Deblurring
Diffusion models (DMs) have recently been introduced in image deblurring and
exhibited promising performance, particularly in terms of details
reconstruction. However, the diffusion model requires a large number of
inference iterations to recover the clean image from pure Gaussian noise, which
consumes massive computational resources. Moreover, the distribution
synthesized by the diffusion model is often misaligned with the target image,
limiting performance on distortion-based metrics. To address the above
issues, we propose the Hierarchical Integration Diffusion Model (HI-Diff), for
realistic image deblurring. Specifically, we perform the DM in a highly
compacted latent space to generate the prior feature for the deblurring
process. The deblurring process is implemented by a regression-based method to
obtain better distortion accuracy. Meanwhile, the highly compact latent space
ensures the efficiency of the DM. Furthermore, we design the hierarchical
integration module to fuse the prior into the regression-based model from
multiple scales, enabling better generalization in complex blurry scenarios.
Comprehensive experiments on synthetic and real-world blur datasets demonstrate
that our HI-Diff outperforms state-of-the-art methods. Code and trained models
are available at https://github.com/zhengchen1999/HI-Diff
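A rough sketch of the fusion idea (our own simplification, not the released code): a compact prior feature produced by the latent diffusion model is injected into the regression network's feature maps at several scales via cross-attention, with queries from the image features and keys/values from the prior.

```python
import numpy as np

rng = np.random.default_rng(0)

def fuse_prior(feat, prior, w_q, w_k, w_v):
    """Fuse a compact prior (tokens x dim) into an image feature map
    (h x w x dim) via single-head cross-attention (simplified sketch)."""
    h, w, d = feat.shape
    q = feat.reshape(-1, d) @ w_q            # queries from image features
    k = prior @ w_k                          # keys from the compact prior
    v = prior @ w_v                          # values from the compact prior
    attn = q @ k.T / np.sqrt(d)
    attn = np.exp(attn - attn.max(axis=1, keepdims=True))
    attn /= attn.sum(axis=1, keepdims=True)  # row-wise softmax
    return feat + (attn @ v).reshape(h, w, d)  # residual injection

d, tokens = 8, 4
prior = rng.normal(size=(tokens, d))         # compact latent prior feature
w_q, w_k, w_v = (rng.normal(size=(d, d)) * 0.1 for _ in range(3))

# Inject the same prior at multiple (hypothetical) decoder scales.
for h_sz in (16, 8, 4):
    feat = rng.normal(size=(h_sz, h_sz, d))
    fused = fuse_prior(feat, prior, w_q, w_k, w_v)
    assert fused.shape == feat.shape
```

Because the prior has only a few tokens, attention cost stays small at every scale, which is consistent with the paper's emphasis on a highly compact latent space.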
Text-to-3D with Classifier Score Distillation
Text-to-3D generation has made remarkable progress recently, particularly
with methods based on Score Distillation Sampling (SDS), which leverage
pre-trained 2D diffusion models. While classifier-free guidance is widely
acknowledged to be crucial for successful optimization, it is usually treated
as an auxiliary trick rather than the most essential component. In this paper, we
re-evaluate the role of classifier-free guidance in score distillation and
discover a surprising finding: the guidance alone is enough for effective
text-to-3D generation tasks. We name this method Classifier Score Distillation
(CSD), which can be interpreted as using an implicit classification model for
generation. This new perspective reveals new insights for understanding
existing techniques. We validate the effectiveness of CSD across a variety of
text-to-3D tasks including shape generation, texture synthesis, and shape
editing, achieving results superior to those of state-of-the-art methods. Our
project page is https://xinyu-andy.github.io/Classifier-Score-Distillation
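The finding can be sketched numerically (our own illustration, with simplified notation and weighting constants omitted): the SDS gradient built from a guided noise prediction decomposes into a denoising residual plus a classifier-free guidance difference, and CSD keeps only that guidance term.

```python
import numpy as np

rng = np.random.default_rng(1)

def sds_grad(eps_cond, eps_uncond, eps, w):
    """SDS gradient direction with classifier-free guidance weight w."""
    eps_hat = eps_uncond + w * (eps_cond - eps_uncond)  # guided prediction
    return eps_hat - eps

def csd_grad(eps_cond, eps_uncond, w):
    """CSD keeps only the classifier-free guidance difference."""
    return w * (eps_cond - eps_uncond)

shape = (4, 4)
eps_cond, eps_uncond, eps = (rng.normal(size=shape) for _ in range(3))
w = 7.5  # typical guidance scale (assumed value for illustration)

# SDS decomposes exactly into a denoising residual plus the CSD term.
residual = eps_uncond - eps
assert np.allclose(sds_grad(eps_cond, eps_uncond, eps, w),
                   residual + csd_grad(eps_cond, eps_uncond, w))
```

With large w, the guidance term dominates the residual, which is consistent with the paper's claim that the guidance alone suffices for text-to-3D optimization.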