Efficient Online Quantum Generative Adversarial Learning Algorithms with Applications
The exploration of quantum algorithms that possess quantum advantages is a
central topic in quantum computation and quantum information processing. One
potential candidate in this area is quantum generative adversarial learning
(QuGAL), which conceptually has exponential advantages over classical
adversarial networks. However, the corresponding learning algorithm remains
elusive. In this paper, we propose the first quantum generative adversarial
learning algorithm, the quantum multiplicative matrix weight algorithm
(QMMW), which enables the efficient processing of fundamental tasks. The
computational complexity of QMMW is polynomially proportional to the number of
training rounds and logarithmically proportional to the input size. The core
concept of the proposed algorithm combines QuGAL with online learning. We
exploit the implementation of QuGAL with parameterized quantum circuits, and
provide numerical experiments on the task of entanglement testing for pure
states to support our claims.
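For context, the classical multiplicative weights update that underlies algorithms of this family can be sketched as follows. This is the standard classical method, not the quantum QMMW routine itself, and the loss values are illustrative assumptions:

```python
import numpy as np

def multiplicative_weights(losses, eta=0.1):
    """Classical multiplicative weights over experts.

    losses: (T, n) array; losses[t, i] in [0, 1] is expert i's loss in round t.
    Returns the final weight distribution over the n experts.
    """
    T, n = losses.shape
    w = np.ones(n) / n
    for t in range(T):
        w = w * np.exp(-eta * losses[t])  # shrink weights of lossy experts
        w = w / w.sum()                   # renormalize to a distribution
    return w

# expert 0 consistently incurs the smallest loss, so it dominates the weights
losses = np.tile([0.1, 0.9, 0.5], (50, 1))
w = multiplicative_weights(losses)
```

The per-round cost is linear in the number of experts; the quantum variant's claimed advantage is in how these weights are represented and updated.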
Implementable Quantum Classifier for Nonlinear Data
In this Letter, we propose a quantum machine learning scheme for the
classification of classical nonlinear data. The main ingredients of our method
are variational quantum perceptron (VQP) and a quantum generalization of
classical ensemble learning. Our VQP employs parameterized quantum circuits to
learn a Grover search (or amplitude amplification) operation with classical
optimization, and can achieve quadratic speedup in query complexity compared to
its classical counterparts. We show how the trained VQP can be used to predict
future data with reduced query complexity. Ultimately, a stronger nonlinear
classifier, the so-called quantum ensemble learning (QEL), can be established
by combining a set of weak VQPs produced with a subsampling method. The
subsampling method has two significant advantages. First, all weak VQPs
employed in QEL can be trained in parallel; therefore, the query complexity of
QEL is equal to that of each weak VQP multiplied by . Second, it
dramatically reduces the runtime complexity of the encoding circuits that map
classical data to a quantum state, because each subsampled dataset can be
significantly smaller than the original dataset given to QEL. This arguably
provides a satisfactory solution to one of the most criticized issues in
quantum machine learning proposals. To conclude, we perform two numerical
experiments for our VQP and QEL, implemented in Python with the pyQuil
library. Our experiments show
that excellent performance can be achieved using a very small quantum circuit
size that is implementable under current quantum hardware development.
Specifically, given a nonlinear synthetic dataset with features for each
example, the trained QEL can classify test examples sampled away from the
decision boundaries using single- and two-qubit quantum gates with accuracy.
Comment: 9 pages
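The subsampling ensemble described above resembles classical bagging. Here is a minimal classical sketch with decision stumps standing in for weak VQPs; the dataset and the weak learners are illustrative assumptions, not the paper's construction:

```python
import numpy as np

rng = np.random.default_rng(0)

# toy dataset standing in for the paper's synthetic data (an assumption)
X = rng.uniform(-1, 1, size=(400, 2))
y = (X[:, 0] > 0).astype(int)

def train_stump(Xs, ys):
    """Exhaustively fit a one-feature threshold classifier (a weak learner)."""
    best = (-1.0, 0, 0.0, 1)
    for f in range(Xs.shape[1]):
        for t in np.linspace(-1, 1, 21):
            for s in (0, 1):
                pred = np.where(Xs[:, f] > t, s, 1 - s)
                acc = (pred == ys).mean()
                if acc > best[0]:
                    best = (acc, f, t, s)
    return best[1:]

def predict(stumps, X):
    """Majority vote over the weak learners."""
    votes = np.mean(
        [np.where(X[:, f] > t, s, 1 - s) for f, t, s in stumps], axis=0
    )
    return (votes > 0.5).astype(int)

# each weak learner trains on a small subsample, so all can run in parallel
stumps = [
    train_stump(X[idx], y[idx])
    for idx in (rng.choice(len(X), size=40, replace=False) for _ in range(25))
]
acc = (predict(stumps, X) == y).mean()
```

The two advantages claimed in the abstract map directly onto this sketch: the loop over subsamples is embarrassingly parallel, and each learner only ever touches its 40-example subsample rather than the full dataset.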
The BUCEA Speaker Diarization System for the VoxCeleb Speaker Recognition Challenge 2022
This paper describes the BUCEA speaker diarization system for the 2022
VoxCeleb Speaker Recognition Challenge. VoxSRC-22 provides the development set
and test set of VoxConverse, and we mainly use the test set of VoxConverse for
parameter adjustment. Our system consists of several modules, including voice
activity detection (VAD), a speaker embedding extractor, clustering methods,
overlapping speech detection (OSD), and result fusion. Without considering
overlap, the DOVER-Lap method (DOVER is short for Diarization Output Voting
Error Reduction) was applied for system fusion, after which overlapping speech
detection and processing were carried out. Our best system achieves a diarization
error rate (DER) of 5.48% and a Jaccard error rate (JER) of 32.1% on the VoxSRC
2022 evaluation set.
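The headline DER metric can be illustrated with a small frame-level computation. This sketch omits the forgiveness collar and the optimal reference-to-hypothesis speaker mapping that real scoring tools apply:

```python
def der(ref, hyp):
    """Frame-level diarization error rate.

    ref, hyp: per-frame speaker labels, with None marking non-speech.
    DER = (missed speech + false alarm + speaker confusion) / reference speech.
    Real scorers additionally apply a collar around boundaries and find the
    optimal speaker mapping; both are omitted in this illustration.
    """
    missed = sum(r is not None and h is None for r, h in zip(ref, hyp))
    false_alarm = sum(r is None and h is not None for r, h in zip(ref, hyp))
    confusion = sum(
        r is not None and h is not None and r != h for r, h in zip(ref, hyp)
    )
    speech = sum(r is not None for r in ref)
    return (missed + false_alarm + confusion) / speech

ref = ["A", "A", "B", "B", None, "B"]
hyp = ["A", "B", "B", None, "A", "B"]
# one confusion, one miss, one false alarm over 5 reference speech frames
rate = der(ref, hyp)  # -> 0.6
```

JER differs in that it averages a Jaccard-style error per reference speaker, which is why the two numbers reported above can diverge so widely.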
Diffusion and influence of Nicolas Trigault S.J.'s Xiru ermu zi during the Ming and Qing dynasties
The Xiru ermu zi 西儒耳目資 (XREMZ) was a work completed in Chinese by the Jesuit Nicolas Trigault (Chinese: Jin Nige 金尼閣, 1577-1628) and published in the province of Shaanxi 陝西, China, in 1626. The work opened with a theoretical part setting out the romanization system: the Yi yin shou pu 譯引首譜. Two further volumes served two specific functions: finding a Chinese character from its pronunciation, through the volume Lie yinyun pu 列音韻譜; and establishing the pronunciation of a given Chinese character, through the volume Lie bianzheng pu 列邊正譜. The author presented in detail the theory of using Western letters to transcribe the sounds of Chinese characters. After its publication, the work was read and commented on by many Chinese literati. The aim of this study is to investigate the diffusion of the XREMZ in China during the Ming and Qing dynasties, and, secondly, to clarify the possible influence of this work on the linguistic thought of the Chinese literati of the period.
The thesis is divided into four parts.
The first part describes the historical context before and during the period in which the XREMZ was conceived.
The second part presents the XREMZ itself. This chapter covers the genesis of the XREMZ and the Chinese collaborators on the work. After a presentation of the work's three volumes, the principal terms established by Trigault in the XREMZ are presented and analyzed.
The third part examines the possible diffusion and influence of the XREMZ in China during the Ming and Qing dynasties. This chapter first discusses the surviving copies of the XREMZ and their owners. It then presents the opinions of Chinese literati active during the Ming and Qing dynasties. As far as possible, all the literati who commented or wrote on Trigault's work have been identified, in order to build a detailed picture of its influence among the educated classes. The chapter closes with a brief survey of modern linguists' assessments of the XREMZ.
The fourth part analyzes the data gathered in the previous part, seeking to determine the geographical origin of these literati and to trace the diffusion of the work geographically; after a synthesis, an attempt is made to characterize the audience the work addressed. Finally, it identifies which areas of the phonetic and phonological theories presented in the XREMZ were most closely analyzed by Chinese literati.
Coreset selection can accelerate quantum machine learning models with provable generalization
Quantum neural networks (QNNs) and quantum kernels are prominent models in
quantum machine learning, poised to leverage the capabilities of near-term
quantum computers to surmount classical machine learning challenges.
Nonetheless, limited training efficiency constrains both QNNs and quantum
kernels, curbing their efficacy when applied to extensive datasets. To
confront this concern, we present a unified
approach: coreset selection, aimed at expediting the training of QNNs and
quantum kernels by distilling a judicious subset from the original training
dataset. Furthermore, we analyze the generalization error bounds of QNNs and
quantum kernels when trained on such coresets, showing performance comparable
to that of training on the complete original dataset. Through
systematic numerical simulations, we illuminate the potential of coreset
selection in expediting tasks encompassing synthetic data classification,
identification of quantum correlations, and quantum compiling. Our work offers
a useful way to improve diverse quantum machine learning models with a
theoretical guarantee while reducing the training cost.
Comment: 25 pages, 7 figures
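As an illustration of the coreset idea, here is a classical greedy k-center selection in NumPy. This is a generic, well-known heuristic for distilling a representative subset; it is not necessarily the selection rule used in the paper:

```python
import numpy as np

def greedy_k_center(X, k, seed=0):
    """Greedy k-center selection: a classical coreset-style subsampling.

    Repeatedly adds the point farthest from the current selection, so every
    point ends up close to some selected point (a 2-approximation to k-center).
    """
    rng = np.random.default_rng(seed)
    chosen = [int(rng.integers(len(X)))]
    dist = np.linalg.norm(X - X[chosen[0]], axis=1)  # distance to selection
    while len(chosen) < k:
        nxt = int(np.argmax(dist))       # farthest remaining point
        chosen.append(nxt)
        dist = np.minimum(dist, np.linalg.norm(X - X[nxt], axis=1))
    return np.array(chosen)

X = np.random.default_rng(1).normal(size=(500, 4))
core_idx = greedy_k_center(X, 32)  # train on X[core_idx] instead of X
```

Training a QNN or quantum kernel on the 32 selected points instead of all 500 is what yields the speedup; the paper's generalization bounds quantify how little is lost by doing so.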
Problem-Dependent Power of Quantum Neural Networks on Multi-Class Classification
Quantum neural networks (QNNs) have become an important tool for
understanding the physical world, but their advantages and limitations are not
fully understood. Some QNNs with specific encoding methods can be efficiently
simulated by classical surrogates, while others with quantum memory may perform
better than classical classifiers. Here we systematically investigate the
problem-dependent power of quantum neural classifiers (QCs) on multi-class
classification tasks. Through the analysis of expected risk, a measure that
weighs the training loss and the generalization error of a classifier jointly,
we identify two key findings: first, the training loss dominates the power
rather than the generalization ability; second, QCs undergo a U-shaped risk
curve, in contrast to the double-descent risk curve of deep neural classifiers.
We also reveal the intrinsic connection between optimal QCs and the Helstrom
bound and the equiangular tight frame. Using these findings, we propose a
method that uses loss dynamics to probe whether a QC may be more effective than
a classical classifier on a particular learning task. Numerical results
demonstrate the effectiveness of our approach in explaining the superiority of
QCs over multilayer perceptrons on parity datasets and their limitations
relative to convolutional neural networks on image datasets. Our work sheds
light on the problem-dependent power of QNNs and offers a practical tool for
evaluating their potential merit.
Comment: Updated version. Published on PR
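The parity task on which QCs are compared with multilayer perceptrons can be constructed classically. A minimal sketch of an n-bit parity dataset follows; the paper's exact encoding and dataset sizes are not specified here:

```python
import numpy as np
from itertools import product

def parity_dataset(n_bits):
    """All 2**n_bits bit-strings, labeled by the parity (XOR) of their bits.

    Parity is a classic hard case for shallow classical models: flipping any
    single bit flips the label, so no small subset of bits predicts it.
    """
    X = np.array(list(product([0, 1], repeat=n_bits)))
    y = X.sum(axis=1) % 2  # label = XOR of all bits
    return X, y

X, y = parity_dataset(4)
```

Exactly half of the bit-strings carry each label, and the label depends on every bit jointly, which is what makes parity a natural probe for the problem-dependent separation the abstract describes.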