Residual Matrix Product State for Machine Learning
Tensor networks, which originate from quantum physics, are emerging as an
efficient tool for both classical and quantum machine learning. Nevertheless,
a considerable accuracy gap remains between tensor network models and
sophisticated neural network models on classical machine learning tasks. In
this work, we combine the ideas of the matrix product state (MPS), the
simplest tensor network structure, and the residual neural network, and
propose the residual matrix product state (ResMPS). The ResMPS can be treated
as a network whose layers map "hidden" features to the outputs (e.g.,
classifications), with the variational parameters of the layers being
functions of the features of the samples (e.g., the pixels of images). This
differs from a neural network, whose layers map the features feed-forwardly
to the output. The ResMPS can be equipped with non-linear activations and
dropout layers, and it outperforms state-of-the-art tensor network models in
terms of efficiency, stability, and expressive power. Moreover, the ResMPS is
interpretable from the perspective of a polynomial expansion, in which
factorization machines and exponential machines naturally emerge. Our work
contributes to connecting and hybridizing neural and tensor networks, which
is crucial for further enhancing our understanding of the working mechanisms
and improving the performance of both models.
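
To make the described architecture concrete, below is a minimal numpy sketch
of a ResMPS-style forward pass. It assumes a simple residual update in which
each layer n carries a weight matrix W[n] whose action on the hidden vector h
is scaled by the n-th sample feature x[n]; the shapes, the tanh activation,
and all names (forward, hidden_dim, etc.) are illustrative choices, not the
paper's exact construction.

import numpy as np

rng = np.random.default_rng(0)

n_features = 28 * 28   # e.g. a flattened image (assumption)
hidden_dim = 32        # analogue of the MPS bond dimension (assumption)
n_classes = 10

# One weight matrix per layer/feature, an initial hidden vector,
# and a linear output head mapping the final hidden vector to classes.
W = rng.normal(scale=0.01, size=(n_features, hidden_dim, hidden_dim))
h0 = np.ones(hidden_dim) / np.sqrt(hidden_dim)
C = rng.normal(scale=0.01, size=(n_classes, hidden_dim))

def forward(x, activation=np.tanh):
    """Map one sample x (shape: n_features) to class scores."""
    h = h0
    for n in range(n_features):
        # The layer's parameters act on the hidden vector h, modulated
        # by the sample feature x[n]; h is then updated residually.
        h = h + activation(x[n] * (W[n] @ h))
    return C @ h

x = rng.uniform(size=n_features)   # a fake input sample
print(forward(x).shape)            # -> (10,)

Dropping the residual "h +" term recovers a plain MPS-like multiplicative
update, which is one way to see how the two ideas are hybridized.
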
Tensor networks for quantum machine learning
Originally developed for quantum theory, tensor networks have since been
established as a successful machine learning paradigm. Now they have been
ported back to the quantum realm in the emerging field of quantum machine
learning, to address problems that classical computers are unable to solve
efficiently. Their nature at the interface between physics and machine
learning makes tensor networks easily deployable on quantum computers. In
this review article, we shed light on one of the major architectures
considered predestined for variational quantum machine learning. In
particular, we discuss how layouts such as matrix product states (MPS),
projected entangled pair states (PEPS), tree tensor networks (TTNs), and the
multiscale entanglement renormalization ansatz (MERA) can be mapped to a
quantum computer, how they can be used for machine learning and data
encoding, and which implementation techniques improve their performance.
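
As a classical reference point for how such layouts are used for machine
learning and data encoding, the numpy sketch below contracts a random MPS
classifier with locally encoded features. The cos/sin feature map and the
label leg on the central tensor follow a common Stoudenmire–Schwab-style
construction; they are assumptions here, not the review's specific mapping
to a quantum circuit.

import numpy as np

rng = np.random.default_rng(1)

n_sites, phys_dim, bond_dim, n_classes = 8, 2, 4, 3

def encode(x):
    """Map each scalar feature in [0, 1] to a local 2-dim vector."""
    return np.stack([np.cos(np.pi * x / 2), np.sin(np.pi * x / 2)], axis=-1)

# MPS tensors A[n] with shape (left bond, physical, right bond); the
# boundary bonds have dimension 1. The tensor at the middle site is
# replaced by L, which carries an extra open "label" leg of size n_classes.
dims = [1] + [bond_dim] * (n_sites - 1) + [1]
A = [rng.normal(scale=0.3, size=(dims[n], phys_dim, dims[n + 1]))
     for n in range(n_sites)]
mid = n_sites // 2
L = rng.normal(scale=0.3, size=(dims[mid], phys_dim, n_classes, dims[mid + 1]))

def scores(x):
    """Contract the MPS with the encoded sample, left to right."""
    phi = encode(x)            # shape (n_sites, phys_dim)
    v = np.ones(1)             # left boundary vector
    for n in range(n_sites):
        if n == mid:
            # label tensor: the class leg 'c' stays open from here on
            v = np.einsum('l,lpcr,p->cr', v, L, phi[n])
        elif n < mid:
            v = np.einsum('l,lpr,p->r', v, A[n], phi[n])
        else:
            v = np.einsum('cl,lpr,p->cr', v, A[n], phi[n])
    return v[:, 0]             # shape (n_classes,)

x = rng.uniform(size=n_sites)  # a fake data sample
print(scores(x).shape)         # -> (3,)

Because the encoded features form a product state and the MPS tensors can be
brought into an isometric form, contractions of this kind are the classical
counterpart of the circuit layouts the review maps onto quantum hardware.
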