Continuous-variable quantum neural networks
We introduce a general method for building neural networks on quantum
computers. The quantum neural network is a variational quantum circuit built in
the continuous-variable (CV) architecture, which encodes quantum information in
continuous degrees of freedom such as the amplitudes of the electromagnetic
field. This circuit contains a layered structure of continuously parameterized
gates which is universal for CV quantum computation. Affine transformations and
nonlinear activation functions, two key elements in neural networks, are
enacted in the quantum network using Gaussian and non-Gaussian gates,
respectively. The non-Gaussian gates provide both the nonlinearity and the
universality of the model. Due to the structure of the CV model, the CV quantum
neural network can encode highly nonlinear transformations while remaining
completely unitary. We show how a classical network can be embedded into the
quantum formalism and propose quantum versions of various specialized models
such as convolutional, recurrent, and residual networks. Finally, we present
numerous modeling experiments built with the Strawberry Fields software
library. These experiments, including a classifier for fraud detection, a
network which generates Tetris images, and a hybrid classical-quantum
autoencoder, demonstrate the capability and adaptability of CV quantum neural
networks.
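As a loose classical sketch of the embedding described above (not the quantum circuit itself), the layer structure can be mirrored in NumPy: the SVD of a weight matrix W = O2 · diag(s) · O1 plays the role of the interferometer–squeezer–interferometer sequence, the bias corresponds to displacement gates, and the elementwise nonlinearity stands in for the non-Gaussian gate. The function name `cv_style_layer` and the choice of tanh are illustrative assumptions, not from the paper.

```python
import numpy as np

def cv_style_layer(x, W, b, phi=np.tanh):
    """Classical analogue of one CV quantum neural network layer.

    The affine part W @ x + b is realized as two orthogonal maps
    (interferometers) around a diagonal scaling (squeezing), via the SVD;
    phi stands in for the non-Gaussian gate that supplies nonlinearity.
    """
    O2, s, O1 = np.linalg.svd(W)      # W = O2 @ diag(s) @ O1
    z = O2 @ (s * (O1 @ x)) + b      # affine transformation (Gaussian gates)
    return phi(z)                     # nonlinear activation (non-Gaussian gate)
```

Because the SVD reproduces W exactly, this layer computes the same function as an ordinary dense layer phi(W @ x + b); the decomposition only exposes the Gaussian/non-Gaussian structure the abstract refers to.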
A Scalable Training Strategy for Blind Multi-Distribution Noise Removal
Despite recent advances, developing general-purpose universal denoising and
artifact-removal networks remains largely an open problem: Given fixed network
weights, one inherently trades off specialization at one task (e.g., removing
Poisson noise) for performance at another (e.g., removing speckle noise). In
addition, training such a network is challenging due to the curse of
dimensionality: As one increases the dimensions of the specification-space
(i.e., the number of parameters needed to describe the noise distribution) the
number of unique specifications one needs to train for grows exponentially.
Uniformly sampling this space will result in a network that does well at very
challenging problem specifications but poorly at easy problem specifications,
where even large errors will have a small effect on the overall mean squared
error.
In this work we propose training denoising networks using an
adaptive-sampling/active-learning strategy. Our work improves upon a recently
proposed universal denoiser training strategy by extending these results to
higher dimensions and by incorporating a polynomial approximation of the true
specification-loss landscape. This approximation allows us to reduce training
times by almost two orders of magnitude. We test our method on simulated joint
Poisson-Gaussian-Speckle noise and demonstrate that with our proposed training
strategy, a single blind, generalist denoiser network can achieve peak
signal-to-noise ratios within a uniform bound of specialized denoiser networks
across a large range of operating conditions. We also capture a small dataset
of images with varying amounts of joint Poisson-Gaussian-Speckle noise and
demonstrate that a universal denoiser trained using our adaptive-sampling
strategy outperforms uniformly trained baselines.
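A loose one-dimensional sketch of the adaptive-sampling idea, under stated assumptions: `toy_loss` is a hypothetical stand-in for the true specification-loss landscape, and the paper's method works in higher-dimensional specification spaces and targets the gap to specialized denoisers rather than raw loss. Here a polynomial surrogate is fit to observed losses and new training specifications are drawn in proportion to the surrogate's predictions, concentrating effort on hard specifications.

```python
import numpy as np

rng = np.random.default_rng(0)

def toy_loss(sigma):
    # Hypothetical specification-loss landscape over a 1-D noise parameter.
    return 1.0 / (1.0 + sigma)

def adaptive_sample(specs, losses, n_new, degree=3, pool_size=10_000):
    """Draw n_new specifications, biased toward high predicted loss.

    A polynomial surrogate of the specification-loss landscape replaces
    expensive evaluations, analogous to the approximation that cuts
    training time in the abstract.
    """
    coeffs = np.polyfit(specs, losses, degree)          # fit surrogate
    cand = rng.uniform(specs.min(), specs.max(), pool_size)
    pred = np.clip(np.polyval(coeffs, cand), 1e-6, None)
    p = pred / pred.sum()                               # sampling weights
    return rng.choice(cand, size=n_new, replace=False, p=p)
```

In use, one would alternate between training the denoiser on the sampled specifications and refitting the surrogate as new losses are observed.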
Morphological Network: How Far Can We Go with Morphological Neurons?
In recent years, the idea of using morphological operations as networks has
received much attention. Mathematical morphology provides very efficient and
useful image processing and image analysis tools based on basic operators like
dilation and erosion, defined in terms of kernels. Many other morphological
operations are built up using the dilation and erosion operations. Although the
learning of the structuring elements of operations such as dilation and erosion
via the backpropagation algorithm is not new, the order and the way these morphological
operations are used is not standard. In this paper, we have theoretically
analyzed the use of morphological operations for processing 1D feature vectors
and shown that this gets extended to the 2D case in a simple manner. Our
theoretical results show that a morphological block represents a sum of hinge
functions. Hinge functions are used in many places for classification and
regression tasks (Breiman (1993)). We have also proved a universal
approximation theorem -- a stack of two morphological blocks can approximate
any continuous function over arbitrary compact sets. To experimentally validate
the efficacy of this network in real-life applications, we have evaluated its
performance on satellite image classification datasets since morphological
operations are very sensitive to geometrical shapes and structures. We have
also shown results on a few tasks such as segmentation of blood vessels from
fundus images, segmentation of lungs from chest X-rays, and image dehazing. The
results are encouraging and further establish the potential of morphological
networks.

Comment: 35 pages, 19 figures, 7 tables
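A minimal NumPy sketch of the max-plus/min-plus neurons the abstract builds on, assuming the common morphological-perceptron formulation; the function names and the linear combination weights are illustrative, and the paper trains the structuring elements by backpropagation rather than fixing them.

```python
import numpy as np

def dilation_neuron(x, w):
    # Max-plus "dilation" of a 1-D feature vector x by structuring element w.
    return np.max(x + w)

def erosion_neuron(x, w):
    # Min-plus "erosion" of x by structuring element w.
    return np.min(x - w)

def morphological_block(x, W_dil, W_ero, a, b):
    """One block: a linear combination of dilation and erosion responses.

    Each max/min response is piecewise linear in x, so the block evaluates
    to a sum of hinge-like functions, matching the theoretical result
    stated in the abstract.
    """
    d = np.array([dilation_neuron(x, w) for w in W_dil])
    e = np.array([erosion_neuron(x, w) for w in W_ero])
    return a @ d + b @ e
```

Stacking two such blocks yields the architecture whose universal approximation property the paper proves.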