23 research outputs found
Improving the speed of variational quantum algorithms for quantum error correction
We consider the problem of devising suitable quantum error correction (QEC) procedures for a generic quantum noise acting on a quantum circuit. In general, there is no analytic universal procedure to obtain the encoding and correction unitary gates, and the problem is even harder if the noise is unknown and has to be reconstructed. Existing procedures rely on variational quantum algorithms (VQAs) and are very difficult to train, since the size of the gradient of the cost function decays exponentially with the number of qubits. We address this problem using a cost function based on the quantum Wasserstein distance of order 1 (QW1). Unlike other quantum distances typically adopted in quantum information processing, QW1 lacks the unitary invariance property, which makes it a suitable tool for avoiding local minima. Focusing on a simple noise model for which an exact QEC solution is known and can serve as a theoretical benchmark, we run a series of numerical tests showing that guiding the VQA search through the QW1 can indeed significantly increase both the probability of successful training and the fidelity of the recovered state, compared with the results obtained using conventional approaches.
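A minimal classical sketch of why a Wasserstein-type cost can help: the quantum W1 distance of the paper is not computed here; instead two probability distributions over measurement outcomes are compared with the classical W1 (earth-mover) distance, which on an ordered 1-D support equals the L1 distance between the cumulative distributions. All names and the toy distributions below are illustrative assumptions, not the paper's construction.

```python
import numpy as np

def wasserstein1(p, q):
    """Classical W1 distance between two distributions on {0, ..., n-1}."""
    return float(np.sum(np.abs(np.cumsum(p) - np.cumsum(q))))

def overlap_cost(p, q):
    """Fidelity-like cost: 1 minus the Bhattacharyya overlap."""
    return float(1.0 - np.sum(np.sqrt(p * q)))

target = np.zeros(8); target[0] = 1.0   # ideal outcome: all mass on 000
near   = np.zeros(8); near[1]   = 1.0   # one bit flipped
far    = np.zeros(8); far[7]    = 1.0   # all three bits flipped

# The overlap cost cannot tell 'near' from 'far' (both distributions are
# disjoint from the target, so both costs equal 1.0), while W1 distinguishes
# them -- a toy version of why a W1-type cost can supply a useful training
# signal where a fidelity-type cost is flat.
print(overlap_cost(target, near), overlap_cost(target, far))   # 1.0 1.0
print(wasserstein1(target, near) < wasserstein1(target, far))  # True
```

The same flatness of fidelity-type costs is one face of the barren-plateau problem the abstract refers to.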
SpECTRE: A Task-based Discontinuous Galerkin Code for Relativistic Astrophysics
We introduce a new relativistic astrophysics code, SpECTRE, that combines a
discontinuous Galerkin method with a task-based parallelism model. SpECTRE's
goal is to achieve more accurate solutions for challenging relativistic
astrophysics problems such as core-collapse supernovae and binary neutron star
mergers. The robustness of the discontinuous Galerkin method allows for the use
of high-resolution shock capturing methods in regions where (relativistic)
shocks are found, while exploiting high-order accuracy in smooth regions. A
task-based parallelism model allows efficient use of the largest supercomputers
for problems with a heterogeneous workload over disparate spatial and temporal
scales. We argue that the locality and algorithmic structure of discontinuous
Galerkin methods will exhibit good scalability within a task-based parallelism
framework. We demonstrate the code on a wide variety of challenging benchmark
problems in (non)-relativistic (magneto)-hydrodynamics. We demonstrate the
code's scalability including its strong scaling on the NCSA Blue Waters
supercomputer up to the machine's full capacity of 22,380 nodes using 671,400
threads.
Comment: 41 pages, 13 figures, and 7 tables. Ancillary data contains a simulation input file.
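A toy sketch of the task-based parallelism idea for a heterogeneous workload: each "element" update is an independent task handed to a worker pool, so expensive elements (e.g. near a shock) do not leave the workers that finish early idle. This only illustrates the scheduling pattern; SpECTRE itself is built on the Charm++ runtime, not Python threads, and the workloads below are made up.

```python
from concurrent.futures import ThreadPoolExecutor
import math

def element_update(element_id, work):
    # Stand-in for a DG element update whose cost varies per element.
    acc = 0.0
    for k in range(1, work + 1):
        acc += math.sin(k) / k
    return element_id, acc

# Disparate per-element costs, mimicking a heterogeneous workload.
workloads = [100, 5000, 200, 8000, 50, 3000]

with ThreadPoolExecutor(max_workers=4) as pool:
    results = dict(pool.map(element_update, range(len(workloads)), workloads))

print(sorted(results))  # every element updated exactly once: [0, 1, 2, 3, 4, 5]
```

With a static even split, one worker stuck with the heaviest elements would gate the whole step; a task queue rebalances automatically, which is the scalability argument the abstract makes for DG plus task-based parallelism.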
hp-adaptive discontinuous Galerkin solver for elliptic equations in numerical relativity
A considerable amount of attention has been given to discontinuous Galerkin methods for hyperbolic problems in numerical relativity, showing potential advantages of the methods in dealing with hydrodynamical shocks and other discontinuities. This paper investigates discontinuous Galerkin methods for the solution of elliptic problems in numerical relativity. We present a novel hp-adaptive numerical scheme for curvilinear and non-conforming meshes. It uses a multigrid preconditioner with a Chebyshev or Schwarz smoother to create a very scalable discontinuous Galerkin code on generic domains. The code employs compactification to move the outer boundary near spatial infinity. We explore the properties of the code on some test problems, including one mimicking neutron stars with phase transitions. We also apply it to construct initial data for two or three black holes.
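A small sketch of the compactification idea mentioned above: the solver works on a finite computational coordinate x in [0, 1) while the physical radius r runs out toward spatial infinity. The specific map below, r(x) = R0 * x / (1 - x), and the value of R0 are illustrative assumptions, not the map used by the paper's code.

```python
import numpy as np

R0 = 10.0  # assumed transition radius controlling grid resolution near origin

def radius(x):
    """Map computational coordinate x in [0, 1) to physical radius in [0, inf)."""
    return R0 * x / (1.0 - x)

x = np.linspace(0.0, 0.99, 100)
r = radius(x)
# A modest computational grid reaches very large physical radii: r(0.99) = 990,
# and r diverges as x -> 1, so the outer boundary sits "near spatial infinity".
print(r[0], r[-1])
```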
Constrained interpolation profile conservative semi-Lagrangian scheme based on third-order polynomial functions and essentially non-oscillatory (CIP-CSL3ENO) scheme
We propose a fully conservative and less oscillatory multi-moment scheme for the approximation of hyperbolic conservation laws. The proposed scheme (CIP-CSL3ENO) is based on two CIP-CSL3 schemes and the essentially non-oscillatory (ENO) scheme. In this paper, we also propose an ENO indicator for the multi-moment framework, which intentionally selects non-smooth stencils but can efficiently minimize numerical oscillations. The proposed scheme is validated through various benchmark problems and a comparison with an experiment on two-droplet collision/separation. The CIP-CSL3ENO scheme shows approximately fourth-order accuracy for smooth solutions, and captures discontinuities and smooth regions simultaneously, without numerical oscillations, for solutions that include discontinuities. The numerical results of two-droplet collision/separation (a 3D free-surface flow simulation) show that the CIP-CSL3ENO scheme can be applied to various types of fluid problems, not only compressible flows but also incompressible and 3D free-surface flows.
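A hedged sketch of the classic ENO idea the abstract builds on: among the candidate stencils around cell i, pick the one whose divided differences are smallest, i.e. the locally smoothest one, so interpolation never reaches across a discontinuity. The multi-moment CIP-CSL3ENO indicator itself (which can deliberately pick a non-smooth stencil) is not reproduced; the function below is a simplified, assumed illustration.

```python
import numpy as np

def eno_stencil(f, i):
    """Return the left index of the smoothest 3-point stencil containing i."""
    left = i
    # First extension: compare first differences on either side of cell i.
    if abs(f[left] - f[left - 1]) < abs(f[left + 1] - f[left]):
        left -= 1
    # Second extension: compare second differences of the two candidates.
    d2_left  = abs(f[left + 1] - 2 * f[left] + f[left - 1])
    d2_right = abs(f[left + 2] - 2 * f[left + 1] + f[left])
    if d2_left < d2_right:
        left -= 1
    return left

f = np.array([0.0, 0.0, 0.0, 0.0, 1.0, 1.0, 1.0])  # step between i=3 and i=4
print(eno_stencil(f, 3))  # selects stencil {1, 2, 3}, avoiding the jump: 1
```

The selection is data-dependent, which is what lets ENO-type schemes capture discontinuities without the oscillations a fixed high-order stencil would produce.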
Convergent Data-driven Regularizations for CT Reconstruction
The reconstruction of images from their corresponding noisy Radon transform
is a typical example of an ill-posed linear inverse problem as arising in the
application of computerized tomography (CT). As the (naive) solution does not
depend on the measured data continuously, regularization is needed to
re-establish a continuous dependence. In this work, we investigate simple yet
provably convergent approaches to learning linear regularization methods from
data. More specifically, we analyze two approaches: a generic linear
regularization method that learns how to manipulate the singular values of the
linear operator, extending our previous work, and a tailored approach in the
Fourier domain that is specific to CT reconstruction. We prove that these
approaches become convergent regularization methods, and that the
reconstructions they provide are typically much smoother than the training
data they were trained on. Finally, we compare the spectral and Fourier-based
approaches for CT reconstruction numerically, discuss their advantages and
disadvantages, and investigate the effect of discretization errors at
different resolutions.
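A hedged illustration of the "manipulate the singular values" idea: a linear spectral regularizer replaces 1/sigma in the naive pseudoinverse by a damped filter g(sigma). The classical Tikhonov filter below is an assumed stand-in for the paper's learned filter, and the random ill-conditioned operator is a toy, not a Radon transform.

```python
import numpy as np

rng = np.random.default_rng(0)
m, n = 60, 40

# Build an ill-conditioned operator A = U diag(s) V^T with decaying spectrum,
# mimicking the smoothing character of the Radon transform.
U, _ = np.linalg.qr(rng.standard_normal((m, n)))
V, _ = np.linalg.qr(rng.standard_normal((n, n)))
s = np.logspace(0, -6, n)
A = U @ np.diag(s) @ V.T

x_true = rng.standard_normal(n)
y = A @ x_true + 1e-3 * rng.standard_normal(m)   # noisy measurements

alpha = 1e-4
g = s / (s**2 + alpha)           # Tikhonov filter: damps small singular values
x_naive = V @ ((U.T @ y) / s)    # naive pseudoinverse: noise blows up on 1/s
x_reg   = V @ (g * (U.T @ y))    # filtered (regularized) reconstruction

err_naive = np.linalg.norm(x_naive - x_true)
err_reg   = np.linalg.norm(x_reg - x_true)
print(err_reg < err_naive)  # spectral filtering beats the naive inverse: True
```

A learned spectral method replaces the fixed function g by one fitted to training pairs; the abstract's convergence claim concerns exactly such learned filters.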
Brain-Inspired Computing
This open access book constitutes revised selected papers from the 4th International Workshop on Brain-Inspired Computing, BrainComp 2019, held in Cetraro, Italy, in July 2019. The 11 papers presented in this volume were carefully reviewed and selected for inclusion in this book. They deal with research on brain atlasing, multi-scale models and simulation, HPC and data infrastructures for neuroscience, as well as artificial and natural neural architectures.
Evaluation of finite difference based asynchronous partial differential equations solver for reacting flows
Next-generation exascale machines with extreme levels of parallelism will
provide massive computing resources for large scale numerical simulations of
complex physical systems at unprecedented parameter ranges. However, novel
numerical methods, scalable algorithms and re-design of current
state-of-the-art numerical solvers are required for scaling to these machines with minimal
overheads. One such approach for solvers based on partial differential
equations involves computing spatial derivatives with possibly delayed or
asynchronous data using high-order asynchrony-tolerant (AT) schemes to
facilitate mitigation of communication and synchronization bottlenecks without
affecting the numerical accuracy. In the present study, an effective
methodology of implementing temporal discretization using a multi-stage
Runge-Kutta method with AT schemes is presented. Together these schemes are
used to perform asynchronous simulations of canonical reacting flow problems,
demonstrated in one dimension, including auto-ignition of a premixture, premixed
flame propagation, and non-premixed auto-ignition. Simulation results show that
the AT schemes incur very small numerical errors in all key quantities of
interest including stiff intermediate species despite delayed data at
processing element (PE) boundaries. For simulations of supersonic flows, the
degraded numerical accuracy of well-known shock-resolving WENO (weighted
essentially non-oscillatory) schemes when used with relaxed synchronization is
also discussed. To overcome this loss of accuracy, high-order AT-WENO schemes
are derived and tested on linear and non-linear equations. Finally, the novel
AT-WENO schemes are demonstrated on the propagation of a detonation wave with
delays at PE boundaries.
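A hedged sketch of the standard fifth-order WENO reconstruction (with the Jiang-Shu smoothness indicators) that the abstract's AT-WENO schemes extend to tolerate delayed data; the asynchrony-tolerant modification itself is not reproduced here.

```python
def weno5(fm2, fm1, f0, fp1, fp2):
    """Left-biased WENO5 estimate of f at the i+1/2 cell interface."""
    eps = 1e-6
    # Third-order candidate reconstructions on the three sub-stencils.
    p0 = (2*fm2 - 7*fm1 + 11*f0) / 6.0
    p1 = ( -fm1 + 5*f0  +  2*fp1) / 6.0
    p2 = (2*f0  + 5*fp1 -    fp2) / 6.0
    # Jiang-Shu smoothness indicators of each sub-stencil.
    b0 = 13/12*(fm2 - 2*fm1 + f0)**2 + 0.25*(fm2 - 4*fm1 + 3*f0)**2
    b1 = 13/12*(fm1 - 2*f0 + fp1)**2 + 0.25*(fm1 - fp1)**2
    b2 = 13/12*(f0 - 2*fp1 + fp2)**2 + 0.25*(3*f0 - 4*fp1 + fp2)**2
    # Nonlinear weights biased toward the smoothest sub-stencil
    # (linear weights 0.1, 0.6, 0.3 recover fifth order in smooth regions).
    a0, a1, a2 = 0.1/(eps + b0)**2, 0.6/(eps + b1)**2, 0.3/(eps + b2)**2
    return (a0*p0 + a1*p1 + a2*p2) / (a0 + a1 + a2)

# A linear profile is reconstructed exactly by every sub-stencil, hence by
# any convex combination of them: f(x) = x sampled at i-2..i+2 gives
# exactly the interface value 3.5 at i+1/2.
print(weno5(1.0, 2.0, 3.0, 4.0, 5.0))  # 3.5
```

The accuracy loss discussed in the abstract arises because using delayed neighbor values shifts these stencils in time; the AT-WENO schemes rebalance the stencils to restore the design order despite such delays.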