A mixed ℓ1 regularization approach for sparse simultaneous approximation of parameterized PDEs
We present and analyze a novel sparse polynomial technique for the
simultaneous approximation of parameterized partial differential equations
(PDEs) with deterministic and stochastic inputs. Our approach treats the
numerical solution as a jointly sparse reconstruction problem through the
reformulation of the standard basis pursuit denoising, where the set of jointly
sparse vectors is infinite. To achieve global reconstruction of sparse
solutions to parameterized elliptic PDEs over both physical and parametric
domains, we combine the standard measurement scheme developed for compressed
sensing in the context of bounded orthonormal systems with a novel mixed-norm-based
regularization method that exploits both energy and sparsity. In
addition, we are able to prove that, with minimal sample complexity, error
estimates comparable to the best s-term and quasi-optimal approximations are
achievable, while requiring only a priori bounds on polynomial truncation error
with respect to the energy norm. Finally, we perform extensive numerical
experiments on several high-dimensional parameterized elliptic PDE models to
demonstrate the superior recovery properties of the proposed approach.
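As a rough finite-dimensional illustration of the joint-sparsity idea described above (a sketch, not the paper's actual method), the code below recovers a row-sparse coefficient matrix from compressed measurements by minimizing a least-squares term plus a mixed ℓ2,1 norm, solved with plain proximal gradient descent. The sampling matrix A, all dimensions, and the weight lam are illustrative placeholders.

```python
# Minimal sketch: joint-sparse recovery via l2,1-regularized least squares (ISTA).
# All sizes, the random matrix A, and lam are hypothetical, not the paper's setup.
import numpy as np

def row_soft_threshold(C, tau):
    """Prox of tau * ||C||_{2,1}: shrink each row toward zero by its l2 norm."""
    norms = np.linalg.norm(C, axis=1, keepdims=True)
    scale = np.maximum(1.0 - tau / np.maximum(norms, 1e-12), 0.0)
    return scale * C

def joint_sparse_recover(A, Y, lam, n_iter=500):
    """Minimize 0.5 * ||A C - Y||_F^2 + lam * ||C||_{2,1} by proximal gradient."""
    step = 1.0 / np.linalg.norm(A, 2) ** 2   # 1 / Lipschitz constant of the gradient
    C = np.zeros((A.shape[1], Y.shape[1]))
    for _ in range(n_iter):
        grad = A.T @ (A @ C - Y)
        C = row_soft_threshold(C - step * grad, step * lam)
    return C

rng = np.random.default_rng(0)
m, N, K, s = 40, 120, 8, 5                   # measurements, basis size, RHS count, row sparsity
A = rng.standard_normal((m, N)) / np.sqrt(m)
C_true = np.zeros((N, K))
C_true[rng.choice(N, s, replace=False)] = rng.standard_normal((s, K))
Y = A @ C_true + 0.01 * rng.standard_normal((m, K))
C_hat = joint_sparse_recover(A, Y, lam=0.02)
print("relative error:", np.linalg.norm(C_hat - C_true) / np.linalg.norm(C_true))
```

The row-wise soft thresholding is the proximal operator of the ℓ2,1 norm; it is what couples sparsity across the multiple right-hand sides, mirroring the role of the mixed norm in the abstract.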
Efficient measurement of quantum dynamics via compressive sensing
The resources required to characterise the dynamics of engineered quantum
systems, such as quantum computers and quantum sensors, grow exponentially with
system size. Here we adapt techniques from compressive sensing to exponentially
reduce the experimental configurations required for quantum process tomography.
Our method is applicable to dynamical processes that are known to be
nearly sparse in a certain basis, and it can be implemented using only
single-body preparations and measurements. We perform efficient, high-fidelity
estimation of process matrices on an experiment attempting to implement a
photonic two-qubit logic gate, with data obtained under various
decoherence strengths. We find that our technique is both accurate and noise
robust, thus removing a key roadblock to the development and scaling of quantum
technologies.
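As an illustration of the recovery step only (the paper's contribution lies in the measurement design and the physics, neither of which is modelled here), the sketch below reconstructs a nearly sparse toy "process vector" from far fewer linear measurements than its dimension. A random Gaussian matrix stands in for the functionals produced by the experimental configurations, and orthogonal matching pursuit stands in for whatever estimator the authors actually use.

```python
# Minimal sketch: sparse recovery from few measurement configurations.
# The Gaussian A and the greedy OMP solver are generic stand-ins.
import numpy as np

def omp(A, y, s):
    """Orthogonal matching pursuit: greedily build an s-sparse solution of A x ~ y."""
    residual, support = y.copy(), []
    coeffs = np.zeros(0)
    for _ in range(s):
        support.append(int(np.argmax(np.abs(A.T @ residual))))
        coeffs, *_ = np.linalg.lstsq(A[:, support], y, rcond=None)
        residual = y - A[:, support] @ coeffs
    x = np.zeros(A.shape[1])
    x[support] = coeffs
    return x

rng = np.random.default_rng(0)
d = 4                                    # two qubits -> 4-dimensional Hilbert space
n = d ** 4                               # 256 real parameters in the toy process vector
m, s = 60, 6                             # far fewer configurations than parameters
chi = np.zeros(n)
chi[rng.choice(n, s, replace=False)] = rng.standard_normal(s)
A = rng.standard_normal((m, n)) / np.sqrt(m)   # stand-in for configuration functionals
y = A @ chi + 0.005 * rng.standard_normal(m)
chi_hat = omp(A, y, s)
print("relative error:", np.linalg.norm(chi_hat - chi) / np.linalg.norm(chi))
```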
ℓ1-Analysis Minimization and Generalized (Co-)Sparsity: When Does Recovery Succeed?
This paper investigates the problem of signal estimation from undersampled
noisy sub-Gaussian measurements under the assumption of a cosparse model. Based
on generalized notions of sparsity, we derive novel recovery guarantees for the
ℓ1-analysis basis pursuit, enabling highly accurate predictions of its
sample complexity. The corresponding bounds on the number of required
measurements do explicitly depend on the Gram matrix of the analysis operator
and therefore particularly account for its mutual coherence structure. Our
findings defy conventional wisdom which promotes the sparsity of analysis
coefficients as the crucial quantity to study. In fact, this common paradigm
breaks down completely in many situations of practical interest, for instance,
when applying a redundant (multilevel) frame as analysis prior. By extensive
numerical experiments, we demonstrate that, in contrast, our theoretical
sampling-rate bounds reliably capture the recovery capability of various
examples, such as redundant Haar wavelet systems, total variation, or random
frames. The proofs of our main results build upon recent achievements in the
convex geometry of data mining problems. More precisely, we establish a
sophisticated upper bound on the conic Gaussian mean width that is associated
with the underlying ℓ1-analysis polytope. Due to a novel localization
argument, it turns out that the presented framework naturally extends to stable
recovery, allowing us to incorporate compressible coefficient sequences as
well.
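The core convex program studied above is the ℓ1-analysis basis pursuit: minimize ||Ωx||_1 subject to ||Ax − y||_2 ≤ η for an analysis operator Ω. A minimal sketch with a 1-D finite-difference (total-variation-like) operator follows; the generic convex solver cvxpy replaces any specialized method, and the signal, noise level, and dimensions are illustrative.

```python
# Minimal sketch: l1-analysis basis pursuit with a finite-difference operator.
import cvxpy as cp
import numpy as np

rng = np.random.default_rng(1)
n, m = 100, 40
# Piecewise-constant signal: cosparse under the finite-difference operator.
x_true = np.zeros(n)
x_true[20:45], x_true[60:70] = 1.0, -0.5
A = rng.standard_normal((m, n)) / np.sqrt(m)     # sub-Gaussian measurement matrix
y = A @ x_true + 0.01 * rng.standard_normal(m)
Omega = np.diff(np.eye(n), axis=0)               # 1-D finite differences (TV analysis operator)

x = cp.Variable(n)
eta = 0.1                                        # noise budget, slightly above ||noise||_2
problem = cp.Problem(cp.Minimize(cp.norm(Omega @ x, 1)),
                     [cp.norm(A @ x - y, 2) <= eta])
problem.solve()
print("relative error:", np.linalg.norm(x.value - x_true) / np.linalg.norm(x_true))
```

Swapping Omega for a redundant Haar frame or a random frame reproduces, in miniature, the regimes whose sample complexity the paper's Gram-matrix-dependent bounds predict.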
Deep Learning for Inverse Problems: Performance Characterizations, Learning Algorithms, and Applications
Deep learning models have witnessed immense empirical success over the last decade. However, in spite of their widespread adoption, a profound understanding of the generalisation behaviour of these over-parameterised architectures is still missing. In this thesis, we provide one such understanding via data-dependent characterisations of the generalisation capability of deep neural networks based on data representations. In particular, by building on the algorithmic robustness framework, we offer a generalisation error bound that encapsulates key ingredients of the learning problem, such as the complexity of the data space, the cardinality of the training set, and the Lipschitz properties of the deep neural network.
We then specialise our analysis to a specific class of model-based regression problems, namely inverse problems. These problems often come with well-defined forward operators that map the variables of interest to the observations. It is therefore natural to ask whether such knowledge of the forward operator can be exploited in the deep learning approaches increasingly used to solve inverse problems. We offer a generalisation error bound that, apart from the other factors, depends on the Jacobian of the composition of the forward operator with the neural network.
Motivated by our analysis, we then propose a 'plug-and-play' regulariser that leverages the knowledge of the forward map to improve the generalisation of the network. We also provide a method to tightly upper-bound the norms of the Jacobians of the relevant operators that is far more computationally efficient than existing ones. We demonstrate the efficacy of our model-aware regularised deep learning algorithms against other state-of-the-art approaches on inverse problems involving various sub-sampling operators, such as those used in the classical compressed sensing setup, and on inverse problems of interest in biomedical imaging.
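A minimal PyTorch sketch of a Jacobian-based penalty in the spirit of the model-aware regulariser described above (the thesis's actual regulariser and Jacobian-bounding method may differ): the squared Frobenius norm of the Jacobian of the forward operator composed with the reconstruction network is estimated with random vector-Jacobian products (a Hutchinson-style trace estimator) and added to the training loss. The network architecture, the linear forward operator A, and the weight lam are hypothetical.

```python
# Minimal sketch: Hutchinson-style Jacobian penalty for a model-aware training loss.
import torch

def jacobian_penalty(net, A, y, n_probes=1):
    """Estimate ||J||_F^2 for J = d(A net(y)) / dy via random vector-Jacobian products."""
    y = y.clone().requires_grad_(True)
    out = net(y) @ A.T                    # composition: forward operator after the network
    total = 0.0
    for _ in range(n_probes):
        v = torch.randn_like(out)         # E_v ||v^T J||^2 = ||J||_F^2 for v ~ N(0, I)
        (vjp,) = torch.autograd.grad(out, y, grad_outputs=v, create_graph=True)
        total = total + vjp.pow(2).sum()
    return total / n_probes

# One illustrative training step on synthetic data.
m, n = 30, 100
A = torch.randn(m, n) / m ** 0.5          # known forward operator (e.g. a sub-sampling map)
net = torch.nn.Sequential(torch.nn.Linear(m, 64), torch.nn.ReLU(), torch.nn.Linear(64, n))
opt = torch.optim.Adam(net.parameters(), lr=1e-3)

x = torch.randn(8, n)                     # batch of ground-truth signals
y = x @ A.T                               # their (noiseless) measurements
lam = 1e-3
opt.zero_grad()
loss = (net(y) - x).pow(2).mean() + lam * jacobian_penalty(net, A, y)
loss.backward()
opt.step()
print("training loss:", float(loss))
```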