176 research outputs found
Guarded Second-Order Logic, Spanning Trees, and Network Flows
According to a theorem of Courcelle, monadic second-order logic and guarded
second-order logic (where one can also quantify over sets of edges) have the
same expressive power over the class of all countable k-sparse hypergraphs.
In the first part of the present paper we extend this result to hypergraphs of
arbitrary cardinality. In the second part, we present a generalisation dealing
with methods to encode sets of vertices by single vertices.
Fiber Orientation Estimation Guided by a Deep Network
Diffusion magnetic resonance imaging (dMRI) is currently the only tool for
noninvasively imaging the brain's white matter tracts. The fiber orientation
(FO) is a key feature computed from dMRI for fiber tract reconstruction.
Because the number of FOs in a voxel is usually small, dictionary-based sparse
reconstruction has been used to estimate FOs with a relatively small number of
diffusion gradients. However, accurate FO estimation in regions with complex FO
configurations in the presence of noise can still be challenging. In this work
we explore the use of a deep network for FO estimation in a dictionary-based
framework and propose an algorithm named Fiber Orientation Reconstruction
guided by a Deep Network (FORDN). FORDN consists of two steps. First, we use a
smaller dictionary encoding coarse basis FOs to represent the diffusion
signals. To estimate the mixture fractions of the dictionary atoms (and thus
coarse FOs), a deep network is designed specifically for solving the sparse
reconstruction problem. Here, the smaller dictionary is used to reduce the
computational cost of training. Second, the coarse FOs inform the final FO
estimation, where a larger dictionary encoding dense basis FOs is used and a
weighted l1-norm regularized least squares problem is solved to encourage FOs
that are consistent with the network output. FORDN was evaluated and compared
with state-of-the-art algorithms that estimate FOs using sparse reconstruction
on simulated and real dMRI data, and the results demonstrate the benefit of
using a deep network for FO estimation.
Comment: A shorter version is accepted by MICCAI 201
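The second step's weighted l1-norm regularized least squares problem can be sketched with a plain proximal-gradient (ISTA) loop. The dictionary, weights, and parameters below are illustrative toy choices, not the FORDN implementation; the weight vector plays the role of the network output favouring certain atoms.

```python
import numpy as np

def weighted_l1_ls(D, y, w, lam=0.1, n_iter=500):
    """Solve min_x 0.5*||D x - y||^2 + lam * sum_i w_i |x_i| by ISTA.
    Small w_i down-weight atoms consistent with the coarse estimate."""
    L = np.linalg.norm(D, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(D.shape[1])
    for _ in range(n_iter):
        g = D.T @ (D @ x - y)              # gradient of the quadratic term
        z = x - g / L
        x = np.sign(z) * np.maximum(np.abs(z) - lam * w / L, 0.0)  # weighted soft threshold
    return x

# toy dictionary of 20 "basis FOs"; the signal mixes atoms 3 and 11
rng = np.random.default_rng(0)
D = rng.standard_normal((30, 20))
D /= np.linalg.norm(D, axis=0)
x_true = np.zeros(20)
x_true[[3, 11]] = [1.0, 0.7]
y = D @ x_true
w = np.ones(20)
w[[3, 11]] = 0.1                           # "network output" favours atoms 3 and 11
x_hat = weighted_l1_ls(D, y, w)
print(np.argsort(-np.abs(x_hat))[:2])      # indices of the two largest coefficients
```

The small weights on the favoured atoms reduce their shrinkage, so the recovered support matches the coarse estimate while the least-squares term fits the data.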
Sparsity and cosparsity for audio declipping: a flexible non-convex approach
This work investigates the empirical performance of the sparse synthesis
versus sparse analysis regularization for the ill-posed inverse problem of
audio declipping. We develop a versatile non-convex heuristic which can be
readily used with both data models. Based on this algorithm, we report that, in
most cases, the two models perform similarly in terms of signal
enhancement. However, the analysis version is shown to be amenable to real-time
audio processing when certain analysis operators are considered. Both
versions outperform state-of-the-art methods in the field, especially for the
severely saturated signals.
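A minimal sketch of a sparse-synthesis declipping heuristic in this spirit: hard thresholding in a Fourier synthesis dictionary, alternated with a clipping-consistency projection. The signal, clip level, and sparsity level are invented for illustration and this is not the paper's algorithm.

```python
import numpy as np

def declip_iht(y, theta, k, n_iter=200):
    """Declip by keeping the k largest Fourier coefficients, then projecting
    onto the clipping constraints: reliable samples are kept exact, and
    saturated samples must stay beyond the clip level theta."""
    rel = np.abs(y) < theta               # reliable (unclipped) samples
    hi, lo = y >= theta, y <= -theta      # positively / negatively saturated
    x = y.copy()
    for _ in range(n_iter):
        X = np.fft.rfft(x)
        X[np.argsort(np.abs(X))[:-k]] = 0.0   # zero all but k largest coefficients
        x = np.fft.irfft(X, n=len(y))
        x[rel] = y[rel]                   # consistency on reliable samples
        x[hi] = np.maximum(x[hi], theta)  # saturated samples exceed the level
        x[lo] = np.minimum(x[lo], -theta)
    return x

t = np.arange(256) / 256
clean = np.sin(2 * np.pi * 5 * t) + 0.5 * np.sin(2 * np.pi * 12 * t)
clipped = np.clip(clean, -0.8, 0.8)
restored = declip_iht(clipped, 0.8, k=4)
print(np.linalg.norm(restored - clean), np.linalg.norm(clipped - clean))
```

Because the clean signal satisfies both the sparsity and the clipping-consistency constraints, alternating the two projections drives the estimate well below the saturation error of the clipped input.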
Tensor completion in hierarchical tensor representations
Compressed sensing extends from the recovery of sparse vectors from
undersampled measurements via efficient algorithms to the recovery of matrices
of low rank from incomplete information. Here we consider a further extension
to the reconstruction of tensors of low multi-linear rank in recently
introduced hierarchical tensor formats from a small number of measurements.
Hierarchical tensors are a flexible generalization of the well-known Tucker
representation, which have the advantage that the number of degrees of freedom
of a low rank tensor does not scale exponentially with the order of the tensor.
While corresponding tensor decompositions can be computed efficiently via
successive applications of (matrix) singular value decompositions, some
important properties of the singular value decomposition do not extend from the
matrix to the tensor case. This results in major computational and theoretical
difficulties in designing and analyzing algorithms for low rank tensor
recovery. For instance, a canonical analogue of the tensor nuclear norm is
NP-hard to compute in general, which is in stark contrast to the matrix case.
In this book chapter we consider versions of iterative hard thresholding
schemes adapted to hierarchical tensor formats. A variant builds on methods
from Riemannian optimization and uses a retraction mapping from the tangent
space of the manifold of low rank tensors back to this manifold. We provide
first partial convergence results based on a tensor version of the restricted
isometry property (TRIP) of the measurement map. Moreover, an estimate of the
number of measurements is provided that ensures the TRIP of a given tensor rank
with high probability for Gaussian measurement maps.
Comment: revised version, to be published in Compressed Sensing and Its Applications (edited by H. Boche, R. Calderbank, G. Kutyniok, J. Vybiral)
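The hard-thresholding template is easiest to see in the matrix case, where the thresholding step is an SVD truncation to rank r. The sketch below uses entrywise sampling (completion) with toy sizes; the chapter's setting of general Gaussian measurement maps and hierarchical tensor formats is more involved.

```python
import numpy as np

rng = np.random.default_rng(1)
n, r = 20, 2
M = rng.standard_normal((n, r)) @ rng.standard_normal((r, n))  # rank-2 target
mask = rng.random((n, n)) < 0.6            # observe ~60% of the entries

def complete_iht(M, mask, r, n_iter=500):
    """Rank-r completion by iterative hard thresholding: a gradient step on
    the observed entries, then truncation to the r leading singular values
    (the matrix analogue of the hierarchical-tensor schemes)."""
    X = np.zeros_like(M)
    for _ in range(n_iter):
        X = X + mask * (M - X)             # gradient step on observed entries
        U, s, Vt = np.linalg.svd(X, full_matrices=False)
        X = (U[:, :r] * s[:r]) @ Vt[:r]    # hard threshold: keep rank r
    return X

X_hat = complete_iht(M, mask, r)
print(np.linalg.norm(X_hat - M) / np.linalg.norm(M))
```

The rank-2 target has 2(2n - 2) = 76 degrees of freedom, while roughly 240 entries are observed, so the iteration recovers the matrix to high accuracy. In the tensor case the truncation is replaced by a hierarchical (successive matrix SVD) truncation, and, as the abstract notes, the analysis becomes substantially harder.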
Automatic structures of bounded degree revisited
The first-order theory of a string automatic structure is known to be
decidable, but there are examples of string automatic structures with
nonelementary first-order theories. We prove that the first-order theory of a
string automatic structure of bounded degree is decidable in doubly exponential
space (for injective automatic presentations, this holds even uniformly). This
result is shown to be optimal since we also present a string automatic
structure of bounded degree whose first-order theory is hard for 2EXPSPACE. We
prove similar results also for tree automatic structures. These findings close
the gaps left open in a previous paper of the second author by improving both
the lower and the upper bounds.
Comment: 26 pages
A non-adapted sparse approximation of PDEs with stochastic inputs
We propose a method for the approximation of solutions of PDEs with
stochastic coefficients based on the direct, i.e., non-adapted, sampling of
solutions. This sampling can be done by using any legacy code for the
deterministic problem as a black box. The method converges in probability (with
probabilistic error bounds) as a consequence of sparsity and a concentration of
measure phenomenon on the empirical correlation between samples. We show that
the method is well suited for truly high-dimensional problems (with slow decay
in the spectrum).
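In the same non-adapted spirit, a sparse expansion of a quantity of interest can be recovered from random samples of the black-box solver. The sketch below uses a toy cosine basis and orthogonal matching pursuit as the sparse solver; both are illustrative stand-ins, not the paper's method or its probabilistic error analysis.

```python
import numpy as np

def omp(Phi, y, k):
    """Orthogonal matching pursuit: greedily select k atoms by correlation
    with the residual, refitting by least squares after each selection."""
    resid, support = y.copy(), []
    for _ in range(k):
        support.append(int(np.argmax(np.abs(Phi.T @ resid))))
        sub = Phi[:, support]
        c, *_ = np.linalg.lstsq(sub, y, rcond=None)
        resid = y - sub @ c
    x = np.zeros(Phi.shape[1])
    x[support] = c
    return x

# pretend the solution u(xi) is sparse in a cosine basis of the random
# input xi, and sample it at m non-adapted random points (m < N)
rng = np.random.default_rng(2)
N, m, k = 64, 60, 3
xi = rng.uniform(0, 1, m)
Phi = np.cos(np.pi * np.outer(xi, np.arange(N)))
Phi /= np.linalg.norm(Phi, axis=0)                 # normalized atoms
c_true = np.zeros(N)
c_true[[1, 4, 9]] = [1.0, -0.5, 0.25]
y = Phi @ c_true                                    # "legacy solver" outputs
c_hat = omp(Phi, y, k)
print(np.linalg.norm(c_hat - c_true))
```

Each sample is one run of the deterministic solver, so nothing in the legacy code needs to change; sparsity of the expansion is what makes recovery from fewer samples than basis functions possible.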
Structured Sparsity: Discrete and Convex approaches
Compressive sensing (CS) exploits sparsity to recover sparse or compressible
signals from dimensionality reducing, non-adaptive sensing mechanisms. Sparsity
is also used to enhance interpretability in machine learning and statistics
applications: While the ambient dimension is vast in modern data analysis
problems, the relevant information therein typically resides in a much lower
dimensional space. However, many solutions proposed nowadays do not leverage
the true underlying structure. Recent results in CS extend the simple sparsity
idea to more sophisticated structured sparsity models, which describe the
interdependency between the nonzero components of a signal, increasing the
interpretability of the results and leading to better recovery
performance. In order to better understand the impact of structured sparsity,
in this chapter we analyze the connections between the discrete models and
their convex relaxations, highlighting their relative advantages. We start with
the general group sparse model and then elaborate on two important special
cases: the dispersive and the hierarchical models. For each, we present the
models in their discrete nature, discuss how to solve the ensuing discrete
problems and then describe convex relaxations. We also consider more general
structures as defined by set functions and present their convex proxies.
Further, we discuss efficient optimization solutions for structured sparsity
problems and illustrate structured sparsity in action via three applications.
Comment: 30 pages, 18 figures
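As a concrete instance, the convex relaxation of the group sparse model (the group lasso penalty) has a closed-form proximal operator: blockwise soft thresholding, which shrinks each group as a unit and drops a group entirely when its norm falls below the threshold. The group structure and numbers below are invented for illustration.

```python
import numpy as np

def group_soft_threshold(x, groups, t):
    """Proximal operator of the group lasso penalty t * sum_g ||x_g||_2.
    Each group is shrunk toward zero as a block and zeroed out entirely
    if its Euclidean norm is at most t."""
    out = np.zeros_like(x, dtype=float)
    for g in groups:
        nrm = np.linalg.norm(x[g])
        if nrm > t:
            out[g] = (1 - t / nrm) * x[g]
    return out

x = np.array([3.0, 4.0, 0.1, -0.1, 0.0, 2.0])
groups = [[0, 1], [2, 3], [4, 5]]
y = group_soft_threshold(x, groups, 1.0)
print(y)  # [2.4, 3.2, 0, 0, 0, 1.0]: the weak middle group is dropped whole
```

This is the convex counterpart of the discrete operation that keeps or discards whole groups, which is what lets group-structured models zero out entire blocks rather than isolated coefficients.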
Regular symmetry patterns
Symmetry reduction is a well-known approach for alleviating the state explosion problem in model checking. Automatically identifying symmetries in concurrent systems, however, is computationally expensive. We propose a symbolic framework for capturing symmetry patterns in parameterised systems (i.e. an infinite family of finite-state systems): two regular word transducers to represent, respectively, parameterised systems and symmetry patterns. The framework subsumes various types of "symmetry relations" ranging from weaker notions (e.g. simulation preorders) to the strongest notion (i.e. isomorphisms). Our framework enjoys two algorithmic properties: (1) symmetry verification: given a transducer, we can automatically check whether it is a symmetry pattern of a given system, and (2) symmetry synthesis: we can automatically generate a symmetry pattern for a given system in the form of a transducer. Furthermore, our symbolic language allows additional constraints that the symmetry patterns need to satisfy to be easily incorporated in the verification/synthesis. We show how these properties can help identify symmetry patterns in examples like dining philosopher protocols, self-stabilising protocols, and the prioritised resource-allocator protocol. In some cases (e.g. Gries's coffee can problem), our technique automatically synthesises a safety-preserving finite approximant, which can then be verified for safety solely using a finite-state model checker.
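In the finite-state base case, symmetry verification amounts to checking that a candidate permutation of states preserves the transition relation; the paper's framework lifts this check to transducers over parameterised families. A toy sketch (the three-state token ring is invented for illustration):

```python
def is_symmetry(transitions, pi):
    """Check that the permutation pi (a dict state -> state) maps every
    transition onto a transition, i.e. pi is an automorphism of the system."""
    return all((pi[s], pi[t]) in transitions for (s, t) in transitions)

# token ring over 3 processes: state i means "process i holds the token"
ring = {(0, 1), (1, 2), (2, 0)}
rotate = {0: 1, 1: 2, 2: 0}     # cyclic rotation of the processes
swap = {0: 1, 1: 0, 2: 2}       # swapping two processes breaks the ring

print(is_symmetry(ring, rotate))  # True: rotation is an automorphism
print(is_symmetry(ring, swap))    # False: (0, 1) maps to (1, 0), not a transition
```

For a parameterised family the state space is infinite, which is why the paper represents both the system and the candidate symmetry as regular word transducers and performs this check automatically on the automata.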
Simple, Accurate, and Robust Nonparametric Blind Super-Resolution
This paper proposes a simple, accurate, and robust approach to single image
nonparametric blind Super-Resolution (SR). This task is formulated as a
functional to be minimized with respect to both an intermediate super-resolved
image and a nonparametric blur-kernel. The proposed approach includes a
convolution consistency constraint which uses a non-blind learning-based SR
result to better guide the estimation process. Another key component is the
unnatural bi-l0-l2-norm regularization imposed on the super-resolved, sharp
image and the blur-kernel, which is shown to be quite beneficial for estimating
the blur-kernel accurately. The numerical optimization is implemented by
coupling the splitting augmented Lagrangian and the conjugate gradient (CG).
Using the pre-estimated blur-kernel, we finally reconstruct the SR image by a
very simple non-blind SR method that uses a natural image prior. The proposed
approach is demonstrated to achieve better performance than the recent method
by Michaeli and Irani [2] in terms of both kernel estimation accuracy and
image SR quality.
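The conjugate gradient building block used inside such splitting schemes is easy to sketch. The blur matrix and regularization weight below are toy stand-ins for the l2-regularized inner subproblem, not the paper's bi-l0-l2 formulation.

```python
import numpy as np

def cg(apply_A, b, n_iter=50, tol=1e-12):
    """Conjugate gradient for A x = b with A symmetric positive definite,
    given only as a matrix-vector product apply_A."""
    x = np.zeros_like(b)
    r = b - apply_A(x)
    p = r.copy()
    rs = r @ r
    for _ in range(n_iter):
        Ap = apply_A(p)
        alpha = rs / (p @ Ap)
        x += alpha * p
        r -= alpha * Ap
        rs_new = r @ r
        if rs_new < tol:
            break
        p = r + (rs_new / rs) * p
        rs = rs_new
    return x

# e.g. solve the ridge-type subproblem (K^T K + lam I) x = K^T y
rng = np.random.default_rng(3)
K = rng.standard_normal((40, 30))          # stand-in for a blur operator
y = rng.standard_normal(40)
lam = 0.5
x = cg(lambda v: K.T @ (K @ v) + lam * v, K.T @ y)
print(np.linalg.norm(K.T @ (K @ x) + lam * x - K.T @ y))
```

Working matrix-free (only products with K and K^T) is what makes CG attractive for image-sized problems, where the blur operator is a convolution that is never formed explicitly.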
Quantization and Compressive Sensing
Quantization is an essential step in digitizing signals, and, therefore, an
indispensable component of any modern acquisition system. This book chapter
explores the interaction of quantization and compressive sensing and examines
practical quantization strategies for compressive acquisition systems.
Specifically, we first provide a brief overview of quantization and examine
fundamental performance bounds applicable to any quantization approach. Next,
we consider several forms of scalar quantizers, namely uniform, non-uniform,
and 1-bit. We provide performance bounds and fundamental analysis, as well as
practical quantizer designs and reconstruction algorithms that account for
quantization. Furthermore, we provide an overview of Sigma-Delta
(ΣΔ) quantization in the compressed sensing context, and also
discuss implementation issues, recovery algorithms and performance bounds. As
we demonstrate, proper accounting for quantization and careful quantizer design
have a significant impact on the performance of a compressive acquisition system.
Comment: 35 pages, 20 figures, to appear in Springer book "Compressed Sensing and Its Applications", 201
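As a quick illustration of the scalar case, a midrise uniform quantizer attains the classical Δ²/12 distortion on a uniform source (the numbers are toy choices, not tied to the chapter's compressive sensing analysis):

```python
import numpy as np

def uniform_quantize(x, delta):
    """Midrise uniform scalar quantizer with step delta: each sample is
    mapped to the midpoint of its quantization cell."""
    return delta * (np.floor(x / delta) + 0.5)

rng = np.random.default_rng(0)
x = rng.uniform(-1, 1, 100_000)
delta = 0.1
q = uniform_quantize(x, delta)
mse = np.mean((q - x) ** 2)
print(mse, delta**2 / 12)   # empirical MSE vs the delta^2/12 bound
```

For a uniform source the quantization error is uniform on (-Δ/2, Δ/2], giving the Δ²/12 mean squared error that serves as the baseline against which non-uniform, 1-bit, and ΣΔ schemes are compared.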