Tensor Computation: A New Framework for High-Dimensional Problems in EDA
Many critical EDA problems suffer from the curse of dimensionality, i.e., the
very fast-scaling computational burden produced by a large number of parameters
and/or unknown variables. This phenomenon may be caused by multiple spatial or
temporal factors (e.g., 3-D field-solver discretizations and multi-rate circuit
simulation), nonlinearity of devices and circuits, a large number of design or
optimization parameters (e.g., full-chip routing/placement and circuit sizing),
or extensive process variations (e.g., variability/reliability analysis and
design for manufacturability). The computational challenges posed by such
high-dimensional problems are generally hard to handle efficiently with
traditional EDA core algorithms based on matrix and vector computation. This
paper presents "tensor computation" as an alternative general framework for the
development of efficient EDA algorithms and tools. A tensor is a
high-dimensional generalization of a matrix and a vector, and is a natural
choice for efficiently storing and solving high-dimensional EDA problems. This
paper gives a basic tutorial on tensors, demonstrates some recent examples of
EDA applications (e.g., nonlinear circuit modeling and high-dimensional
uncertainty quantification), and suggests further open EDA problems where
tensor computation could be advantageous.

Comment: 14 figures. Accepted by IEEE Trans. CAD of Integrated Circuits and Systems.
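As a minimal illustration of why tensor formats help against the curse of dimensionality, the sketch below (plain NumPy; the mode size and CP rank are arbitrary toy choices, not values from the paper) stores a 3-way tensor as CP factor matrices and compares entry counts against the dense array.

```python
import numpy as np

def cp_reconstruct(factors):
    """Rebuild a dense 3-way tensor from CP factor matrices of shape (n, R)."""
    A, B, C = factors
    # Sum of R rank-1 terms: T[i,j,k] = sum_r A[i,r] * B[j,r] * C[k,r]
    return np.einsum('ir,jr,kr->ijk', A, B, C)

rng = np.random.default_rng(0)
n, R = 50, 3                               # mode size and CP rank (toy values)
factors = [rng.standard_normal((n, R)) for _ in range(3)]
T = cp_reconstruct(factors)

dense_entries = T.size                     # n**3 = 125000 numbers to store
cp_entries = sum(A.size for A in factors)  # 3 * n * R = 450 numbers to store
```

The dense representation grows as n^3 (and as n^d for d modes), while the CP factors grow only linearly in n and d, which is the storage argument behind tensor-based EDA solvers.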
Joint Image Reconstruction and Segmentation Using the Potts Model
We propose a new algorithmic approach to the non-smooth and non-convex Potts
problem (also called piecewise-constant Mumford-Shah problem) for inverse
imaging problems. We derive a suitable splitting into specific subproblems that
can all be solved efficiently. Our method does not require a priori knowledge
of the gray levels or of the number of segments of the reconstruction.
Further, it avoids anisotropic artifacts such as geometric staircasing. We
demonstrate the suitability of our method for joint image reconstruction and
segmentation. We focus on Radon data, where we consider, in particular,
limited-data situations. For instance, our method is able to recover all
segments of the Shepp-Logan phantom from a small number of angular views. We
illustrate the practical applicability on a real PET dataset. As further
applications, we consider spherical Radon data as well as blurred data.
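The univariate instance of the Potts (piecewise-constant Mumford-Shah) problem admits an exact dynamic-programming solution, which is the standard building block behind splitting approaches like the one above. A minimal sketch on toy data (the jump penalty value is an arbitrary choice):

```python
import numpy as np

def potts_1d(y, gamma):
    """Exact O(n^2) dynamic program for the 1-D Potts problem:
    minimize sum_i (x_i - y_i)^2 + gamma * (number of jumps in x)."""
    y = np.asarray(y, dtype=float)
    n = len(y)
    s1 = np.concatenate(([0.0], np.cumsum(y)))       # prefix sums
    s2 = np.concatenate(([0.0], np.cumsum(y * y)))   # prefix sums of squares

    def seg_err(l, r):
        # SSE of fitting the mean on y[l:r] (0-based, half-open interval)
        s = s1[r] - s1[l]
        return (s2[r] - s2[l]) - s * s / (r - l)

    B = np.empty(n + 1)              # B[r] = optimal cost for the prefix y[:r]
    B[0] = -gamma                    # so the first segment pays no jump penalty
    jump = np.zeros(n + 1, dtype=int)
    for r in range(1, n + 1):
        costs = [B[l] + gamma + seg_err(l, r) for l in range(r)]
        l_star = int(np.argmin(costs))
        B[r], jump[r] = costs[l_star], l_star

    # Backtrack the segment boundaries; fill each segment with its mean.
    x, r = np.empty(n), n
    while r > 0:
        l = jump[r]
        x[l:r] = (s1[r] - s1[l]) / (r - l)
        r = l
    return x

y = np.array([1.0, 1.1, 0.9, 5.0, 5.2, 4.8, 5.1])
x = potts_1d(y, gamma=0.5)   # two segments: a level near 1 and a level near 5
```

The 2-D imaging problem is NP-hard, which is why the paper resorts to a splitting into subproblems of exactly this efficiently solvable univariate form.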
Group-Sparse Signal Denoising: Non-Convex Regularization, Convex Optimization
Convex optimization with sparsity-promoting convex regularization is a
standard approach for estimating sparse signals in noise. In order to promote
sparsity more strongly than convex regularization, it is also standard practice
to employ non-convex optimization. In this paper, we take a third approach. We
utilize a non-convex regularization term chosen such that the total cost
function (consisting of data consistency and regularization terms) is convex.
Therefore, sparsity is more strongly promoted than in the standard convex
formulation, but without sacrificing the attractive aspects of convex
optimization (unique minimum, robust algorithms, etc.). We use this idea to
improve the recently developed 'overlapping group shrinkage' (OGS) algorithm
for the denoising of group-sparse signals. The algorithm is applied to the
problem of speech enhancement with favorable results in terms of both SNR and
perceptual quality.

Comment: 14 pages, 11 figures.
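Because overlapping groups couple the unknowns, there is no closed-form shrinkage rule; OGS-style methods instead use a majorize-minimize iteration. A minimal 1-D sketch of that style of update (the group size, regularization weight, and iteration count below are illustrative assumptions, not values from the paper):

```python
import numpy as np

def ogs_denoise(y, lam, K=3, n_iter=50, eps=1e-10):
    """Majorize-minimize style iteration for group-sparse denoising:
    minimize 0.5*||y - x||^2 + lam * sum_i ||x_(i..i+K-1)||_2 (overlapping groups)."""
    y = np.asarray(y, dtype=float)
    x = y.copy()
    ones = np.ones(K)
    for _ in range(n_iter):
        # r[m] = l2 norm of the (zero-padded) group of K samples ending at m
        r = np.sqrt(np.convolve(x * x, ones))
        # w[n] = sum of 1/r over the K groups that contain sample n
        w = np.convolve(1.0 / (r + eps), ones, mode='valid')
        x = y / (1.0 + lam * w)          # pointwise shrinkage update
    return x

y = np.zeros(20)
y[8:12] = 5.0          # a "group" of large coefficients survives shrinkage
y[2] = 0.5             # an isolated coefficient is driven toward zero
x = ogs_denoise(y, lam=0.2)
```

The key behavior: a large coefficient surrounded by other large coefficients (a group) is shrunk only mildly, while an isolated coefficient of the same relative prominence is suppressed, which is what "promoting group sparsity" means here.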
Semi-sparsity Priors for Image Structure Analysis and Extraction
Image structure-texture decomposition is a long-standing and fundamental
problem in both image processing and computer vision fields. In this paper, we
propose a generalized semi-sparse regularization framework for image structural
analysis and extraction, which allows us to decouple the underlying image
structures from complicated textural backgrounds. Combined with different
texture-analysis models, such a regularization exhibits favorable properties
that distinguish it from many traditional methods. We demonstrate that it is
not only capable of preserving image structures without introducing notorious
staircase artifacts in polynomially smooth surfaces, but is also applicable to
decomposing image textures with strong oscillatory patterns. Moreover, we
introduce an efficient numerical solution based on an alternating direction
method of multipliers (ADMM) algorithm, which gives rise to a simple and
tractable procedure for image structure-texture decomposition. The versatility
of the proposed method is finally verified by a series of experiments, which
produce image decomposition results comparable or superior to those of
cutting-edge methods.

Comment: 18 pages.
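The semi-sparsity model itself is more involved, but the ADMM splitting pattern it relies on can be shown on the simplest related problem, 1-D total-variation denoising. This is a sketch of the generic scaled-form ADMM loop, not the paper's method; all parameter values are illustrative assumptions.

```python
import numpy as np

def soft(v, t):
    """Soft-thresholding, the proximal operator of the l1 norm."""
    return np.sign(v) * np.maximum(np.abs(v) - t, 0.0)

def tv_admm(y, lam, rho=1.0, n_iter=500):
    """Scaled-form ADMM for 1-D total-variation denoising:
    minimize 0.5*||x - y||^2 + lam*||D x||_1, with the splitting z = D x."""
    n = len(y)
    D = np.diff(np.eye(n), axis=0)       # (n-1) x n first-difference operator
    A = np.eye(n) + rho * D.T @ D        # x-update system matrix (fixed)
    z = np.zeros(n - 1)
    u = np.zeros(n - 1)                  # scaled dual variable
    for _ in range(n_iter):
        x = np.linalg.solve(A, y + rho * D.T @ (z - u))  # quadratic subproblem
        z = soft(D @ x + u, lam / rho)                   # l1 subproblem
        u = u + D @ x - z                                # dual ascent
    return x

y = np.concatenate([np.full(10, 0.0), np.full(10, 2.0)])  # noiseless step
x = tv_admm(y, lam=0.5)
```

Each subproblem is easy (a linear solve and an elementwise shrinkage), which is exactly the "simple and tractable" property the abstract claims for its ADMM scheme.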
Spherical deconvolution of multichannel diffusion MRI data with non-Gaussian noise models and spatial regularization
Spherical deconvolution (SD) methods are widely used to estimate the
intra-voxel white-matter fiber orientations from diffusion MRI data. However,
while some of these methods assume a zero-mean Gaussian distribution for the
underlying noise, its real distribution is known to be non-Gaussian and to
depend on the methodology used to combine multichannel signals. Indeed, the two
prevailing methods for multichannel signal combination lead to Rician and
noncentral Chi noise distributions. Here we develop a Robust and Unbiased
Model-BAsed Spherical Deconvolution (RUMBA-SD) technique, intended to deal with
realistic MRI noise, based on a Richardson-Lucy (RL) algorithm adapted to
Rician and noncentral Chi likelihood models. To quantify the benefits of using
proper noise models, RUMBA-SD was compared with dRL-SD, a well-established
method based on the RL algorithm for Gaussian noise. Another aim of the study
was to quantify the impact of including a total variation (TV) spatial
regularization term in the estimation framework. To do this, we developed TV
spatially-regularized versions of both RUMBA-SD and dRL-SD algorithms. The
evaluation was performed by comparing various quality metrics on 132
three-dimensional synthetic phantoms involving different inter-fiber angles and
volume fractions, which were contaminated with noise mimicking patterns
generated by data processing in multichannel scanners. The results demonstrate
that the inclusion of proper likelihood models leads to an increased ability to
resolve fiber crossings with smaller inter-fiber angles and to better detect
non-dominant fibers. The inclusion of TV regularization dramatically improved
the resolution power of both techniques. The above findings were also verified
in brain data.
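For reference, the classical Richardson-Lucy multiplicative update that both RUMBA-SD and dRL-SD build on can be sketched for a generic linear forward model y ≈ Ax with nonnegative x (the blur matrix and signal below are toy assumptions; the paper's contribution is replacing the likelihood term with Rician/noncentral-Chi variants):

```python
import numpy as np

def richardson_lucy(y, A, n_iter=500, eps=1e-12):
    """Classical Richardson-Lucy multiplicative update for y ≈ A x, x >= 0.
    The iterates stay nonnegative automatically because the update is a
    ratio of nonnegative quantities."""
    x = np.full(A.shape[1], max(y.mean(), eps))   # flat nonnegative start
    At1 = A.T @ np.ones(A.shape[0])
    for _ in range(n_iter):
        x = x * (A.T @ (y / (A @ x + eps))) / (At1 + eps)
    return x

# Toy 1-D blur: each measurement averages a sample with its neighbors.
n = 8
A = 0.5 * np.eye(n) + 0.25 * np.eye(n, k=1) + 0.25 * np.eye(n, k=-1)
x_true = np.array([0., 0., 4., 0., 0., 2., 0., 0.])
y = A @ x_true
x_hat = richardson_lucy(y, A)
```

In the spherical-deconvolution setting, A encodes the fiber response function over orientation samples and x is the fiber orientation distribution, but the iteration skeleton is the same.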
Cross-Term-Free Time-Frequency Distribution Reconstruction via Lifted Projections
A crucial aspect of time-frequency (TF) analysis is the identification of separate components in a multicomponent signal. The Wigner-Ville distribution is the classical tool for representing such signals, but it suffers from cross-terms. Other methods, which are members of Cohen's class of distributions, also aim to remove the cross-terms by masking the ambiguity function (AF), but they result in reduced resolution. Most practical time-varying signals are in the form of weighted trajectories on the TF plane, and many others are sparse in nature. Therefore, in recent studies the problem is cast as TF distribution reconstruction using a subset of AF-domain coefficients and a sparsity assumption. Sparsity can be achieved by constraining or minimizing the l1 norm. In this article, an l1 minimization approach based on projections onto convex sets is proposed to obtain a high-resolution, cross-term-free TF distribution for a given signal. The new method does not require any parameter adjustment to obtain a solution. Experimental results are presented.
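The projections-onto-convex-sets idea can be sketched with two sets: an affine data-consistency set (standing in for the known AF-domain coefficients) and an l1 ball enforcing sparsity. Everything below (mask, observed values, ball radius) is an illustrative assumption, not data from the article.

```python
import numpy as np

def project_l1_ball(v, tau):
    """Euclidean projection of v onto the l1 ball of radius tau (sort-based)."""
    if np.abs(v).sum() <= tau:
        return v.copy()
    u = np.sort(np.abs(v))[::-1]
    css = np.cumsum(u)
    k = np.arange(1, len(u) + 1)
    rho = np.nonzero(u * k > css - tau)[0][-1]      # largest active index
    theta = (css[rho] - tau) / (rho + 1)            # shrinkage threshold
    return np.sign(v) * np.maximum(np.abs(v) - theta, 0.0)

# Alternating projections between the affine set {x : x[mask] = b}
# and the l1 ball {x : ||x||_1 <= tau}.
n, tau = 10, 3.0
mask = np.array([1, 4])          # indices of "known" coefficients (toy)
b = np.array([2.0, -1.0])        # their observed values (toy)
x = np.zeros(n)
for _ in range(100):
    x[mask] = b                  # project onto the data-consistency set
    x = project_l1_ball(x, tau)  # project onto the l1 ball
x[mask] = b                      # finish on the data-consistency set
```

On real AF-domain data the iteration is run until the two constraint violations are negligible; on this toy problem the sparse solution satisfies both sets at once.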
A Spectral Graph Uncertainty Principle
The spectral theory of graphs provides a bridge between classical signal
processing and the nascent field of graph signal processing. In this paper, a
spectral graph analogy to Heisenberg's celebrated uncertainty principle is
developed. Just as the classical result provides a tradeoff between signal
localization in time and frequency, this result provides a fundamental tradeoff
between a signal's localization on a graph and in its spectral domain. Using
the eigenvectors of the graph Laplacian as a surrogate Fourier basis,
quantitative definitions of graph and spectral "spreads" are given, and a
complete characterization of the feasibility region of these two quantities is
developed. In particular, the lower boundary of the region, referred to as the
uncertainty curve, is shown to be achieved by eigenvectors associated with the
smallest eigenvalues of an affine family of matrices. The convexity of the
uncertainty curve allows it to be found to any desired accuracy by a fast
approximation algorithm requiring only a small number of typically sparse
eigenvalue evaluations. Closed-form expressions for the uncertainty curves for
some special classes of graphs are derived, and an accurate analytical
approximation for the expected uncertainty curve of Erdős-Rényi random
graphs is developed. These theoretical results are validated by numerical
experiments, which also reveal an intriguing connection between diffusion
processes on graphs and the uncertainty bounds.

Comment: 40 pages, 8 figures.
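The two "spreads" the abstract refers to can be written down concretely for a small graph. The sketch below uses a path graph and vertex 0 as the localization center (both arbitrary illustrative choices): the graph spread weights signal energy by squared hop distance from the center, and the spectral spread weights the graph-Fourier energy by the Laplacian eigenvalues.

```python
import numpy as np

# Build a small path graph: vertices 0..n-1, edges i -- i+1.
n = 6
A = np.zeros((n, n))
for i in range(n - 1):
    A[i, i + 1] = A[i + 1, i] = 1.0
L = np.diag(A.sum(1)) - A            # combinatorial graph Laplacian
lam, U = np.linalg.eigh(L)           # eigenvectors: surrogate Fourier basis

x = np.exp(-np.arange(n))            # a signal localized near vertex 0
x = x / np.linalg.norm(x)            # unit norm, so energies are weights
xhat = U.T @ x                       # graph Fourier transform

dist = np.arange(n)                  # hop distance from the center vertex 0
graph_spread = float(dist**2 @ x**2)        # localization on the graph
spectral_spread = float(lam @ xhat**2)      # equals x^T L x
```

A signal concentrated on one vertex has small graph spread but large spectral spread, and vice versa; the uncertainty curve characterizes the best simultaneously achievable pair.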