Analysis of DCE-MRI Data using a Nonnegative Elastic Net
We present a nonnegative Elastic Net approach for the analysis of Dynamic Contrast-Enhanced Magnetic Resonance Imaging data. A multi-compartment approach is considered, which is translated into a (restricted) least squares model selection problem. This is done by using a set of basis functions for a given set of candidate rate constants. The form of the basis functions is derived from a kinetic model and thus describes the contribution of a compartment. Using the Elastic Net estimator, we choose clusters of basis functions and, hence, rate constants of compartments. As a further challenge, the estimator has to be restricted to positive regression parameters, which correspond to transfer rates of the compartments. The proposed estimation method is applied to an in-vivo data set.
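The selection step described above can be sketched in a few lines. This is a toy illustration, not the paper's pipeline: the time grid, candidate rate constants, and penalty weights are invented, and a plain single-exponential basis stands in for the full kinetic model; scikit-learn's `ElasticNet(positive=True)` supplies the nonnegativity-restricted estimator.

```python
import numpy as np
from sklearn.linear_model import ElasticNet

# Hypothetical setup (not from the paper): time grid and candidate rate constants.
t = np.linspace(0, 5, 100)                      # acquisition times
rates = np.logspace(-1, 1, 30)                  # candidate rate constants k

# Basis functions derived from a simple kinetic model: one decaying
# exponential exp(-k t) per candidate compartment rate constant.
X = np.exp(-np.outer(t, rates))                 # (time points x candidates)

# Synthetic concentration curve: two active compartments plus noise.
rng = np.random.default_rng(0)
beta_true = np.zeros(len(rates))
beta_true[[5, 20]] = [1.0, 0.5]
y = X @ beta_true + 0.01 * rng.standard_normal(len(t))

# Nonnegative Elastic Net: positive=True restricts the regression
# parameters (transfer rates) to be >= 0; l1_ratio mixes the lasso and
# ridge penalties, so correlated basis functions enter as clusters.
model = ElasticNet(alpha=1e-3, l1_ratio=0.5, positive=True,
                   fit_intercept=False, max_iter=10_000)
model.fit(X, y)
support = np.flatnonzero(model.coef_ > 1e-6)    # selected rate constants
print(len(support), "candidate compartments selected")
```

The grouping effect of the ridge part is what makes the Elastic Net select clusters of nearby rate constants rather than isolated spikes, which is the behavior the abstract relies on.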
A Convex Reconstruction Model for X-ray Tomographic Imaging with Uncertain Flat-fields
Classical methods for X-ray computed tomography are based on the assumption
that the X-ray source intensity is known, but in practice, the intensity is
measured and hence uncertain. Under normal operating conditions, when the
exposure time is sufficiently high, this kind of uncertainty typically has a
negligible effect on the reconstruction quality. However, in time- or
dose-limited applications such as dynamic CT, this uncertainty may cause severe
and systematic artifacts known as ring artifacts. By carefully modeling the
measurement process and by taking uncertainties into account, we derive a new
convex model that leads to improved reconstructions despite poor quality
measurements. We demonstrate the effectiveness of the methodology based on
simulated and real data sets.
Comment: Accepted at IEEE Transactions on Computational Imaging
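Why an uncertain flat-field produces ring artifacts can be seen in a toy sinogram. This sketch uses invented numbers and is not the paper's convex model; it only demonstrates the error mechanism: log-correcting with a noisy measured flat-field leaves an error that depends on the detector pixel but not the projection angle, and such constant-per-column sinogram errors backproject into concentric rings.

```python
import numpy as np

rng = np.random.default_rng(1)
n_angles, n_pixels = 180, 64

# True flat-field (source intensity per detector pixel) and a noisy
# measured estimate, as obtained from a short flat-field exposure.
v_true = 1e4 * np.ones(n_pixels)
v_meas = rng.poisson(v_true).astype(float)

# Noiseless transmission data for a zero-attenuation object: y = v_true.
y = np.tile(v_true, (n_angles, 1))

# Log-correcting with the *measured* flat-field leaves a residual
# -log(v_true / v_meas) that is identical across all angles; this
# per-detector-column bias is what reconstructs as ring artifacts.
sino_err = -np.log(y / v_meas)               # would be 0 if v were known
col_bias = sino_err.mean(axis=0)             # same for every angle
print(np.allclose(sino_err, col_bias))       # prints True
```

Modeling the flat-field as uncertain inside the reconstruction problem, as the paper does, removes this systematic bias instead of baking it into the corrected data.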
Relative Entropy Relaxations for Signomial Optimization
Signomial programs (SPs) are optimization problems specified in terms of
signomials, which are weighted sums of exponentials composed with linear
functionals of a decision variable. SPs are non-convex optimization problems in
general, and families of NP-hard problems can be reduced to SPs. In this paper
we describe a hierarchy of convex relaxations to obtain successively tighter
lower bounds of the optimal value of SPs. This sequence of lower bounds is
computed by solving increasingly larger-sized relative entropy optimization
problems, which are convex programs specified in terms of linear and relative
entropy functions. Our approach relies crucially on the observation that the
relative entropy function -- by virtue of its joint convexity with respect to
both arguments -- provides a convex parametrization of certain sets of globally
nonnegative signomials with efficiently computable nonnegativity certificates
via the arithmetic-geometric-mean inequality. By appealing to representation
theorems from real algebraic geometry, we show that our sequences of lower
bounds converge to the global optima for broad classes of SPs. Finally, we also
demonstrate the effectiveness of our methods via numerical experiments.
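The arithmetic-geometric-mean mechanism behind these certificates can be checked on a tiny example. This is an illustrative sketch, not the paper's hierarchy: for the signomial f(x) = e^{2x} + e^{-2x} - 2, weights nu chosen so that the weighted average of the positive-term exponents matches the exponent of the negative term turn the AM-GM bound into a constant that certifies global nonnegativity.

```python
import numpy as np

# Toy signomial f(x) = e^{2x} + e^{-2x} - 2: one negative term whose
# magnitude is bounded via the weighted AM-GM inequality.
c = np.array([1.0, 1.0])       # coefficients of the positive terms
a = np.array([2.0, -2.0])      # exponents of the positive terms
nu = np.array([0.5, 0.5])      # AM-GM weights (nonnegative, sum to 1)

# Weighted AM-GM with t_i = (c_i/nu_i) e^{a_i x}:
#   sum_i c_i e^{a_i x} >= prod_i (c_i/nu_i)^{nu_i} * e^{(nu . a) x}.
# Choosing nu with nu . a = 0 (the exponent of the constant negative
# term) makes the right-hand side the constant prod_i (c_i/nu_i)^{nu_i}.
assert np.isclose(nu @ a, 0.0)
bound = np.prod((c / nu) ** nu)                    # = 2.0 here
print(bound)

# Numerical sanity check: f is globally nonnegative.
x = np.linspace(-3, 3, 601)
f = c[0] * np.exp(a[0] * x) + c[1] * np.exp(a[1] * x) - 2.0
print(f.min() >= -1e-12)                           # prints True
```

Searching over the weights nu subject to linear constraints and the relative-entropy expression of the AM-GM bound is exactly what makes the resulting certificates computable by convex (relative entropy) programming.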
Robust computation of linear models by convex relaxation
Consider a dataset of vector-valued observations that consists of noisy
inliers, which are explained well by a low-dimensional subspace, along with
some number of outliers. This work describes a convex optimization problem,
called REAPER, that can reliably fit a low-dimensional model to this type of
data. This approach parameterizes linear subspaces using orthogonal projectors,
and it uses a relaxation of the set of orthogonal projectors to reach the
convex formulation. The paper provides an efficient algorithm for solving the
REAPER problem, and it documents numerical experiments which confirm that
REAPER can dependably find linear structure in synthetic and natural data. In
addition, when the inliers lie near a low-dimensional subspace, there is a
rigorous theory that describes when REAPER can approximate this subspace.
Comment: Formerly titled "Robust computation of linear models, or How to find a needle in a haystack"
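The flavor of the approach can be conveyed with an IRLS-style sketch. This is a simplification under stated assumptions, not the paper's algorithm: each reweighted subproblem is solved with a hard rank-d projector from the top eigenvectors of the weighted covariance, rather than over the full relaxed set {P : 0 <= P <= I, tr P = d} that defines the convex REAPER problem.

```python
import numpy as np

def irls_subspace_sketch(X, d, n_iter=50, delta=1e-6):
    """IRLS-style sketch: robustly fit a d-dim subspace to the rows of X.

    Assumption (simplification): the weighted subproblem is solved by a
    rank-d projector from the weighted covariance's top eigenvectors.
    """
    n, D = X.shape
    P = np.eye(D) * (d / D)                       # feasible starting point
    for _ in range(n_iter):
        resid = np.linalg.norm(X - X @ P, axis=1)
        w = 1.0 / np.maximum(resid, delta)        # downweight outliers
        C = (X * w[:, None]).T @ X                # weighted covariance
        _, V = np.linalg.eigh(C)                  # ascending eigenvalues
        U = V[:, -d:]                             # top-d eigenvectors
        P = U @ U.T                               # orthogonal projector
    return P

# Synthetic data: inliers near a 2-dim subspace of R^5 plus gross outliers.
rng = np.random.default_rng(2)
B = np.linalg.qr(rng.standard_normal((5, 2)))[0]
inliers = rng.standard_normal((200, 2)) @ B.T + 0.01 * rng.standard_normal((200, 5))
outliers = 5.0 * rng.standard_normal((20, 5))
P = irls_subspace_sketch(np.vstack([inliers, outliers]), d=2)
print(np.trace(P))                                # ~2: a rank-2 projector
```

The 1/residual reweighting is what turns the least-squares subproblems into a surrogate for the sum of unsquared distances, which is the source of REAPER's robustness to outliers.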
Quantization as Histogram Segmentation: Optimal Scalar Quantizer Design in Network Systems
An algorithm for scalar quantizer design on discrete-alphabet sources is proposed. The proposed algorithm can be used to design fixed-rate and entropy-constrained conventional scalar quantizers, multiresolution scalar quantizers, multiple description scalar quantizers, and Wyner–Ziv scalar quantizers. The algorithm guarantees globally optimal solutions for conventional fixed-rate scalar quantizers and entropy-constrained scalar quantizers. For the other coding scenarios, the algorithm yields the best code among all codes that meet a given convexity constraint. In all cases, the algorithm run-time is polynomial in the size of the source alphabet. The algorithm derivation arises from a demonstration of the connection between scalar quantization, histogram segmentation, and the shortest path problem in a certain directed acyclic graph.
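The shortest-path connection can be sketched for the simplest case, a fixed-rate quantizer. This is an illustrative dynamic program in the spirit of the abstract, not a reproduction of the paper's algorithm: nodes are the N+1 cut positions between consecutive alphabet symbols, an edge (i, j) groups symbols i..j-1 into one cell with weight equal to that cell's mean-squared error about its centroid, and a shortest path with exactly K edges from cut 0 to cut N is an optimal K-level quantizer.

```python
import numpy as np

def optimal_fixed_rate_sq(x, p, K):
    """Optimal K-level fixed-rate scalar quantizer for a discrete source
    with sorted support x and probabilities p, via a K-edge shortest path
    in the DAG of cut positions."""
    N = len(x)

    def cell_cost(i, j):                 # MSE of the cell {x[i], ..., x[j-1]}
        pw, xs = p[i:j], x[i:j]
        centroid = pw @ xs / pw.sum()    # conditional mean of the cell
        return pw @ (xs - centroid) ** 2

    INF = float("inf")
    D = np.full((K + 1, N + 1), INF)     # D[k, j]: best cost, k cells, j symbols
    D[0, 0] = 0.0
    arg = np.zeros((K + 1, N + 1), dtype=int)
    for k in range(1, K + 1):
        for j in range(k, N + 1):
            for i in range(k - 1, j):
                c = D[k - 1, i] + cell_cost(i, j)
                if c < D[k, j]:
                    D[k, j], arg[k, j] = c, i
    cuts, j = [N], N                     # recover boundaries by backtracking
    for k in range(K, 0, -1):
        j = arg[k, j]
        cuts.append(j)
    return D[K, N], cuts[::-1]

# Example: uniform 8-symbol source, 2 levels -> the split lands in the middle.
x = np.arange(8, dtype=float)
p = np.full(8, 1.0 / 8)
dist, cuts = optimal_fixed_rate_sq(x, p, K=2)
print(cuts)                              # [0, 4, 8]: cells {0..3} and {4..7}
```

Because every cell is a contiguous run of alphabet symbols (the convexity constraint the abstract mentions), the search over quantizers reduces to this polynomial-time path problem instead of an exponential search over partitions.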