Generalized SURE for Exponential Families: Applications to Regularization
Stein's unbiased risk estimate (SURE) was proposed by Stein for the
independent, identically distributed (iid) Gaussian model in order to derive
estimates that dominate least-squares (LS). In recent years, the SURE criterion
has been employed in a variety of denoising problems for choosing
regularization parameters that minimize an estimate of the mean-squared error
(MSE). However, its use has been limited to the iid case which precludes many
important applications. In this paper we begin by deriving a SURE counterpart
for general, not necessarily iid distributions from the exponential family.
This enables extending the SURE design technique to a much broader class of
problems. Based on this generalization we suggest a new method for choosing
regularization parameters in penalized LS estimators. We then demonstrate its
superior performance over the conventional generalized cross validation
approach and the discrepancy method in the context of image deblurring and
deconvolution. The SURE technique can also be used to design estimates without
predefining their structure. However, allowing for too many free parameters
impairs the performance of the resulting estimates. To address this inherent
tradeoff we propose a regularized SURE objective. Based on this design
criterion, we derive a wavelet denoising strategy that is similar in spirit to
the standard soft-threshold approach but can lead to improved MSE performance.
Comment: to appear in the IEEE Transactions on Signal Processing
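The iid-Gaussian starting point that the paper generalizes can be illustrated concretely: classical SURE gives an unbiased estimate of the MSE of a soft-threshold estimator, so the threshold (a regularization parameter) can be chosen by minimizing SURE without access to the clean signal. A minimal sketch, assuming a sparse signal in white Gaussian noise; the signal size, sparsity, noise level, and threshold grid below are illustrative choices, not values from the paper:

```python
import numpy as np

rng = np.random.default_rng(0)
n, sigma = 2000, 1.0
x = np.zeros(n)
x[:100] = rng.normal(0, 5, 100)        # sparse clean signal: 100 large coefficients
y = x + rng.normal(0, sigma, n)        # noisy observation y = x + noise

def soft(y, t):
    """Soft-threshold estimator with threshold t."""
    return np.sign(y) * np.maximum(np.abs(y) - t, 0.0)

def sure(y, t, sigma):
    """Stein's unbiased estimate of E||soft(y, t) - x||^2 (iid Gaussian model)."""
    return (-n * sigma**2
            + np.sum(np.minimum(y**2, t**2))              # ||soft(y,t) - y||^2
            + 2 * sigma**2 * np.count_nonzero(np.abs(y) > t))  # divergence term

# Choose the threshold by minimizing SURE over a grid -- no access to x needed.
ts = np.linspace(0.0, 5.0, 101)
t_star = ts[np.argmin([sure(y, t, sigma) for t in ts])]

# Oracle MSE (uses the clean x, only for checking the selection).
mse = lambda t: np.sum((soft(y, t) - x)**2)
```

The SURE-selected threshold should substantially beat no thresholding (t = 0, where the MSE is just the noise energy), which is the behavior the paper's regularized-SURE wavelet strategy builds on.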
A Semidefinite Programming Approach to Optimal Unambiguous Discrimination of Quantum States
In this paper we consider the problem of unambiguous discrimination between a
set of linearly independent pure quantum states. We show that the design of the
optimal measurement that minimizes the probability of an inconclusive result
can be formulated as a semidefinite programming problem. Based on this
formulation, we develop a set of necessary and sufficient conditions for an
optimal quantum measurement. We show that the optimal measurement can be
computed very efficiently in polynomial time by exploiting the many well-known
algorithms for solving semidefinite programs, which are guaranteed to converge
to the global optimum.
Using the general conditions for optimality, we derive necessary and
sufficient conditions so that the measurement that results in an equal
probability of an inconclusive result for each one of the quantum states is
optimal. We refer to this measurement as the equal-probability measurement
(EPM). We then show that for any state set, the prior probabilities of the
states can be chosen such that the EPM is optimal.
Finally, we consider state sets with strong symmetry properties and equal
prior probabilities for which the EPM is optimal. We first consider
geometrically uniform state sets that are defined over a group of unitary
matrices and are generated by a single generating vector. We then consider
compound geometrically uniform state sets which are generated by a group of
unitary matrices using multiple generating vectors, where the generating
vectors satisfy a certain (weighted) norm constraint.
Comment: To appear in IEEE Transactions on Information Theory
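For the special case of two pure states with equal priors, the optimal unambiguous measurement is known in closed form (the Ivanovic-Dieks-Peres result: inconclusive probability equal to the state overlap), which makes a small numerical check possible without running an SDP solver. The sketch below does not solve the semidefinite program from the paper; it constructs the two-state POVM and verifies the properties the abstract describes: each detection operator never misidentifies the other state, the inconclusive element is positive semidefinite, and the inconclusive probability equals the overlap. The angle and dimension are illustrative:

```python
import numpy as np

theta = 0.3                                    # illustrative half-angle between the states
psi1 = np.array([np.cos(theta),  np.sin(theta)])
psi2 = np.array([np.cos(theta), -np.sin(theta)])
overlap = abs(psi1 @ psi2)                     # |<psi1|psi2>| = cos(2*theta)

# Unambiguous POVM: E1 may click only for psi1, E2 only for psi2, so each
# detection operator lives in the subspace orthogonal to the *other* state.
perp2 = np.array([np.sin(theta),  np.cos(theta)])   # orthogonal to psi2
perp1 = np.array([np.sin(theta), -np.cos(theta)])   # orthogonal to psi1
c = 1.0 / (1.0 + overlap)                      # optimal weight for equal priors
E1 = c * np.outer(perp2, perp2)
E2 = c * np.outer(perp1, perp1)
E0 = np.eye(2) - E1 - E2                       # inconclusive outcome; must be PSD

# Equal-prior probability of an inconclusive result.
p_inc = 0.5 * (psi1 @ E0 @ psi1) + 0.5 * (psi2 @ E0 @ psi2)
```

Here both states yield the same inconclusive probability, so this two-state measurement is an instance of the equal-probability measurement (EPM) discussed in the abstract.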
Time Delay Estimation from Low Rate Samples: A Union of Subspaces Approach
Time delay estimation arises in many applications in which a multipath medium
has to be identified from pulses transmitted through the channel. Various
approaches have been proposed in the literature to identify time delays
introduced by multipath environments. However, these methods either operate on
the analog received signal, or require high sampling rates in order to achieve
reasonable time resolution. In this paper, our goal is to develop a unified
approach to time delay estimation from low rate samples of the output of a
multipath channel. Our methods result in perfect recovery of the multipath
delays from samples of the channel output at the lowest possible rate, even in
the presence of overlapping transmitted pulses. This rate depends only on the
number of multipath components and the transmission rate, but not on the
bandwidth of the probing signal. In addition, our development allows for a
variety of different sampling methods. By properly manipulating the low-rate
samples, we show that the time delays can be recovered using the well-known
ESPRIT algorithm. Combining results from sampling theory with those obtained in
the context of direction of arrival estimation methods, we develop necessary
and sufficient conditions on the transmitted pulse and the sampling functions
in order to ensure perfect recovery of the channel parameters at the minimal
possible rate. Our results can be viewed in a broader context, as a sampling
theorem for analog signals defined over an infinite union of subspaces.
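The core algebraic step, recovering delays via ESPRIT, can be sketched in isolation. Assuming (as a stand-in for the paper's sampling front end) that the low-rate measurements are Fourier coefficients of the channel, each delay contributes one complex exponential, and ESPRIT's rotational-invariance trick extracts them. A noiseless toy with illustrative delay and amplitude values:

```python
import numpy as np

T = 1.0                                  # channel period (normalized)
delays = np.array([0.12, 0.37, 0.60])    # true multipath delays (toy values)
amps   = np.array([1.0, 0.8, 0.5])
K, M = len(delays), 32                   # K paths, M low-rate measurements

# Noiseless measurements: Fourier coefficients of the channel -- a sum of
# K complex exponentials whose frequencies encode the delays.
m = np.arange(M)
x = (amps * np.exp(-2j * np.pi * np.outer(m, delays) / T)).sum(axis=1)

# ESPRIT: Hankel matrix -> signal subspace -> shift (rotational) invariance.
L = M // 2
H = np.array([x[i:i + L] for i in range(M - L + 1)]).T   # L x (M-L+1) Hankel
U, s, Vh = np.linalg.svd(H)
Us = U[:, :K]                                            # K-dim signal subspace
Phi = np.linalg.lstsq(Us[:-1], Us[1:], rcond=None)[0]    # solve Us[:-1] Phi = Us[1:]
eigs = np.linalg.eigvals(Phi)                            # eigenvalues e^{-j 2 pi tau_k / T}
tau_hat = np.sort(np.mod(-np.angle(eigs) / (2 * np.pi) * T, T))
```

With K = 3 paths, M = 32 noiseless measurements already recover the delays exactly; the paper's contribution is showing how such measurements can be obtained from the analog channel output at the minimal rate and under which conditions on the pulse and sampling functions this works.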
On Conditions for Uniqueness in Sparse Phase Retrieval
The phase retrieval problem has a long history and is an important problem in
many areas of optics. Theoretical understanding of phase retrieval is still
limited and fundamental questions such as uniqueness and stability of the
recovered solution are not yet fully understood. This paper provides several
additions to the theoretical understanding of sparse phase retrieval. In
particular we show that if the measurement ensemble can be chosen freely, as
few as 4k-1 phaseless measurements suffice to guarantee uniqueness of a
k-sparse M-dimensional real solution. We also prove that 2(k^2-k+1) Fourier
magnitude measurements are sufficient under rather general conditions.
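The measurement model behind these counts is simple to write down, as is the reason uniqueness can only hold up to a global sign: phaseless measurements discard the sign of a real signal. A minimal sketch of the model, with illustrative dimensions; the bound 4k-1 sets the number of rows here only to mirror the statement, not to verify it:

```python
import numpy as np

rng = np.random.default_rng(1)
n, k = 20, 3                        # illustrative ambient dimension and sparsity
m = 4 * k - 1                       # measurement count from the stated bound (= 11)

x = np.zeros(n)                     # k-sparse real signal
support = rng.choice(n, k, replace=False)
x[support] = rng.normal(size=k)

A = rng.normal(size=(m, n))         # freely chosen (generic) measurement vectors
b = np.abs(A @ x)                   # phaseless (magnitude-only) measurements

# The global sign is never identifiable: x and -x produce identical data,
# which is why uniqueness results are stated up to this ambiguity.
assert np.allclose(np.abs(A @ (-x)), b)
```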
From Theory to Practice: Sub-Nyquist Sampling of Sparse Wideband Analog Signals
Conventional sub-Nyquist sampling methods for analog signals exploit prior
information about the spectral support. In this paper, we consider the
challenging problem of blind sub-Nyquist sampling of multiband signals, whose
unknown frequency support occupies only a small portion of a wide spectrum. Our
primary design goals are efficient hardware implementation and low
computational load on the supporting digital processing. We propose a system,
named the modulated wideband converter, which first multiplies the analog
signal by a bank of periodic waveforms. The product is then lowpass filtered
and sampled uniformly at a low rate, which is orders of magnitude smaller than
Nyquist. Perfect recovery from the proposed samples is achieved under certain
necessary and sufficient conditions. We also develop a digital architecture,
which allows either reconstruction of the analog input, or processing of any
band of interest at a low rate, that is, without interpolating to the high
Nyquist rate. Numerical simulations demonstrate many engineering aspects:
robustness to noise and mismodeling, potential hardware simplifications,
realtime performance for signals with time-varying support and stability to
quantization effects. We compare our system with two previous approaches:
periodic nonuniform sampling, which is bandwidth limited by existing hardware
devices, and the random demodulator, which is restricted to discrete multitone
signals and has a high computational load. In the broader context of Nyquist
sampling, our scheme has the potential to break through the bandwidth barrier
of state-of-the-art analog conversion technologies such as interleaved
converters.
Comment: 17 pages, 12 figures, to appear in IEEE Journal of Selected Topics in
Signal Processing, the special issue on Compressed Sensing
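The key mixing idea can be seen in a toy simulation: multiplying by a periodic waveform whose harmonics are spaced at the waveform's repetition rate shifts every slice of the wide spectrum down to baseband, where a lowpass filter and low-rate sampler can capture it. The sketch below is a single tone through a single mixing channel on a dense grid standing in for "analog" time; it is not the full multi-channel converter with recovery, and all frequencies are illustrative:

```python
import numpy as np

fs, N = 10_000, 10_000          # dense simulation grid standing in for analog time
t = np.arange(N) / fs
f0 = 3_210.0                    # narrowband content far above the low output rate
s = np.cos(2 * np.pi * f0 * t)

# Periodic +-1 mixing waveform with repetition rate fp = 100 Hz; its harmonics
# sit at multiples of fp, so mixing shifts the band near f0 down to ~10 Hz.
fp = 100.0
n = np.arange(int(fs / fp))
chips = np.sign(np.cos(2 * np.pi * 32 * n / 100))   # one period of +-1 chips
p = np.tile(chips, N // chips.size)

mixed = s * p                   # spectrum of s appears shifted by every harmonic of p

def baseband_fraction(x, cutoff=200.0):
    """Fraction of energy below `cutoff` Hz (crude stand-in for the lowpass filter)."""
    X = np.fft.rfft(x)
    f = np.fft.rfftfreq(x.size, 1.0 / fs)
    return np.sum(np.abs(X[f < cutoff])**2) / np.sum(np.abs(X)**2)
```

Before mixing, essentially no energy sits at baseband, so low-rate sampling would miss the signal entirely; after mixing, a significant fraction does, which is what makes the subsequent lowpass-and-decimate stage informative.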