Cramer-Rao Bounds for Joint RSS/DoA-Based Primary-User Localization in Cognitive Radio Networks
Knowledge about the location of licensed primary-users (PU) could enable
several key features in cognitive radio (CR) networks including improved
spatio-temporal sensing, intelligent location-aware routing, as well as aiding
spectrum policy enforcement. In this paper we consider the achievable accuracy
of PU localization algorithms that jointly utilize received-signal-strength
(RSS) and direction-of-arrival (DoA) measurements by evaluating the Cramer-Rao
Bound (CRB). Previous works evaluate the CRB for RSS-only and DoA-only
localization algorithms separately and assume that the DoA estimation error
variance is a fixed constant, independent of RSS. We derive the CRB for joint
RSS/DoA-based PU localization algorithms based on the mathematical model of DoA
estimation error variance as a function of RSS, for a given CR placement. The
bound is compared with practical localization algorithms, and the impact of
several key parameters, such as the number of nodes, the number of antennas
and samples, and the channel shadowing variance and correlation distance, on
the achievable accuracy is thoroughly analyzed and discussed. We also derive the closed-form
asymptotic CRB for uniform random CR placement, and perform theoretical and
numerical studies on the required number of CRs such that the asymptotic CRB
tightly approximates the numerical integration of the CRB for a given
placement.
Comment: 20 pages, 11 figures, 1 table; submitted to IEEE Transactions on Wireless Communications.
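As a rough illustration of the kind of bound the paper evaluates, the following sketch computes a CRB on a primary-user position from RSS measurements alone, assuming a log-distance path-loss model with i.i.d. Gaussian (log-normal in dB) shadowing. The path-loss exponent `gamma` and shadowing deviation `sigma_db` are illustrative values, not taken from the paper, and the joint RSS/DoA coupling the paper derives is not modeled here.

```python
import numpy as np

def rss_crb(pu, sensors, gamma=3.0, sigma_db=4.0):
    """CRB on the total position MSE of a PU at `pu` (x, y), estimated
    from RSS at known `sensors` positions.

    Mean RSS follows a log-distance model, -10*gamma*log10(d_i), and the
    shadowing is i.i.d. N(0, sigma_db^2) across sensors.
    """
    diff = pu - sensors                                # (N, 2) offsets
    d2 = np.sum(diff**2, axis=1)                       # squared distances
    # Gradient of each sensor's mean RSS w.r.t. the PU coordinates
    J = (-10.0 * gamma / np.log(10.0)) * diff / d2[:, None]
    fim = J.T @ J / sigma_db**2                        # Fisher information
    return np.trace(np.linalg.inv(fim))                # CRB on x + y MSE

sensors = np.array([[0.0, 0.0], [100.0, 0.0], [0.0, 100.0], [100.0, 100.0]])
pu = np.array([40.0, 60.0])
print(rss_crb(pu, sensors))
```

Adding a sensor adds a positive-semidefinite term to the Fisher information matrix, so the bound can only decrease, consistent with the abstract's study of the impact of the number of nodes.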
Parameter selection in sparsity-driven SAR imaging
We consider a recently developed sparsity-driven synthetic aperture radar (SAR) imaging approach which can produce superresolution, feature-enhanced images. However, this regularization-based approach requires the selection of a hyper-parameter in order to generate such high-quality images. In this paper we present a number of techniques for automatically selecting the hyper-parameter
involved in this problem. In particular, we propose and develop numerical procedures for the use of Stein’s unbiased risk estimation, generalized cross-validation, and L-curve techniques for automatic parameter choice. We demonstrate and compare the effectiveness of these procedures through experiments based on both simple synthetic scenes and electromagnetically simulated realistic data. Our results suggest that sparsity-driven SAR imaging coupled with the proposed automatic parameter choice procedures offers significant improvements over conventional SAR imaging.
Some nonasymptotic results on resampling in high dimension, I: Confidence regions, II: Multiple tests
We study generalized bootstrap confidence regions for the mean of a random
vector whose coordinates have an unknown dependency structure. The random
vector is supposed to be either Gaussian or to have a symmetric and bounded
distribution. The dimensionality of the vector can possibly be much larger than
the number of observations and we focus on a nonasymptotic control of the
confidence level, following ideas inspired by recent results in learning
theory. We consider two approaches, the first based on a concentration
principle (valid for a large class of resampling weights) and the second on a
resampled quantile, specifically using Rademacher weights. Several intermediate
results established in the approach based on concentration principles are of
interest in their own right. We also discuss the question of accuracy when
using Monte Carlo approximations of the resampled quantities.
Comment: Published at http://dx.doi.org/10.1214/08-AOS667 and http://dx.doi.org/10.1214/08-AOS668 in the Annals of Statistics (http://www.imstat.org/aos/) by the Institute of Mathematical Statistics (http://www.imstat.org).
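A minimal sketch of the resampled-quantile idea with Rademacher weights follows; the exact statistic and normalization in the paper may differ, and this shows only the generic construction of a sup-norm quantile from sign-resampled, centered observations.

```python
import numpy as np

def rademacher_quantile(Y, alpha=0.05, B=2000, seed=0):
    """(1 - alpha) resampled quantile of the sup-norm of the centered
    empirical mean, using Rademacher (random sign) resampling weights.

    Y: (n, K) array, one K-dimensional observation per row; K may be
    much larger than n, matching the high-dimensional setting.
    """
    rng = np.random.default_rng(seed)
    n = Y.shape[0]
    Yc = Y - Y.mean(axis=0)                    # center at the empirical mean
    stats = np.empty(B)
    for b in range(B):
        eps = rng.choice([-1.0, 1.0], size=n)  # Rademacher weights
        stats[b] = np.abs(eps @ Yc / n).max()  # sup-norm of weighted mean
    return np.quantile(stats, 1.0 - alpha)
```

The resulting quantile would be used as the (suitably rescaled) radius of a sup-norm confidence ball around the empirical mean; the paper's contribution is a nonasymptotic guarantee on the coverage of such regions.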
Randomized Smoothing for Stochastic Optimization
We analyze convergence rates of stochastic optimization procedures for
non-smooth convex optimization problems. By combining randomized smoothing
techniques with accelerated gradient methods, we obtain convergence rates of
stochastic optimization procedures, both in expectation and with high
probability, that have optimal dependence on the variance of the gradient
estimates. To the best of our knowledge, these are the first variance-based
rates for non-smooth optimization. We give several applications of our results
to statistical estimation problems, and provide experimental results that
demonstrate the effectiveness of the proposed algorithms. We also describe how
a combination of our algorithm with recent work on decentralized optimization
yields a distributed stochastic optimization algorithm that is order-optimal.
Comment: 39 pages, 3 figures.
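The core construction, replacing a non-smooth objective by its Gaussian smoothing and averaging (sub)gradients at randomly perturbed points, can be sketched as below. The step size, smoothing parameter `mu`, sample count `m`, and the ℓ1 example are illustrative choices; the paper's accelerated method and its variance-optimal rates are not reproduced here.

```python
import numpy as np

def smoothed_grad(subgrad, x, mu, m, rng):
    """Monte Carlo estimate of the gradient of the Gaussian smoothing
    f_mu(x) = E[f(x + mu*Z)], Z ~ N(0, I): average f's (sub)gradient
    at m randomly perturbed points around x."""
    g = np.zeros_like(x)
    for _ in range(m):
        g += subgrad(x + mu * rng.standard_normal(x.shape))
    return g / m

# Minimize the non-smooth f(x) = ||x||_1 by plain gradient steps
# on its randomized smoothing (np.sign is a subgradient of |.|).
rng = np.random.default_rng(0)
x = np.array([3.0, -2.0])
for _ in range(200):
    x -= 0.05 * smoothed_grad(np.sign, x, mu=0.1, m=5, rng=rng)
```

The smoothed function is differentiable even though f is not, which is what lets accelerated gradient methods be applied in the stochastic setting.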
Low Complexity Regularization of Linear Inverse Problems
Inverse problems and regularization theory form a central theme in contemporary
signal processing, where the goal is to reconstruct an unknown signal from
partial, indirect, and possibly noisy measurements of it. A now standard method
for recovering the unknown signal is to solve a convex optimization problem
that enforces some prior knowledge about its structure. This has proved
efficient in many problems routinely encountered in imaging sciences,
statistics and machine learning. This chapter delivers a review of recent
advances in the field where the regularization prior promotes solutions
conforming to some notion of simplicity/low-complexity. These priors encompass
as popular examples sparsity and group sparsity (to capture the compressibility
of natural signals and images), total variation and analysis sparsity (to
promote piecewise regularity), and low-rank (as natural extension of sparsity
to matrix-valued data). Our aim is to provide a unified treatment of all these
regularizations under a single umbrella, namely the theory of partial
smoothness. This framework is very general and accommodates all low-complexity
regularizers just mentioned, as well as many others. Partial smoothness turns
out to be the canonical way to encode low-dimensional models that can be linear
spaces or more general smooth manifolds. This review is intended to serve as a
one-stop shop toward understanding the theoretical properties of the
so-regularized solutions. It covers a large spectrum including: (i) recovery
guarantees and stability to noise, both in terms of ℓ²-stability and
model (manifold) identification; (ii) sensitivity analysis to perturbations of
the parameters involved (in particular the observations), with applications to
unbiased risk estimation; (iii) convergence properties of the forward-backward
proximal splitting scheme, which is particularly well suited to solving the
corresponding large-scale regularized optimization problem.
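For the ℓ1 prior, the forward-backward proximal splitting scheme mentioned above reduces to the classical ISTA iteration; a minimal sketch (the step size uses the standard Lipschitz constant, and the soft-threshold is the proximal operator of the ℓ1 norm):

```python
import numpy as np

def ista(A, b, lam, steps=500):
    """Forward-backward (proximal gradient) splitting, i.e. ISTA, for
    min_x 0.5*||A x - b||^2 + lam*||x||_1."""
    L = np.linalg.norm(A, 2) ** 2          # Lipschitz constant of the gradient
    x = np.zeros(A.shape[1])
    for _ in range(steps):
        g = A.T @ (A @ x - b)              # forward step: gradient of the data fit
        z = x - g / L
        # backward step: prox of (lam/L)*||.||_1 is soft-thresholding
        x = np.sign(z) * np.maximum(np.abs(z) - lam / L, 0.0)
    return x
```

Other low-complexity priors from the chapter (group sparsity, nuclear norm) fit the same scheme by swapping in their proximal operators.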
Mathematical Foundations of Machine Learning (hybrid meeting)
Machine learning has achieved
remarkable successes in various applications, but there is wide agreement that a mathematical theory for deep learning is missing. Recently, some first mathematical results have been derived in different areas such as mathematical statistics and statistical learning. Any mathematical theory of machine learning will have to combine tools from different fields such as nonparametric statistics, high-dimensional statistics, empirical process theory and approximation theory. The main objective of the workshop was to bring together leading researchers contributing to the mathematics of machine learning.
A focus of the workshop was on theory for deep neural networks. Mathematically speaking, neural networks define function classes with a rich mathematical structure that are extremely difficult to analyze because of non-linearity in the parameters. Until very recently, most existing theoretical results could not cope with many of the distinctive characteristics of deep networks, such as multiple hidden layers or the ReLU activation function. Other topics of the workshop were procedures for quantifying the uncertainty of machine learning methods and the mathematics of data privacy.
Modern Nonparametric Statistics: Going Beyond Asymptotic Minimax
During the years 1975-1990, a major emphasis in nonparametric estimation was put on computing the asymptotic minimax risk for many classes of functions. Modern statistical practice indicates some serious limitations of the asymptotic minimax approach and calls for new ideas and methods that can cope with the numerous challenges brought to statisticians by modern data sets.
On the Performance Bound of Sparse Estimation with Sensing Matrix Perturbation
This paper focuses on sparse estimation in the situation where both the
sensing matrix and the measurement vector are corrupted by additive
Gaussian noise. The performance bound of sparse estimation is analyzed and
discussed in depth. Two types of lower bounds, the constrained Cramér-Rao
bound (CCRB) and the Hammersley-Chapman-Robbins bound (HCRB), are discussed. It
is shown that the situation with sensing matrix perturbation is more complex
than the one with only measurement noise. A closed-form expression for the
CCRB is derived; it demonstrates a gap between the maximal and nonmaximal
support cases. It is also revealed that a gap lies between the CCRB and the MSE
of the oracle pseudoinverse estimator, but it approaches zero asymptotically
when the problem dimensions tend to infinity. For the tighter HCRB, although
a simple expression is difficult to obtain for a general sensing matrix, a
closed-form expression is derived in the unit sensing matrix case for a
qualitative study of the performance bound. It is shown that the gap between
the maximal and nonmaximal cases is eliminated for the HCRB. Numerical
simulations are performed to verify the theoretical results in this paper.
Comment: 32 pages, 8 figures, 1 table.
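For intuition about the oracle pseudoinverse estimator the abstract compares against, the sketch below checks its empirical MSE on a known support against the classical linear-model formula sigma^2 * tr((A_S^T A_S)^(-1)). Note this sketch models only measurement noise; the sensing-matrix perturbation that is the paper's actual subject is not included.

```python
import numpy as np

def oracle_mse(A, support, sigma, trials=2000, seed=0):
    """Empirical MSE of the oracle pseudoinverse estimator, which knows
    the true support S and sets x_hat = pinv(A_S) y there, zero elsewhere.
    Only the measurement vector is noisy (no sensing-matrix perturbation)."""
    rng = np.random.default_rng(seed)
    As = A[:, support]
    pinv = np.linalg.pinv(As)
    x_true = rng.standard_normal(len(support))
    y0 = As @ x_true                           # noiseless measurements
    err = 0.0
    for _ in range(trials):
        y = y0 + sigma * rng.standard_normal(A.shape[0])
        err += np.sum((pinv @ y - x_true) ** 2)
    return err / trials

A = np.random.default_rng(1).standard_normal((20, 40))
support = [3, 17, 25]
sigma = 0.5
# Classical MSE of least squares on a known support
theory = sigma**2 * np.trace(np.linalg.inv(A[:, support].T @ A[:, support]))
```

The abstract's asymptotic result says the gap between the CCRB and this oracle MSE vanishes as the problem dimensions grow; in the noise-only setting sketched here the two already coincide exactly in expectation.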