Game theoretic aspects of distributed spectral coordination with application to DSL networks
In this paper we use game-theoretic techniques to study the value of
cooperation in distributed spectrum management problems. We show that the
celebrated iterative water-filling algorithm is subject to the prisoner's
dilemma and can therefore lead to severe degradation of the achievable rate
region in an interference channel environment. We also provide a thorough
analysis of a simple two-band near-far situation, for which we derive
closed-form tight bounds on the rate region of both the fixed-margin iterative
water-filling (FM-IWF) and dynamic frequency-division multiplexing (DFDM)
methods. This is the only case for which such analytic expressions are known;
all previous studies reported only simulated rate regions. We then propose an
alternative algorithm that alleviates some of the drawbacks of the IWF
algorithm in near-far scenarios relevant to DSL access networks. Finally, we
provide an experimental analysis of both algorithms, based on measured DSL
channels, as well as of the centralized optimal spectrum management.
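As a rough illustration of the dynamics discussed above, the following is a minimal sketch of two-user iterative water-filling over parallel tones, where each user repeatedly water-fills its power budget against the interference produced by the other. The channel gains, tone count, and power budgets are illustrative placeholders, not values or code from the paper.

```python
import numpy as np

def waterfill(inv_gain, power_budget):
    """Water-filling over tones: inv_gain[k] is (noise + interference) / direct gain on tone k."""
    # Bisect on the water level mu so that sum(max(mu - inv_gain, 0)) == power_budget.
    lo, hi = 0.0, inv_gain.max() + power_budget
    for _ in range(100):
        mu = 0.5 * (lo + hi)
        if np.maximum(mu - inv_gain, 0.0).sum() > power_budget:
            hi = mu
        else:
            lo = mu
    return np.maximum(0.5 * (lo + hi) - inv_gain, 0.0)

def iterative_waterfilling(h, g, noise, budgets, iters=50):
    """h[u, k]: direct gains, g[u, k]: crosstalk gain into user u, noise[k]: noise power."""
    n_users, n_tones = h.shape
    p = np.zeros((n_users, n_tones))
    for _ in range(iters):
        for u in range(n_users):
            interference = noise + g[u] * p[1 - u]      # two-user case
            p[u] = waterfill(interference / h[u], budgets[u])
    return p

# Illustrative two-user, 8-tone example with placeholder gains.
rng = np.random.default_rng(0)
h = rng.uniform(0.5, 1.0, size=(2, 8))    # direct channel gains
g = rng.uniform(0.01, 0.3, size=(2, 8))   # crosstalk gains
noise = np.full(8, 1e-2)
p = iterative_waterfilling(h, g, noise, budgets=[1.0, 1.0])
rates = np.log2(1 + h * p / (noise + g * p[::-1])).sum(axis=1)
print("per-user rates (bits/tone):", rates)
```

Each user's best response ignores the rate loss it inflicts on the other, which is exactly the prisoner's-dilemma structure the abstract refers to.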
Competitive Spectrum Management with Incomplete Information
This paper studies an interference interaction (game) between selfish and
independent wireless communication systems operating in the same frequency
band. Each system (player) has incomplete information about the other player's
channel conditions. A trivial Nash equilibrium point in this game is the one
where both players spread their transmit power over the full band (full
spread, FS) and interfere with each other. This point may lead to poor
spectrum utilization from a global network point of view, and even for each
user individually.
In this paper, we provide a closed-form expression for a non-pure-FS
epsilon-Nash equilibrium point, i.e., an equilibrium point where players
choose FDM for some channel realizations and FS for the others. We show that
operating at this non-pure-FS epsilon-Nash equilibrium point increases each
user's throughput and therefore improves the spectrum utilization, and we
demonstrate that this performance gain can be substantial. Finally, important
insights are provided into the behaviour of selfish and rational wireless
users as a function of the channel parameters, such as the fading
probabilities and the interference-to-signal ratio.
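As a rough illustration of the threshold behaviour described above, the sketch below compares a user's rate under full spread with its rate under FDM for a given channel realization and an assumed interference-to-signal ratio, and picks whichever is larger. The rate expressions and decision rule are generic textbook forms, not the closed-form epsilon-Nash equilibrium strategy derived in the paper.

```python
import numpy as np

def fs_rate(snr, isr):
    """Full-spread rate over the whole band, treating interference as noise."""
    return np.log2(1.0 + snr / (1.0 + isr * snr))

def fdm_rate(snr):
    """FDM rate: half the band, interference-free, with the full power budget in that half."""
    return 0.5 * np.log2(1.0 + 2.0 * snr)

def choose_action(snr, expected_isr):
    """Pick FDM when its rate beats full spread under the expected interference."""
    return "FDM" if fdm_rate(snr) > fs_rate(snr, expected_isr) else "FS"

# Illustrative sweep: stronger expected interference pushes a selfish user toward FDM.
for isr in [0.0, 0.2, 1.0, 5.0]:
    print(f"ISR = {isr}: {choose_action(snr=10.0, expected_isr=isr)}")
```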
The use of cluster quality for track fitting in the CSC detector
The new particle accelerators and their experiments create a challenging
data-processing environment, characterized by large amounts of data of which
only a small portion carries the expected new scientific information. Modern
detectors, such as the Cathode Strip Chamber (CSC), achieve high
coordinate-measurement accuracy (between 50 and 70 microns). However, heavy
physical backgrounds can decrease the accuracy significantly. In the presence
of such background, the charge induced over adjacent CSC strips (a cluster)
deviates from the ideal Mathieson distribution. The traditional least-squares
method, which assumes the same ideal position error for all clusters, loses
its optimal properties on contaminated data. A new technique that calculates
the cluster quality and uses it to improve the track-fitting results is
suggested. The algorithm is applied to test-beam data, and its performance is
compared to other fitting methods. It is shown that the suggested algorithm
improves the fitting performance significantly.
Comment: Proceedings of the 2006 IEEE NSS, San Diego, California, USA, November 2006
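The idea of weighting clusters by quality can be illustrated with a minimal weighted least-squares sketch: ordinary least squares assigns every cluster the same ideal position error, while a per-cluster quality factor inflates the error of suspicious clusters before the straight-line track fit. The layer positions, resolutions, and quality values below are placeholders, not the paper's cluster-quality measure.

```python
import numpy as np

def fit_track(z, y, sigma):
    """Weighted least-squares straight-line fit y = a + b*z with per-cluster errors sigma."""
    w = 1.0 / sigma**2
    A = np.vstack([np.ones_like(z), z]).T          # design matrix
    cov = np.linalg.inv(A.T @ (w[:, None] * A))    # parameter covariance
    a, b = cov @ (A.T @ (w * y))
    return (a, b), cov

# Illustrative layer positions and measured cluster coordinates (placeholders).
z = np.array([0.0, 10.0, 20.0, 30.0])       # layer positions along the track (cm)
y = np.array([0.12, 0.31, 0.55, 1.40])      # measured coordinates; last cluster contaminated
sigma_ideal = np.full(4, 0.006)             # ~60 micron ideal resolution (cm)

# Ordinary LS: the same ideal error for every cluster.
(ols_a, ols_b), _ = fit_track(z, y, sigma_ideal)

# Quality-weighted LS: a low quality value inflates that cluster's assumed error.
quality = np.array([1.0, 1.0, 1.0, 0.1])    # placeholder quality in (0, 1]
(wls_a, wls_b), _ = fit_track(z, y, sigma_ideal / np.sqrt(quality))

print("OLS slope:", ols_b, " quality-weighted slope:", wls_b)
```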
Learning to Bound: A Generative Cram\'er-Rao Bound
The Cram\'er-Rao bound (CRB), a well-known lower bound on the performance of
any unbiased parameter estimator, has been used to study a wide variety of
problems. However, obtaining the CRB requires an analytical expression for the
likelihood of the measurements given the parameters, or equivalently a precise
and explicit statistical model for the data. In many applications, such a
model is not available. Instead, this work introduces a novel approach to
approximating the CRB using data-driven methods, which removes the requirement
for an analytical statistical model. The approach builds on the recent success
of deep generative models in modeling complex, high-dimensional distributions.
Using a learned normalizing-flow model, we model the distribution of the
measurements and obtain an approximation of the CRB, which we call the
Generative Cram\'er-Rao Bound (GCRB). Numerical experiments on simple problems
validate this approach, and experiments on two image-processing tasks, image
denoising and edge detection with a learned camera noise model, demonstrate
its power and benefits.
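The core computation behind such a bound can be sketched as follows: given a learned, differentiable log-density log p(x; theta), the Fisher information is approximated by averaging the outer product of score vectors over samples, and the bound is its inverse. The toy Gaussian log-density below merely stands in for a trained normalizing flow; it is a minimal sketch, not the paper's architecture or code.

```python
import torch

def log_prob(x, theta):
    """Stand-in for a learned flow's log-density: here x ~ N(theta, I)."""
    d = x.shape[-1]
    return -0.5 * ((x - theta) ** 2).sum(-1) - 0.5 * d * torch.log(torch.tensor(2.0 * torch.pi))

def generative_crb(theta, sample_fn, n_samples=2000):
    """Approximate the CRB as the inverse of a Monte-Carlo estimate of the Fisher information."""
    theta = theta.clone().requires_grad_(True)
    x = sample_fn(theta.detach(), n_samples)     # samples drawn from the (learned) model
    fim = torch.zeros(theta.numel(), theta.numel())
    for xi in x:
        score = torch.autograd.grad(log_prob(xi, theta), theta)[0]   # d/dtheta log p(x; theta)
        fim += torch.outer(score, score)
    return torch.linalg.inv(fim / n_samples)

# Sanity check: for x ~ N(theta, I) the per-sample CRB is the identity matrix.
theta0 = torch.tensor([1.0, -2.0])
sample = lambda t, n: t + torch.randn(n, t.numel())
print(generative_crb(theta0, sample))
```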
A suboptimal estimator of the sampling jitter variance using the bispectrum
We consider the problem of estimating parameters of an irregular sampling process, defined as a uniform sampling process in which the deviations from the nominal sampling times constitute a random IID process (jitter). Emphasis is placed on estimating the variance of the jitter, based on observation of samples taken from a continuous band-limited third-order stationary process. We derive an estimation procedure which uses the bispectrum estimates of a process with an a priori known bispectrum. Derivation of the generalized likelihood ratio in the bispectral domain leads to a statistic with which a bispectrum-based maximum-likelihood estimation can be performed. We propose a suboptimal estimator and show that it is asymptotically unbiased and consistent. The dependence of the estimator's performance on the data length and the skewness is studied for a specific example. The estimator's variance is compared to the bispectrum-based Cramer-Rao bound (BCRB), and is shown to approach it for sufficiently large data length or skewness. Computer simulations verify the effectiveness of the proposed estimation method for small jitter.
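For reference, a direct FFT-based bispectrum estimate of the kind such a procedure relies on can be sketched as follows; the block length, averaging scheme, and test signal are illustrative choices, and this is not the paper's jitter-variance estimator.

```python
import numpy as np

def bispectrum(x, block_len=256):
    """Direct bispectrum estimate: average X(f1) X(f2) conj(X(f1 + f2)) over blocks."""
    n_blocks = len(x) // block_len
    B = np.zeros((block_len, block_len), dtype=complex)
    idx = (np.arange(block_len)[:, None] + np.arange(block_len)[None, :]) % block_len
    for b in range(n_blocks):
        seg = x[b * block_len:(b + 1) * block_len]
        X = np.fft.fft(seg - seg.mean())
        B += np.outer(X, X) * np.conj(X[idx])   # triple product over (f1, f2)
    return B / n_blocks

# Illustrative skewed test signal: a squared low-pass Gaussian process has a nonzero bispectrum.
rng = np.random.default_rng(1)
g = np.convolve(rng.standard_normal(1 << 14), np.ones(8) / 8, mode="same")
B = bispectrum(g ** 2)
print(np.abs(B[:4, :4]))
```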
Feasibility study of parameter estimation of random sampling jitter using the bispectrum
An actual sampling process can be modeled as a random process consisting of the regular (uniform) deterministic sampling process plus an error in the sampling times, which constitutes a zero-mean noise (the jitter). In this paper we discuss the problem of estimating the jitter process. Assuming that the jitter process is IID, with a standard deviation that is small compared to the regular sampling interval, we show that the variance of the jitter process can be estimated from the nth-order spectrum of the sampled data, n = 2, 3; i.e., the jitter variance can be extracted from the 2nd-order spectrum or the 3rd-order spectrum (the bispectrum) of the sampled data, provided the continuous signal spectrum is known. However, when the signal skewness exceeds a certain level, the potential performance of the bispectrum-based estimation is better than that of the spectrum-based estimation. Moreover, the former can also provide jitter variance estimates when the continuous signal spectrum is unknown, while the latter cannot. This suggests that the bispectrum of the sampled data is potentially better for estimating any parameter of the sampling jitter process, once the signal skewness is sufficiently large.
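A minimal sketch of the second-order (spectrum-based) route mentioned above, under a small-jitter Gaussian approximation in which IID timing jitter attenuates the known continuous-signal spectrum by roughly exp(-sigma^2 * omega^2): fitting the log-ratio of the measured to the known spectrum against omega^2 then yields the jitter variance. This is a textbook approximation used only for illustration, not the estimator analyzed in the paper.

```python
import numpy as np

def jitter_variance_from_spectrum(meas_psd, true_psd, omega, band):
    """Fit log(meas/true) ~ -sigma^2 * omega^2 over a band where the signal dominates."""
    mask = (omega > 0) & (omega < band)
    ratio = np.log(meas_psd[mask] / true_psd[mask])
    # Least-squares slope of the log-ratio against -omega^2 gives sigma^2.
    return float(np.sum(-omega[mask] ** 2 * ratio) / np.sum(omega[mask] ** 4))

# Illustrative synthetic check with a known band-limited spectrum (placeholder shapes).
omega = np.linspace(0.0, np.pi, 512)                 # normalized angular frequency
true_psd = np.exp(-(omega / 1.5) ** 2)               # assumed known continuous-signal spectrum
sigma2 = 0.01                                        # true jitter variance (squared samples)
meas_psd = true_psd * np.exp(-sigma2 * omega ** 2)   # small-jitter attenuation model
print(jitter_variance_from_spectrum(meas_psd, true_psd, omega, band=2.0))
```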