Estimation of the Number of Sources in Unbalanced Arrays via Information Theoretic Criteria
Estimating the number of sources impinging on an array of sensors is a well
known and well investigated problem. A common approach for solving this problem
is to use an information theoretic criterion, such as Minimum Description
Length (MDL) or the Akaike Information Criterion (AIC). The MDL estimator is
known to be a consistent estimator, robust against deviations from the Gaussian
assumption, and non-robust against deviations from the point source and/or
temporally or spatially white additive noise assumptions. Over the years
several alternative estimation algorithms have been proposed and tested.
Usually, these algorithms are shown, using computer simulations, to have
improved performance over the MDL estimator, and to be robust against
deviations from the assumed spatial model. Nevertheless, these robust
algorithms have high computational complexity, requiring several
multi-dimensional searches.
In this paper, motivated by real life problems, a systematic approach toward
the problem of robust estimation of the number of sources using information
theoretic criteria is taken. An MDL-type estimator that is robust against
deviations from the assumption of an equal noise level across the array is studied. The
consistency of this estimator, even when deviations from the equal noise level
assumption occur, is proven. A novel low-complexity implementation method
avoiding the need for multi-dimensional searches is presented as well, making
this estimator a favorable choice for practical applications. Comment: To appear in the IEEE Transactions on Signal Processing.
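As context for the criterion the abstract builds on, here is a minimal sketch of the classical equal-noise-level MDL estimator of the number of sources (the standard baseline, not the robust variant this paper develops), assuming snapshots stacked as an M x N complex matrix; the function name is invented for illustration:

```python
import numpy as np

def mdl_source_count(X):
    """Estimate the number of sources from snapshots X (M sensors x N snapshots)
    via the classic MDL criterion on the sample-covariance eigenvalues."""
    M, N = X.shape
    R = X @ X.conj().T / N                      # sample covariance
    ev = np.sort(np.linalg.eigvalsh(R))[::-1]   # eigenvalues, descending
    scores = []
    for k in range(M):
        noise_ev = ev[k:]                       # presumed noise eigenvalues
        g = np.exp(np.mean(np.log(noise_ev)))   # geometric mean
        a = np.mean(noise_ev)                   # arithmetic mean
        loglik = -N * (M - k) * np.log(g / a)
        penalty = 0.5 * k * (2 * M - k) * np.log(N)
        scores.append(loglik + penalty)
    return int(np.argmin(scores))
```

For equal noise levels and enough snapshots, the noise eigenvalues cluster, the geometric/arithmetic-mean ratio approaches 1 past the true order, and the penalty term then selects it.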
Model-order selection in statistical shape models
Statistical shape models enhance machine learning algorithms providing prior
information about deformation. A Point Distribution Model (PDM) is a popular
landmark-based statistical shape model for segmentation. It requires choosing a
model order, which determines how much of the variation seen in the training
data is accounted for by the PDM. A good choice of the model order depends on
the number of training samples and the noise level in the training data set.
Yet the most common approach for choosing the model order simply keeps a
predetermined percentage of the total shape variation. In this paper, we
present a technique for choosing the model order based on information-theoretic
criteria, and we show empirical evidence that the model order chosen by this
technique provides a good trade-off between over- and underfitting. Comment: To appear in 2018 IEEE International Workshop on Machine Learning for
Signal Processing, Sept. 17-20, 2018, Aalborg, Denmark.
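To make the contrast concrete, the sketch below compares the common keep-a-fixed-percentage-of-variance rule with a BIC-style penalized likelihood for probabilistic PCA over the shape eigenvalues; this is an illustrative stand-in under standard PPCA assumptions, not necessarily the paper's exact criterion:

```python
import numpy as np

def order_by_variance(eigvals, frac=0.95):
    """Common rule: keep enough modes to explain `frac` of total variance."""
    c = np.cumsum(eigvals) / np.sum(eigvals)
    return int(np.searchsorted(c, frac) + 1)

def order_by_bic(eigvals, n_samples):
    """BIC-style penalized log-likelihood for probabilistic PCA:
    noise variance is the mean of the discarded eigenvalues."""
    d = len(eigvals)
    scores = []
    for q in range(1, d):
        sigma2 = np.mean(eigvals[q:])           # ML noise variance
        ll = -0.5 * n_samples * (np.sum(np.log(eigvals[:q]))
                                 + (d - q) * np.log(sigma2) + d)
        n_params = d * q - q * (q - 1) / 2 + 1  # PPCA parameter count
        scores.append(-2 * ll + n_params * np.log(n_samples))
    return int(np.argmin(scores) + 1)
```

With a clear eigenvalue gap both rules agree; they diverge when the training set is small or noisy, which is exactly the regime the paper addresses.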
Tangential Large Scale Structure as a Standard Ruler: Curvature Parameters from Quasars
Several observational analyses suggest that matter is spatially structured on a
characteristic comoving scale at low redshifts. This peak in the power spectrum
provides a standard ruler in comoving space which can be used to compare the
local geometry at high and low redshifts, thereby constraining the curvature
parameters.
It is shown here that this power spectrum peak is present in the observed
quasar distribution at high redshift: qualitatively, via wedge diagrams which
clearly show a void-like structure, and quantitatively, via one-dimensional
Fourier analysis of the quasars' tangential distribution. The sample studied
here contains 812 quasars.
The method produces strong constraints (68% confidence limits) on the density
parameter Omega_m and weaker constraints on the cosmological constant
Omega_Lambda, which can be expressed as a relation between the two parameters.
Independently of Omega_Lambda (within the range considered), a tight constraint
on Omega_m is obtained.
Combination of the present results with SN Type Ia results yields joint
constraints on both parameters (68% confidence limits). This strongly supports
the possibility that the observable
universe satisfies a nearly flat, perturbed
Friedmann-Lemaître-Robertson-Walker model, independently of any cosmic
microwave background observations. Comment: 15 pages, 15 figures; v2 has several minor modifications but
conclusions unchanged; accepted by Astronomy & Astrophysics.
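A toy version of the quantitative step, detecting a preferred comoving scale via a one-dimensional Fourier analysis of a binned point distribution, might look like the following; this is an illustrative stand-in, not the paper's pipeline, and the function and parameter names are invented:

```python
import numpy as np

def dominant_scale(positions, box, nbins=512):
    """Locate the dominant spatial periodicity in a 1-D point distribution
    by Fourier-analysing the binned density contrast."""
    counts, _ = np.histogram(positions, bins=nbins, range=(0.0, box))
    delta = counts / counts.mean() - 1.0           # density contrast
    power = np.abs(np.fft.rfft(delta)) ** 2        # 1-D power spectrum
    freqs = np.fft.rfftfreq(nbins, d=box / nbins)  # cycles per unit length
    k = np.argmax(power[1:]) + 1                   # skip the DC term
    return 1.0 / freqs[k]                          # wavelength of the peak
```

A comb of over-densities at a fixed comoving spacing produces a sharp peak at the corresponding wavenumber, which is the signature used as the standard ruler.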
Efficient and Robust Signal Detection Algorithms for the Communication Applications
Signal detection and estimation have been central to signal processing and communications for many years. The relevant studies deal with the processing of information-bearing signals for the purpose of information extraction. Nevertheless, new robust and efficient signal detection and estimation techniques are still in demand, since more and more practical applications rely on them. In this dissertation, we propose several novel signal detection schemes for wireless communications applications, such as a source localization algorithm, a spectrum sensing method, and a normality test. The associated theory and practice in robustness, computational complexity, and overall system performance evaluation are also provided.
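As a concrete example from one of the application areas mentioned (spectrum sensing), here is a textbook energy detector with a CLT-based threshold for real Gaussian noise; it is a baseline illustration, not one of the dissertation's proposed schemes:

```python
import numpy as np

def energy_detect(x, noise_var, z=1.645):
    """Energy detector: declare a signal present when the average energy
    exceeds a threshold set (via the CLT) for roughly a 5% false-alarm
    rate under real Gaussian noise (z = 1.645)."""
    n = len(x)
    stat = np.mean(np.abs(x) ** 2)                     # test statistic
    thresh = noise_var * (1.0 + z * np.sqrt(2.0 / n))  # Gaussian approx. of chi-square
    return stat > thresh
```

The detector needs only the noise variance, which is why robust variants that tolerate noise-level uncertainty are of practical interest.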
Performance analysis and optimal selection of large mean-variance portfolios under estimation risk
We study the consistency of sample mean-variance portfolios of arbitrarily
high dimension that are based on Bayesian or shrinkage estimation of the input
parameters as well as weighted sampling. In an asymptotic setting where the
number of assets remains comparable in magnitude to the sample size, we provide
a characterization of the estimation risk by providing deterministic
equivalents of the portfolio out-of-sample performance in terms of the
underlying investment scenario. The previous estimates represent a means of
quantifying the amount of risk underestimation and return overestimation of
improved portfolio constructions beyond standard ones. Well-known for the
latter, if not corrected, these deviations lead to inaccurate and overly
optimistic Sharpe-based investment decisions. Our results are based on recent
contributions in the field of random matrix theory. Along with the asymptotic
analysis, the analytical framework allows us to find bias corrections improving
on the achieved out-of-sample performance of typical portfolio constructions.
Some numerical simulations validate our theoretical findings.
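To make the estimation-risk issue concrete, the following sketch computes global minimum-variance weights from a linearly shrunk sample covariance; the shrinkage intensity is hand-picked here for illustration, whereas the paper derives asymptotically optimal corrections via random matrix theory:

```python
import numpy as np

def shrunk_gmv_weights(returns, rho):
    """Global minimum-variance weights from the linearly shrunk covariance
    Sigma(rho) = (1 - rho) * S + rho * (tr S / p) * I, with S the sample
    covariance. rho in [0, 1] is the shrinkage intensity."""
    p = returns.shape[1]
    S = np.cov(returns, rowvar=False)
    target = np.trace(S) / p * np.eye(p)       # scaled-identity target
    sigma = (1.0 - rho) * S + rho * target
    w = np.linalg.solve(sigma, np.ones(p))     # proportional to Sigma^{-1} 1
    return w / w.sum()
```

When the number of assets is comparable to the sample size, the unshrunk weights overfit sampling noise and carry a larger true out-of-sample variance than the shrunk ones, which is the risk-underestimation effect the abstract describes.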
Statistical Nested Sensor Array Signal Processing
Source number detection and direction-of-arrival (DOA) estimation are two major applications of sensor arrays. Both applications are often confined to the use of uniform linear arrays (ULAs), which are expensive and make it difficult to achieve a wide aperture. Besides, a ULA with N scalar sensors can resolve at most N - 1 sources. On the other hand, a systematic approach was recently proposed to achieve O(N^2) degrees of freedom (DOFs) using O(N) sensors based on a nested array, which is obtained by combining two or more ULAs with successively increasing spacing.
This dissertation will focus on a fundamental study of statistical signal processing of nested arrays. Five important topics are discussed, extending the existing nested-array strategies to more practical scenarios. Novel signal models and algorithms are proposed.
First, based on the linear nested array, we consider the detection and estimation problem for wideband Gaussian sources. To extend the nested array to the wideband case, we propose effective strategies that apply nested-array processing to each frequency component and combine the spectral information across frequencies to conduct detection and estimation. We then consider the practical scenario of distributed sources, which accounts for the spatial spreading of each source.
Next, we investigate the self-calibration problem for perturbed nested arrays, for which existing works require certain modeling assumptions, for example, an exactly known array geometry, including the sensor gain and phase. We propose corresponding robust algorithms to estimate both the model errors and the DOAs. The partial Toeplitz structure of the covariance matrix is employed to estimate the gain errors, and the sparse total least squares is used to deal with the phase error issue.
We further propose a new class of nested vector-sensor arrays which is capable of significantly increasing the DOFs. This is not a simple extension of the nested scalar-sensor array. Both the signal model and the signal processing strategies are developed in the multidimensional sense. Based on the analytical results, we consider two main applications: electromagnetic (EM) vector sensors and acoustic vector sensors.
Last but not least, in order to make full use of the limited available data, we propose a novel strategy inspired by the jackknife resampling method. By iterating over numerous subsets of the whole data set, this strategy greatly improves the results of existing source number detection and DOA estimation methods.
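The O(N^2)-degrees-of-freedom claim can be checked directly by enumerating the difference coarray of a two-level nested geometry; this is the standard construction, sketched here with invented helper names:

```python
import numpy as np

def nested_positions(n1, n2):
    """Sensor positions (in units of the base spacing d) of a two-level
    nested array: inner ULA at 1..n1, outer ULA at (n1+1)*{1..n2}."""
    inner = np.arange(1, n1 + 1)
    outer = (n1 + 1) * np.arange(1, n2 + 1)
    return np.concatenate([inner, outer])

def difference_coarray(pos):
    """All unique pairwise differences; the length of the contiguous
    segment sets how many sources the array can resolve."""
    return np.unique(pos[:, None] - pos[None, :])
```

For n1 = n2 = 3, six physical sensors yield the 23 consecutive coarray lags -11..11, i.e. 2*n2*(n1+1) - 1 virtual sensors, which is how O(N) elements buy O(N^2) DOFs.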
GAMMA-RAY IMAGING OBSERVATIONS OF THE CRAB AND CYGNUS REGIONS
This dissertation presents the results from a balloon-borne experiment, referred to as the Directional Gamma-Ray Telescope (DGT), which is designed to image celestial gamma-rays over the energy range 160 keV to 9.3 MeV. It utilizes a technique known as coded aperture imaging in order to obtain spatially resolved images of the sky with an angular resolution of 3.8°. This detector is the first flight-ready instrument of this type operating at energies above 160 keV. The first successful balloon flight of this instrument took place on 1984 October 1-2. During the thirty hours in which the payload remained at float altitude, imaging observations of a number of sky regions were obtained, including observations of the Crab and Cygnus regions.
The Crab Nebula/pulsar was observed to have a featureless power-law spectrum with a best fit form of 5.1 × 10^-3 E_MeV^-1.88 photons cm^-2 s^-1 MeV^-1, consistent with previous measurements. We have placed upper limits on previously observed line emission at energies of 400 keV and 1049 keV; the results are 3.0 × 10^-3 and 1.9 × 10^-3 photons cm^-2 s^-1, respectively. These upper limits lie below some previous measurements of this emission. We also place upper limits on the emission from the x-ray binary source A0535+26 and the anticenter diffuse emission.
Emission from Cyg X-1 was observed up to 10 MeV. At energies below 1 MeV, the data are consistent with a single-temperature inverse Compton model, with an electron temperature kT_e of 80 keV and an optical depth of 2.0. The inverse Compton model is often employed to explain the observed x-ray emission. In the 2-9.3 MeV range, the DGT results show emission which is not readily understood in the context of the inverse Compton model. We suggest that a second component, possibly produced by some non-thermal mechanism, may be necessary to explain the observations. Finally, upper limits are also derived for the flux from Cygnus X-3.
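For reference, the integral photon flux implied by a power-law spectrum such as the Crab best fit above follows in closed form (a routine calculation, valid whenever the photon index is not exactly 1):

```python
def integral_flux(a, gamma, e_lo, e_hi):
    """Integral photon flux of a power law dN/dE = a * E**(-gamma) between
    e_lo and e_hi (energies in MeV; a in photons cm^-2 s^-1 MeV^-1).
    Closed-form antiderivative, assuming gamma != 1."""
    return a / (1.0 - gamma) * (e_hi ** (1.0 - gamma) - e_lo ** (1.0 - gamma))
```

Plugging in the quoted best fit (a = 5.1 × 10^-3, gamma = 1.88) over the 2-9.3 MeV band gives the band-integrated Crab flux against which the DGT excess can be compared.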