Estimation of Sparse MIMO Channels with Common Support
We consider the problem of estimating sparse communication channels in the
MIMO context. In small to medium bandwidth communications, as in the current
standards for OFDM and CDMA communication systems (with bandwidth up to 20
MHz), such channels are individually sparse and at the same time share a common
support set. Since the underlying physical channels are inherently
continuous-time, we propose a parametric sparse estimation technique based on
finite rate of innovation (FRI) principles. Parametric estimation is especially
relevant to MIMO communications as it allows for a robust estimation and
concise description of the channels. The core of the algorithm is a
generalization of conventional spectral estimation methods to multiple input
signals with common support. We show the application of our technique for
channel estimation in OFDM (uniformly/contiguous DFT pilots) and CDMA downlink
(Walsh-Hadamard coded schemes). In the presence of additive white Gaussian
noise, theoretical lower bounds on the estimation of sparse common support (SCS)
channel parameters in Rayleigh fading conditions are derived. Finally, an
analytical spatial channel model is derived, and simulations on this model in
the OFDM setting show that the symbol error rate (SER) is reduced by a factor of
2 (at 0 dB SNR) to 5 (at high SNR) compared to standard non-parametric methods,
e.g. lowpass interpolation.

Comment: 12 pages, 7 figures. Submitted to IEEE Transactions on Communications.
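The annihilating-filter (Prony) step at the core of FRI estimation can be sketched for a single channel: with K multipath components, 2K+1 contiguous DFT pilots determine the delays exactly in the noiseless case. A minimal sketch with illustrative delays and gains (the paper's SCS algorithm generalizes this so that several channels' Toeplitz systems share one common annihilating filter):

```python
import numpy as np

# Single-channel Prony/annihilating-filter sketch; delays and gains are
# illustrative, not taken from the paper.
N, K = 64, 3                          # DFT size, number of paths
tau = np.array([2.0, 7.0, 11.0])      # path delays (in samples)
a = np.array([1.0, 0.6, 0.3])         # path gains

m = np.arange(2 * K + 1)              # 2K+1 contiguous DFT pilots suffice
H = (a * np.exp(-2j * np.pi * np.outer(m, tau) / N)).sum(axis=1)

# Annihilating filter h: solve the Toeplitz system T h = 0 via the SVD null vector
T = np.array([[H[i + K - j] for j in range(K + 1)] for i in range(K + 1)])
h = np.linalg.svd(T)[2][-1].conj()

# The filter's roots are exp(-j 2 pi tau_k / N), so the delays follow directly
tau_hat = np.sort(np.mod(-np.angle(np.roots(h)) * N / (2 * np.pi), N))
```

With noiseless pilots the recovery is exact; in noise, the same null-space computation becomes a total-least-squares-type fit.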
Modeling and Estimation for Real-Time Microarrays
Microarrays are used for collecting information about a large number of different genomic particles simultaneously. Conventional fluorescent-based microarrays acquire data after the hybridization phase. During this phase, the target analytes (e.g., DNA fragments) bind to the capturing probes on the array and, by the end of it, supposedly reach a steady state. Therefore, conventional microarrays attempt to detect and quantify the targets with a single data point taken in the steady state. On the other hand, a novel technique, the so-called real-time microarray, capable of recording the kinetics of hybridization in fluorescent-based microarrays, has recently been proposed. The richness of the information obtained therein promises higher signal-to-noise ratio, smaller estimation error, and broader assay detection dynamic range compared to conventional microarrays. In this paper, we study the signal processing aspects of real-time microarray system design. In particular, we develop a probabilistic model for real-time microarrays and describe a procedure for the estimation of target amounts therein. Moreover, leveraging system identification ideas, we propose a novel technique for the elimination of cross-hybridization. These are important steps toward developing optimal detection algorithms for real-time microarrays and toward understanding their fundamental limitations.
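One way to picture the estimation step: if a probe spot follows roughly Langmuir-type saturating kinetics, the amplitude of the exponential is proportional to the target amount and can be fit from the pre-steady-state samples. A minimal sketch on a synthetic kinetic curve; the binding model, constants, and separable-least-squares fit below are illustrative assumptions, not the paper's actual probabilistic model:

```python
import numpy as np

# Synthetic binding curve x(t) = A * (1 - exp(-t / tau)); A stands in for the
# target amount. All constants are illustrative.
rng = np.random.default_rng(1)
t = np.linspace(0.0, 30.0, 60)
A_true, tau_true = 5.0, 8.0
x = A_true * (1 - np.exp(-t / tau_true)) + 0.01 * rng.standard_normal(t.size)

# Separable least squares: grid over tau, closed-form amplitude for each tau
def fit(tau):
    b = 1 - np.exp(-t / tau)
    A = (x @ b) / (b @ b)                 # best amplitude for this tau
    return np.sum((x - A * b) ** 2), tau, A

_, tau_hat, A_hat = min(fit(tau) for tau in np.linspace(1.0, 20.0, 400))
```

The appeal of kinetic data is visible here: the amplitude (target amount) is recoverable well before the curve saturates, whereas an end-point assay sees only one noisy sample of the plateau.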
Ship target recognition
In this thesis the classification of ship targets using a low resolution radar system is investigated. The thesis can be divided into two major parts. The first part summarizes research into the applications of neural networks to the low resolution non-cooperative ship target recognition problem. Three very different neural architectures are investigated and compared, namely: the Feedforward Network with Back-propagation, Kohonen's Supervised Learning Vector Quantization Network, and Simpson's Fuzzy Min-Max neural network. In all cases, pre-processing in the form of the Fourier-Modified Discrete Mellin Transform is used as a means of extracting feature vectors which are insensitive to the aspect angle of the radar. Classification tests are based on both simulated and real data. Classification accuracies of up to 93% are reported. The second part is of a purely investigative nature and summarizes a body of research aimed at exploring new ground. The crux of this work is the proposal to use synthetic range profiling in order to achieve a much higher range resolution (and hence better classification accuracies). Included in this work is a comprehensive investigation into the use of super-resolution and noise-reducing eigendecomposition techniques. Algorithms investigated include the Principal Eigenvector Method, the Total Least Squares Method, and the MUSIC method. A final proposal for future research and development concerns the use of time-domain averaging to improve the classification performance of the radar system. The use of an iterative correlation algorithm is investigated.
Modeling and frequency tracking of marine mammal whistle calls
Submitted in partial fulfillment of the requirements for the degree of Master of Science at the Massachusetts Institute of Technology and the Woods Hole Oceanographic Institution, February 2009.

Marine mammal whistle calls present an attractive medium for covert underwater
communications. High quality models of the whistle calls are needed in order to synthesize
natural-sounding whistles with embedded information. Since the whistle calls
are composed of frequency modulated harmonic tones, they are best modeled as a
weighted superposition of harmonically related sinusoids. Previous research with bottlenose
dolphin whistle calls has produced synthetic whistles that sound too “clean”
for use in a covert communications system. Due to the sensitivity of the human auditory
system, watermarking schemes that slightly modify the fundamental frequency
contour have good potential for producing natural-sounding whistles embedded with
retrievable watermarks. Structured total least squares is used with linear prediction
analysis to track the time-varying fundamental frequency and harmonic amplitude
contours throughout a whistle call. Simulation and experimental results demonstrate
the capability to accurately model bottlenose dolphin whistle calls and retrieve embedded
information from watermarked synthetic whistle calls. Different fundamental
frequency watermarking schemes are proposed based on their ability to produce
natural-sounding synthetic whistles and yield suitable watermark detection and
retrieval.
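The harmonic model described above can be sketched directly: integrate a time-varying fundamental-frequency contour to a phase, then sum weighted harmonics of that phase. The contour, weights, and the watermarking comment below are illustrative assumptions, not measured dolphin data:

```python
import numpy as np

# Synthesize a toy "whistle": weighted harmonics of a time-varying f0.
fs = 48000                                      # sample rate (Hz)
t = np.arange(int(0.5 * fs)) / fs               # 0.5 s call
f0 = 5000 + 2000 * np.sin(2 * np.pi * 2 * t)    # FM fundamental contour (Hz)
phase = 2 * np.pi * np.cumsum(f0) / fs          # integrate f0 to phase
weights = [1.0, 0.5, 0.25]                      # harmonic amplitudes
whistle = sum(w * np.sin((k + 1) * phase) for k, w in enumerate(weights))

# A watermark could slightly perturb f0 before synthesis, e.g.
#   f0_wm = f0 * (1 + 1e-3 * bit_waveform)   # hypothetical embedding
```

Because the harmonics share one phase track, a small perturbation of f0 moves every harmonic coherently, which is what makes a fundamental-frequency watermark retrievable yet hard to hear.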
Reconstructing polygons from moments with connections to array processing
By Peyman Milanfar et al. Supported by the Office of Naval Research (N00014-91-J-1004), the US Army Research Office (DAAL03-92-G-0115), the National Science Foundation (MIP-9015281), and the Clement Vaturi Fellowship in Biomedical Imaging Sciences at MIT.
Parameter Estimation for Superimposed Weighted Exponentials
The approach of modeling measured signals as superimposed exponentials in white Gaussian noise is popular and effective. However, estimating the parameters of the assumed model is challenging, especially when the data record length is short, the signal strength is low, or the parameters are closely spaced. In this dissertation, we first review the most effective parameter estimation scheme for the superimposed exponential model: maximum likelihood. We then provide a historical review of the linear prediction approach to parameter estimation for the same model. After identifying the improvements made to linear prediction and demonstrating their weaknesses, we introduce a completely tractable and statistically sound modification to linear prediction that we call iterative generalized least squares. It is shown that our algorithm minimizes the exact maximum likelihood cost function for the superimposed exponential problem and is therefore equivalent to the previously developed maximum likelihood approach. However, our algorithm is indeed linear prediction, and thus revives a methodology previously categorized as inferior to maximum likelihood. With our modification, the insight provided by linear prediction can be carried to actual applications. We demonstrate this by developing an effective algorithm for deep-level transient spectroscopy analysis. The signal of deep-level transient spectroscopy is not a straightforward superposition of exponentials. However, with our methodology, an estimator based on the exact maximum likelihood cost function for the actual signal is quickly derived. At the end of the dissertation, we verify that our estimator extends the current capabilities of deep-level transient spectroscopy analysis.
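The linear-prediction core that the dissertation builds on can be shown in a few lines: samples of a sum of exponentials satisfy a linear recursion whose characteristic roots are the exponential modes. The sketch below is the plain noiseless Prony step with illustrative modes; the dissertation's iterative generalized least squares wraps an iterative reweighting around this kind of system to reach the maximum likelihood solution in noise:

```python
import numpy as np

# Two superimposed real exponentials; modes and weights are illustrative.
n = np.arange(40)
y = 2.0 * 0.95 ** n + 1.0 * 0.80 ** n

# Linear prediction: y[k] = c1*y[k-1] + c2*y[k-2]; least-squares fit of (c1, c2)
Y = np.column_stack([y[1:-1], y[:-2]])
c1, c2 = np.linalg.lstsq(Y, y[2:], rcond=None)[0]
modes = np.sort(np.roots([1.0, -c1, -c2]))      # roots of z^2 - c1 z - c2

# Given the modes, the weights follow from a second linear least squares
V = np.column_stack([modes[0] ** n, modes[1] ** n])
amps = np.linalg.lstsq(V, y, rcond=None)[0]
```

The whole nonlinear estimation problem reduces to two linear solves once the recursion is exploited, which is exactly the tractability that makes reviving linear prediction attractive.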
Model estimation of cerebral hemodynamics between blood flow and volume changes: a data-based modeling approach
It is well known that there is a dynamic relationship between cerebral blood flow (CBF) and cerebral blood volume (CBV). With increasing applications of functional MRI, where the blood oxygen-level-dependent signals are recorded, the understanding and accurate modeling of the hemodynamic relationship between CBF and CBV becomes increasingly important. This study presents an empirical and data-based modeling framework for model identification from CBF and CBV experimental data. It is shown that the relationship between the changes in CBF and CBV can be described using a parsimonious autoregressive with exogenous input (ARX) model structure. It is observed that neither the ordinary least-squares (LS) method nor the classical total least-squares (TLS) method can produce accurate estimates from the original noisy CBF and CBV data. A regularized total least-squares (RTLS) method is thus introduced and extended to solve such an errors-in-variables problem. Quantitative results show that the RTLS method works very well on the noisy CBF and CBV data. Finally, a combination of RTLS with a filtering method can lead to a parsimonious but very effective model that can characterize the relationship between the changes in CBF and CBV.
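The errors-in-variables point can be made with a toy example: when both signals carry noise, ordinary least squares systematically underestimates the gain between them, while total least squares (computed from an SVD) treats the two noise sources symmetrically. The sketch below uses synthetic data and plain classical TLS; the paper's contribution is a regularized TLS, whose regularization term is not shown here:

```python
import numpy as np

# Synthetic errors-in-variables data; slope and noise levels are illustrative.
rng = np.random.default_rng(3)
true_slope = 0.38
u = rng.standard_normal(2000)                         # latent noise-free signal
x = u + 0.5 * rng.standard_normal(2000)               # noisy "CBF changes"
y = true_slope * u + 0.5 * rng.standard_normal(2000)  # noisy "CBV changes"

slope_ls = (x @ y) / (x @ x)                          # OLS: biased toward zero

# TLS: the smallest right singular vector of the centered [x y] matrix is the
# normal to the best-fit line under symmetric (perpendicular) error
Z = np.column_stack([x, y]) - [x.mean(), y.mean()]
Vt = np.linalg.svd(Z, full_matrices=False)[2]
slope_tls = -Vt[-1, 0] / Vt[-1, 1]
```

OLS shrinks the estimate by roughly the ratio of signal variance to signal-plus-noise variance in the regressor, while TLS stays near the true gain; on real CBF/CBV data the residual sensitivity of TLS to noise is what motivates the regularized variant.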