
    SAR Tomography via Nonlinear Blind Scatterer Separation

    Layover separation has been fundamental to many synthetic aperture radar applications, such as building reconstruction and biomass estimation. Retrieving the scattering profile along the mixed dimension (elevation) is typically solved by inversion of the SAR imaging model, a process known as SAR tomography. This paper proposes a nonlinear blind scatterer separation method to retrieve the phase centers of the layovered scatterers, avoiding the computationally expensive tomographic inversion. We demonstrate that conventional linear separation methods, e.g., principal component analysis (PCA), can only partially separate the scatterers under good conditions. These methods produce systematic phase bias in the retrieved scatterers due to the nonorthogonality of the scatterers' steering vectors, especially when the intensities of the sources are similar or the number of images is low. The proposed method artificially increases the dimensionality of the data using kernel PCA, hence mitigating the aforementioned limitations. In the processing, the proposed method sequentially deflates the covariance matrix using the estimate of the brightest scatterer from kernel PCA. Simulations demonstrate the superior performance of the proposed method over conventional PCA-based methods in various respects. Experiments using TerraSAR-X data show an improvement in height reconstruction accuracy by a factor of one to three, depending on the number of looks used. Comment: This work has been accepted by IEEE TGRS for publication.
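
    To make the deflation step concrete, here is a minimal sketch using plain linear PCA on a simulated two-scatterer layover pixel; it is not the paper's kernel-PCA method, and every numerical value in it is an assumption chosen only to make the toy example well conditioned.

```python
# Toy multibaseline stack and a linear-PCA baseline with sequential deflation.
# This is NOT the paper's kernel-PCA method; it only illustrates the deflation idea.
# Baselines, wavelength, range, elevations and powers are assumptions; the baseline
# span is exaggerated so the toy is well resolved, and the scatterer powers are
# deliberately different, since similar powers are where this linear baseline fails.
import numpy as np

rng = np.random.default_rng(0)
N, L = 20, 200                          # number of acquisitions, number of looks
b = np.linspace(-1000.0, 1000.0, N)     # perpendicular baselines [m] (assumed)
lam, r0 = 0.031, 6.0e5                  # wavelength [m], slant range [m] (assumed)
xi = 2.0 / (lam * r0)                   # elevation-to-spatial-frequency factor

def steer(s):
    """Steering vector of a scatterer at elevation s [m]."""
    return np.exp(2j * np.pi * xi * b * s)

s_true = (10.0, 35.0)                   # elevations of the two layovered scatterers
powers = np.array([1.0, 0.5])           # distinct scatterer amplitudes (assumed)
A = np.stack([steer(s) for s in s_true], axis=1)
gam = powers[:, None] * (rng.normal(size=(2, L)) + 1j * rng.normal(size=(2, L))) / np.sqrt(2)
y = A @ gam + 0.1 * (rng.normal(size=(N, L)) + 1j * rng.normal(size=(N, L)))

R = y @ y.conj().T / L                  # sample covariance of the layover pixel
grid = np.linspace(0.0, 50.0, 501)
for k in range(2):
    w, V = np.linalg.eigh(R)            # Hermitian eigen-decomposition
    u = V[:, -1]                        # principal component ~ brightest scatterer
    s_hat = grid[np.argmax([abs(u.conj() @ steer(s)) for s in grid])]
    print(f"scatterer {k + 1}: elevation estimate ~ {s_hat:.1f} m")
    R -= w[-1] * np.outer(u, u.conj())  # deflate, then look for the next scatterer
```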

    Array processing based on time-frequency analysis and higher-order statistics

    Ph.D. (Doctor of Philosophy)

    Discrete Wavelet Transforms

    The discrete wavelet transform (DWT) algorithms have a firm position in the processing of signals in several areas of research and industry. As the DWT provides both octave-scale frequency and spatial timing of the analyzed signal, it is constantly used to solve and treat more and more advanced problems. The present book, Discrete Wavelet Transforms: Algorithms and Applications, reviews recent progress in discrete wavelet transform algorithms and applications. The book covers a wide range of methods (e.g., lifting, shift invariance, multi-scale analysis) for constructing DWTs. The book chapters are organized into four major parts. Part I describes the progress in hardware implementations of the DWT algorithms. Applications include multitone modulation for ADSL and equalization techniques, a scalable architecture for FPGA implementation, a lifting-based algorithm for VLSI implementation, a comparison between DWT- and FFT-based OFDM, and a modified SPIHT codec. Part II addresses image processing algorithms such as a multiresolution approach for edge detection, low bit rate image compression, low complexity implementation of CQF wavelets, and compression of multi-component images. Part III focuses on watermarking DWT algorithms. Finally, Part IV describes shift-invariant DWTs, the DC lossless property, DWT-based analysis and estimation of colored noise, and an application of the wavelet Galerkin method. The chapters of the present book consist of both tutorial and highly advanced material. Therefore, the book is intended to be a reference text for graduate students and researchers to obtain state-of-the-art knowledge on specific applications.
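
    As a minimal illustration of the octave-band information a DWT provides (not an example from the book), the following sketch decomposes a two-tone test signal into approximation and detail bands and verifies perfect reconstruction; it assumes the PyWavelets package is available, and the wavelet and decomposition level are arbitrary choices.

```python
# Octave-band DWT decomposition of a two-tone signal using PyWavelets (assumed
# installed); 'db4' and the 4-level depth are arbitrary illustrative choices.
import numpy as np
import pywt

fs = 1024
t = np.arange(0, 1, 1 / fs)
x = np.sin(2 * np.pi * 20 * t) + 0.5 * np.sin(2 * np.pi * 200 * t)

# 4-level decomposition: one approximation band + four octave detail bands
coeffs = pywt.wavedec(x, 'db4', level=4)
for name, c in zip(['a4', 'd4', 'd3', 'd2', 'd1'], coeffs):
    print(f"{name}: {len(c):4d} coefficients, energy {np.sum(c ** 2):8.1f}")

# perfect reconstruction up to numerical precision
x_rec = pywt.waverec(coeffs, 'db4')
print("max reconstruction error:", np.max(np.abs(x - x_rec[:len(x)])))
```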

    Estimation and Reconstruction of Short Multicomponent Signals Nonlinearly Modulated in Both Amplitude and Frequency

    In this paper, we consider short, non-stationary signals that are nonlinearly modulated in both amplitude and frequency. We extend a local approach developed for monocomponent signals, whose efficiency and robustness have already been demonstrated [3, 4]. We use a polynomial model for the instantaneous frequency and amplitude (IF/IA). The model parameters are then estimated by maximizing the likelihood with a stochastic optimization technique: simulated annealing. Following the same approach in the multicomponent setting, we compare two different strategies. The first, optimal but computationally expensive, estimates all model parameters at once. The second, suboptimal, reconstructs the signal iteratively, component by component. Monte Carlo simulations and a comparison with the Cramer-Rao bounds illustrating the good performance are presented. We obtain good estimates even when the instantaneous frequencies cross, which is a strong result given the small number of samples. Both approaches are then tested on real data.
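
    A hedged sketch of the mono-component building block is given below: a polynomial model for the instantaneous amplitude and frequency whose parameters are found by minimizing the squared error (the maximum-likelihood criterion under white Gaussian noise) with simulated annealing. It is an illustration only, not the authors' implementation; the polynomial orders, bounds, and noise level are assumptions.

```python
# Polynomial IA/IF model fitted by maximum likelihood with simulated annealing
# (scipy's dual_annealing).  Illustration only; orders, bounds and noise assumed.
import numpy as np
from scipy.optimize import dual_annealing

rng = np.random.default_rng(1)
N = 64                                          # deliberately short signal
t = np.linspace(-0.5, 0.5, N)

def model(p, t):
    a0, a1, f0, f1, f2 = p                      # linear IA, quadratic IF
    amp = a0 + a1 * t
    phase = 2 * np.pi * (f0 * t + 0.5 * f1 * t ** 2 + f2 * t ** 3 / 3.0)
    return amp * np.exp(1j * phase)

p_true = np.array([1.0, 0.5, 10.0, 8.0, 4.0])
x = model(p_true, t) + 0.05 * (rng.normal(size=N) + 1j * rng.normal(size=N))

def cost(p):                                    # negative log-likelihood (white noise)
    return np.sum(np.abs(x - model(p, t)) ** 2)

bounds = [(0, 2), (-2, 2), (0, 20), (-20, 20), (-20, 20)]
res = dual_annealing(cost, bounds, maxiter=300)
print("true parameters     :", p_true)
print("estimated parameters:", np.round(res.x, 2))
```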

    Sparsity and convex programming in time-frequency processing

    Cataloged from PDF version. Thesis (Ph.D.): Bilkent University, The Department of Electrical and Electronics Engineering and The Graduate School of Engineering and Science of Bilkent University, 2014. Includes bibliographical references (leaves 120-131). In this thesis, sparsity and convex programming-based methods for time-frequency (TF) processing are developed. The proposed methods aim to obtain high resolution and cross-term free TF representations using sparsity and lifted projections. A crucial aspect of time-frequency (TF) analysis is the identification of separate components in a multicomponent signal. The Wigner-Ville distribution is the classical tool for representing such signals but suffers from cross-terms. Other methods that are members of Cohen's class distributions also aim to remove the cross-terms by masking the Ambiguity Function (AF), but they result in reduced resolution. Most practical signals with time-varying frequency content are in the form of weighted trajectories on the TF plane, and many others are sparse in nature. Therefore the problem can be cast as TF distribution reconstruction using a subset of AF domain coefficients and a sparsity assumption in the TF domain. Sparsity can be achieved by constraining or minimizing the l1 norm. A Projections Onto Convex Sets (POCS) based l1 minimization approach is proposed to obtain a high resolution, cross-term free TF distribution. Several AF domain constraint sets are defined for TF reconstruction. The epigraph set of the l1 norm, the real part of the AF, and the phase of the AF are used during the iterative estimation process. A new kernel estimation method based on a single projection onto the epigraph set of the l1 ball in the TF domain is also proposed. The kernel based method obtains the TF representation in a faster way than the other optimization based methods. Component estimation from a multicomponent time-varying signal is considered using the TF distribution and parametric maximum likelihood (ML) estimation. The initial parameters are obtained via time-frequency techniques. A method which iterates the amplitude and phase parameters separately is proposed. The method significantly reduces the computational complexity and convergence time. by Zeynel Deprem. Ph.D.
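
    The cross-term problem that motivates the thesis can be reproduced in a few lines; the sketch below (an illustration, not code from the thesis) computes a discrete Wigner-Ville distribution of a two-tone signal and shows the spurious component midway between the auto-terms. The tone frequencies are arbitrary.

```python
# Discrete Wigner-Ville distribution of a two-component signal, showing the
# cross-term that sparsity / convex projection methods aim to suppress.
import numpy as np

N = 256
n = np.arange(N)
x = np.exp(2j * np.pi * 0.10 * n) + np.exp(2j * np.pi * 0.35 * n)   # two tones

W = np.zeros((N, N))
for t in range(N):
    taumax = min(t, N - 1 - t)
    tau = np.arange(-taumax, taumax + 1)
    kernel = np.zeros(N, dtype=complex)
    kernel[tau % N] = x[t + tau] * np.conj(x[t - tau])   # instantaneous autocorrelation
    W[:, t] = np.real(np.fft.fft(kernel))                # DFT over the lag variable

# WVD frequency axis is k / (2N); look at the mid-time slice at the two auto-term
# frequencies and at their midpoint, where the (time-oscillating) cross-term sits
for f0 in (0.10, 0.225, 0.35):
    k = int(round(2 * f0 * N))
    print(f"W(f = {f0:.3f}, t = N/2) = {W[k, N // 2]:8.1f}")
# at this time instant the cross-term near 0.225 is about twice as large as either
# auto-term, which is why it clutters multicomponent TF representations
```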

    Radio Channel Prediction Based on Parametric Modeling

    Long range channel prediction is a crucial technology for future wireless communications. In this thesis, the prediction of Rayleigh fading channels is studied within the framework of parametric modeling. Motivated by the Jakes model for Rayleigh fading channels, deterministic sinusoidal models were adopted for long range channel prediction in early works. In this thesis, a number of new channel predictors based on stochastic sinusoidal modeling are proposed, termed conditional and unconditional LMMSE predictors, respectively. Given frequency estimates, the amplitudes of the sinusoids are modeled as Gaussian random variables in the conditional LMMSE predictors, while both the amplitudes and the frequency estimates are modeled as Gaussian random variables in the unconditional LMMSE predictors. Both in simulations and in measured channels, it was observed that part of the channel cannot be described by the periodic sinusoidal bases. To capture this unmodeled residual signal, an adjusted conditional LMMSE predictor and a Joint LS predictor are proposed. Motivated by the analysis of measured channels and recently published physics-based scattering SISO and MIMO channel models, a new approach for channel prediction based on the non-stationary Multi-Component Polynomial Phase Signal (MC-PPS) model is further proposed. The so-called LS MC-PPS predictor models the amplitudes of the PPS components as constants. In the case of MC-PPS with time-varying amplitudes, an adaptive channel predictor using the Kalman filter is suggested, where the time-varying amplitudes are modeled as auto-regressive processes. An iterative method for detecting and estimating the number of PPS components and the orders of the polynomial phases is also proposed. The parameter estimation is based on the Nonlinear LS (NLLS) and the Nonlinear Instantaneous LS (NILS) criteria, corresponding to the cases of constant and time-varying amplitudes, respectively. The performance of the proposed channel predictors is evaluated using both synthetic signals and measured channels. High order polynomial phase parameters are observed in both urban and suburban environments. The channel predictors based on the non-stationary MC-PPS models are observed to outperform the other predictors in Monte Carlo simulations and in examples of measured urban and suburban channels.
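
    A minimal sketch in the spirit of the conditional LMMSE predictor is given below: the Doppler frequencies are taken as known, the complex amplitudes are modelled as Gaussian, and the channel is predicted ahead from noisy past samples with the standard LMMSE formula. All numerical values are assumptions for illustration.

```python
# Conditional-LMMSE-style prediction of a sum-of-sinusoids fading channel:
# known Doppler frequencies, Gaussian amplitudes, standard LMMSE extrapolation.
import numpy as np

rng = np.random.default_rng(3)
fd = np.array([20.0, -55.0, 80.0])              # Doppler shifts [Hz] (assumed known)
sig_a2, sig_n2 = 1.0, 0.01                      # amplitude and noise variances
fs, n_obs, horizon = 500.0, 100, 25             # sample rate, observations, lag

t_obs = np.arange(n_obs) / fs
t_pred = (n_obs - 1 + horizon) / fs             # prediction instant

a = np.sqrt(sig_a2 / 2) * (rng.normal(size=3) + 1j * rng.normal(size=3))
B = np.exp(2j * np.pi * np.outer(t_obs, fd))    # n_obs x 3 sinusoidal basis
h_obs = B @ a + np.sqrt(sig_n2 / 2) * (rng.normal(size=n_obs) + 1j * rng.normal(size=n_obs))

b_pred = np.exp(2j * np.pi * t_pred * fd)       # basis at the prediction instant
C = sig_a2 * B @ B.conj().T + sig_n2 * np.eye(n_obs)     # covariance of observations
h_hat = sig_a2 * b_pred @ B.conj().T @ np.linalg.solve(C, h_obs)

print("LMMSE prediction:", np.round(h_hat, 3))
print("true channel    :", np.round(b_pred @ a, 3))
```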

    Multirate Frequency Transformations: Wideband AM-FM Demodulation with Applications to Signal Processing and Communications

    The AM-FM (amplitude and frequency modulation) signal model finds numerous applications in image processing, communications, and speech processing. The traditional approaches to demodulation of signals in this category are the analytic signal approach, frequency tracking, and the energy operator approach. These approaches, however, assume that the amplitude and frequency components are slowly time-varying, i.e., narrowband, and incur significant demodulation error in wideband scenarios. In this thesis, we extend a two-stage approach to wideband AM-FM demodulation, which combines multirate frequency transformations (MFT), realized through a combination of multirate systems, with traditional demodulation techniques such as the Teager-Kaiser energy separation algorithm (ESA), to large wideband-to-narrowband conversion factors. The MFT module comprises multirate interpolation and heterodyning and converts the wideband AM-FM signal into a narrowband signal, while the demodulation module, such as ESA, demodulates the narrowband signal into constituent amplitude and frequency components that are then transformed back to yield estimates for the wideband signal. This MFT-ESA approach is then applied to the problems of: (a) wideband image demodulation and fingerprint demodulation, where multidimensional energy separation is employed, (b) wideband first-formant demodulation in vowels, and (c) wideband CPM demodulation with partial response signaling, to demonstrate its validity in both monocomponent and multicomponent scenarios as an effective multicomponent AM-FM signal demodulation and analysis technique for image processing, speech processing, and communications applications.
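
    The narrowband demodulation stage can be sketched with the Teager-Kaiser energy operator and continuous-form energy separation, as below; this is an illustration under assumed signal parameters, not the thesis implementation, and its accuracy degrades for wideband signals, which is precisely what the MFT front end addresses.

```python
# Teager-Kaiser energy operator and energy separation on a narrowband AM-FM tone;
# the derivative is approximated by central differences with an arcsin correction.
import numpy as np

def teager(x):
    """Discrete Teager-Kaiser energy operator: Psi[x](n) = x(n)^2 - x(n-1)*x(n+1)."""
    return x[1:-1] ** 2 - x[:-2] * x[2:]

fs = 8000.0
n = np.arange(4096)
amp = 1.0 + 0.3 * np.cos(2 * np.pi * 5.0 * n / fs)        # slow AM
freq = 800.0 + 50.0 * np.sin(2 * np.pi * 3.0 * n / fs)    # slow FM around 800 Hz
x = amp * np.cos(2 * np.pi * np.cumsum(freq) / fs)

dx = np.gradient(x)                                       # central-difference derivative
psi_x, psi_dx = teager(x), teager(dx)
ratio = np.sqrt(np.abs(psi_dx) / (np.abs(psi_x) + 1e-12))
inst_freq = fs * np.arcsin(np.clip(ratio, 0.0, 1.0)) / (2 * np.pi)
inst_amp = np.abs(psi_x) / (np.sqrt(np.abs(psi_dx)) + 1e-12)

print("mean estimated frequency [Hz]:", round(float(np.mean(inst_freq)), 1))
print("mean true frequency      [Hz]:", round(float(np.mean(freq)), 1))
print("mean estimated amplitude     :", round(float(np.mean(inst_amp)), 2))
```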

    Numerical solution of multicomponent population balance systems with applications to particulate processes

    Thesis (Ph.D.)--Massachusetts Institute of Technology, Dept. of Chemical Engineering, 2001. "June 2001." Includes bibliographical references. Population balances describe a wide variety of processes in the chemical industry and the environment, ranging from crystallization to atmospheric aerosols, yet the dynamics of these processes are poorly understood. A number of different mechanisms, including growth, nucleation, coagulation, and fragmentation, typically drive the dynamics of population balance systems. Measurement methods are not capable of collecting data at resolutions which can explain the interactions of these processes. In order to better understand particle formation mechanisms, numerical solutions could be employed; however, current numerical solutions are generally restricted to either a limited selection of growth laws or a limited solution range. This lack of modeling ability precludes the accurate and/or fast solution of the entire class of problems involving simultaneous nucleation and growth. Using insights into the numerical stability limits of the governing equations for growth, it is possible to develop new methods which reduce solution times while expanding the solution range to include many orders of magnitude in particle size. Rigorous derivation of the representations and governing equations is presented for both single- and multi-component population balance systems involving growth, coagulation, fragmentation, and nucleation sources. A survey of the representations used in numerical implementations is followed by an analysis of model complexity as new components are added. The numerical implementation of a split composition distribution method for multicomponent systems is presented, and the solution is verified against analytical results. Numerical stability requirements under varying growth rate laws are used to develop new scaling methods which enable the description of particles over many orders of magnitude in size. Numerous examples are presented to illustrate the utility of these methods and to familiarize the reader with the development and manipulation of the representations, governing equations, and numerical implementations of population balance systems. by Darren Donald Obrigkeit. Ph.D.
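
    A toy example of the simplest sub-problem discussed above, growth with nucleation in a single-component system, is sketched below with a first-order upwind scheme; it is not the thesis' split-composition or scaling method, and all rates and grid choices are assumptions. The CFL-limited time step reflects the numerical stability constraint on the growth term.

```python
# First-order upwind discretisation of dn/dt + d(G n)/dx = 0 with a nucleation
# flux J entering at the smallest size; grid, rates and horizon are assumed.
import numpy as np

nx, L = 200, 100.0                      # number of size bins, maximum size [um]
x = np.linspace(0.0, L, nx)
dx = x[1] - x[0]
G, J = 1.0, 5.0                         # growth rate [um/s], nucleation flux [#/s]

n = np.zeros(nx)                        # number density n(x, t) [#/um]
dt = 0.5 * dx / G                       # CFL-limited step for upwind stability
T = 40.0
steps = int(T / dt)
for _ in range(steps):
    flux = G * n                                    # size-space number flux [#/s]
    flux_in = np.concatenate(([J], flux[:-1]))      # nucleation enters at x = 0
    n = n - dt / dx * (flux - flux_in)              # first-order upwind (G > 0)

print("particles in the domain:", round(float(n.sum() * dx), 1))
print("particles nucleated    :", round(J * steps * dt, 1))
```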

    Dynamic modelling of electronic nose systems

    This thesis details research into the modelling of the dynamic responses of electronic nose systems to odour inputs. Most electronic nose systems contain an array of between 4 and 32 odour sensors, each of which responds in varying degrees to a range of different gaseous stimuli. In almost all electronic nose systems in use today, the steady-state responses of the odour sensors are extracted and passed to one of a variety of pattern recognition systems. The primary aim of this thesis is to investigate the use of information contained within the dynamic portion of the sensor response for odour classification. System identification techniques using linear time-invariant black-box models are applied to both extracted steady-state and full dynamic data sets collected from experiments designed to assess the ability of an electronic nose system to discriminate between the strain and growth phases of samples of cyanobacteria (blue-green algae). The results obtained are compared with those obtained elsewhere using the same data, analysed with nonlinear artificial neural networks. A physical model for the electrochemical mechanisms resulting in the measured responses is translated into a mathematical model. This model consists of a system of coupled nonlinear ordinary differential equations. The model is analysed, and the theoretical structural identifiability of the model is investigated and established. The parametric model is then fitted to data collected from experiments with simple (single chemical species) odours. An odour discrimination method is developed, based upon the extraction of physically significant parameters from experimental data. This technique is evaluated and compared with the previously explored black-box modelling techniques. The discrimination technique is then extended to the analysis of complex odours, again using the cyanobacteria data sets. Successful classification rates are compared with those obtained earlier in the thesis, and elsewhere with neural networks applied to steady-state data.
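
    As a much simplified illustration of extracting physically meaningful dynamic parameters from a sensor transient (the thesis' electrochemical model is a coupled nonlinear ODE system, not the first-order response assumed here), the sketch below fits a gain and a time constant to a simulated step response; those fitted values stand in for the classification features described above.

```python
# Fit a first-order dynamic model to a simulated odour-sensor transient and use
# the fitted gain and time constant as features; all numbers are assumptions.
import numpy as np
from scipy.optimize import curve_fit

def step_response(t, K, tau):
    """First-order response of a sensor to an odour step applied at t = 0."""
    return K * (1.0 - np.exp(-t / tau))

rng = np.random.default_rng(4)
t = np.linspace(0.0, 60.0, 200)                     # seconds
y = step_response(t, K=2.3, tau=8.0) + 0.02 * rng.normal(size=t.size)

(K_hat, tau_hat), _ = curve_fit(step_response, t, y, p0=[1.0, 5.0])
print(f"fitted gain K = {K_hat:.2f}, time constant tau = {tau_hat:.1f} s")
```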

    Enhancing Missing Data Imputation of Non-stationary Signals with Harmonic Decomposition

    Dealing with time series with missing values, including those afflicted by low quality or over-saturation, presents a significant signal processing challenge. The task of recovering these missing values, known as imputation, has led to the development of several algorithms. However, we have observed that the efficacy of these algorithms tends to diminish when the time series exhibit non-stationary oscillatory behavior. In this paper, we introduce a novel algorithm, coined Harmonic Level Interpolation (HaLI), which enhances the performance of existing imputation algorithms for oscillatory time series. After running any chosen imputation algorithm, HaLI leverages a harmonic decomposition of the initial imputation, based on the adaptive nonharmonic model, to improve the imputation accuracy for oscillatory time series. Experimental assessments conducted on synthetic and real signals consistently show that HaLI enhances the performance of existing imputation algorithms. The algorithm is made publicly available as readily employable Matlab code for other researchers to use.
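
    A simplified sketch of the idea follows; it is not the released HaLI Matlab code and it uses a single fixed fundamental rather than the adaptive nonharmonic model, but it shows how a harmonic least-squares fit of the initial imputation can refine the filled-in samples. The test signal, gap location, and number of harmonics are assumptions.

```python
# Impute a gap by linear interpolation, estimate a fundamental and a few harmonics
# from that initial fill, then replace the missing samples with the harmonic model.
import numpy as np

rng = np.random.default_rng(5)
fs, N = 100.0, 1000
t = np.arange(N) / fs
clean = np.cos(2 * np.pi * 1.3 * t) + 0.4 * np.cos(2 * np.pi * 2.6 * t + 0.7)
x = clean + 0.05 * rng.normal(size=N)

missing = np.zeros(N, dtype=bool)
missing[400:460] = True                                   # a 0.6 s gap
observed = ~missing

# step 1: any off-the-shelf imputation; here, plain linear interpolation
x0 = x.copy()
x0[missing] = np.interp(t[missing], t[observed], x[observed])

# step 2: coarse fundamental estimate from the initial fill (FFT peak)
spec = np.abs(np.fft.rfft(x0 - x0.mean()))
f0 = np.fft.rfftfreq(N, 1 / fs)[np.argmax(spec)]

# step 3: least-squares fit of K harmonics on observed samples, re-impute the gap
K = 3
B = np.column_stack([np.cos(2 * np.pi * k * f0 * t) for k in range(1, K + 1)]
                    + [np.sin(2 * np.pi * k * f0 * t) for k in range(1, K + 1)])
coef, *_ = np.linalg.lstsq(B[observed], x[observed], rcond=None)
x_hali = x0.copy()
x_hali[missing] = B[missing] @ coef

print("gap RMSE, linear interpolation:", np.sqrt(np.mean((x0[missing] - clean[missing]) ** 2)))
print("gap RMSE, harmonic refinement :", np.sqrt(np.mean((x_hali[missing] - clean[missing]) ** 2)))
```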