
    Fast Iterative-Interpolated DFT Phasor Estimator Considering Out-of-band Interference

    For interpolated discrete Fourier transform (IpDFT)-based phasor estimators, the out-of-band interference (OOBI) test is among the most challenging ones. The typical iterative-interpolated DFT (i-IpDFT) phasor estimator utilizes a two-step iterative framework to eliminate the effects of the negative frequency and OOBI. However, the speed of estimation is limited by the adopted frequency estimator and by redundant iterations. To this end, this article proposes a fast i-IpDFT (FiIpDFT) method for the phasor estimation of an OOBI-contaminated signal, which utilizes the three-point IpDFT (I3pDFT) technique. The proposed method first applies a noniterative frequency, amplitude, and phase estimator to eliminate the negative-frequency interference. Then, a straightforward formula and two stopping criteria are introduced to reduce the computational burden of the OOBI elimination process. The accuracy and effectiveness of the proposed FiIpDFT method are validated by simulations designed, under steady-state and dynamic conditions, according to the requirements of the standard IEC/IEEE 60255-118-1.
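
    For orientation, the I3pDFT building block can be sketched as follows. This is a minimal single-tone sketch of the classical three-point interpolation on a Hann-windowed DFT, with our own function name and test signal; it omits the FiIpDFT's negative-frequency and OOBI elimination iterations.

```python
import numpy as np

def ipdft_3pt_hann(x, fs):
    """Single-tone frequency/amplitude estimate from the three DFT bins
    around the spectral peak of a Hann-windowed record."""
    N = len(x)
    X = np.fft.rfft(x * np.hanning(N))
    k = np.argmax(np.abs(X[1:-1])) + 1                 # peak bin (interior)
    Xm, X0, Xp = abs(X[k - 1]), abs(X[k]), abs(X[k + 1])
    # Hann-window three-point interpolation of the fractional bin offset
    delta = 2.0 * (Xp - Xm) / (Xm + 2.0 * X0 + Xp)
    f_hat = (k + delta) * fs / N
    # Hann main-lobe correction: |W(d)| = N sin(pi d) / (2 pi d (1 - d^2))
    if abs(delta) > 1e-12:
        A_hat = X0 * 4 * np.pi * abs(delta * (1 - delta**2)) \
                / (N * abs(np.sin(np.pi * delta)))
    else:
        A_hat = 4 * X0 / N
    return f_hat, A_hat

# Example: an off-bin 49.7 Hz tone, 0.1 s record at 5 kHz
fs = 5000.0
t = np.arange(500) / fs
f_hat, A_hat = ipdft_3pt_hann(2.3 * np.cos(2 * np.pi * 49.7 * t), fs)
```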

    Interferometric Synthetic Aperture Sonar Signal Processing for Autonomous Underwater Vehicles Operating in Shallow Water

    The goal of the research was to develop best-practice image signal processing methods for InSAS systems for bathymetric height determination. Improvements over existing techniques come from the fusion of chirp scaling, a phase-preserving beamforming technique, to form the SAS image; an interferometric Vernier method to unwrap the phase; and confirmation of the direction of arrival with the MUltiple SIgnal Classification (MUSIC) estimation technique. The fusion of chirp scaling, Vernier, and MUSIC leads to stability in the bathymetric height measurement and to improvements in resolution. The method is also computationally faster and uses less memory than existing techniques.
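
    As a reference point for the direction-of-arrival step, a minimal MUSIC pseudospectrum for a generic uniform line array can be sketched as follows; the array geometry, spacing, and names are assumptions for illustration, not the InSAS configuration used in the research.

```python
import numpy as np

def music_spectrum(R, n_sources, d_over_lambda, angles_deg):
    """MUSIC pseudospectrum over candidate arrival angles for a uniform
    line array with element spacing d (in wavelengths) and covariance R."""
    M = R.shape[0]
    # Noise subspace: eigenvectors of the M - n_sources smallest eigenvalues
    _, eigvecs = np.linalg.eigh(R)               # eigenvalues ascending
    En = eigvecs[:, : M - n_sources]
    theta = np.deg2rad(np.asarray(angles_deg))
    m = np.arange(M)[:, None]
    # Steering matrix: one column per candidate angle
    A = np.exp(2j * np.pi * d_over_lambda * m * np.sin(theta)[None, :])
    # Steering vectors orthogonal to the noise subspace produce peaks
    return 1.0 / np.sum(np.abs(En.conj().T @ A) ** 2, axis=0)
```

    In practice R would be averaged from snapshot outer products, and the peaks of the returned spectrum confirm the arrival directions.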

    Maximum-likelihood estimation of lithospheric flexural rigidity, initial-loading fraction, and load correlation, under isotropy

    Topography and gravity are geophysical fields whose joint statistical structure derives from interface-loading processes modulated by the underlying mechanics of isostatic and flexural compensation in the shallow lithosphere. Under this dual statistical-mechanistic viewpoint an estimation problem can be formulated where the knowns are topography and gravity and the principal unknown is the elastic flexural rigidity of the lithosphere. In the guise of an equivalent "effective elastic thickness", this important, geographically varying, structural parameter has been the subject of many interpretative studies, but precisely how well it is known, or how best it can be found from the data, abundant nonetheless, has remained contentious and unresolved throughout the last few decades of dedicated study. The popular methods whereby admittance or coherence, both spectral measures of the relation between gravity and topography, are inverted for the flexural rigidity have revealed themselves to have insufficient power to independently constrain the rigidity together with the additional unknown initial-loading fraction and load-correlation factors. Solving this extremely ill-posed inversion problem leads to non-uniqueness and is further complicated by practical considerations such as the choice of regularizing data tapers to render the analysis sufficiently selective in both the spatial and spectral domains. Here, we rewrite the problem in a form amenable to maximum-likelihood estimation theory, which we show yields unbiased, minimum-variance estimates of flexural rigidity, initial-loading fraction and load correlation, each of them separably resolved with little a posteriori correlation between their estimates. We are also able to separately characterize the isotropic spectral shape of the initial loading processes. (41 pages, 13 figures; accepted for publication in Geophysical Journal International.)
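
    Schematically, and in our own notation, such an estimator maximizes a Whittle-type likelihood over wavenumbers k, with d(k) the tapered Fourier coefficients of topography and gravity and S(k; θ) their model cross-spectral matrix parameterized by the rigidity, loading fraction, and load correlation:

```latex
\ell(\theta) = -\sum_{k}\left[\,\ln\det S(k;\theta)
             + d(k)^{H}\, S(k;\theta)^{-1}\, d(k)\right]
```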

    Signal Processing for Synthetic Aperture Sonar Image Enhancement

    This thesis contains a description of SAS processing algorithms, offering improvements in Fourier-based reconstruction, motion compensation, and autofocus. Fourier-based image reconstruction is reviewed and improvements are shown to result from improved system modelling. A number of new algorithms based on the wavenumber algorithm for correcting second-order effects are proposed. In addition, a new framework for describing multiple-receiver reconstruction in terms of the bistatic geometry is presented and is a useful aid to understanding. Motion-compensation techniques for allowing Fourier-based reconstruction in widebeam geometries suffering large motion errors are discussed. A motion-compensation algorithm exploiting multiple-receiver geometries is suggested and shown to provide substantial improvement in image quality. New motion-compensation techniques for yaw correction using the wavenumber algorithm are discussed. A common framework for describing phase estimation is presented and techniques from a number of fields are reviewed within this framework. In addition, a new proof is provided outlining the relationship between eigenvector-based autofocus phase-estimation kernels and the phase-closure techniques used in astronomical imaging. Micronavigation techniques are reviewed, and extensions to the shear-average single-receiver micronavigation technique result in a 3-4-fold performance improvement when operating on high-contrast images. The stripmap phase gradient autofocus (SPGA) algorithm is developed, extending spotlight SAR PGA to the wide-beam, wide-band stripmap geometries common in SAS imaging. SPGA supersedes traditional PGA-based stripmap autofocus algorithms such as mPGA and PCA; the relationships between SPGA and these algorithms are discussed. SPGA's operation is verified on simulated and field-collected data, where it provides significant image improvement. SPGA with phase-curvature-based estimation is shown and found to perform poorly compared with phase-gradient techniques. The operation of SPGA on data collected from Sydney Harbour is shown, with SPGA able to improve resolution to near the diffraction limit. Additional analysis of practical stripmap autofocus operation in the presence of undersampling and space-invariant blurring is presented, with significant comment regarding the difficulties inherent in autofocusing field-collected data. Field-collected data from trials in Sydney Harbour are presented along with associated autofocus results from a number of algorithms.
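
    For context, the spotlight-PGA kernel that SPGA generalizes can be sketched as a single iteration, as below; names are ours, and the usual windowing around the centered scatterers is omitted for brevity.

```python
import numpy as np

def pga_phase_estimate(img):
    """One iteration of the classical phase-gradient autofocus estimate.
    img: complex image, rows = range bins, columns = azimuth samples."""
    rows, cols = img.shape
    centre = cols // 2
    shifted = np.empty_like(img)
    # Centre the brightest scatterer of each range line in azimuth
    for r in range(rows):
        shifted[r] = np.roll(img[r], centre - np.argmax(np.abs(img[r])))
    g = np.fft.ifft(shifted, axis=1)           # back to the aperture domain
    dg = np.diff(g, axis=1)
    # Maximum-likelihood-style phase-gradient kernel, pooled over range
    num = np.sum(np.imag(np.conj(g[:, :-1]) * dg), axis=0)
    den = np.sum(np.abs(g[:, :-1]) ** 2, axis=0)
    phi = np.concatenate(([0.0], np.cumsum(num / den)))    # integrate
    # Remove the linear trend (a pure image shift, not a blur)
    n = np.arange(cols)
    return phi - np.polyval(np.polyfit(n, phi, 1), n)
```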

    Probabilistic modeling for single-photon lidar

    Lidar is an increasingly prevalent technology for depth sensing, with applications including scientific measurement and autonomous navigation systems. While conventional systems require hundreds or thousands of photon detections per pixel to form accurate depth and reflectivity images, recent results for single-photon lidar (SPL) systems using single-photon avalanche diode (SPAD) detectors have shown accurate images formed from as few as one photon detection per pixel, even when half of those detections are due to uninformative ambient light. The keys to such photon-efficient image formation are twofold: (i) a precise model of the probability distribution of photon detection times, and (ii) prior beliefs about the structure of natural scenes. Reducing the number of photons needed for accurate image formation enables faster, farther, and safer acquisition. Still, such photon-efficient systems are often limited to laboratory conditions more favorable than the real-world settings in which they would be deployed. This thesis focuses on expanding the photon detection time models to address challenging imaging scenarios and the effects of non-ideal acquisition equipment. The processing derived from these enhanced models, sometimes modified jointly with the acquisition hardware, surpasses the performance of state-of-the-art photon counting systems. We first address the problem of high levels of ambient light, which causes traditional depth and reflectivity estimators to fail. We achieve robustness to strong ambient light through a rigorously derived window-based censoring method that separates signal and background light detections. Spatial correlations both within and between depth and reflectivity images are encoded in superpixel constructions, which fill in holes caused by the censoring. Accurate depth and reflectivity images can then be formed with an average of 2 signal photons and 50 background photons per pixel, outperforming methods previously demonstrated at a signal-to-background ratio of 1. We next approach the problem of coarse temporal resolution for photon detection time measurements, which limits the precision of depth estimates. To achieve sub-bin depth precision, we propose a subtractively dithered lidar implementation, which uses changing synchronization delays to shift the time-quantization bin edges. We examine the generic noise model resulting from dithering Gaussian-distributed signals and introduce a generalized Gaussian approximation to the noise distribution and simple order-statistics-based depth estimators that take advantage of this model. Additional analysis of the generalized Gaussian approximation yields rules of thumb for determining when and how to apply dither to quantized measurements. We implement a dithered SPL system and propose a modification for non-Gaussian pulse shapes that outperforms the Gaussian assumption in practical experiments. The resulting dithered-lidar architecture could be used to design SPAD array detectors that can form precise depth estimates despite relaxed temporal quantization constraints. Finally, SPAD dead time effects have been considered a major limitation for fast data acquisition in SPL, since a commonly adopted approach for dead time mitigation is to operate in the low-flux regime where dead time effects can be ignored. We show that the empirical distribution of detection times converges to the stationary distribution of a Markov chain and demonstrate improvements in depth estimation and histogram correction using our Markov chain model. An example simulation shows that correctly compensating for dead times in a high-flux measurement can yield a 20-fold speed-up of data acquisition. The resulting accuracy at high photon flux could enable real-time applications such as autonomous navigation.
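
    The kind of detection-time model the thesis builds on can be illustrated with the standard signal-plus-background mixture: signal photons arrive Gaussian-distributed about the round-trip delay, background photons uniformly over the repetition period. The estimator below is a minimal sketch with assumed parameter names; it includes none of the censoring, dithering, or dead-time corrections developed in the thesis.

```python
import numpy as np

C = 3e8  # speed of light, m/s

def depth_mle(times, sigma, T_r, S, B, depth_grid):
    """Grid-search ML depth estimate from photon detection times (s).
    sigma: pulse RMS width; T_r: repetition period; S, B: mean signal
    and background counts per pixel."""
    best, best_ll = None, -np.inf
    for d in depth_grid:
        mu = 2.0 * d / C                        # round-trip delay
        p_sig = np.exp(-0.5 * ((times - mu) / sigma) ** 2) \
                / (sigma * np.sqrt(2.0 * np.pi))
        p = (S * p_sig + B / T_r) / (S + B)     # mixture density
        ll = np.sum(np.log(p))
        if ll > best_ll:
            best, best_ll = d, ll
    return best
```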

    A Study of Myoelectric Signal Processing

    This dissertation on various aspects of electromyogram (EMG: muscle electrical activity) signal processing comprises two projects in which I was the lead investigator and two team projects in which I participated. The first investigator-led project was a study of reconstructing continuous EMG discharge rates from neural impulses. Related methods for calculating neural firing rates in other contexts were adapted and applied to the intramuscular motor unit action potential train firing rate. Statistical results based on simulation and clinical data suggest that the performance of spline-based methods is superior to that of conventional filter-based methods in the absence of decomposition error, but it degrades unacceptably in the presence of even the smallest decomposition errors present in real EMG data, which are typically around 3-5%. Optimal parameters for each method are found, and with normal decomposition error rates, ranks of these methods with their optimal parameters are given. Overall, Hanning filtering and Berger methods exhibit consistent and significant advantages over other methods. In the second investigator-led project, the technique of signal whitening was applied prior to motion classification of upper-limb surface EMG signals previously collected from the forearm muscles of intact and amputee subjects. The motions classified consisted of 11 hand and wrist actions pertaining to prosthesis control. Theoretical models and experimental data showed that whitening increased EMG signal bandwidth by 65-75% and reduced the coefficients of variation of temporal features computed from the EMG. As a result, a consistent classification accuracy improvement of 3-5% was observed for all subjects at small analysis durations (< 100 ms). In the first team-based project, advanced methods for modeling the constant-posture EMG-torque relationship about the elbow were studied: whitened and multi-channel EMG signals, training-set duration, regularized model parameter estimation, and nonlinear models. Combined, these methods reduced error to less than a quarter of that of standard techniques. In the second team-based project, a study related biceps-triceps surface EMG to elbow torque at seven joint angles during constant-posture contractions. Models accounting for co-contraction estimated individual flexion muscle torques that were much higher than those from models that did not account for co-contraction.
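
    As a rough illustration of the whitening step, a fixed linear-prediction (AR) inverse filter can flatten the EMG spectrum before features are computed; the sketch below uses our own order and naming and omits the noise-robust refinements such work typically requires.

```python
import numpy as np
from scipy.linalg import solve_toeplitz
from scipy.signal import lfilter

def ar_whiten(x, order=4):
    """Whiten a signal with the prediction-error filter of a fitted
    AR(order) model (Yule-Walker estimates)."""
    x = np.asarray(x, float) - np.mean(x)
    # Biased autocorrelation estimate, lags 0..order
    r = np.correlate(x, x, mode="full")[len(x) - 1:] / len(x)
    a = solve_toeplitz(r[:order], r[1:order + 1])    # AR coefficients
    # Prediction-error (inverse) filter [1, -a1, ..., -ap]
    return lfilter(np.concatenate(([1.0], -a)), [1.0], x)
```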

    Acoustic source localization: exploring theory and practice

    Over the past few decades, noise pollution has become an important issue in modern society. This has led to an increased effort in industry to reduce noise. Acoustic source localization methods determine the location and strength of the vibrations which are the cause of sound, based on measurements of the sound field. This thesis describes a theoretical study of many facets of the acoustic source localization problem as well as the development, implementation and validation of new source localization methods. The main objective is to increase the range of applications of inverse acoustics and to develop accurate and computationally efficient methods for each of these applications. Four applications are considered. Firstly, the inverse acoustic problem is considered where the source and the measurement points are located on two parallel planes. A new fast method to solve this problem is developed and it is compared to the existing method, planar nearfield acoustic holography (PNAH), from a theoretical point of view, as well as by means of simulations and experiments. Both methods are fast, but the new method yields more robust and accurate results. Secondly, measurements in inverse acoustics are often point-by-point or full-array measurements. However, a straightforward and cost-effective alternative to these approaches is a sensor or array which moves through the sound field during the measurement to gather sound field information. The same numerical techniques make it possible to apply inverse acoustics to the case where the source moves and the sensors are fixed in space. It is shown that inverse methods such as the inverse boundary element method (IBEM) can be applied to this problem. To arrive at an accurate representation of the sound field, an optimized signal processing method is applied, and it is shown experimentally that this method leads to accurate results. Thirdly, a theoretical framework is established for the inverse acoustic problem where the sound field and the source are represented by a cross-spectral matrix. This problem is important in inverse acoustics because it occurs in the inverse calculation of sound intensity. The existing methods for this problem are analyzed from a theoretical point of view using this framework and a new method is derived from it. A simulation study indicates that the new method improves the results by 30% in some cases, and the results are similar otherwise. Finally, the localization of point sources in the acoustic near field is considered. MUltiple SIgnal Classification (MUSIC) is newly applied to the boundary element method (BEM) for this purpose. It is shown that this approach makes it possible to localize point sources accurately even if the noise level is extremely high or if the number of sensors is low.
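
    The cross-spectral formulation can be made concrete in its simplest regularized form: back-propagating a measured pressure cross-spectral matrix through a pseudoinverse of the source-to-sensor transfer matrix. This plain truncated-SVD version is only a schematic baseline, not the improved method derived in the thesis.

```python
import numpy as np

def source_csm(H, Cp, rcond=1e-3):
    """Estimate the source cross-spectral matrix from the measured
    pressure cross-spectral matrix Cp, given transfer matrix H
    (sensors x sources); rcond sets the truncated-SVD regularization."""
    Hp = np.linalg.pinv(H, rcond=rcond)
    return Hp @ Cp @ Hp.conj().T
```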

    Target Recognition Using Late-Time Returns from Ultra-Wideband, Short-Pulse Radar

    The goal of this research is to develop algorithms that recognize targets by exploiting properties of the late-time resonance induced by ultra-wideband radar signals. A new variant of the Matrix Pencil Method algorithm is developed that identifies the complex resonant frequencies present in the scattered signal. Kalman filters are developed to represent the dynamics of the signals scattered from several target types. The Multiple Model Adaptive Estimation algorithm uses the Kalman filters to recognize targets. The target recognition algorithm is shown to be successful in the presence of noise. The performance of the new algorithms is compared to that of previously published algorithms.
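
    For orientation, the baseline Matrix Pencil Method that such a variant builds on can be sketched as follows (the standard SVD-truncated formulation, with our own parameter names); it returns the complex natural resonances s_k of a signal modeled as a sum of damped exponentials.

```python
import numpy as np

def matrix_pencil(y, dt, n_modes, L=None):
    """Estimate s_k in y[n] ~ sum_k a_k * exp(s_k * n * dt) via the
    Matrix Pencil Method with SVD rank truncation."""
    y = np.asarray(y)
    N = len(y)
    L = N // 3 if L is None else L                         # pencil parameter
    Y = np.array([y[i:i + L + 1] for i in range(N - L)])   # Hankel matrix
    _, _, Vh = np.linalg.svd(Y, full_matrices=False)
    V = Vh.conj().T[:, :n_modes]                           # signal subspace
    V1, V2 = V[:-1, :], V[1:, :]
    z = np.linalg.eigvals(np.linalg.pinv(V1) @ V2)         # pole estimates
    return np.log(z) / dt              # damping + j * angular frequency
```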

    Search for Gravitational-Wave Bursts in the First Year of the Fifth LIGO Science Run

    We present the results obtained from an all-sky search for gravitational-wave (GW) bursts in the 64-2000 Hz frequency range in data collected by the LIGO detectors during the first year (November 2005-November 2006) of their fifth science run. The total analyzed live time was 268.6 days. Multiple hierarchical data analysis methods were invoked in this search. The overall sensitivity expressed in terms of the root-sum-square (rss) strain amplitude h_rss for gravitational-wave bursts with various morphologies was in the range of 6 x 10^-22 Hz^-1/2 to a few x 10^-21 Hz^-1/2. No GW signals were observed and a frequentist upper limit of 3.75 events per year on the rate of strong GW bursts was placed at the 90% confidence level. As in our previous searches, we also combined this rate limit with the detection efficiency for selected waveform morphologies to obtain event rate versus strength exclusion curves. In sensitivity, these exclusion curves are the most stringent to date.
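
    For reference, the root-sum-square strain amplitude quoted above is the standard burst-search measure of signal strength at the detector:

```latex
h_{\mathrm{rss}} = \sqrt{\int \left( |h_{+}(t)|^{2} + |h_{\times}(t)|^{2} \right) \mathrm{d}t}
```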