2,038 research outputs found

    Continuous-Time Minimum-Mean-Square-Error Filtering


    The determination of gravity anomalies from geoid heights using the inverse Stokes' formula, Fourier transforms, and least squares collocation

    A numerical method for determining gravity anomalies from geoid heights is described using the inverse Stokes formula. This discrete form of the inverse Stokes formula applies a numerical integration over the azimuth, followed by an integration over a cubic interpolatory spline function that approximates the step function obtained from the azimuthal integration. The main disadvantage of the procedure is the lack of a reliable error measure. The method was applied to geoid heights derived from GEOS-3 altimeter measurements in the calibration area of the GEOS-3 satellite.
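As a rough illustration of the discrete procedure described in this abstract, the sketch below averages geoid heights over rings of spherical distance (the azimuth integration assumed already done), fits a cubic interpolatory spline to the resulting step values, and evaluates a Molodensky-type inverse Stokes integral. This is a generic sketch, not the paper's implementation: the kernel form is the commonly cited one, the innermost zone around the computation point is ignored, and all names (`inverse_stokes_anomaly`, `psi`, `n_bar`) are illustrative.

```python
import numpy as np
from scipy.interpolate import CubicSpline

def inverse_stokes_anomaly(psi, n_bar, n_p, R=6.371e6, gamma=9.81):
    """Sketch of a discrete inverse Stokes evaluation (illustrative).

    psi   : ring radii (spherical distances, rad), all > 0
    n_bar : azimuth-averaged geoid heights on each ring (m) -- the
            'step function' produced by the numerical azimuth integration
    n_p   : geoid height at the computation point (m)
    """
    # Cubic interpolatory spline through the azimuth-averaged step values
    spline = CubicSpline(psi, n_bar - n_p)
    # Inverse Stokes kernel (Molodensky form assumed); d(sigma) = sin(psi) dpsi dalpha,
    # with the azimuth integration already absorbed into n_bar
    fine = np.linspace(psi[0], psi[-1], 10_000)
    integrand = spline(fine) * np.sin(fine) / np.sin(fine / 2.0) ** 3
    ring_integral = 2.0 * np.pi * np.sum(
        0.5 * (integrand[1:] + integrand[:-1]) * np.diff(fine))
    # Assumed form: dg = -gamma*N_P/R - gamma/(16*pi*R) * integral
    return -gamma * n_p / R - gamma / (16.0 * np.pi * R) * ring_integral
```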

    Optimal spectral reconstructions from deterministic and stochastic sampling geometries using compressive sensing and spectral statistical models

    This dissertation focuses on the development of high-quality image reconstruction methods from a limited number of Fourier samples using optimized stochastic and deterministic sampling geometries. Two methodologies are developed: an optimal image reconstruction framework based on Compressive Sensing (CS) techniques, and a new Spectral Statistical approach based on the use of isotropic models over a dyadic partitioning of the spectrum. The proposed methods are demonstrated in applications to reconstructing fMRI and remote sensing imagery. Typically, a reduction in MRI image acquisition time is achieved by sampling K-space at a rate below the Nyquist rate. Various methods using correlation between samples, sample averaging, and, more recently, Compressive Sensing are employed to mitigate the aliasing effects of under-sampled Fourier data. The proposed solution utilizes an additional layer of optimization to enhance the performance of a previously published CS reconstruction algorithm. Specifically, the new framework provides reconstructions of a desired image quality by jointly optimizing the K-space sampling geometry and the CS model parameters. The effectiveness of each geometry is evaluated based on the number of FFT samples required for image reconstructions of sufficient quality. A central result of this approach is that the fastest geometry, the spiral low-pass geometry, also provided the best (optimized) CS reconstructions. This geometry provided significantly better reconstructions than the stochastic sampling geometries recommended in the literature. An optimization framework for selecting appropriate CS model reconstruction parameters is also provided; here, the term 'appropriate CS parameters' means that the estimated parameter ranges can provide some guarantee of a minimum level of image reconstruction performance. Utilizing the simplex search algorithm, the optimal TV-norm and wavelet transform penalties are calculated for the CS reconstruction objective function. Collecting the function evaluation values of the simplex search over a large data set allows a range of objective function weighting parameters to be defined for the sampling geometries that were found to be effective. The results indicate that the CS parameter optimization framework is significant in that it can provide large improvements over the standard use of non-optimized approaches. The dissertation also develops a new Spectral Statistical approach for spectral reconstruction of remote sensing imagery. The motivation for pursuing this research includes potential applications such as the development of better image compression schemas based on a limited number of spectral coefficients. Other applications include the use of spectral interpolation methods for remote sensing systems that directly sample the Fourier domain optically or electromagnetically, which may suffer from missing or degraded samples beyond and/or within the focal plane. For these applications, a new spectral statistical methodology is proposed that reconstructs spectral data from uniformly spaced samples over a dyadic partition of the spectrum. Unlike the CS approach, which solves for the 2D FFT coefficients directly, the statistical approach uses separate models for the magnitude and phase, allowing for separate control of the reconstruction quality of each one.
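As a concrete illustration of the parameter-optimization layer just described, the following sketch runs a Nelder-Mead simplex search over the TV-norm and wavelet penalty weights of a CS reconstruction objective. The solver `cs_reconstruct` is a hypothetical stand-in for the previously published CS algorithm, and scoring against a reference image is an assumption made for the sketch.

```python
import numpy as np
from scipy.optimize import minimize

def optimize_cs_params(y, mask, reference, cs_reconstruct):
    """Simplex (Nelder-Mead) search over CS penalty weights.

    y              : acquired K-space samples
    mask           : sampling geometry (which FFT samples were taken)
    reference      : fully sampled image used to score reconstructions
    cs_reconstruct : stand-in solver, cs_reconstruct(y, mask, lam_tv, lam_w)
    """
    def objective(log_params):
        lam_tv, lam_w = np.exp(log_params)          # keep penalties positive
        recon = cs_reconstruct(y, mask, lam_tv, lam_w)
        return np.mean(np.abs(recon - reference) ** 2)  # reconstruction MSE
    res = minimize(objective, x0=np.log([1e-2, 1e-2]), method="Nelder-Mead")
    return np.exp(res.x), res.fun   # optimal (lam_tv, lam_w) and its MSE
```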
A scalable solution that partitions the spectral domain into blocks of varying size allows for the determination of appropriate covariance models for the magnitude and phase spectra bounded by each block. The individual spectral models are then applied to solving for the optimal linear estimate, which is referred to in the literature as Kriging. The use of spectral data transformations is also presented as a means of producing data that is better suited for statistical modeling and variogram estimation. A logarithmic transformation is applied to the magnitude spectra, as it has been shown to impart intrinsic stationarity over localized, bounded regions of the spectra. Phase spectra resulting from the 2D FFT are best described as uniformly distributed over the interval from -pi to pi. In this original state, the spectral samples fail to produce appropriate spectral statistical models that exhibit inter-sample covariance. For phase spectra modeling, an unwrapping step is required to ensure that individual blocks can be effectively modeled using appropriate variogram models. The transformed magnitude and unwrapped phase spectra result in unique statistical models that are optimal over individual frequency blocks, producing accurate spectral reconstructions that account for localized variability in the spectral domain. The Kriging spectral estimates are shown to produce higher-quality magnitude and phase spectra reconstructions than the cubic spline, nearest neighbor, and bilinear interpolators that are widely used. Even when model assumptions, such as isotropy, are violated by the spectral data being modeled, excellent reconstructions are still obtained. Finally, the two spectral estimation methods developed in this dissertation are compared against one another, revealing how each is appropriate for different classes of images. For satellite images that contain a large amount of detail, the new spectral statistical approach, reconstructing the spectrum much faster and from a fraction of the original high-frequency content, provided significantly better reconstructions than the best reconstructions from the optimized CS geometries. This result is supported not only by image quality metrics but also by visual assessment.
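To make the block-wise Kriging step concrete, here is a minimal ordinary-kriging sketch over a single frequency block; in the dissertation's pipeline this would be applied separately to the log-transformed magnitude and the unwrapped phase of each block. The isotropic exponential variogram and its parameters are illustrative assumptions, not the fitted per-block models.

```python
import numpy as np

def exp_variogram(h, nugget=0.0, sill=1.0, rng=8.0):
    # Isotropic exponential variogram model (illustrative parameters)
    return nugget + sill * (1.0 - np.exp(-h / rng))

def ordinary_kriging(coords, values, targets, vario=exp_variogram):
    """Optimal linear (Kriging) estimate at `targets` from sampled
    log-magnitude (or unwrapped phase) values within one spectral block."""
    n = len(values)
    d = np.linalg.norm(coords[:, None, :] - coords[None, :, :], axis=-1)
    A = np.ones((n + 1, n + 1))
    A[:n, :n] = vario(d)
    A[n, n] = 0.0                        # Lagrange-multiplier row/column
    est = np.empty(len(targets))
    for k, t in enumerate(targets):
        b = np.ones(n + 1)
        b[:n] = vario(np.linalg.norm(coords - t, axis=1))
        w = np.linalg.solve(A, b)        # Kriging weights + multiplier
        est[k] = w[:n] @ values
    return est
```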

    Signal estimation using H∞ criteria

    In many signal processing and communication (SPC) applications we need to estimate a signal corrupted by the channel and additive noise. Optimal linear filters and predictors are used to recover the signal from the observed (corrupted) signal. Kalman and Wiener filters are commonly used as optimal filters. These filters minimize the mean square error (MSE), or variance, of the output error. The minimization requires exact knowledge of the input signal and noise power spectral densities (PSD). Therefore, the performance of Kalman or Wiener filters degrades if the input signal and noise statistics change with time and are not known a priori. In many SPC applications there is no exact knowledge of the input signal and noise statistics. One solution is to use filters that minimize the MSE while adapting to changing input signal and noise statistics. This solution falls into the general category of adaptive filters. Often, the convergence speed of the adaptive filter algorithm determines the performance, as it is assumed that convergence is fast enough to track changes in the input signal and noise statistics. If the convergence cannot track these statistics, one can expect large variations in the output error power. Another approach to overcoming unknown input signal and noise statistics is minimax estimation. One approach to minimax estimation is to minimize the error under an H∞ criterion to obtain H∞ filters. This leads to a conservative design (minimizing over the worst-case input signals) that is more robust to changes in the input signal and noise statistics. In this dissertation, an interpretation of H∞ filters for zero-mean stationary signals is discussed. From this, H∞ filters are represented in the time and frequency domains. Performance benefits of H∞ filters over minimum-variance filters are derived from this representation. Mathematical solutions to compute sub-optimal H∞ filters in the time and frequency domains are discussed. Finally, performance benefits of H∞ filters for the code division multiple access (CDMA) system, signal estimation problems, and adaptive filters are shown through simulation results.
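For intuition, below is the game-theoretic discrete-time H∞ (a priori) filter recursion in the form found in standard references (e.g. Simon, Optimal State Estimation); it is a generic sketch rather than the specific filters derived in this dissertation. As the performance level theta approaches 0, the recursion reduces to the Kalman filter.

```python
import numpy as np

def hinf_filter(ys, A, C, W, V, theta, S=None):
    """A-priori discrete-time H-infinity filter (game-theoretic form).

    Model: x_{k+1} = A x_k + w_k,  y_k = C x_k + v_k.
    theta > 0 bounds the worst-case energy gain from disturbances
    to the estimation error; theta -> 0 recovers the Kalman filter.
    """
    n = A.shape[0]
    S = np.eye(n) if S is None else S              # error weighting
    Vinv = np.linalg.inv(V)
    x, P = np.zeros(n), np.eye(n)
    I, est = np.eye(n), []
    for yk in ys:
        # A solution exists only while this matrix stays well conditioned;
        # too large a theta breaks the guaranteed H-infinity bound.
        M = np.linalg.inv(I - theta * S @ P + C.T @ Vinv @ C @ P)
        K = A @ P @ M @ C.T @ Vinv                 # filter gain
        x = A @ x + K @ (np.atleast_1d(yk) - C @ x)
        P = A @ P @ M @ A.T + W
        est.append(x.copy())
    return np.array(est)
```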

    Employing data fusion & diversity in the applications of adaptive signal processing

    The paradigm of adaptive signal processing is a simple yet powerful method for the class of system identification problems. Classical approaches consider standard one-dimensional signals, whereby the model can be formulated in a flat-view matrix/vector framework. Nevertheless, the rapidly increasing availability of large-scale multisensor/multinode measurement technology has rendered the traditional way of representing data insufficient. To this end, the author (referred to from this point onward as `we', `us', and `our', signifying the author together with supporting contributors, i.e. the supervisor, colleagues, and overseas academics specializing in specific pieces of the research throughout this thesis) has applied the adaptive filtering framework to problems that employ the techniques of data diversity and fusion, including quaternions, tensors, and graphs. At first glance, all these structures share one important feature: invertible isomorphism. In other words, they are algebraically one-to-one related in real vector space. Furthermore, our continual course of research affords a segue between these three data types. Firstly, we proposed novel quaternion-valued adaptive algorithms named the n-moment widely linear quaternion least mean squares (WL-QLMS) and the c-moment WL-LMS. Both are as fast as the recursive least squares method but more numerically robust thanks to the lack of matrix inversion. Secondly, the adaptive filtering method is applied to a more complex task: online tensor dictionary learning, named online multilinear dictionary learning (OMDL). The OMDL is partly inspired by the derivation of the c-moment WL-LMS due to its parsimonious formulae. In addition, sequential higher-order compressed sensing (HO-CS) is also developed to couple with the OMDL to maximally utilize the learned dictionary for the best possible compression. Lastly, we consider graph random processes, which are in fact multivariate random processes with a spatiotemporal (or vertex-time) relationship. As with the tensor dictionary, one of the main challenges in graph signal processing is the sparsity constraint on the graph topology, a challenging issue for online methods. We introduce a novel splitting gradient projection into this adaptive graph filtering to successfully achieve a sparse topology. Extensive experiments were conducted to support the analysis of all the algorithms proposed in this thesis, as well as to point out potentials, limitations, and as-yet-unaddressed issues in these research endeavors.
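The widely linear idea behind the WL-QLMS algorithms is easiest to see in the complex-valued case, where the estimate uses both the input and its conjugate; the thesis's quaternion algorithms extend this to the quaternion involutions. The following is a generic complex widely linear LMS sketch, not the thesis's exact n-moment or c-moment algorithms.

```python
import numpy as np

def wl_lms(x, d, order=4, mu=0.01):
    """Generic widely linear complex LMS: y(n) = h^H u(n) + g^H u*(n).

    x : complex input signal, d : desired signal.
    Returns the error sequence and the converged weight vectors.
    """
    h = np.zeros(order, dtype=complex)   # strictly linear weights
    g = np.zeros(order, dtype=complex)   # conjugate (widely linear) weights
    e = np.zeros(len(x), dtype=complex)
    for n in range(order, len(x)):
        u = x[n - order:n][::-1]         # most recent samples first
        y = h.conj() @ u + g.conj() @ u.conj()
        e[n] = d[n] - y
        h += mu * e[n].conj() * u        # stochastic-gradient updates
        g += mu * e[n].conj() * u.conj()
    return e, h, g
```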

    Low-Dissipation Simulation Methods and Models for Turbulent Subsonic Flow

    The simulation of turbulent flows by means of computational fluid dynamics is highly challenging. The costs of an accurate direct numerical simulation (DNS) are usually too high, and engineers typically resort to cheaper coarse-grained models of the flow, such as large-eddy simulation (LES). To be suitable for the computation of turbulence, numerical methods should not artificially dissipate the turbulent flow structures. Therefore, energy-conserving discretizations are investigated, which do not dissipate energy and are inherently stable because the discrete convective terms cannot spuriously generate kinetic energy. Such methods have long been known for incompressible flow, but their development for compressible flow is more recent. This paper focuses on the latter: LES and DNS of turbulent subsonic flow. A new theoretical framework for the analysis of energy conservation in compressible flow is proposed, using a mathematical notation of square-root variables, inner products, and differential operator symmetries. As a result, not only do the discrete equations exactly conserve the primary variables (mass, momentum, and energy), but the convective terms also preserve the (secondary) discrete kinetic and internal energy. Numerical experiments confirm that simulations are stable without the addition of artificial dissipation. Next, minimum-dissipation eddy-viscosity models are reviewed, which aim to minimize the dissipation needed to prevent sub-grid scales from polluting the numerical solution. A new model suitable for anisotropic grids is proposed: the anisotropic minimum-dissipation model. This model appropriately switches off for laminar and transitional flow, and is consistent with the exact sub-filter tensor on anisotropic grids. The methods and models are first assessed on several academic test cases: channel flow, homogeneous decaying turbulence, and the temporal mixing layer. As a practical application, accurate simulations of the transitional flow over a delta wing have been performed.
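The key property, that the discrete convective terms cannot generate kinetic energy, can be illustrated in one dimension. The sketch below uses the classical skew-symmetric split form of the Burgers convective term with periodic central differences; this is a textbook construction, not the paper's square-root-variable compressible formulation. The discrete energy production sum(u * rhs) vanishes identically.

```python
import numpy as np

def convective_rhs_skew(u, dx):
    """Skew-symmetric split form of the Burgers convective term,
    -(1/3)[d(u^2)/dx + u du/dx], with periodic central differences.
    With this form, sum(u * rhs) = 0 exactly, so convection cannot
    spuriously generate discrete kinetic energy sum(u^2)/2."""
    up, um = np.roll(u, -1), np.roll(u, 1)
    du2dx = (up**2 - um**2) / (2.0 * dx)
    dudx = (up - um) / (2.0 * dx)
    return -(du2dx + u * dudx) / 3.0

# Quick check of discrete energy neutrality on a periodic grid:
u = np.sin(np.linspace(0.0, 2.0 * np.pi, 64, endpoint=False))
print(np.dot(u, convective_rhs_skew(u, 2.0 * np.pi / 64)))  # ~0 to round-off
```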