1,234 research outputs found

    An Improved Phase Filter for Differential SAR Interferometry Based on an Iterative Method

    Get PDF
    Phase quality is a key element in the analysis of the deformation of the Earth's surface carried out with differential synthetic aperture radar interferometry. Various decorrelation sources may degrade the surface deformation estimates, and thus, phase filters are needed for this kind of application. The well-known Goldstein filter is the most widely used due to its simple implementation and computational efficiency. In recent years, improved filters have been proposed, which are based on this filter but introduce variations in the data processing. The effectiveness of these filters mostly depends on the size of the filtering window, the weight of the smoothed spectrum, and the kernel used to filter the spectrum. In this paper, we evaluate the performance of four of these filters and present a new method that outperforms all of them. The proposed filter is based on an iterative method in which the original phase is denoised progressively with adaptive filtering windows of different sizes. The effectiveness of the filter is controlled by the interferometric coherence, a direct indicator of the phase quality. Moreover, we introduce some modifications regarding the processing of the power spectrum. Specifically, we propose to smooth the original phase using a new filter which is based on a Chebyshev interpolation scheme. The performance of the new filter has been tested on both simulated and real interferograms, acquired by RADARSAT-2 and the Uninhabited Aerial Vehicle Synthetic Aperture Radar, which mapped two different geological events that caused surface deformation. This work was supported in part by the Spanish Ministry of Economy, Industry and Competitiveness, in part by the State Agency of Research (AEI), in part by the European Funds for Regional Development under Project TIN2014-55413-C2-2-P and Project TEC2017-85244-C2-1-P, in part by the U.K. Natural Environmental Research Council through the Looking Inside the Continents project under Grant NE/K011006/1, in part by the Rapid deployment of a seismic array in Ecuador following the April 16th 2016 M7.8 Pedernales earthquake project under Grant NE/P008828/1, and in part by the Centre for the Observation and Modelling of Earthquakes, Volcanoes and Tectonics under Grant COMET, GA/13/M/031
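    As context for the spectral-domain filtering discussed above, the following is a minimal sketch of the classic Goldstein patch filter with a coherence-driven exponent. It is not the authors' iterative, Chebyshev-based method; the function names, window sizes, and the coherence-to-exponent mapping are illustrative assumptions only.

```python
import numpy as np
from scipy.ndimage import uniform_filter

def goldstein_patch(phase_patch, coherence_patch, alpha_max=1.0, smooth=3):
    """Filter one wrapped-phase patch in the spectral domain.

    The |Z|^alpha weighting boosts the dominant fringe frequency; here the
    exponent is tied to the mean coherence (an assumed rule) so that
    low-quality patches are filtered more aggressively.
    """
    z = np.exp(1j * phase_patch)                         # wrapped phase -> complex signal
    Z = np.fft.fft2(z)
    S = uniform_filter(np.abs(Z), size=smooth, mode="wrap")  # smoothed power spectrum
    alpha = alpha_max * (1.0 - coherence_patch.mean())
    Z_filt = Z * (S / S.max()) ** alpha
    return np.angle(np.fft.ifft2(Z_filt))                # back to wrapped phase

# Example: filter a noisy 64x64 patch of simulated linear fringes
rng = np.random.default_rng(0)
x, y = np.meshgrid(np.arange(64), np.arange(64))
clean = np.angle(np.exp(1j * 0.3 * x))
noisy = np.angle(np.exp(1j * (clean + 0.8 * rng.standard_normal(clean.shape))))
coh = np.full(clean.shape, 0.5)                          # assumed coherence estimate
filtered = goldstein_patch(noisy, coh)
```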

    Shape-Driven Interpolation With Discontinuous Kernels: Error Analysis, Edge Extraction, and Applications in Magnetic Particle Imaging

    Get PDF
    Accurate interpolation and approximation techniques for functions with discontinuities are key tools in many applications, such as medical imaging. In this paper, we study a radial basis function type of method for scattered data interpolation that incorporates discontinuities via a variable scaling function. For the construction of the discontinuous basis of kernel functions, information on the edges of the interpolated function is necessary. We characterize the native space spanned by these kernel functions and study error bounds in terms of the fill distance of the node set. To extract the location of the discontinuities, we use a segmentation method based on a classification algorithm from machine learning. The results of the conducted numerical experiments are in line with the theoretically derived convergence rates when the discontinuities are known a priori. Further, an application to interpolation in magnetic particle imaging shows that the presented method is very promising for obtaining edge-preserving image reconstructions with reduced ringing artifacts
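    To illustrate the general idea of kernel interpolation with a variable (discontinuous) scaling function, here is a minimal 1D sketch. The edge location, the jump size of the scaling function, the Gaussian kernel, and the regularization jitter are all illustrative assumptions, not the paper's actual construction or error analysis.

```python
import numpy as np

def psi(x, edge=0.0, jump=5.0):
    """Illustrative discontinuous scaling function: jumps across a known edge."""
    return np.where(x < edge, 0.0, jump)

def vsk_gaussian(x, y, eps=2.0):
    """Gaussian kernel on points augmented by the scaling function:
    K(x, y) = exp(-eps^2 * (|x - y|^2 + |psi(x) - psi(y)|^2))."""
    d2 = (x[:, None] - y[None, :]) ** 2 + (psi(x)[:, None] - psi(y)[None, :]) ** 2
    return np.exp(-eps ** 2 * d2)

def interpolate(nodes, values, targets):
    """Standard kernel interpolation: solve for coefficients, then evaluate."""
    K = vsk_gaussian(nodes, nodes) + 1e-10 * np.eye(len(nodes))  # jitter for stability
    coeffs = np.linalg.solve(K, values)
    return vsk_gaussian(targets, nodes) @ coeffs

# Example: interpolate a function with a jump at the (assumed known) edge x = 0
nodes = np.linspace(-1, 1, 25)
f = lambda x: np.where(x < 0, np.sin(np.pi * x), 2.0 + 0.5 * x)
targets = np.linspace(-1, 1, 400)
approx = interpolate(nodes, f(nodes), targets)
```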

    Wavelet Analysis and Denoising: New Tools for Economists

    Get PDF
    This paper surveys the techniques of wavelet analysis and the associated methods of denoising. The Discrete Wavelet Transform and its undecimated version, the Maximal Overlap Discrete Wavelet Transform, are described. The methods of wavelet analysis can be used to show how the frequency content of the data varies with time. This allows us to pinpoint in time such events as major structural breaks. The sparse nature of the wavelet representation also facilitates the process of noise reduction by nonlinear wavelet shrinkage, which can be used to reveal the underlying trends in economic data. An application of these techniques to UK real GDP (1873-2001) is described. The purpose of the analysis is to reveal the true structure of the data, including its local irregularities and abrupt changes, and the results are surprising. Keywords: wavelets, denoising, structural breaks, trend estimation
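    A minimal sketch of wavelet shrinkage denoising, using the decimated DWT from PyWavelets with soft thresholding and the universal threshold. The paper's pipeline (including the undecimated transform) is not reproduced here; the wavelet choice, level, and noise-scale rule are common defaults assumed for illustration.

```python
import numpy as np
import pywt

def wavelet_denoise(series, wavelet="db4", level=4):
    """Soft-threshold the detail coefficients of a DWT (universal threshold)."""
    coeffs = pywt.wavedec(series, wavelet, level=level)
    # Noise scale estimated from the finest detail coefficients (MAD rule).
    sigma = np.median(np.abs(coeffs[-1])) / 0.6745
    thresh = sigma * np.sqrt(2.0 * np.log(len(series)))
    shrunk = [coeffs[0]] + [pywt.threshold(c, thresh, mode="soft") for c in coeffs[1:]]
    return pywt.waverec(shrunk, wavelet)[: len(series)]

# Example: a trend with an abrupt break, observed with noise
rng = np.random.default_rng(1)
t = np.linspace(0, 1, 512)
trend = np.piecewise(t, [t < 0.5, t >= 0.5], [lambda s: s, lambda s: s + 0.5])
smoothed = wavelet_denoise(trend + 0.1 * rng.standard_normal(t.size))
```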

    Graph Filters for Signal Processing and Machine Learning on Graphs

    Full text link
    Filters are fundamental in extracting information from data. For time series and image data that reside on Euclidean domains, filters are the crux of many signal processing and machine learning techniques, including convolutional neural networks. Increasingly, modern data also reside on networks and other irregular domains whose structure is better captured by a graph. To process and learn from such data, graph filters account for the structure of the underlying data domain. In this article, we provide a comprehensive overview of graph filters, including the different filtering categories, design strategies for each type, and trade-offs between different types of graph filters. We discuss how to extend graph filters into filter banks and graph neural networks to enhance the representational power; that is, to model a broader variety of signal classes, data patterns, and relationships. We also showcase the fundamental role of graph filters in signal processing and machine learning applications. Our aim is that this article provides a unifying framework for both beginner and experienced researchers, as well as a common understanding that promotes collaborations at the intersections of signal processing, machine learning, and application domains
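    As a concrete instance of the simplest class discussed above, here is a minimal sketch of a polynomial (FIR-style) graph filter, y = sum_k h_k S^k x, applied with repeated shifts. The choice of shift operator (row-normalized adjacency), the graph, and the filter taps are illustrative assumptions.

```python
import numpy as np

def graph_filter(S, x, taps):
    """Apply y = sum_k taps[k] * S^k x using repeated shifts (no matrix powers)."""
    y = np.zeros_like(x, dtype=float)
    shifted = x.astype(float)
    for h_k in taps:
        y += h_k * shifted
        shifted = S @ shifted          # one more application of the graph shift
    return y

# Example on a small ring graph: a smoothing filter applied to a noisy signal
n = 8
A = np.zeros((n, n))
for i in range(n):
    A[i, (i + 1) % n] = A[i, (i - 1) % n] = 1.0
S = A / A.sum(axis=1, keepdims=True)   # row-normalized adjacency as shift operator
rng = np.random.default_rng(2)
x = np.sin(2 * np.pi * np.arange(n) / n) + 0.3 * rng.standard_normal(n)
y = graph_filter(S, x, taps=[0.5, 0.3, 0.2])   # illustrative filter coefficients
```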

    Novel Digital Alias-Free Signal Processing Approaches to FIR Filtering Estimation

    Get PDF
    This thesis aims at developing a new methodology for filtering continuous-time bandlimited signals and piecewise-continuous signals from their discrete-time samples. Unlike existing state-of-the-art filters, my filters are not adversely affected by aliasing, allowing designers to flexibly select the sampling rates of the processed signal to reach the required accuracy of signal filtering rather than meeting the stiff and often demanding constraints imposed by the classical theory of digital signal processing (DSP). The impact of this thesis is cost reduction of alias-free sampling, filtering and other digital processing blocks, particularly when the processed signals have sparse and unknown spectral support. Novel approaches are proposed which can mitigate the negative effects of aliasing thanks to the use of nonuniform random/pseudorandom sampling and processing algorithms. As such, the proposed approaches belong to the family of digital alias-free signal processing (DASP). Namely, three main approaches are considered: total random (ToRa), stratified (StSa) and antithetical stratified (AnSt) random sampling techniques. First, I introduce a finite impulse response (FIR) filter estimator for each of the three considered techniques. In addition, a generalised estimator that encompasses the three filter estimators is also proposed. Then, statistical properties of all estimators are investigated to assess their quality. Properties such as expected value, bias, variance, convergence rate, and consistency are all inspected and unveiled. Moreover, a closed-form mathematical expression is devised for the variance of each estimator. Furthermore, quality assessment of the proposed estimators is examined in two main cases related to the smoothness of the filter convolution's integrand function, g(t,τ) := x(τ)h(t−τ), and its first two derivatives. The first main case covers continuous and differentiable functions g(t,τ), g′(t,τ), and g′′(t,τ), whereas the second main case covers all possible instances where some or all of these functions are piecewise-continuous with a finite number of bounded discontinuities. The obtained results prove that all considered filter estimators are unbiased and consistent. Hence, the variances of the estimators converge to zero after a certain number of sample points. However, the convergence rate depends on the selected estimator and on which case of smoothness is being considered. In the first case (i.e. continuous g(t,τ) and its derivatives), the ToRa, StSa and AnSt filter estimators converge uniformly at rates of N^(-1), N^(-3), and N^(-5) respectively, where 2N is the total number of sample points. More interestingly, in the second main case, the convergence rates of the StSa and AnSt estimators are maintained even if there are discontinuities in the first-order derivative (FOD) with respect to τ of g(t,τ) (for the StSa estimator) or in the second-order derivative (SOD) with respect to τ of g(t,τ) (for AnSt). These rates drop to N^(-2) and N^(-4) (for StSa and AnSt, respectively) if the zero-order derivative (ZOD) (for StSa) or the FOD (for AnSt) is piecewise-continuous. Finally, if the ZOD of g(t,τ) is piecewise-continuous, then the uniform convergence rate of the AnSt estimator further drops to N^(-2).
For practical reasons, I also introduce the utilisation of the three estimators in a special situation where the input signal is pseudorandomly sampled from an otherwise uniform and dense grid. An FIR filter model with an oversampled finite-duration impulse response, aligned in time with the grid, is proposed and meant to be stored in a lookup table of the implemented filter's memory to save processing time. Then, a synchronised convolution sum operation is conducted to estimate the filter output. Finally, a new unequally spaced Lagrange interpolation-based rule is proposed. The so-called composite 3-nonuniform-sample (C3NS) rule is employed to estimate the area under the curve (AUC) of an integrand function rather than using the simple rectangular rule. I then compare the convergence rates of the different estimators based on the two interpolation rules. The proposed C3NS estimator outperforms the rectangular-rule estimators at the expense of higher computational complexity. Of course, this extra cost could only be justifiable for specific applications where more accurate estimation is required
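    To make the underlying idea concrete, here is a minimal sketch of a stratified-sampling (Monte Carlo style) estimate of the filter convolution y(t) = ∫ x(τ)h(t−τ)dτ over a finite window. This is only an illustration of stratified sampling applied to the convolution integral, not the thesis's ToRa/StSa/AnSt estimators; the signal, impulse response, and support are assumed for the example.

```python
import numpy as np

def stratified_convolution(x, h, t, support=(0.0, 1.0), n_strata=256, rng=None):
    """Estimate y(t) = integral of x(tau) * h(t - tau) over `support` by drawing
    one random sample in each of n_strata equal sub-intervals (stratified sampling)."""
    rng = rng or np.random.default_rng()
    a, b = support
    edges = np.linspace(a, b, n_strata + 1)
    widths = np.diff(edges)
    taus = edges[:-1] + widths * rng.random(n_strata)   # one point per stratum
    return np.sum(widths * x(taus) * h(t - taus))

# Example: band-limited input through a causal decaying-exponential impulse response
x = lambda tau: np.cos(2 * np.pi * 5 * tau)
h = lambda s: np.where(s >= 0, np.exp(-10 * s), 0.0)
y_est = stratified_convolution(x, h, t=0.7)
```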

    Hierarchical Estimation of Oceanic Surface Velocity Fields From Satellite Imagery

    Get PDF
    Oceanic surface velocity fields are objectively estimated from time-sequential satellite images of sea-surface temperature from the Advanced Very High Resolution Radiometer on board the National Oceanic and Atmospheric Administration's polar orbiters. The hierarchical technique uses the concept of image pyramids and multi-resolution grids for increased computational efficiency. Images are Gaussian filtered and sub-sampled from fine to coarse grid scales. The number of pyramid levels is selected such that the maximum expected velocity in the image results in a displacement of less than one pixel at the coarsest spatial scale. Maximum Cross-Correlation at the sub-pixel level with orthogonal polynomial approximation is used to compute a velocity field at each level of the pyramid, which is then iterated assuming a locally linear velocity field. The first image at the next finer level of the pyramid is warped towards the second image by the calculated velocity field. At each succeeding finer grid scale, the velocity field is updated and the process repeated. The final result is an estimated velocity at each pixel at the finest resolution of the imagery. There are no free parameters as used in some gradient-based approaches, and the only assumption is that the velocity field is locally linear. Test cases are shown using both simulated and real images with numerically simulated velocity fields, which demonstrate the accuracy of the technique. Results are compared to gradient-based techniques using concepts of optical flow and projection onto convex sets, and to the standard Maximum Cross-Correlation technique. The hierarchical computations for a real satellite image numerically advected by a rotational sheared flow recover the original field with an RMS speed error of 12.6% and a direction error of 4.9°. Hierarchically estimated velocity fields from real image pairs are compared to ground-truth estimates of the velocity from satellite-tracked drifters in the eastern Gulf of Mexico. Results indicate the technique underestimates daily mean buoy vector speeds, but with reasonably good direction. The problems of relating ground truth to the hierarchically computed flows are discussed with regard to mismatches in the time and space scales of measurement
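    The core operation at each pyramid level is locating the displacement that maximizes the cross-correlation between image patches. The sketch below shows only that integer-pixel step via FFT-based correlation; the sub-pixel polynomial fit, warping, and coarse-to-fine iteration described in the abstract are omitted, and the synthetic data are illustrative.

```python
import numpy as np

def mcc_shift(patch1, patch2):
    """Integer-pixel displacement that maximizes the circular cross-correlation
    between two equally sized patches, computed via FFTs."""
    f1 = np.fft.fft2(patch1 - patch1.mean())
    f2 = np.fft.fft2(patch2 - patch2.mean())
    corr = np.fft.ifft2(f1 * np.conj(f2)).real
    peak = np.unravel_index(np.argmax(corr), corr.shape)
    # Map FFT indices to signed shifts.
    return tuple(p if p <= s // 2 else p - s for p, s in zip(peak, corr.shape))

# Example: a synthetic field circularly shifted by (3, -2) pixels is recovered
rng = np.random.default_rng(3)
base = rng.random((64, 64))
moved = np.roll(base, shift=(3, -2), axis=(0, 1))
print(mcc_shift(moved, base))   # -> (3, -2)
```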

    Accurate depth from defocus estimation with video-rate implementation

    Get PDF
    The science of measuring depth from images at video rate using 'defocus' has been investigated. The method required two differently focused images acquired from a single viewpoint using a single camera. The relative blur between the images was used to determine the in-focus axial point of each pixel and hence its depth. The depth estimation algorithm researched by Watanabe and Nayar was employed to recover the depth estimates, but the broadband filters, referred to as the Rational filters, were designed using a new procedure: the Two Step Polynomial Approach. The filters designed by the new model were largely insensitive to object texture and were shown to model the blur more precisely than the previous method. Experiments with real planar images demonstrated a maximum RMS depth error of 1.18% for the proposed filters, compared to 1.54% for the previous design. The software implementation required five 2D convolutions to be processed in parallel, and these convolutions were effectively implemented on an FPGA using a two-channel, five-stage pipelined architecture; however, the precision of the filter coefficients and the variables had to be limited within the processor. The number of multipliers required for each convolution was reduced from 49 to 10 (a 79.5% reduction) using a Triangular design procedure. Experimental results suggested that the pipelined processor provided depth estimates comparable in accuracy to the full-precision Matlab output, and generated depth maps of size 400 x 400 pixels in 13.06 ms, which is faster than the video rate. The defocused images (near- and far-focused) were optically registered for magnification using telecentric optics. A frequency domain approach based on phase correlation was employed to measure the radial shifts due to magnification and also to optimally position the external aperture. The telecentric optics ensured correct pixel-to-pixel registration between the defocused images and provided more accurate depth estimates
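    As a rough illustration of the normalized-ratio idea behind depth from defocus: a band-passed difference-to-sum ratio of the two defocused frames varies with the in-focus position and can be mapped to depth through a precomputed calibration. This sketch does not reproduce the Rational filters or the Two Step Polynomial design; the band-pass choice, calibration values, and function names are all illustrative assumptions.

```python
import numpy as np
from scipy.ndimage import gaussian_laplace

def defocus_ratio(near, far, sigma=2.0, eps=1e-6):
    """Band-passed (near - far) / (near + far) ratio; within the working range
    this ratio is used as an indicator of the normalized in-focus position."""
    m = gaussian_laplace(near.astype(float) - far.astype(float), sigma)
    p = gaussian_laplace(near.astype(float) + far.astype(float), sigma)
    return m / (np.abs(p) + eps)

def ratio_to_depth(ratio, calib_ratios, calib_depths):
    """Map ratio values to depth via a monotone calibration curve (illustrative)."""
    return np.interp(ratio, calib_ratios, calib_depths)

# Hypothetical calibration: ratios measured for known depths on a test target
calib_ratios = np.linspace(-0.5, 0.5, 11)
calib_depths = np.linspace(0.6, 1.2, 11)      # metres, assumed working range

# Tiny synthetic example (random textures stand in for the two defocused frames)
rng = np.random.default_rng(4)
near, far = rng.random((32, 32)), rng.random((32, 32))
depth_map = ratio_to_depth(defocus_ratio(near, far), calib_ratios, calib_depths)
```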

    An Implicit High-Order Spectral Difference Method for the Compressible Navier-Stokes Equations Using Adaptive Polynomial Refinement

    Get PDF
    A high/variable-order numerical simulation procedure for gas dynamics problems was developed to model physical phenomena with steep gradients. Higher-order resolution was achieved using an orthogonal polynomial Gauss-Lobatto grid, adaptive polynomial refinement, and artificial diffusion activated by a pressure switch. The method is designed to be computationally stable, accurate, and capable of resolving discontinuities and steep gradients without the use of one-sided reconstructions or reducing to low order. Solutions to several benchmark gas-dynamics problems were produced, including a shock tube and a shock-entropy wave interaction. The scheme's 1st-order solution was validated against a 1st-order Roe scheme solution. Higher-order solutions were shown to approach reference values for each problem. Uniform polynomial refinement was shown to be capable of producing increasingly accurate solutions on a very coarse mesh. Adaptive polynomial refinement was employed to selectively refine the solution near steep gradient structures, and the results were nearly identical to those produced by uniform polynomial refinement. Future work will focus on improvements to the diffusion term, complete extension to the full compressible Navier-Stokes equations, and multi-dimensional formulations
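    Two ingredients named in the abstract can be illustrated briefly: Gauss-Lobatto solution points (here the Chebyshev-Gauss-Lobatto family, one common choice) and a Jameson-style pressure switch of the kind typically used to activate artificial diffusion near shocks. This is a generic sketch under those assumptions, not the paper's exact formulation.

```python
import numpy as np

def chebyshev_gauss_lobatto(p):
    """p+1 Chebyshev-Gauss-Lobatto points on [-1, 1]: x_j = -cos(pi * j / p)."""
    return -np.cos(np.pi * np.arange(p + 1) / p)

def pressure_switch(pressure):
    """Jameson-style shock sensor: normalized second difference of pressure.
    Values near 0 indicate smooth flow; larger values flag a steep gradient
    where artificial diffusion would be switched on."""
    p = np.asarray(pressure, dtype=float)
    num = np.abs(p[2:] - 2.0 * p[1:-1] + p[:-2])
    den = p[2:] + 2.0 * p[1:-1] + p[:-2]
    s = np.zeros_like(p)
    s[1:-1] = num / den
    return s

# Example: the sensor lights up at a pressure jump in a 1-D shock-tube-like profile
x = np.linspace(0.0, 1.0, 101)
pressure = np.where(x < 0.5, 1.0, 0.1)
sensor = pressure_switch(pressure)
nodes = chebyshev_gauss_lobatto(6)   # solution points for a 6th-order element
```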