48 research outputs found

    Signal Reconstruction From Nonuniform Samples Using Prolate Spheroidal Wave Functions: Theory and Application

    Get PDF
    Nonuniform sampling occurs in many applications due to imperfect sensors, mismatched clocks, or event-triggered phenomena. Natural images, biomedical responses, and sensor network transmissions have bursty structure, so to obtain samples that correspond to the information content of the signal one must collect more samples when the signal changes quickly and fewer otherwise, which yields nonuniformly distributed samples. Meanwhile, advances in integrated circuit technology have made small-scale, ultra-low-power devices available for applications ranging from invasive biomedical implants to environmental monitoring. These device advances, however, also require data acquisition methods to shift from uniform (clock-based, synchronous) to nonuniform (clockless, asynchronous) processing. An important related advance is the theory of data reconstruction from sub-Nyquist-rate samples, recently introduced as compressive sensing, which redefines the uncertainty principle. In this dissertation, we consider the problem of signal reconstruction from nonuniform samples. Our method is based on the Prolate Spheroidal Wave Functions (PSWF), which can be used to reconstruct time-limited and essentially band-limited signals from missing samples, in event-driven sampling, and in asynchronous sigma-delta modulation. We provide an implementable, general reconstruction framework that addresses both the reduction in the number of samples and the estimation of nonuniform sample times. We also provide a regularized reconstruction method for level-crossing sampling. An alternative is the projection onto convex sets (POCS) method, in which we combine a time-frequency approach with the POCS iteration and use PSWF for reconstruction when samples are missing. Additionally, we realize time decoding modulation for an asynchronous sigma-delta modulator, which has potential applications in low-power biomedical implants.
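
    To illustrate the flavor of PSWF-based reconstruction (not the dissertation's actual framework), the sketch below fits a truncated PSWF basis to nonuniform samples by least squares; the bandwidth parameter c and basis size are illustrative assumptions.

```python
# Minimal sketch: least-squares reconstruction of an essentially band-limited
# signal from nonuniform samples using a truncated PSWF basis. The parameters
# c and n_basis are illustrative, not values from the dissertation.
import numpy as np
from scipy.special import pro_ang1

def pswf_basis(t, c=10.0, n_basis=10):
    """Evaluate the prolate spheroidal angular functions S_{0n}(c, t) on
    t in (-1, 1); the columns form the reconstruction basis."""
    return np.column_stack([pro_ang1(0, n, c, t)[0] for n in range(n_basis)])

rng = np.random.default_rng(0)
t_samp = np.sort(rng.uniform(-0.95, 0.95, 40))           # nonuniform sample times
x_samp = np.cos(8 * t_samp) + 0.5 * np.sin(3 * t_samp)   # band-limited test signal

A = pswf_basis(t_samp)                                   # design matrix at sample times
coef, *_ = np.linalg.lstsq(A, x_samp, rcond=None)        # least-squares PSWF coefficients

t_grid = np.linspace(-0.95, 0.95, 400)                   # dense grid for reconstruction
x_rec = pswf_basis(t_grid) @ coef                        # reconstructed signal
```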

    Least-Squares Wavelet Analysis and Its Applications in Geodesy and Geophysics

    Get PDF
    The Least-Squares Spectral Analysis (LSSA) is a robust method of analyzing unequally spaced and non-stationary data/time series. Although this method takes into account the correlation among the sinusoidal basis functions of irregularly spaced series, its spectrum still shows spectral leakage: power/energy leaks from one spectral peak into another. An iterative method called the AntiLeakage Least-Squares Spectral Analysis (ALLSSA) is developed to attenuate the spectral leakage in the spectrum and is consequently used to regularize data series. In this study, the ALLSSA is applied to regularize seismic data and attenuate their random noise down to a desired level. The ALLSSA is subsequently extended to multichannel, heterogeneous and coarsely sampled seismic and related gradient measurements intended for geophysical exploration applications that require regularized (equally spaced) data free from aliasing effects. A new and robust method of analyzing unequally spaced and non-stationary time/data series is rigorously developed. This method, namely, the Least-Squares Wavelet Analysis (LSWA), is a natural extension of the LSSA that decomposes a time series into the time-frequency domain and obtains its spectrogram. It is shown through many synthetic and experimental time/data series that the LSWA supersedes all state-of-the-art spectral analysis methods currently available, without making any assumptions about or preprocessing (editing) the time series, or even applying any empirical methods that aim to adapt a time series to the analysis method. The LSWA can analyze any non-stationary and unequally spaced time series with components of low or high amplitude and frequency variability over time, including datum shifts (offsets), trends, and constituents of known forms, while taking into account the covariance matrix associated with the time series. The stochastic confidence-level surface for the spectrogram is rigorously derived to identify statistically significant peaks in the spectrogram at a given confidence level; this supersedes the empirical cone of influence used in the most popular continuous wavelet transform. All current state-of-the-art cross-wavelet transform and wavelet coherence analysis methods impose stringent constraints on the properties of the time series under investigation, requiring, more often than not, preprocessing of the raw measurements that may distort their content. These methods cannot generally be used to analyze unequally spaced and non-stationary time series, or even two equally spaced time series with different sampling rates, with trends and/or datum shifts, and with associated covariance matrices. To overcome these stringent requirements, a new method is developed, namely, the Least-Squares Cross-Wavelet Analysis (LSCWA), along with its statistical distribution, which requires no assumptions on the series under investigation. Numerous synthetic and geoscience examples establish the LSCWA as the method of choice for rigorous coherence analysis of any experimental series.
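
    The core LSSA idea can be sketched compactly: at each trial frequency, a sine/cosine pair is fitted by least squares to the unequally spaced series, and the spectral value is the fraction of the series' energy captured by the fit. This is a simplified, unweighted variant; the full LSSA also handles covariance matrices, trends, and datum shifts.

```python
# Simplified LSSA sketch: least-squares fit of a sine/cosine pair at each
# trial frequency to an unequally spaced series; no covariance weighting.
import numpy as np

def lssa(t, y, freqs):
    y = y - y.mean()                      # remove the constant constituent
    power = np.empty(len(freqs))
    for i, f in enumerate(freqs):
        A = np.column_stack([np.cos(2 * np.pi * f * t),
                             np.sin(2 * np.pi * f * t)])
        coef, *_ = np.linalg.lstsq(A, y, rcond=None)
        resid = y - A @ coef
        power[i] = 1.0 - (resid @ resid) / (y @ y)   # explained-energy ratio
    return power

rng = np.random.default_rng(1)
t = np.sort(rng.uniform(0, 10, 120))      # unequally spaced sample times
y = np.sin(2 * np.pi * 1.5 * t) + 0.3 * rng.standard_normal(t.size)
spectrum = lssa(t, y, np.linspace(0.1, 3.0, 300))    # peaks near 1.5 Hz
```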

    Mathematical Model Development of Super-Resolution Image Wiener Restoration

    Get PDF
    In super-resolution (SR), a set of low-resolution (LR) images degraded by the acquisition process is used to reconstruct a higher-resolution image. One way to boost the visual quality of SR images is to apply restoration filters that remove artifacts from the reconstructed images. We propose an efficient method to optimally allocate the LR pixels on the high-resolution grid and introduce a mathematical derivation of a stochastic Wiener filter. It relies on the continuous-discrete-continuous model and is constrained by the periodic and nonperiodic interrelationships between the different frequency components of the proposed SR system. We analyze an end-to-end model and formulate the Wiener filter as a function of the parameters of the proposed SR system, such as the image gathering and display response indices, the system's average signal-to-noise ratio, and the inter-subpixel shifts between the LR images. Simulation and experimental results demonstrate that the derived Wiener filter, combined with the optimal allocation of LR images, yields sharper reconstructions. When compared with other SR techniques, our approach outperforms them in both quality and computational time.
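
    The paper's filter is derived from the full continuous-discrete-continuous SR model; as a much simpler point of reference, the sketch below implements the classical frequency-domain Wiener restoration filter W = H* / (|H|^2 + 1/SNR), which this work generalizes.

```python
# Classical frequency-domain Wiener restoration (a simplified stand-in for the
# paper's CDC-model-based stochastic Wiener filter).
import numpy as np

def wiener_restore(degraded, psf, snr):
    """psf: point-spread function, same shape as the image, centered;
    snr: average signal-to-noise power ratio (assumed known here)."""
    H = np.fft.fft2(np.fft.ifftshift(psf))               # blur transfer function
    W = np.conj(H) / (np.abs(H) ** 2 + 1.0 / snr)        # Wiener transfer function
    return np.real(np.fft.ifft2(W * np.fft.fft2(degraded)))

# Example setup: a 128x128 Gaussian PSF (hypothetical blur model).
y, x = np.mgrid[-64:64, -64:64]
psf = np.exp(-(x**2 + y**2) / (2 * 2.0**2))
psf /= psf.sum()
# restored = wiener_restore(blurred_image, psf, snr=100.0)
```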

    Coordinate-Based Seismic Interpolation in Irregular Land Survey: A Deep Internal Learning Approach

    Full text link
    Physical and budget constraints often result in irregular sampling, which complicates accurate subsurface imaging. Pre-processing approaches, such as missing-trace or shot interpolation, are typically employed to enhance seismic data in such cases. Recently, deep learning has been used to address the trace interpolation problem, at the expense of the large amounts of training data needed to adequately represent typical seismic events. Nonetheless, state-of-the-art works have mainly focused on trace reconstruction, with little attention devoted to shot interpolation. Furthermore, existing methods assume regularly spaced receivers/sources, and so fail to approximate seismic data from real (irregular) surveys. This work presents a novel shot gather interpolation approach that uses a continuous coordinate-based representation of the acquired seismic wavefield parameterized by a neural network. The proposed unsupervised approach, which we call coordinate-based seismic interpolation (CoBSI), enables the prediction of specific seismic characteristics in irregular land surveys without using external data during neural network training. Experimental results on real and synthetic 3D data validate the ability of the proposed method to estimate continuous, smooth seismic events in the time-space and frequency-wavenumber domains, improving on sparsity- and low-rank-based interpolation methods.
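
    The underlying idea is a coordinate network: a small MLP maps acquisition coordinates to wavefield amplitude and is fitted only to the acquired traces, after which a regular grid can be synthesized by querying the model anywhere. The architecture and training loop below are generic illustrations, not CoBSI's actual configuration.

```python
# Generic coordinate-network sketch of the idea behind CoBSI; the width, depth,
# and optimizer settings are illustrative assumptions.
import torch
import torch.nn as nn

class CoordNet(nn.Module):
    def __init__(self, in_dim=3, width=256, depth=4):
        super().__init__()
        layers, d = [], in_dim
        for _ in range(depth):
            layers += [nn.Linear(d, width), nn.ReLU()]
            d = width
        layers.append(nn.Linear(d, 1))            # amplitude at the coordinate
        self.net = nn.Sequential(*layers)

    def forward(self, coords):                    # coords: (N, 3) = (t, x, y)
        return self.net(coords)

def fit(model, coords, amps, steps=2000, lr=1e-3):
    """Fit the network to the irregularly acquired samples; a regular shot or
    trace grid is then synthesized by evaluating the model at grid coordinates."""
    opt = torch.optim.Adam(model.parameters(), lr=lr)
    for _ in range(steps):
        opt.zero_grad()
        loss = torch.mean((model(coords) - amps) ** 2)
        loss.backward()
        opt.step()
    return model
```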

    Super-Resolution of Unmanned Airborne Vehicle Images with Maximum Fidelity Stochastic Restoration

    Get PDF
    Super-resolution (SR) refers to reconstructing a single high-resolution (HR) image from a set of subsampled, blurred and noisy low-resolution (LR) images. One may then envision a scenario where a set of LR images is acquired with sensors on a moving platform such as an unmanned airborne vehicle (UAV). Due to wind, the UAV may undergo altitude changes or rotational effects that distort both the acquired and the processed images. The visual quality of the SR image is also affected by acquisition degradations, the available number of LR images, and their relative positions. This dissertation develops a novel fast stochastic algorithm to reconstruct a single SR image from UAV-captured images in two steps. First, the UAV LR images are aligned to subpixel accuracy using a new hybrid registration algorithm. Second, the proposed approach develops a new fast stochastic minimum-square-constrained Wiener restoration filter for SR reconstruction and restoration based on a fully detailed continuous-discrete-continuous (CDC) model. A new parameter that accounts for LR image registration and fusion errors is added to the SR CDC model, in addition to multi-response restoration and reconstruction. Finally, to assess the visual quality of the resulting images, two figures of merit are introduced: information rate and maximum realizable fidelity. Experimental results show that quantitative assessment using the proposed figures of merit coincides with qualitative visual assessment. We evaluated our filter against other SR techniques and found its results competitive in terms of speed and visual quality.
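
    As a stand-in illustration of the first step (subpixel alignment of the LR frames), the sketch below uses phase cross-correlation from scikit-image; the dissertation's hybrid registration algorithm is not reproduced here, and this translation-only version would not capture the UAV's rotational effects.

```python
# Stand-in subpixel registration sketch using scikit-image's phase
# cross-correlation (translation-only; not the dissertation's hybrid method).
import numpy as np
from scipy.ndimage import shift as nd_shift
from skimage.registration import phase_cross_correlation

def align_frames(reference, frames, upsample_factor=20):
    """Estimate each frame's subpixel shift against the reference and
    resample it onto the reference grid."""
    aligned = []
    for frame in frames:
        est_shift, _, _ = phase_cross_correlation(
            reference, frame, upsample_factor=upsample_factor)
        aligned.append(nd_shift(frame, est_shift))   # move frame onto reference grid
    return np.stack(aligned)
```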

    MMSE Reconstruction for 3D Freehand Ultrasound Imaging

    Get PDF
    The reconstruction of 3D ultrasound (US) images from mechanically registered, but otherwise irregularly positioned, B-scan slices is of great interest in image-guided therapy procedures. Conventional 3D ultrasound algorithms have low computational complexity, but the reconstructed volume suffers from severe speckle contamination. Furthermore, current methods cannot reconstruct uniform high-resolution data from several low-resolution B-scans. In this paper, the minimum mean-squared error (MMSE) method is applied to 3D ultrasound reconstruction. Data redundancies due to overlapping samples, as well as the correlation of the target and speckle, are naturally accounted for in the MMSE reconstruction algorithm. Thus, the reconstruction process unifies interpolation and spatial compounding. Simulation results for synthetic US images demonstrate excellent reconstruction quality.
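
    The linear-MMSE idea can be sketched as the estimator x_hat = C_xy C_yy^{-1} y, where redundancy among overlapping samples enters through C_yy. The stationary Gaussian covariance model and noise level below are illustrative assumptions, not the paper's target/speckle statistics.

```python
# Minimal linear-MMSE reconstruction sketch: estimate the volume on a regular
# grid from scattered B-scan samples. Covariance model and noise variance are
# illustrative assumptions.
import numpy as np

def mmse_grid(grid_pts, samp_pts, samp_vals, corr_len=1.0, noise_var=0.05):
    def cov(a, b):                        # stationary Gaussian covariance model
        d2 = ((a[:, None, :] - b[None, :, :]) ** 2).sum(-1)
        return np.exp(-d2 / (2 * corr_len ** 2))
    # Overlapping-sample redundancy is captured by the sample covariance C_yy.
    C_yy = cov(samp_pts, samp_pts) + noise_var * np.eye(len(samp_pts))
    C_xy = cov(grid_pts, samp_pts)
    return C_xy @ np.linalg.solve(C_yy, samp_vals)   # linear MMSE estimate
```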

    Seismic Data Conditioning and Analysis for Fractured Reservoirs

    Get PDF
    The ability to identify the intensity and orientation of fractures within both unconventional and conventional resources can have a critical impact on oil field development. Fractures and faults are often the primary pathways for hydrocarbon migration and production. Because of their complexity and commercial importance, fractures have been studied by each of the main disciplines: geology, geophysics, petrophysics, and engineering. The focus of this dissertation is to present an understanding of how different geophysical technologies can be used to characterize fractures at different scales. Seismic attributes are one of the main tools to map the distribution of fractures and can be categorized into geometric attributes, azimuthal velocity anisotropy, amplitude variation with offset and azimuth, and diffraction imaging. These categories are complementary and can provide overlapping information, but the diversity of the assumptions underlying each category makes it challenging to bridge the gap for real-world applications. Acquisition footprint overprints most seismic surveys and can mask, or in some cases be misinterpreted as, underlying faults and fractures. There are two modern trends in imaging the subsurface with high-quality 3D seismic surveys. The first is to acquire new high-density, high-fold, wide-azimuth surveys that exhibit less footprint. The second is to combine multiple legacy surveys into “megamerge” (or even “gigamerge”) surveys that exhibit multiple footprint patterns. To address this latter problem, I begin my dissertation by introducing an adaptive 2D continuous wavelet transform (CWT) footprint suppression workflow whose design is based on artefacts seen on seismic attributes. Suboptimal seismic acquisition is one of the major causes of acquisition footprint. 5D interpolation (also called 5D regularization) is a modern seismic processing workflow that attempts to fill in the missing offsets and azimuths. I examine the effect of a commercial Fourier-based 5D interpolation on both footprint artefacts and geologic discontinuities measured using seismic attributes. I find that, by accurately interpolating specular reflections, 5D interpolation suppresses acquisition footprint and improves the lateral continuity of prestack inversion images of P-impedance. Unfortunately, Fourier-based 5D interpolation incorrectly corrects diffraction events and therefore attenuates faults and karst edges seen in coherence. Whereas 5D interpolation attempts to enhance the specular component of seismic data, diffraction imaging attempts to enhance the non-specular, or diffracted, component necessary to image fractures. Although the lateral resolution of diffractions is better than that of specular reflections, closely spaced fractures forming a “fracture swarm” may appear as a single, larger fracture, while more laterally extensive fracture swarms give rise to azimuthal and offset anisotropy. I investigate each technique’s ability to detect fractures using forward modeling and find that the sensitivity of diffraction focusing to velocity inaccuracies makes it an excellent candidate for highlighting closely spaced fractures. I also find that cross-correlating images of diffractions from nearby experiments is useful in constructing an objective function that can be used to update the velocity model in the image domain. I demonstrate the efficiency of these findings using synthetic models of varying complexity. The azimuthal and offset anisotropy signature of irregularly spaced fractures is complex and differs from that of the constant fracture spacing assumed by effective medium theory, particularly for reflections below the fractures. I find that isotropic amplitude variation modeling gives an indication of whether fractures are located in the bottom portion of the reservoir.
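
    Since acquisition footprint is periodic along the inline/crossline directions, it concentrates at isolated wavenumber peaks in a time slice's 2D spectrum. The dissertation's workflow is an adaptive 2D CWT that localizes the suppression spatially; the sketch below is only the crudest form of the idea, a global FFT-domain notch, named plainly as a stand-in.

```python
# Crude stand-in for footprint suppression: notch isolated wavenumber peaks in
# the 2D FFT of a time slice. Not the dissertation's adaptive 2D CWT workflow.
import numpy as np

def notch_footprint(time_slice, threshold=6.0):
    F = np.fft.fftshift(np.fft.fft2(time_slice))
    mag = np.abs(F)
    background = np.median(mag)
    mask = mag > threshold * background              # isolated periodic peaks
    mask[mag.shape[0] // 2, mag.shape[1] // 2] = False   # keep the DC term
    F[mask] = 0.0                                    # may also clip signal peaks
    return np.real(np.fft.ifft2(np.fft.ifftshift(F)))
```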

    Vision-based techniques for gait recognition

    Full text link
    Global security concerns have led to a proliferation of video surveillance devices. Intelligent surveillance systems seek to discover possible threats automatically and raise alerts. Being able to identify the surveyed object can help determine its threat level. The current generation of devices provides digital video data to be analysed for time-varying features that assist in the identification process. Commonly, people queue up to access a facility and approach a video camera in full frontal view. In this environment, a variety of biometrics are available - for example, gait, which includes temporal features like stride period. Gait can be measured unobtrusively at a distance. The video data will also include face features, which are short-range biometrics. In this way, one can combine biometrics naturally using one set of data. In this paper we survey current techniques of gait recognition and modelling, together with the environments in which the research was conducted. We also discuss in detail the issues arising from deriving gait data, such as perspective and occlusion effects, together with the associated computer vision challenges of reliably tracking human movement. After highlighting these issues and challenges related to gait processing, we discuss frameworks that combine gait with other biometrics. We then provide motivations for a novel paradigm in biometrics-based human recognition: the use of the fronto-normal view of gait as a far-range biometric combined with biometrics operating at a near distance.

    Super-resolution of 3-dimensional scenes

    Full text link
    Super-resolution is an image enhancement method that increases the resolution of images and video. Previously, this technique could only be applied to 2D scenes. The super-resolution algorithm developed in this thesis creates high-resolution views of 3-dimensional scenes using low-resolution images captured from varying, unknown positions.