2,761 research outputs found

    Convolutional Deblurring for Natural Imaging

    In this paper, we propose a novel design of image deblurring in the form of one-shot convolution filtering that can directly convolve with naturally blurred images for restoration. Optical blurring is a common drawback in many imaging applications that suffer from optical imperfections. Despite numerous deconvolution methods that blindly estimate blurring in either inclusive or exclusive forms, they are practically challenging due to high computational cost and low image reconstruction quality. Both high accuracy and high speed are prerequisites for high-throughput imaging platforms in digital archiving. In such platforms, deblurring is required after image acquisition, before images are stored, previewed, or processed for high-level interpretation. Therefore, on-the-fly correction of such images is important to avoid time delays, mitigate computational expense, and increase perceived image quality. We bridge this gap by synthesizing a deconvolution kernel as a linear combination of Finite Impulse Response (FIR) even-derivative filters that can be directly convolved with blurry input images to boost the frequency fall-off of the Point Spread Function (PSF) associated with the optical blur. We employ a Gaussian low-pass filter to decouple the image denoising problem from image edge deblurring. Furthermore, we propose a blind approach to estimate the PSF statistics for Gaussian and Laplacian models, which are common in many imaging pipelines. Thorough experiments are designed to test and validate the efficiency of the proposed method using 2054 naturally blurred images across six imaging applications and seven state-of-the-art deconvolution methods. Comment: 15 pages, for publication in IEEE Transactions on Image Processing.
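The core idea — boosting the PSF's frequency fall-off with even-derivative FIR filters — can be illustrated with a minimal 1-D sketch. This is not the authors' implementation; it simply truncates the series for the inverse Gaussian transfer function, which turns into an identity-minus-Laplacian kernel that is convolved once with the blurry signal:

```python
import numpy as np

def deblur_kernel(sigma):
    """One-shot deconvolution kernel for a Gaussian PSF (first-order series).

    The inverse Gaussian OTF exp(+sigma^2 w^2 / 2) is truncated after the
    quadratic term; since w^2 corresponds to minus the second derivative,
    the correction is a scaled FIR second-derivative filter.
    """
    delta = np.array([0.0, 1.0, 0.0])          # identity
    d2 = np.array([1.0, -2.0, 1.0])            # discrete second derivative
    return delta - 0.5 * sigma**2 * d2

def gaussian_blur(x, sigma):
    n = np.arange(-10, 11)
    g = np.exp(-n**2 / (2.0 * sigma**2))
    return np.convolve(x, g / g.sum(), mode="same")

edge = np.r_[np.zeros(32), np.ones(32)]        # ideal step edge
blurred = gaussian_blur(edge, 1.0)
restored = np.convolve(blurred, deblur_kernel(1.0), mode="same")

interior = slice(8, -8)                        # ignore boundary effects
err_blur = np.abs(blurred - edge)[interior].max()
err_rest = np.abs(restored - edge)[interior].max()
```

The restored edge is measurably closer to the original step than the blurred one, at the cost of a single short convolution.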

    Introducing PHAEDRA: a new spectral code for simulations of relativistic magnetospheres

    We describe a new scheme for evolving the equations of force-free electrodynamics, the vanishing-inertia limit of magnetohydrodynamics. This pseudospectral code uses global orthogonal basis function expansions to take accurate spatial derivatives, allowing the use of an unstaggered mesh and the complete force-free current density. The method has low numerical dissipation and diffusion outside of singular current sheets. We present a range of one- and two-dimensional tests, and demonstrate convergence to both smooth and discontinuous analytic solutions. As a first application, we revisit the aligned rotator problem, obtaining a steady solution with resistivity localised in the equatorial current sheet outside the light cylinder. Comment: 23 pages, 18 figures, accepted for publication in MNRAS.
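The accuracy advantage of spectral derivatives that motivates such a code can be seen in a generic 1-D comparison (a toy pseudospectral sketch, not PHAEDRA itself): differentiating a smooth periodic function in coefficient space reaches near machine precision, while a centred finite difference on the same grid does not.

```python
import numpy as np

N = 64
x = 2 * np.pi * np.arange(N) / N
u = np.exp(np.sin(x))                 # smooth periodic test function

# Pseudospectral derivative: multiply each Fourier coefficient by i*k.
k = 1j * np.fft.fftfreq(N, d=1.0 / N)
du_spec = np.real(np.fft.ifft(k * np.fft.fft(u)))

exact = np.cos(x) * u                 # analytic derivative of exp(sin x)
spec_err = np.abs(du_spec - exact).max()

# Second-order centred difference on the same grid, for comparison.
h = x[1] - x[0]
du_fd = (np.roll(u, -1) - np.roll(u, 1)) / (2 * h)
fd_err = np.abs(du_fd - exact).max()
```

With only 64 points the spectral error is already limited by round-off, several orders of magnitude below the finite-difference error.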

    Analog and digital worlds: Part 2. Fourier analysis in signals and data treatment

    The most direct purpose of the Fourier Transform (FT) is to give an alternative representation of a signal: from the original domain to the corresponding frequency domain. The original domain can be time, space or any other independent variable used as the domain of the function. This subject was treated in Part 1 [1]. In particular, the FT of a signal, also referred to as its frequency spectrum, was used to calculate the lowest sampling frequency that provides a correct representation of the signal itself. At the beginning of this contribution, it is illustrated how to apply the so-called windowing process to periodic sequences. Then, the meaning of the operations known as convolution and deconvolution is discussed. It is shown how the FT provides a very effective path to the execution of these operations in the alternative domain by employing the convolution theorem. Finally, the application of convolution and deconvolution operations to experimental signals associated with the 'spontaneous' convolution of two concurrent events is analysed through different examples.
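The convolution theorem the abstract relies on is easy to verify numerically. The sketch below (illustrative only) computes a circular convolution both directly in the original domain and via multiplication of spectra, then deconvolves by spectral division, which is exact in this noise-free setting:

```python
import numpy as np

rng = np.random.default_rng(0)
f = rng.standard_normal(64)
g = rng.standard_normal(64)

# Circular convolution computed directly in the original domain ...
direct = np.array([sum(f[m] * g[(n - m) % 64] for m in range(64))
                   for n in range(64)])

# ... and via the convolution theorem: multiply the spectra, transform back.
via_fft = np.real(np.fft.ifft(np.fft.fft(f) * np.fft.fft(g)))

# Deconvolution reverses the product: divide the spectra.  This is only
# safe here because there is no noise and no spectral zeros.
f_back = np.real(np.fft.ifft(np.fft.fft(via_fft) / np.fft.fft(g)))
```

For real, noisy signals the plain spectral division blows up wherever the divisor spectrum is small, which is exactly why regularised deconvolution schemes exist.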

    Predicted Deepwater Bathymetry From Satellite Altimetry: Non-Fourier Transform Alternatives

    Robert Parker (1972) demonstrated the effectiveness of the Fourier Transform (FT) in computing gravitational potential anomalies caused by uneven, non-uniform layers of material. This important calculation relates the gravitational potential anomaly to sea-floor topography. As outlined by Sandwell and Smith (1997), a six-step procedure utilizing the FT then demonstrated how satellite altimetry measurements of marine geoid height are inverted into seafloor topography. However, FTs are not local in space and produce the Gibbs phenomenon around discontinuities. Seafloor features exhibit spatial locality, and features such as seamounts and ridges often have sharp inclines. Initial tests comparing the windowed FT to wavelets in reconstructing step and saw-tooth functions showed lower Root Mean Square (RMS) error with fewer coefficients. This investigation therefore examined the feasibility of utilizing sparser basis functions such as the Mexican Hat wavelet, which is local in space, to first calculate the gravitational potential and then relate it to sea-floor topography, with the aim of improving satellite-derived bathymetry maps.
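The spatial locality claimed for the Mexican Hat wavelet can be checked directly. The sketch below (with an assumed unit scale, not the study's actual basis) verifies the zero-mean admissibility condition and that essentially all of the wavelet's energy sits within a few scale widths, in contrast to a global Fourier basis function:

```python
import numpy as np

def mexican_hat(t, s=1.0):
    """Mexican Hat (Ricker) wavelet: proportional to the negative second
    derivative of a Gaussian of scale s, normalised to unit energy."""
    a = t / s
    return (2.0 / (np.sqrt(3.0 * s) * np.pi**0.25)) * (1.0 - a**2) * np.exp(-a**2 / 2.0)

t = np.linspace(-8.0, 8.0, 2001)
dt = t[1] - t[0]
w = mexican_hat(t)

mean = (w * dt).sum()                  # admissibility: zero mean
energy = (w**2 * dt).sum()             # ~1 with this normalisation
core = np.abs(t) <= 4.0                # within four scale widths
core_energy = (w[core]**2 * dt).sum()
```

The energy fraction inside |t| ≤ 4 exceeds 99.9%, which is the locality property that makes such bases attractive for representing isolated seamounts and ridges.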

    A simple approach to the suppression of the Gibbs phenomenon in diffractive numerical calculations

    The Gibbs phenomenon is a well-known effect produced at discontinuities of a function represented by a Fourier expansion when the expansion is truncated for numerical calculations. This phenomenon appears because a discontinuous function cannot be fitted as a summation of continuous functions, as is done with the Fourier expansion. Only when infinitely many terms of the summation are considered does the Fourier expansion fit the real signal. From a general point of view, truncation affects the final results, since the representation of the signal does not include higher frequencies. It is true that the higher the truncation order, the better the results, but some error is always committed. The Gibbs phenomenon has been studied in electrical signals and diffractive optics, where the Fourier expansion is commonly used. In this work, we drop complex mathematics to show the effect of the Gibbs phenomenon on the near-field propagation of diffraction gratings (the self-imaging phenomenon), as well as possible implementations of corrections which reduce the analytical or numerical errors in comparison with less accurate approaches. The conclusions of this work are applicable to other numerically solved diffractive problems which include sharp-edged apertures. Simulations are compared with experiments, giving interesting results. © 2021 Elsevier GmbH.

    Directional edge and texture representations for image processing

    An efficient representation for natural images is of fundamental importance in image processing and analysis. The commonly used separable transforms such as wavelets are not best suited for images due to their inability to exploit directional regularities such as edges and oriented textural patterns, while most of the recently proposed directional schemes cannot represent these two types of features in a unified transform. This thesis focuses on the development of directional representations for images which can capture both edges and textures in a multiresolution manner. The thesis first considers the problem of extracting linear features with the multiresolution Fourier transform (MFT). Based on a previous MFT-based linear feature model, the work extends the extraction method to the situation in which the image is corrupted by noise. The problem is tackled by the combination of a "Signal+Noise" frequency model, a refinement stage and a robust classification scheme. As a result, the MFT is able to perform linear feature analysis on noisy images on which previous methods failed. A new set of transforms called the multiscale polar cosine transforms (MPCT) are also proposed in order to represent textures. The MPCT can be regarded as a real-valued MFT with similar basis functions of oriented sinusoids. It is shown that the transform can represent textural patches more efficiently than the conventional Fourier basis. With a directional best cosine basis, the MPCT packet (MPCPT) is shown to be an efficient representation for edges and textures, despite its high computational burden. The problem of representing edges and textures in a fixed transform with less complexity is then considered. This is achieved by applying a Gaussian frequency filter, which matches the dispersion of the magnitude spectrum, to the local MFT coefficients. This is particularly effective in denoising natural images, due to its ability to preserve both types of feature. Further improvements can be made by employing the information given by the linear feature extraction process in the filter's configuration. The denoising results compare favourably against other state-of-the-art directional representations.
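The filtering idea — weighting frequency coefficients with a Gaussian matched to the spread of the magnitude spectrum — can be sketched in 1-D. The centre and width below are assumed values chosen for the toy signal, and the thesis applies this locally to MFT coefficients rather than globally as done here:

```python
import numpy as np

rng = np.random.default_rng(1)
n = 256
t = np.arange(n)
clean = np.sin(2 * np.pi * 5 * t / n)           # narrowband "feature"
noisy = clean + 0.5 * rng.standard_normal(n)    # white noise added

# Gaussian frequency filter: weight each coefficient by a Gaussian whose
# centre and width are matched to the signal's spectral dispersion.
F = np.fft.rfft(noisy)
freqs = np.arange(F.size)
centre, width = 5.0, 2.0                        # assumed spectral parameters
H = np.exp(-((freqs - centre) ** 2) / (2 * width**2))
denoised = np.fft.irfft(H * F, n)

err_noisy = np.sqrt(np.mean((noisy - clean) ** 2))
err_denoised = np.sqrt(np.mean((denoised - clean) ** 2))
```

Because the feature's energy is concentrated where the Gaussian weight is near one, the filter removes most of the broadband noise while leaving the feature largely intact.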

    Einstein equations in the null quasi-spherical gauge III: numerical algorithms

    We describe numerical techniques used in the construction of our 4th-order evolution code for the full Einstein equations, and assess the accuracy of representative solutions. The code is based on a null gauge with a quasi-spherical radial coordinate, and simulates the interaction of a single black hole with gravitational radiation. Techniques used include spherical harmonic representations, convolution spline interpolation and filtering, and an RK4 "method of lines" evolution. For sample initial data of "intermediate" size (gravitational field with 19% of the black hole mass), the code is accurate to 1 part in 10^5, until null time z=55 when the coordinate condition breaks down. Comment: Latex, 38 pages, 29 figures (360Kb compressed).
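The RK4 "method of lines" ingredient can be sketched on a scalar advection equation (a generic illustration, unrelated to the actual Einstein-equation code): space is discretised first, and the resulting ODE system is advanced with the classical fourth-order Runge-Kutta scheme.

```python
import numpy as np

N = 128
x = 2 * np.pi * np.arange(N) / N
h = x[1] - x[0]

def rhs(u):
    """Semi-discrete advection u_t = -u_x via centred differences."""
    return -(np.roll(u, -1) - np.roll(u, 1)) / (2 * h)

def rk4_step(u, dt):
    """One classical 4th-order Runge-Kutta step for du/dt = rhs(u)."""
    k1 = rhs(u)
    k2 = rhs(u + 0.5 * dt * k1)
    k3 = rhs(u + 0.5 * dt * k2)
    k4 = rhs(u + dt * k3)
    return u + dt / 6 * (k1 + 2 * k2 + 2 * k3 + k4)

u = np.sin(x)                      # initial profile
dt, steps = 0.01, 100              # evolve to t = 1
for _ in range(steps):
    u = rk4_step(u, dt)

exact = np.sin(x - 1.0)            # profile advected by t = 1
err = np.abs(u - exact).max()
```

The same structure carries over to far more complicated right-hand sides: only `rhs` changes, while the time integrator stays fixed.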

    Data Acquisition, Analysis and Simulations for the Fermilab Muon g−2 Experiment

    The goal of the new Muon g-2 E989 experiment at Fermi National Accelerator Laboratory (FNAL) is a precise measurement of the muon anomalous magnetic moment, aμ ≡ (g-2)/2. The previous BNL experiment measured the anomaly aμ(BNL) with an uncertainty of 0.54 parts per million (ppm). The discrepancy between the current Standard Model calculation aμ(SM) and the previous measurement aμ(BNL) is over 3σ. The FNAL Muon g-2 experiment aims to increase the precision to 140 parts per billion (ppb) to resolve the discrepancy between the theoretical calculation and the experimental result. The anomaly aμ is determined experimentally by measuring two frequencies. The magnetic field of the storage ring is measured with NMR probes and given in terms of the equivalent proton spin-precession frequency ωp in a spherical water sample at 34.7 °C. The difference frequency ωa between the muon spin-precession frequency and the cyclotron frequency in the storage-ring magnetic field is encoded in the energy of the positrons from muon decay and is measured with 24 electromagnetic calorimeters. By calculating the ratio ωa/ωp and combining it with known constants, we can extract the anomaly aμ. This dissertation describes my contributions to the experiment, focusing on the extraction of the frequency ωa. My work falls into three categories: 1. fast Data Acquisition (DAQ) system development; 2. a frequency-domain filtering approach to the analysis of the energy-integrated ωa data; 3. a GPU-based Monte Carlo simulation of the frequency-domain filtering approach. The GPS timestamp readout, the DAQ health monitor and the GPS data-quality monitor page are presented in Chapter 3. The FFT-based digital filtering analysis is presented in Chapter 4. The GPU-based Monte Carlo simulation is presented in Chapter 5. The analysis work in this dissertation is based on the Run-1 data, which was collected from March 2018 to July 2018.
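The frequency-extraction step can be caricatured with synthetic data: the positron rate oscillates at ωa on top of an exponential muon-decay envelope (the familiar "wiggle plot"), and after removing the envelope a simple FFT peak search recovers the oscillation frequency. All numbers below are illustrative, not experiment values.

```python
import numpy as np

rng = np.random.default_rng(2)
dt = 0.001                            # sampling step (arbitrary units)
t = np.arange(0.0, 40.0, dt)
f_a = 4.0                             # assumed oscillation frequency
rate = np.exp(-t / 20) * (1 + 0.4 * np.cos(2 * np.pi * f_a * t + 0.3))
rate += 0.01 * rng.standard_normal(t.size)   # detector noise

# Remove the slow decay envelope, then locate the spectral peak.
detrended = rate - np.exp(-t / 20)
spectrum = np.abs(np.fft.rfft(detrended))
freqs = np.fft.rfftfreq(t.size, dt)
f_est = freqs[spectrum[1:].argmax() + 1]     # skip the DC bin
```

In the real analysis the envelope is fitted rather than known, and the filtering is considerably more sophisticated, but the principle — isolate the oscillatory component, work in the frequency domain — is the same.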