
    Interpolating point spread function anisotropy

    Planned wide-field weak lensing surveys are expected to reduce the statistical errors on the shear field to unprecedented levels. In contrast, systematic errors like those induced by the convolution with the point spread function (PSF) will not benefit from that scaling effect and will require very accurate modeling and correction. While numerous methods have been devised to carry out the PSF correction itself, modeling of the PSF shape and its spatial variations across the instrument field of view has, so far, attracted much less attention. This step is nevertheless crucial because the PSF is only known at star positions while the correction has to be performed at any position on the sky. A reliable interpolation scheme is therefore mandatory and a popular approach has been to use low-order bivariate polynomials. In the present paper, we evaluate four other classical spatial interpolation methods based on splines (B-splines), inverse distance weighting (IDW), radial basis functions (RBF) and ordinary Kriging (OK). These methods are tested on the Star Challenge part of the GRavitational lEnsing Accuracy Testing 2010 (GREAT10) simulated data and are compared with the classical polynomial fitting (Polyfit). We also test all our interpolation methods independently of the way the PSF is modeled, by interpolating the GREAT10 star fields themselves (i.e., the PSF parameters are known exactly at star positions). We find in that case RBF to be the clear winner, closely followed by the other local methods, IDW and OK. The global methods, Polyfit and B-splines, are largely behind, especially in fields with (ground-based) turbulent PSFs. In fields with non-turbulent PSFs, all interpolators reach a variance on PSF systematics $\sigma_{\rm sys}^2$ better than the $1\times10^{-7}$ upper bound expected by future space-based surveys, with the local interpolators performing better than the global ones.
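    For a concrete sense of how such a local scheme is applied, here is a minimal sketch of RBF interpolation of PSF ellipticity from star positions to arbitrary sky positions, using scipy's RBFInterpolator. The star field, ellipticity values, kernel, and neighbour count are illustrative assumptions, not the paper's actual pipeline.

```python
# Minimal sketch: interpolate PSF ellipticity measured at star positions
# to arbitrary galaxy positions with radial basis functions (RBF).
# All data here are synthetic placeholders, not GREAT10 fields.
import numpy as np
from scipy.interpolate import RBFInterpolator

rng = np.random.default_rng(0)
stars = rng.uniform(0.0, 1.0, size=(500, 2))         # (x, y) star positions
e1 = 0.05 * np.sin(2 * np.pi * stars[:, 0])          # toy ellipticity field
e2 = 0.05 * np.cos(2 * np.pi * stars[:, 1])

# One interpolator per ellipticity component; thin-plate splines are a
# common default, and `neighbors` keeps the method local.
interp_e1 = RBFInterpolator(stars, e1, neighbors=50, kernel='thin_plate_spline')
interp_e2 = RBFInterpolator(stars, e2, neighbors=50, kernel='thin_plate_spline')

galaxies = rng.uniform(0.0, 1.0, size=(2000, 2))     # positions needing the PSF
psf_e = np.column_stack([interp_e1(galaxies), interp_e2(galaxies)])
```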

    Image interpolation using Shearlet based iterative refinement

    This paper proposes an image interpolation algorithm exploiting sparse representation for natural images. It involves three main steps: (a) obtaining an initial estimate of the high-resolution image using linear methods like FIR filtering, (b) promoting sparsity in a selected dictionary through iterative thresholding, and (c) extracting high-frequency information from the approximation to refine the initial estimate. For the sparse modeling, a shearlet dictionary is chosen to yield a multiscale directional representation. The proposed algorithm is compared to several state-of-the-art methods to assess its objective as well as subjective performance. Compared to cubic spline interpolation, an average PSNR gain of around 0.8 dB is observed over a dataset of 200 images.
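    The refinement loop of steps (a)-(c) can be illustrated with a generic sparsifying transform. The sketch below substitutes a wavelet dictionary (PyWavelets) for the paper's shearlet one and uses plain hard thresholding, so the transform, threshold, iteration count, and upscaling factor are all illustrative assumptions.

```python
# Sketch of sparsity-promoting iterative refinement for interpolation.
# A wavelet basis (PyWavelets) stands in for the paper's shearlet
# dictionary; thresholds and iteration count are arbitrary choices.
import numpy as np
import pywt
from scipy.ndimage import zoom

def refine(lowres, factor=2, iters=20, thresh=0.1):
    # (a) initial estimate by a simple linear upscaler (here: spline zoom)
    x = zoom(lowres, factor, order=3)
    hi_shape = (lowres.shape[0] * factor, lowres.shape[1] * factor)
    for _ in range(iters):
        # (b) promote sparsity: transform, hard-threshold, invert
        coeffs = pywt.wavedec2(x, 'db4', level=3)
        coeffs = [coeffs[0]] + [
            tuple(np.where(np.abs(c) > thresh, c, 0.0) for c in band)
            for band in coeffs[1:]
        ]
        x = pywt.waverec2(coeffs, 'db4')[:hi_shape[0], :hi_shape[1]]
        # (c) refine: push back the residual against the low-res observation
        x += zoom(lowres - zoom(x, 1 / factor, order=3), factor, order=3)
    return x
```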

    Hadron Spectroscopy with Dynamical Chirally Improved Fermions

    We simulate two dynamical, mass-degenerate light quarks on $16^3\times32$ lattices with a spatial extent of 2.4 fm using the Chirally Improved Dirac operator. The simulation method, the implementation of the action, and signals of equilibration are discussed in detail. Based on the eigenvalues of the Dirac operator we discuss some qualitative features of our approach. Results for ground-state masses of pseudoscalar and vector mesons as well as for the nucleon and delta baryons are presented.
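    Ground-state masses like those quoted here are conventionally read off from the large-time decay of Euclidean two-point correlators. The sketch below computes the standard effective mass $m_{\rm eff}(t) = \ln[C(t)/C(t+1)]$ from a synthetic correlator, purely to illustrate the extraction step; the correlator data and plateau range are made up, not the paper's results.

```python
# Standard effective-mass extraction from a Euclidean two-point correlator.
# C(t) ~ A * exp(-m * t) at large t, so m_eff(t) = log(C(t)/C(t+1)) -> m.
# The correlator below is synthetic; real data come from the lattice.
import numpy as np

T = 32                                   # temporal lattice extent
t = np.arange(T)
m_true = 0.38                            # ground-state mass in lattice units
C = 1.7 * np.exp(-m_true * t) + 0.05 * np.exp(-0.9 * t)  # ground + excited state

m_eff = np.log(C[:-1] / C[1:])           # effective mass at each timeslice
plateau = m_eff[10:20].mean()            # average over an (assumed) plateau
print(f"estimated ground-state mass: {plateau:.4f} (true: {m_true})")
```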

    Fractionally-addressed delay lines

    While traditional implementations of variable-length digital delay lines are based on a circular buffer accessed by two pointers, we propose an implementation where a single fractional pointer is used both for read and write operations. On modern general-purpose architectures, the proposed method is nearly as efficient as the popular interpolated circular buffer, and it behaves well for delay-length modulations commonly found in digital audio effects. The physical interpretation of the new implementation shows that it is suitable for simulating tension or density modulations in wave-propagating media.
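    As a point of reference, here is a minimal sketch of the conventional two-pointer baseline the paper compares against: an integer write pointer plus a fractional read offset with linear interpolation. This is not the single-pointer scheme itself, and the buffer size and interpolation order are arbitrary choices.

```python
# Minimal interpolated circular buffer (the conventional baseline):
# an integer write pointer plus a fractional read offset with linear
# interpolation. Buffer length and delay values are arbitrary.
import numpy as np

class InterpolatedDelayLine:
    def __init__(self, max_delay=4096):
        self.buf = np.zeros(max_delay)
        self.write = 0                       # integer write pointer

    def tick(self, x, delay):
        """Write one sample; read one sample `delay` samples back (fractional)."""
        n = len(self.buf)
        self.buf[self.write] = x
        read = (self.write - delay) % n      # fractional read position
        i0 = int(read)
        frac = read - i0
        y = (1.0 - frac) * self.buf[i0] + frac * self.buf[(i0 + 1) % n]
        self.write = (self.write + 1) % n
        return y
```

    The paper's contribution collapses the two pointers into a single fractional one that both reads and writes; the baseline above is only meant to fix notation.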

    Concepts for on-board satellite image registration. Volume 2: IAS prototype performance evaluation standard definition

    Problems encountered in testing onboard signal processing hardware designed to achieve radiometric and geometric correction of satellite imaging data are considered. These include obtaining representative image and ancillary data for simulation, and the transfer and storage of a large quantity of image data at very high speed. The high-resolution, high-speed preprocessing of LANDSAT-D imagery is also addressed.

    Airborne LiDAR for DEM generation: some critical issues

    Airborne LiDAR is one of the most effective and reliable means of terrain data collection. Using LiDAR data for DEM generation is becoming a standard practice in spatially related areas. However, the effective processing of the raw LiDAR data and the generation of an efficient and high-quality DEM remain big challenges. This paper reviews the recent advances of airborne LiDAR systems and the use of LiDAR data for DEM generation, with special focus on LiDAR data filters, interpolation methods, DEM resolution, and LiDAR data reduction. Separating LiDAR points into ground and non-ground is the most critical and difficult step for DEM generation from LiDAR data. Commonly used and most recently developed LiDAR filtering methods are presented. Interpolation methods and choices of suitable interpolator and DEM resolution for LiDAR DEM generation are discussed in detail. In order to reduce the data redundancy and increase the efficiency in terms of storage and manipulation, LiDAR data reduction is required in the process of DEM generation. Feature-specific elements such as breaklines contribute significantly to DEM quality. Therefore, data reduction should be conducted in such a way that critical elements are kept while less important elements are removed. Given the high-density characteristic of LiDAR data, breaklines can be directly extracted from LiDAR data. Extraction of breaklines and integration of the breaklines into DEM generation are presented.
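    To make the interpolation step concrete, here is a minimal inverse-distance-weighting (IDW) gridding of already-filtered ground points into a DEM raster. The cell size, power, and neighbour count are illustrative assumptions; production pipelines add spatial indexing refinements and breakline handling.

```python
# Sketch: grid filtered LiDAR ground points into a DEM with inverse
# distance weighting (IDW). Cell size, IDW power, and neighbour count
# are placeholders; breaklines and edge effects are ignored here.
import numpy as np
from scipy.spatial import cKDTree

def idw_dem(xy, z, cell=1.0, k=8, power=2.0):
    """xy: (n, 2) ground-point coordinates; z: (n,) elevations."""
    xmin, ymin = xy.min(axis=0)
    xmax, ymax = xy.max(axis=0)
    gx, gy = np.meshgrid(np.arange(xmin, xmax, cell),
                         np.arange(ymin, ymax, cell))
    grid = np.column_stack([gx.ravel(), gy.ravel()])

    tree = cKDTree(xy)                        # fast k-nearest-neighbour lookup
    dist, idx = tree.query(grid, k=k)
    w = 1.0 / np.maximum(dist, 1e-12) ** power
    dem = (w * z[idx]).sum(axis=1) / w.sum(axis=1)
    return dem.reshape(gx.shape)
```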

    Iterative Tomographic Image Reconstruction Using Fourier-Based Forward and Back-Projectors

    Iterative image reconstruction algorithms play an increasingly important role in modern tomographic systems, especially in emission tomography. With the fast increase in the size of tomographic data, reducing the computational demands of reconstruction algorithms is of great importance. Fourier-based forward and back-projection methods have the potential to considerably reduce the computation time in iterative reconstruction. Additional substantial speed-up of those approaches can be obtained using powerful and cheap off-the-shelf fast Fourier transform (FFT) processing hardware. The Fourier reconstruction approaches are based on the relationship between the Fourier transform of the image and the Fourier transform of its parallel-ray projections. The two critical steps are estimating samples of the projection transform, on the central section through the origin of Fourier space, from samples of the image transform, and vice versa for back-projection. Interpolation errors are a limitation of Fourier-based reconstruction methods. We have applied min-max optimized Kaiser-Bessel interpolation within the nonuniform FFT (NUFFT) framework and devised ways of incorporating resolution models into the Fourier-based iterative approaches. Numerical and computer simulation results show that the min-max NUFFT approach provides substantially lower approximation errors in tomographic forward and back-projection than conventional interpolation methods. Our studies have further confirmed that Fourier-based projectors using the NUFFT approach provide accurate approximations to their space-based counterparts but with about ten times faster computation, and that they are viable candidates for fast iterative image reconstruction.
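    The central-section relationship these projectors rely on is easy to verify numerically in its simplest form: the 1-D FFT of a parallel projection equals a line through the origin of the 2-D image FFT. The sketch below checks this exactly at angle zero (arbitrary test image); at oblique angles the slice falls between grid points and must be interpolated, which is precisely where the paper's min-max NUFFT enters.

```python
# Numerical check of the projection-slice theorem at angle 0: the 1-D FFT
# of the row-sum projection equals the ky = 0 line of the 2-D image FFT.
# At oblique angles the slice is off-grid and needs interpolation (NUFFT).
import numpy as np

img = np.random.default_rng(1).random((128, 128))   # arbitrary test image
proj = img.sum(axis=0)                              # parallel projection onto x
slice_from_proj = np.fft.fft(proj)
slice_from_fft = np.fft.fft2(img)[0, :]             # central line ky = 0
print(np.max(np.abs(slice_from_proj - slice_from_fft)))  # ~ machine epsilon
```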

    Concepts for on-board satellite image registration, volume 1

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed, with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining the algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on-board the satellite.

    Hybrid PDE solver for data-driven problems and modern branching

    The numerical solution of large-scale PDEs, such as those occurring in data-driven applications, unavoidably requires powerful parallel computers and tailored parallel algorithms to make the best possible use of them. In fact, considerations about the parallelization and scalability of realistic problems are often critical enough to warrant acknowledgement in the modelling phase. The purpose of this paper is to spread awareness of the Probabilistic Domain Decomposition (PDD) method, a fresh approach to the parallelization of PDEs with excellent scalability properties. The idea exploits the stochastic representation of the PDE and its approximation via Monte Carlo in combination with deterministic high-performance PDE solvers. We describe the ingredients of PDD and its applicability in the scope of data science. In particular, we highlight recent advances in stochastic representations for nonlinear PDEs using branching diffusions, which have significantly broadened the scope of PDD. We envision this work as a dictionary giving large-scale PDE practitioners references on the very latest algorithms and techniques of a non-standard, yet highly parallelizable, methodology at the interface of deterministic and probabilistic numerical methods. We close this work with an invitation to the fully nonlinear case and open research questions.
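    The stochastic representation at the heart of PDD can be illustrated with the simplest case: by the Feynman-Kac formula, the solution of the Laplace equation at a point equals the expected boundary value seen by a Brownian motion started there. The sketch below estimates $u$ at one interior point this way; the domain, boundary data, step size, and path count are illustrative assumptions. PDD uses such pointwise estimates to supply interface values that decouple the subdomains handed to the deterministic solvers.

```python
# Feynman-Kac Monte Carlo for the Laplace equation on the unit square:
# u(x) = E[g(B_tau)], where B is Brownian motion started at x and tau is
# its exit time. Domain, boundary data g, and step size are placeholders.
import numpy as np

def g(p):                                     # boundary data (illustrative)
    return p[0] ** 2 - p[1] ** 2              # harmonic, so u = g inside too

def estimate_u(x, n_paths=5000, dt=1e-3, seed=0):
    rng = np.random.default_rng(seed)
    total = 0.0
    for _ in range(n_paths):
        p = np.array(x, dtype=float)
        while 0.0 < p[0] < 1.0 and 0.0 < p[1] < 1.0:   # walk until exit
            p += np.sqrt(dt) * rng.standard_normal(2)  # Euler-Maruyama step
        total += g(np.clip(p, 0.0, 1.0))               # project onto boundary
    return total / n_paths

print(estimate_u((0.3, 0.7)))   # compare with exact 0.3**2 - 0.7**2 = -0.40
```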