
    Quantitative analysis of the reconstruction performance of interpolants

    The analysis presented provides a quantitative measure of the reconstruction or interpolation performance of linear, shift-invariant interpolants. The performance criterion is the mean square error of the difference between the sampled and reconstructed functions. The analysis is applicable to reconstruction algorithms used in image processing and to many types of splines used in numerical analysis and computer graphics. When formulated in the frequency domain, the mean square error clearly separates the contribution of the interpolation method from the contribution of the sampled data. The equations provide a rational basis for selecting an optimal interpolant, that is, one which minimizes the mean square error. The analysis has been applied to a selection of frequently used data splines and reconstruction algorithms: parametric cubic and quintic Hermite splines, exponential and nu splines (including the special case of the cubic spline), parametric cubic convolution, Keys' fourth-order cubic, and a cubic with a discontinuous first derivative. The emphasis in this paper is on the image-dependent case in which no a priori knowledge of the frequency spectrum of the sampled function is assumed.
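As a minimal illustration of the performance criterion described above, the sketch below measures the empirical mean square error of a linear, shift-invariant interpolant (the triangle kernel behind `np.interp`) on a band-limited test signal. The function name, the sinusoidal test signal, and the sampling rates are illustrative assumptions, not from the paper, which derives the error analytically in the frequency domain.

```python
import numpy as np

def reconstruction_mse(freq=3.0, n_samples=32, n_fine=4096):
    """Empirical MSE between a sampled sinusoid and its linear
    interpolation reconstruction (illustrative sketch only)."""
    t_fine = np.linspace(0.0, 1.0, n_fine, endpoint=False)
    f = np.sin(2 * np.pi * freq * t_fine)        # original function
    t_s = np.linspace(0.0, 1.0, n_samples, endpoint=False)
    samples = np.sin(2 * np.pi * freq * t_s)     # sampled values
    # linear, shift-invariant interpolant (periodic triangle kernel)
    f_hat = np.interp(t_fine, t_s, samples, period=1.0)
    return np.mean((f - f_hat) ** 2)
```

Denser sampling lowers the mean square error for a fixed interpolant, which is the trade-off the frequency-domain formulation quantifies.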

    Periodic Splines and Gaussian Processes for the Resolution of Linear Inverse Problems

    This paper deals with the resolution of inverse problems in a periodic setting or, in other terms, the reconstruction of periodic continuous-domain signals from their noisy measurements. We focus on two reconstruction paradigms: variational and statistical. In the variational approach, the reconstructed signal is the solution to an optimization problem that establishes a tradeoff between fidelity to the data and smoothness conditions via a quadratic regularization associated to a linear operator. In the statistical approach, the signal is modeled as a stationary random process defined from a Gaussian white noise and a whitening operator; one then looks for the optimal estimator in the mean-square sense. We give a generic form of the reconstructed signals for both approaches, allowing for a rigorous comparison of the two. We fully characterize the conditions under which the two formulations yield the same solution, which is a periodic spline in the case of sampling measurements. We also show that this equivalence between the two approaches remains valid on simulations for a broad class of problems. This extends the practical range of applicability of the variational method.
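The variational paradigm described above can be sketched, in a discrete finite-dimensional setting, as a Tikhonov-style problem min_x ||y - Ax||^2 + lam ||Lx||^2 with a linear regularization operator L. The names `A`, `L`, and `lam` and the normal-equations solver are assumptions for illustration; the paper works in the continuous periodic domain.

```python
import numpy as np

def variational_reconstruct(y, A, L, lam):
    """Solve min_x ||y - A x||^2 + lam * ||L x||^2 via the
    normal equations (A^T A + lam L^T L) x = A^T y.
    Illustrative discrete sketch of the variational paradigm."""
    lhs = A.T @ A + lam * (L.T @ L)
    return np.linalg.solve(lhs, A.T @ y)
```

With `lam = 0` and invertible `A`, the data-fidelity term alone determines the solution; increasing `lam` trades fidelity for smoothness measured through `L`.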

    Particle-Particle, Particle-Scaling function (P3S) algorithm for electrostatic problems in free boundary conditions

    An algorithm for fast calculation of the Coulombic forces and energies of point particles with free boundary conditions is proposed. Its calculation time scales as N log N for N particles. This novel method has a lower crossover point with the full O(N^2) direct summation than the Fast Multipole Method. The forces obtained by our algorithm are analytical derivatives of the energy, which guarantees energy conservation during a molecular dynamics simulation. Our algorithm is very simple. An MPI parallelised version of the code can be downloaded under the GNU General Public License from the website of our group. Comment: 19 pages, 11 figures, submitted to: Journal of Chemical Physics
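For context, the O(N^2) direct summation that fast methods like P3S are measured against can be sketched as below. The forces here are the exact analytic gradient of the pairwise Coulomb energy, the property the abstract highlights as guaranteeing energy conservation. This is a generic baseline in Gaussian units, not the paper's algorithm.

```python
import numpy as np

def direct_coulomb(pos, q):
    """O(N^2) direct Coulomb sum: total energy and analytic forces
    (forces are the exact negative gradient of the energy)."""
    n = len(q)
    energy = 0.0
    forces = np.zeros_like(pos)
    for i in range(n):
        for j in range(i + 1, n):
            r = pos[i] - pos[j]
            d = np.linalg.norm(r)
            energy += q[i] * q[j] / d
            f = q[i] * q[j] * r / d**3   # force on particle i from j
            forces[i] += f
            forces[j] -= f               # Newton's third law
    return energy, forces
```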

    How to mesh up Ewald sums (I): A theoretical and numerical comparison of various particle mesh routines

    Standard Ewald sums, which calculate e.g. the electrostatic energy or the force in periodically closed systems of charged particles, can be efficiently sped up by use of the Fast Fourier Transform (FFT). In this article we investigate three algorithms for the FFT-accelerated Ewald sum which have attracted widespread attention, namely the so-called particle-particle-particle-mesh (P3M), particle mesh Ewald (PME) and smooth PME methods. We present a unified view of the underlying techniques and the various ingredients which comprise those routines. Additionally, we offer detailed accuracy measurements, which shed some light on the influence of several tuning parameters and also show that the existing methods -- although similar in spirit -- exhibit remarkable differences in accuracy. We propose combinations of the individual components, mostly relying on the P3M approach, which we regard as most flexible. Comment: 18 pages, 8 figures included, revtex style
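All three mesh routines share the standard Ewald splitting of the Coulomb interaction, 1/r = erfc(a r)/r + erf(a r)/r: the first term decays rapidly and is summed directly in real space, while the smooth second term is what the FFT evaluates on the mesh. The sketch below shows just this split for a single pair distance; the function name and splitting parameter `alpha` are illustrative.

```python
import math

def split_coulomb(r, alpha):
    """Ewald splitting of the 1/r Coulomb interaction at distance r.
    Returns (short-range real-space part, smooth reciprocal-space part);
    the two parts sum exactly to 1/r."""
    short = math.erfc(alpha * r) / r   # summed directly (fast decay)
    long_ = math.erf(alpha * r) / r    # handled on the mesh via FFT
    return short, long_
```

Tuning `alpha` shifts work between the real-space and mesh parts, which is one of the tuning parameters whose accuracy influence the article measures.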

    Concepts for on-board satellite image registration, volume 1

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed, with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining those algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on-board the satellite.

    The Panchromatic High-Resolution Spectroscopic Survey of Local Group Star Clusters - I. General Data Reduction Procedures for the VLT/X-shooter UVB and VIS arm

    Our dataset contains spectroscopic observations of 29 globular clusters in the Magellanic Clouds and the Milky Way performed with VLT/X-shooter. Here we present detailed data reduction procedures for the VLT/X-shooter UVB and VIS arm. These are not restricted to our particular dataset, but are generally applicable to different kinds of X-shooter data without major limitation on the astronomical object of interest. ESO's X-shooter pipeline (v1.5.0) performs well and reliably for the wavelength calibration and the associated rectification procedure, yet we find several weaknesses in the reduction cascade that are addressed with additional calibration steps, such as bad pixel interpolation, flat fielding, and slit illumination corrections. Furthermore, the instrumental PSF is analytically modeled and used to reconstruct flux losses at slit transit and for optimally extracting point sources. Regular observations of spectrophotometric standard stars allow us to detect instrumental variability, which needs to be understood if a reliable absolute flux calibration is desired. A cascade of additional custom calibration steps is presented that allows for an absolute flux calibration uncertainty of less than ten percent under virtually every observational setup provided that the signal-to-noise ratio is sufficiently high. The optimal extraction increases the signal-to-noise ratio typically by a factor of 1.5, while simultaneously correcting for resulting flux losses. The wavelength calibration is found to be accurate to an uncertainty level of approximately 0.02 Angstrom. We find that most of the X-shooter systematics can be reliably modeled and corrected for. 
This offers the possibility of comparing observations on different nights and with different telescope pointings and instrumental setups, thereby facilitating a robust statistical analysis of large datasets. Comment: 22 pages, 18 figures, accepted for publication in Astronomy & Astrophysics; V2 contains a minor change in the abstract. We note that we did not test X-shooter pipeline versions 2.0 or later. V3 contains an updated reference
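The profile-weighted ("optimal") extraction mentioned in the abstract can be sketched for a single cross-dispersion cut as below, assuming a known spatial profile and per-pixel variances. This is a generic Horne-style estimator for illustration; the pipeline itself models the instrumental PSF analytically, as described above.

```python
import numpy as np

def optimal_extract(column, var, profile):
    """Inverse-variance, profile-weighted flux estimate for one
    cross-dispersion column of a point-source spectrum (sketch)."""
    p = profile / profile.sum()   # normalized spatial profile
    w = p / var                   # inverse-variance weights
    return np.sum(w * column) / np.sum(w * p)
```

Down-weighting low-signal wings is what yields the signal-to-noise gain (a factor of about 1.5 in the abstract) over a plain sum over the aperture.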