
    Stable soft extrapolation of entire functions

    Soft extrapolation refers to the problem of recovering a function from its samples, multiplied by a fast-decaying window and perturbed by additive noise, over an interval which is potentially larger than the essential support of the window. A core theoretical question is to provide bounds on the possible amount of extrapolation, depending on the sample perturbation level and the function prior. In this paper we consider soft extrapolation of entire functions of finite order and type (containing the class of bandlimited functions as a special case), multiplied by a super-exponentially decaying window (such as a Gaussian). We consider a weighted least-squares polynomial approximation with a judiciously chosen number of terms and a number of samples which scales linearly with the degree of approximation. It is shown that this simple procedure provides stable recovery with an extrapolation factor which scales logarithmically with the perturbation level and is inversely proportional to the characteristic lengthscale of the function. The pointwise extrapolation error exhibits a Hölder-type continuity with an exponent derived from weighted potential theory, which changes from 1 near the available samples to 0 when the extrapolation distance reaches the characteristic smoothness length scale of the function. The algorithm is asymptotically minimax, in the sense that there is essentially no better algorithm yielding meaningfully lower error over the same smoothness class. When viewed in the dual domain, the above problem corresponds to (stable) simultaneous deconvolution and super-resolution for objects of small space/time extent. Our results then show that the amount of achievable super-resolution is inversely proportional to the object size, and therefore can be significant for small objects.
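    The procedure lends itself to a compact numerical illustration. The following is a minimal sketch, not the authors' implementation: the window, test function, noise level, polynomial degree, and sample count are all assumed for illustration, and the fit is a plain unweighted least-squares polynomial (the paper uses a weighted variant).

```python
# Minimal numerical sketch of the procedure described above (not the authors' code).
# Assumptions: a Gaussian window w(x) = exp(-x**2), a bandlimited test signal f,
# and hand-picked values for the polynomial degree and sample count.
import numpy as np

rng = np.random.default_rng(0)

f = lambda x: np.cos(3 * x)            # entire function of exponential type (bandlimited)
w = lambda x: np.exp(-x**2)            # super-exponentially decaying (Gaussian) window
sigma = 1e-6                           # additive perturbation level

deg = 12                               # number of polynomial terms (chosen by hand here)
n_samples = 4 * deg                    # sample count scaling linearly with the degree
x_s = np.linspace(-1.0, 1.0, n_samples)        # samples inside the window's essential support
y_s = f(x_s) * w(x_s) + sigma * rng.standard_normal(n_samples)

# Least-squares fit: approximate f*w by a polynomial of fixed degree.
V = np.vander(x_s, deg + 1, increasing=True)
coef, *_ = np.linalg.lstsq(V, y_s, rcond=None)

# Extrapolate beyond the sampled interval and undo the window.
x_e = np.linspace(1.0, 2.0, 50)
fw_hat = np.vander(x_e, deg + 1, increasing=True) @ coef
f_hat = fw_hat / w(x_e)

print("max pointwise error on [1, 2]:", np.abs(f_hat - f(x_e)).max())
```

    Raising sigma shrinks the interval over which the extrapolated values stay meaningful, which is the qualitative behaviour behind the logarithmic scaling of the extrapolation factor described above.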

    The Role of Probe Attenuation in the Time-Domain Reflectometry Characterization of Dielectrics

    The influence of the measurement setup on the estimation of dielectric permittivity spectra from time-domain reflectometry (TDR) responses is investigated. The analysis is based on a simplified model of the TDR measurement setup, where an ideal voltage step is applied to an ideal transmission line that models the probe. The main result of this analysis is that propagation in the probe has an inherent band-limiting effect, and the estimation of the high-frequency permittivity parameters is well conditioned only if the wave attenuation for a round-trip propagation in the dielectric sample is small. This is a general result, holding for most permittivity models and estimation schemes. It has been verified on real estimation problems by estimating the permittivity of liquid dielectrics and soil samples via a high-order model of the TDR setup and a parametric inversion approach.
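    The round-trip attenuation condition is easy to evaluate for a given probe and material. Below is an illustrative sketch (not from the paper): it uses an assumed 15 cm probe and textbook Debye parameters for water to show how the round-trip amplitude factor exp(-2*alpha*L) collapses at high frequency, which is where the permittivity estimate becomes ill conditioned.

```python
# Illustrative sketch (not from the paper): round-trip attenuation along a
# dielectric-filled TDR probe, using a single-relaxation Debye model for water.
# The probe length and Debye parameters below are assumed, typical values.
import numpy as np

c = 2.998e8                      # speed of light in vacuum, m/s
L = 0.15                         # assumed probe length, m

# Debye model for water at room temperature (approximate textbook values)
eps_inf, eps_s, tau = 4.9, 80.1, 9.3e-12

f = np.logspace(7, 10, 4)        # 10 MHz .. 10 GHz
w_ang = 2 * np.pi * f
eps = eps_inf + (eps_s - eps_inf) / (1 + 1j * w_ang * tau)   # complex relative permittivity

# TEM propagation constant gamma = j*omega/c * sqrt(eps); attenuation alpha = Re(gamma)
gamma = 1j * w_ang / c * np.sqrt(eps)
alpha = gamma.real

round_trip = np.exp(-2 * alpha * L)   # amplitude factor after a round trip in the sample
for fi, a in zip(f, round_trip):
    print(f"{fi:10.3e} Hz   round-trip amplitude factor {a:.3f}")
```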

    Concepts for on-board satellite image registration, volume 1

    The NASA-NEEDS program goals present a requirement for on-board signal processing to achieve user-compatible, information-adaptive data acquisition. One very specific area of interest is the preprocessing required to register imaging sensor data which have been distorted by anomalies in subsatellite-point position and/or attitude control. The concepts and considerations involved in using state-of-the-art positioning systems such as the Global Positioning System (GPS) in concert with state-of-the-art attitude stabilization and/or determination systems to provide the required registration accuracy are discussed, with emphasis on assessing the accuracy to which a given image picture element can be located and identified, determining the algorithms required to augment the registration procedure, and evaluating the technology impact of performing these procedures on board the satellite.
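    As a rough feel for the numbers involved, the sketch below is illustrative only; the orbit altitude and error magnitudes are assumed, not taken from the report. It shows how attitude-knowledge and ephemeris errors map into the ground-location error of a single picture element for a near-nadir view.

```python
# Rough, illustrative error budget (assumed values, not from the report): how attitude
# and ephemeris uncertainty translate into ground-location error for one pixel.
import math

altitude_m = 705e3                        # assumed orbit altitude
attitude_err_rad = math.radians(0.01)     # assumed 0.01 deg attitude knowledge error
ephemeris_err_m = 10.0                    # assumed GPS position error

# For a near-nadir view, an attitude error theta displaces the ground point by
# roughly altitude * tan(theta); the ephemeris error adds directly.
ground_err_m = altitude_m * math.tan(attitude_err_rad) + ephemeris_err_m
print(f"approximate ground-location error: {ground_err_m:.1f} m")
```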

    Building and Validating a Model for Investigating the Dynamics of Isolated Water Molecules

    Understanding how water molecules behave in isolation is vital to understanding many fundamental processes in nature. To that end, scientists have begun studying crystals in which single water molecules become trapped in regularly occurring cavities in the crystal structure. As part of that investigation, numerical models for the dynamics of isolated water molecules are sought to help bolster our fundamental understanding of how these systems behave. Here, the efficacy of three computational methods (the Euler Method, the Euler-Aspel Method, and the Beeman Method) is compared using a newly defined parameter, called the predictive stability coefficient ρ. This parameter quantifies each algorithm's stability, and by this measure the Euler-Aspel Method is found to be the most stable of the three. Finally, preliminary results from investigating interactions between two dipole neighbors show that the computational tools to be used for future investigations have been programmed correctly.
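    For reference, the sketch below contrasts two of the integrators named above on a stand-in problem. It is not the thesis code: the 1-D harmonic oscillator, time step, and step count are assumed purely for illustration, and the Euler-Aspel variant and the ρ coefficient, which are specific to the thesis, are not reproduced here.

```python
# Minimal sketch (assumed setup): explicit Euler vs. Beeman integration on a
# 1-D harmonic oscillator standing in for the motion of a trapped water molecule.
import numpy as np

omega = 1.0
acc = lambda x: -omega**2 * x            # acceleration for the model potential

def euler(x0, v0, dt, n):
    """Explicit (forward) Euler steps."""
    x, v = x0, v0
    for _ in range(n):
        x, v = x + v * dt, v + acc(x) * dt
    return x, v

def beeman(x0, v0, dt, n):
    """Beeman's algorithm; the previous acceleration is bootstrapped from the start."""
    x, v = x0, v0
    a_prev = acc(x0)
    a = acc(x0)
    for _ in range(n):
        x_new = x + v * dt + (4 * a - a_prev) * dt**2 / 6
        a_new = acc(x_new)
        v = v + (2 * a_new + 5 * a - a_prev) * dt / 6
        x, a_prev, a = x_new, a, a_new
    return x, v

dt, n = 0.05, 2000                       # 100 time units of simulated motion
for name, step in [("Euler", euler), ("Beeman", beeman)]:
    x, v = step(1.0, 0.0, dt, n)
    energy = 0.5 * v**2 + 0.5 * omega**2 * x**2   # exact value is 0.5
    print(f"{name:6s} final energy: {energy:.4f}")
```

    The drift of the total energy away from its exact value is one simple way to see the stability differences that the thesis quantifies with ρ: forward Euler gains energy steadily, while Beeman stays close to the true value.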

    Sampling and Reconstruction of Shapes with Algebraic Boundaries

    We present a sampling theory for a class of binary images with finite rate of innovation (FRI). Every image in our model is the restriction of \mathds{1}_{\{p\leq0\}} to the image plane, where \mathds{1} denotes the indicator function and p is some real bivariate polynomial. In particular, this means that the boundaries in the image form a subset of an algebraic curve with implicit polynomial p. We show that the image parameters (i.e., the polynomial coefficients) satisfy a set of linear annihilation equations whose coefficients are the image moments. The inherent sensitivity of the moments to noise makes the reconstruction process numerically unstable and narrows the choice of sampling kernels to polynomial-reproducing kernels. As a remedy to these problems, we replace conventional moments with more stable generalized moments that are adjusted to the given sampling kernel. The benefits are threefold: (1) it relaxes the requirements on the sampling kernels, (2) it produces annihilation equations that are numerically robust, and (3) it extends the results to images with unbounded boundaries. We further reduce the sensitivity of the reconstruction process to noise by taking into account the sign of the polynomial at certain points and by sequentially enforcing measurement consistency. Various numerical experiments demonstrate the performance of our algorithm in reconstructing binary images, including low to moderate noise levels and a range of realistic sampling kernels.

    Comment: 12 pages, 14 figures
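    To make the image model concrete, the sketch below (an assumed example, not the authors' code) rasterizes the indicator of {p <= 0} for a hand-picked quadratic p and computes a few ordinary geometric moments; the paper's reconstruction replaces these with kernel-adapted generalized moments precisely because the ordinary moments are noise sensitive.

```python
# Small sketch of the image model described above: a binary image that is the
# indicator of {p <= 0} for a bivariate polynomial p, here an assumed quadratic
# whose zero set is an ellipse, together with a few of its geometric moments.
import numpy as np

# p(x, y) = x^2 + 2*y^2 - 0.5  (assumed example polynomial; boundary is an ellipse)
p = lambda x, y: x**2 + 2 * y**2 - 0.5

n = 256
xs = np.linspace(-1, 1, n)
X, Y = np.meshgrid(xs, xs)
image = (p(X, Y) <= 0).astype(float)          # indicator function 1_{p <= 0}

# Discrete approximations of the geometric moments m_{jk} (pixel-area factor omitted);
# the annihilation equations mentioned above are linear in the polynomial coefficients
# with these moments as their coefficients.
moments = {(j, k): float(np.sum(X**j * Y**k * image))
           for j in range(3) for k in range(3)}
print(moments[(0, 0)], moments[(2, 0)], moments[(0, 2)])
```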

    Characterization of surface profiles using discrete measurement systems

    Form error estimation techniques based on discrete point measurements can lead to significant errors in form tolerance evaluation. By modeling surface profiles as random variables, we are able to show how sample size and fitting techniques affect form error estimation. Depending on the surface characteristics, typical sampling techniques can result in estimation errors of as much as 50%.

    We investigate currently available interpolation procedures. Kriging is an optimal interpolation for spatial data when the variogram model is known a priori. Because of the difficulty in identifying the correct variogram model from the limited sampled data, and the lack of complete computer software, there is no significant advantage to applying kriging to estimate form error in the inspection process.

    We apply the Shannon sampling theorem and represent surface profiles as band-limited signals. We show that the Shannon sampling function is in fact an infinite-degree B-spline interpolation function and thus a best approximation for band-limited signals. Both the Shannon sampling series and universal kriging (using an a priori correlation function) are applied to flatness error estimation for uniform sample points measured from five common machined surfaces. The results show that both methods perform similarly. The probability of over-estimating the form error increases, and the probability of accepting bad parts decreases, when interpolation methods are used rather than the measured points directly.
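    The Shannon-series approach is simple to reproduce. The sketch below uses assumed data, not the dissertation's measurements: it reconstructs a synthetic band-limited profile from uniform samples with the sinc (Shannon) series and compares the peak-to-valley flatness estimate with the one taken from the raw points.

```python
# Illustrative sketch (assumed profile, not from the dissertation): Shannon (sinc)
# interpolation of a uniformly sampled surface profile and a simple peak-to-valley
# flatness estimate from points vs. from the interpolated profile.
import numpy as np

# Assumed band-limited profile: a few low-frequency waviness components (units arbitrary).
profile = lambda x: 2e-3 * np.sin(2 * np.pi * x) + 1e-3 * np.sin(2 * np.pi * 3 * x + 0.7)

T = 0.1                                    # sampling interval
x_s = np.arange(0, 1 + T / 2, T)           # uniform sample locations on [0, 1]
z_s = profile(x_s)

# Shannon sampling series: z(x) = sum_k z(kT) * sinc((x - kT) / T)
x_f = np.linspace(0, 1, 1000)
z_f = np.array([np.sum(z_s * np.sinc((x - x_s) / T)) for x in x_f])

flat_points = z_s.max() - z_s.min()        # peak-to-valley from the measured points only
flat_interp = z_f.max() - z_f.min()        # peak-to-valley from the interpolated profile
print(f"flatness from points: {flat_points:.4e}, from sinc interpolation: {flat_interp:.4e}")
```

    Because the interpolated profile can only add excursions between the sample points, its peak-to-valley estimate is never smaller than the points-only one, which is consistent with the trade-off noted above between over-estimating form error and accepting bad parts.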