    Dense and accurate motion and strain estimation in high resolution speckle images using an image-adaptive approach

    Digital image processing methods represent a viable and well-acknowledged alternative to strain gauges and interferometric techniques for determining full-field displacements and strains in materials under stress. This paper presents an image-adaptive technique for dense motion and strain estimation using high-resolution speckle images that show the analyzed material in its original and deformed states. The algorithm starts by dividing the speckle image showing the original state into irregular cells, taking into account both the spatial and gradient information present in the image. Subsequently, the Newton-Raphson digital image correlation technique is applied to calculate the corresponding motion for each cell. Adaptive spatial regularization in the form of the Geman-McClure robust spatial estimator is employed to increase the spatial consistency of the motion components of a cell with respect to the components of neighbouring cells. To obtain the final strain information, local least-squares fitting using a linear displacement model is performed on the horizontal and vertical displacement fields. To evaluate the presented image partitioning and strain estimation techniques, two numerical and two real experiments are employed. The numerical experiments simulate the deformation of a specimen with constant strain across the surface as well as small rigid-body rotations, while the real experiments consist of specimens that undergo uniaxial stress. The results indicate very good accuracy of the recovered strains as well as better rotation insensitivity compared to classical techniques.
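
    The strain-recovery step described above translates directly into code. Below is a minimal sketch (not the authors' implementation) of local least-squares fitting of a linear displacement model: over a window around each pixel, the displacement fields u and v are fit with a plane a0 + a1*x + a2*y, and the fitted slopes give the small-strain components. The window size and array names are illustrative assumptions.

```python
import numpy as np

def local_strain(u, v, half_win=5):
    """Estimate small-strain fields from displacement fields u, v (2-D
    arrays) by least-squares fitting a linear model over a local window."""
    ys, xs = np.mgrid[-half_win:half_win + 1, -half_win:half_win + 1]
    # Design matrix for the linear displacement model a0 + a1*x + a2*y.
    A = np.column_stack([np.ones(xs.size), xs.ravel(), ys.ravel()])
    eps_xx = np.zeros_like(u, dtype=float)
    eps_yy = np.zeros_like(u, dtype=float)
    eps_xy = np.zeros_like(u, dtype=float)
    for r in range(half_win, u.shape[0] - half_win):
        for c in range(half_win, u.shape[1] - half_win):
            win = np.s_[r - half_win:r + half_win + 1,
                        c - half_win:c + half_win + 1]
            cu, *_ = np.linalg.lstsq(A, u[win].ravel(), rcond=None)
            cv, *_ = np.linalg.lstsq(A, v[win].ravel(), rcond=None)
            eps_xx[r, c] = cu[1]                  # du/dx
            eps_yy[r, c] = cv[2]                  # dv/dy
            eps_xy[r, c] = 0.5 * (cu[2] + cv[1])  # (du/dy + dv/dx) / 2
    return eps_xx, eps_yy, eps_xy
```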

    Adaptive foveated single-pixel imaging with dynamic super-sampling

    As an alternative to conventional multi-pixel cameras, single-pixel cameras enable images to be recorded using a single detector that measures the correlations between the scene and a set of patterns. However, to fully sample a scene in this way requires at least as many correlation measurements as there are pixels in the reconstructed image. Therefore, single-pixel imaging systems typically exhibit low frame-rates. To mitigate this, a range of compressive sensing techniques have been developed which rely on a priori knowledge of the scene to reconstruct images from an under-sampled set of measurements. In this work we take a different approach and adopt a strategy inspired by the foveated vision systems found in the animal kingdom, a framework that exploits the spatio-temporal redundancy present in many dynamic scenes. In our single-pixel imaging system a high-resolution foveal region follows motion within the scene, but unlike a simple zoom, every frame delivers new spatial information from across the entire field-of-view. Using this approach we demonstrate a four-fold reduction in the time taken to record the detail of rapidly evolving features, whilst simultaneously accumulating detail of more slowly evolving regions over several consecutive frames. This tiered super-sampling technique enables the reconstruction of video streams in which both the resolution and the effective exposure-time spatially vary and adapt dynamically in response to the evolution of the scene. The methods described here can complement existing compressive sensing approaches and may be applied to enhance a variety of computational imagers that rely on sequential correlation measurements.
    Comment: 13 pages, 5 figures
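
    For readers unfamiliar with the measurement model, the sketch below shows the non-adaptive baseline the abstract starts from (fully sampled, not the paper's foveated scheme): each detector reading is the correlation of the scene with one pattern, and with a complete orthogonal pattern set, such as Hadamard patterns, the image is recovered as a weighted sum. The scene and image size are placeholders.

```python
import numpy as np
from scipy.linalg import hadamard

n = 16                            # image is n x n; n*n must be a power of two
scene = np.random.rand(n, n)      # stand-in for the real scene

H = hadamard(n * n)               # each row is one +/-1 measurement pattern
signal = H @ scene.ravel()        # one single-pixel detector value per pattern
recon = (H.T @ signal) / (n * n)  # rows are orthogonal: H @ H.T = (n*n) * I
assert np.allclose(recon.reshape(n, n), scene)
```

    Under-sampling corresponds to dropping rows of H; compressive sensing, or the adaptive foveation described above, must then supply the missing information.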

    Deformation Measurements at the Sub-Micron Size Scale: II. Refinements in the Algorithm for Digital Image Correlation

    Improvements are proposed in the application of the Digital Image Correlation method, a technique that compares digital images of a specimen surface before and after deformation to deduce its surface (2-D) displacement field and strains. These refinements, tested on translations and rigid-body rotations, proved significant with regard to the computational efficiency and convergence properties of the method. In addition, the formulation of the algorithm was extended so as to compute the three-dimensional surface displacement field from Scanning Tunneling Microscope topographies of a deforming specimen. The resolution of this new displacement-measuring method at the nanometer scale was assessed on translation and uniaxial tensile tests and was found to be 4.8 nm for the in-plane displacement components and 1.5 nm for the out-of-plane one, spanning a 10 × 10 μm area.
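
    To make the correlation at the heart of the method concrete, here is a minimal integer-pixel subset-matching sketch (the paper's contribution, Newton-Raphson sub-pixel refinement, is omitted): a subset of the reference image is compared against shifted candidates in the deformed image by zero-normalized cross-correlation. Subset and search sizes are illustrative.

```python
import numpy as np

def match_subset(ref, deformed, center, half=10, search=5):
    """Integer-pixel displacement of the square subset of `ref` centred at
    `center`, found by maximizing zero-normalized cross-correlation (ZNCC)
    against shifted subsets of `deformed`."""
    r, c = center
    sub = ref[r - half:r + half + 1, c - half:c + half + 1].astype(float)
    sub = (sub - sub.mean()) / sub.std()
    best = (-np.inf, 0, 0)
    for dr in range(-search, search + 1):
        for dc in range(-search, search + 1):
            cand = deformed[r + dr - half:r + dr + half + 1,
                            c + dc - half:c + dc + half + 1].astype(float)
            cand = (cand - cand.mean()) / cand.std()
            score = (sub * cand).mean()   # ZNCC, in [-1, 1]
            if score > best[0]:
                best = (score, dr, dc)
    return best   # (correlation score, row shift, column shift)
```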

    Three-Dimensional Spectral-Domain Optical Coherence Tomography Data Analysis for Glaucoma Detection

    Purpose: To develop a new three-dimensional (3D) spectral-domain optical coherence tomography (SD-OCT) data analysis method using a machine learning technique based on variable-size super pixel segmentation that efficiently utilizes the full 3D dataset to improve the discrimination between early glaucomatous and healthy eyes. Methods: 192 eyes of 96 subjects (44 healthy, 59 glaucoma suspect and 89 glaucomatous eyes) were scanned with SD-OCT. Each SD-OCT cube dataset was first converted into a 2D feature map based on retinal nerve fiber layer (RNFL) segmentation and then divided into a varying number of super pixels. Unlike conventional super pixels, which have a fixed number of points, these newly developed variable-size super pixels are defined as clusters of homogeneous adjacent pixels of variable size, shape and number. Features of the super pixel map were extracted and used as inputs to a machine classifier (LogitBoost adaptive boosting) to automatically identify diseased eyes. To assess discriminating performance, the area under the receiver operating characteristic curve (AUC) of the machine classifier outputs was compared with conventional circumpapillary RNFL (cpRNFL) thickness measurements. Results: The super pixel analysis showed a statistically significantly higher AUC than the cpRNFL (0.855 vs. 0.707, respectively, p = 0.031, Jackknife test) when glaucoma suspects were discriminated from healthy eyes, while no significant difference was found when confirmed glaucoma eyes were discriminated from healthy eyes. Conclusions: A novel 3D OCT analysis technique performed at least as well as the cpRNFL in glaucoma discrimination and even better in glaucoma suspect discrimination. This new method has the potential to improve early detection of glaucomatous damage. © 2013 Xu et al.
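
    A hedged sketch of the classification stage follows. Since LogitBoost itself does not ship with scikit-learn, AdaBoost stands in for it; the superpixel labelling, the per-superpixel features (mean and standard deviation of RNFL thickness), and the synthetic data are illustrative assumptions, not the study's protocol.

```python
import numpy as np
from sklearn.ensemble import AdaBoostClassifier
from sklearn.metrics import roc_auc_score
from sklearn.model_selection import train_test_split

rng = np.random.default_rng(0)

def superpixel_features(thickness_map, labels):
    """Mean and std of RNFL thickness within each labelled superpixel."""
    feats = []
    for k in np.unique(labels):
        vals = thickness_map[labels == k]
        feats.extend([vals.mean(), vals.std()])
    return np.array(feats)

# Fixed 4x4-block labelling on a 16x16 map; real variable-size superpixels
# would come from a segmentation of the RNFL thickness map.
labels = (np.arange(16)[:, None] // 4) * 4 + (np.arange(16)[None, :] // 4)

# Synthetic stand-in data: glaucomatous eyes get a thinner RNFL on average.
X, y = [], []
for i in range(200):
    glaucoma = i % 2
    thickness = rng.normal(85 if glaucoma else 100, 10, size=(16, 16))
    X.append(superpixel_features(thickness, labels))
    y.append(glaucoma)

Xtr, Xte, ytr, yte = train_test_split(np.array(X), np.array(y), random_state=0)
clf = AdaBoostClassifier(n_estimators=100, random_state=0).fit(Xtr, ytr)
print("AUC:", roc_auc_score(yte, clf.predict_proba(Xte)[:, 1]))
```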

    A PSF-based approach to Kepler/K2 data. I. Variability within the K2 Campaign 0 star clusters M 35 and NGC 2158

    Kepler and K2 data analysis reported in the literature is mostly based on aperture photometry. Because of Kepler's large, undersampled pixels and the presence of nearby sources, aperture photometry is not always the ideal way to obtain high-precision photometry and, because of this, the data set has not been fully exploited so far. We present a new method that builds on our experience with undersampled HST images. The method involves point-spread-function (PSF) neighbour subtraction and was specifically developed to exploit the huge potential offered by the K2 "super-stamps" covering the cores of dense star clusters. Our test-bed targets were the NGC 2158 and M 35 regions observed during the K2 Campaign 0. We present our PSF modeling and demonstrate that, by using a high-angular-resolution input star list from the Asiago Schmidt telescope as the basis for PSF neighbour subtraction, we are able to reach magnitudes as faint as Kp~24 with a photometric precision of 10% over 6.5 hours, even in the densest regions. At the bright end, our photometric precision reaches ~30 parts-per-million. Our method leads to a considerable improvement at faint magnitudes (Kp>15.5) with respect to classical aperture photometry. This improvement is more significant in crowded regions. We also extracted raw light curves of ~60,000 stars and detrended them for systematic effects induced by spacecraft motion and other artifacts that harm K2 photometric precision. We present a list of 2133 variables.
    Comment: 27 pages (including appendix), 2 tables, 25 figures (5 in low resolution). Accepted for publication in MNRAS on November 05, 2015. Online materials will be available on the Journal website soon.
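
    To illustrate the neighbour-subtraction idea, here is a minimal sketch under stated assumptions: a symmetric Gaussian stands in for the true effective PSF, neighbour positions come from an external catalogue (as the Asiago input list does in the paper), fluxes are fit by linear least squares, and only the neighbour models are subtracted before measuring the target.

```python
import numpy as np

def gaussian_psf(shape, x0, y0, sigma=1.2):
    """Unit-flux, pixel-sampled symmetric Gaussian; a stand-in PSF model."""
    yy, xx = np.mgrid[0:shape[0], 0:shape[1]]
    g = np.exp(-((xx - x0) ** 2 + (yy - y0) ** 2) / (2 * sigma ** 2))
    return g / g.sum()

def subtract_neighbours(stamp, target, neighbours, sigma=1.2):
    """Jointly fit fluxes for the target and all catalogued neighbours by
    linear least squares, then subtract only the neighbour models."""
    positions = [target] + list(neighbours)
    models = np.stack([gaussian_psf(stamp.shape, x, y, sigma).ravel()
                       for x, y in positions], axis=1)
    fluxes, *_ = np.linalg.lstsq(models, stamp.ravel(), rcond=None)
    neighbour_model = (models[:, 1:] @ fluxes[1:]).reshape(stamp.shape)
    return stamp - neighbour_model, fluxes

# Usage: a 21x21 stamp with a flux-500 target and a flux-200 neighbour.
stamp = (500 * gaussian_psf((21, 21), 10, 10)
         + 200 * gaussian_psf((21, 21), 14, 9))
clean, fluxes = subtract_neighbours(stamp, target=(10, 10), neighbours=[(14, 9)])
print(fluxes)       # ~[500, 200]
print(clean.sum())  # ~500: target flux recovered on the cleaned stamp
```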