
    Extended object reconstruction in adaptive-optics imaging: the multiresolution approach

    We propose the application of multiresolution transforms, such as wavelets (WT) and curvelets (CT), to the reconstruction of images of extended objects that have been acquired with adaptive optics (AO) systems. Such multichannel approaches normally make use of probabilistic tools in order to distinguish significant structures from noise and reconstruction residuals. Furthermore, we aim to check the historical assumption that image-reconstruction algorithms using static PSFs are not suitable for AO imaging. We convolve an image of Saturn taken with the Hubble Space Telescope (HST) with AO PSFs from the 5-m Hale telescope at the Palomar Observatory and add both shot and readout noise. Subsequently, we apply different approaches to the blurred and noisy data in order to recover the original object. The approaches include multi-frame blind deconvolution (with the algorithm IDAC), myopic deconvolution with regularization (with MISTRAL), and wavelet- or curvelet-based static-PSF deconvolution (the AWMLE and ACMLE algorithms). We used the mean squared error (MSE) and the structural similarity index (SSIM) to compare the results, and we discuss the strengths and weaknesses of the two metrics. We found that CT produces better results than WT, as measured in terms of MSE and SSIM. Multichannel deconvolution with a static PSF produces results that are generally better than those obtained with the myopic/blind approaches (for the images we tested), thus showing that the ability of a method to suppress the noise and to track the underlying iterative process is just as critical as the capability of the myopic/blind approaches to update the PSF. (Comment: in revision in Astronomy & Astrophysics. 19 pages, 13 figures.)
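The quantitative comparison in this work rests on two image-quality metrics, MSE and SSIM. As a minimal sketch of how such a comparison could be run in practice, the snippet below computes both with NumPy and scikit-image; the array names and the data-range convention are illustrative assumptions, not taken from the paper.

```python
import numpy as np
from skimage.metrics import structural_similarity as ssim

def compare_reconstruction(reference, reconstruction, data_range=None):
    """Return (MSE, SSIM) between a reference image and a reconstruction.

    Both inputs are 2-D arrays on the same pixel grid; `data_range`
    defaults to the dynamic range of the reference image.
    """
    reference = np.asarray(reference, dtype=float)
    reconstruction = np.asarray(reconstruction, dtype=float)
    if data_range is None:
        data_range = reference.max() - reference.min()
    mse = np.mean((reference - reconstruction) ** 2)
    ssim_index = ssim(reference, reconstruction, data_range=data_range)
    return mse, ssim_index

# Hypothetical usage: `truth` would be the original HST image and
# `recovered` the output of one of the deconvolution algorithms.
# mse, s = compare_reconstruction(truth, recovered)
```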

    Frequency-modulated continuous-wave LiDAR compressive depth-mapping

    We present an inexpensive architecture for converting a frequency-modulated continuous-wave LiDAR system into a compressive-sensing based depth-mapping camera. Instead of raster scanning to obtain depth-maps, compressive sensing is used to significantly reduce the number of measurements. Ideally, our approach requires two difference detectors, but it can operate with only one at the cost of doubling the number of measurements. Due to the large flux entering the detectors, the signal amplification from heterodyne detection, and the effects of background subtraction from compressive sensing, the system can obtain higher signal-to-noise ratios than detector-array based schemes while scanning a scene faster than is possible through raster scanning. We also show how a single total-variation minimization and two fast least-squares minimizations, instead of a single complex nonlinear minimization, can efficiently recover high-resolution depth-maps with minimal computational overhead. Moreover, by efficiently storing only 2m data points from m < n measurements of an n-pixel scene, we can easily extract depths by solving only two linear equations with efficient convex-optimization methods
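The storage claim above is that only 2m numbers from m < n measurements are kept, with depths recovered from linear inverse problems. The sketch below illustrates that structure under simplifying assumptions: random ±1 patterns stand in for the real modulation patterns, the two measured quantities are taken to be the reflectivity and the reflectivity-weighted depth, and plain least squares replaces the TV-regularized solver the paper relies on, so the reconstruction quality here is not representative of the method.

```python
import numpy as np

rng = np.random.default_rng(0)

n = 64 * 64           # number of scene pixels
m = n // 4            # number of compressive measurements, m < n

# Hypothetical ground truth: per-pixel reflectivity and depth of the scene.
reflectivity = rng.uniform(0.2, 1.0, n)
depth = rng.uniform(1.0, 10.0, n)

# Random +/-1 projection patterns as a stand-in for the real modulation patterns.
A = rng.choice([-1.0, 1.0], size=(m, n))

# Two sets of linear measurements (2m stored numbers in total):
# one of the reflectivity, one of the reflectivity-weighted depth.
y_refl = A @ reflectivity
y_depth = A @ (reflectivity * depth)

# Recover both images with ordinary least squares; a sparsity- or
# TV-regularized solver would be used in practice for sharper results.
x_refl, *_ = np.linalg.lstsq(A, y_refl, rcond=None)
x_depth, *_ = np.linalg.lstsq(A, y_depth, rcond=None)

# Per-pixel depth is the ratio of the two reconstructions.
depth_est = x_depth / np.clip(x_refl, 1e-6, None)
```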

    Autofocus for digital Fresnel holograms by use of a Fresnelet-sparsity criterion

    We propose a robust autofocus method for reconstructing digital Fresnel holograms. The numerical reconstruction involves simulating the propagation of a complex wave front to the appropriate distance. Since the latter value is difficult to determine manually, it is desirable to rely on an automatic procedure for finding the optimal distance to achieve high-quality reconstructions. Our algorithm maximizes a sharpness metric related to the sparsity of the signal’s expansion in distance-dependent waveletlike Fresnelet bases. We show results from simulations and experimental situations that confirm its applicability.
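The autofocus idea is a one-dimensional search: propagate the recorded wave front to a trial distance, score the reconstruction by the sparsity of its expansion in a multiresolution basis, and keep the distance with the best score. The sketch below shows that loop, assuming an angular-spectrum propagator and using an ordinary separable wavelet from PyWavelets as a stand-in for the distance-dependent Fresnelet bases; all numeric parameters are illustrative.

```python
import numpy as np
import pywt

def angular_spectrum_propagate(field, distance, wavelength, pixel_size):
    """Propagate a complex wave front by `distance` with the angular-spectrum method."""
    ny, nx = field.shape
    fx = np.fft.fftfreq(nx, d=pixel_size)
    fy = np.fft.fftfreq(ny, d=pixel_size)
    FX, FY = np.meshgrid(fx, fy)
    arg = 1.0 - (wavelength * FX) ** 2 - (wavelength * FY) ** 2
    kz = 2 * np.pi / wavelength * np.sqrt(np.maximum(arg, 0.0))  # evanescent part clipped
    transfer = np.exp(1j * kz * distance)
    return np.fft.ifft2(np.fft.fft2(field) * transfer)

def sparsity_score(image, wavelet="db4", level=3):
    """Sharpness metric: negative l1 norm of the wavelet coefficients (larger = sparser)."""
    coeffs = pywt.wavedec2(np.abs(image), wavelet, level=level)
    l1 = np.abs(coeffs[0]).sum() + sum(np.abs(c).sum() for band in coeffs[1:] for c in band)
    return -l1

def autofocus(hologram_field, distances, wavelength, pixel_size):
    """Return the trial distance whose reconstruction maximizes the sparsity score."""
    scores = [sparsity_score(angular_spectrum_propagate(hologram_field, d,
                                                        wavelength, pixel_size))
              for d in distances]
    return distances[int(np.argmax(scores))]
```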

    Comparison of source detection procedures for XMM-Newton images

    Procedures based on current methods to detect sources in X-ray images are applied to simulated XMM-Newton images. All significant instrumental effects are taken into account, and two kinds of sources are considered: unresolved sources represented by the telescope PSF, and extended ones represented by a β-profile model. Different sets of test cases with controlled and realistic input configurations are constructed in order to analyze the influence of confusion on the source analysis and also to choose the best methods and strategies to resolve the difficulties. In the general case of point-like and extended objects, the mixed approach of multiresolution (wavelet) filtering followed by detection with SExtractor gives the best results. In ideal cases of isolated sources, flux errors are within 15-20%. The maximum-likelihood technique outperforms the others for point-like sources when the PSF model used in the fit is the same as in the images; however, the number of spurious detections is quite large. The classification using the half-light radius and the SExtractor stellarity index is successful in more than 98% of the cases. This suggests that average-luminosity clusters of galaxies (L_[2-10 keV] ~ 3x10^{44} erg/s) can be detected at redshifts greater than 1.5 for moderate exposure times in the energy band below 5 keV, provided that there is no confusion or blending by nearby sources. We also find that, with the best currently available packages, confusion and completeness problems start to appear at fluxes around 6x10^{-16} erg/s/cm^2 in the [0.5-2] keV band for XMM deep surveys. (Comment: 20 pages, 16 figures. Accepted for publication in A&A.)
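The point-like versus extended classification rests on the half-light radius (combined with the SExtractor stellarity index). As a hedged illustration of why that works, the sketch below compares the half-light radius of a toy Gaussian stand-in for the PSF with that of a standard β-profile; the profile shapes and parameter values are assumptions for illustration, not the XMM PSF or the paper's pipeline.

```python
import numpy as np

def beta_model(r, s0, r_c, beta):
    """Surface brightness of the standard beta-profile used for extended sources."""
    return s0 * (1.0 + (r / r_c) ** 2) ** (0.5 - 3.0 * beta)

def gaussian_psf(r, s0, sigma):
    """Toy stand-in for the instrument PSF of a point-like source."""
    return s0 * np.exp(-0.5 * (r / sigma) ** 2)

def half_light_radius(r, profile):
    """Radius enclosing half of the total flux of a circularly symmetric profile."""
    flux_per_radius = 2.0 * np.pi * r * profile
    cumulative = np.cumsum(flux_per_radius)
    cumulative /= cumulative[-1]
    return r[np.searchsorted(cumulative, 0.5)]

r = np.linspace(0.01, 60.0, 4000)                       # radius in arcsec (illustrative)
r50_point = half_light_radius(r, gaussian_psf(r, 1.0, sigma=3.0))
r50_ext = half_light_radius(r, beta_model(r, 1.0, r_c=10.0, beta=2.0 / 3.0))

# A simple threshold on the half-light radius separates the two classes;
# the actual procedure combines this with the SExtractor stellarity index.
print(f"point-like r50 = {r50_point:.1f} arcsec, extended r50 = {r50_ext:.1f} arcsec")
```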