
    SLIC Based Digital Image Enlargement

    Low resolution image enhancement is a classical computer vision problem. Selecting the best method to reconstruct an image at a higher resolution from the limited data available in the low-resolution image is quite a challenge. A major drawback of the existing enlargement techniques is the introduction of color bleeding while interpolating pixels over the edges that separate distinct colors in an image. This color bleeding accentuates the edges with new colors as a result of blending multiple colors over adjacent regions. This paper proposes a novel approach to mitigate the color bleeding by segmenting the homogeneous color regions of the image using Simple Linear Iterative Clustering (SLIC) and applying a higher order interpolation technique separately on the isolated segments. The interpolation at the boundaries of each of the isolated segments is handled by using a morphological operation. The approach is evaluated by comparing against several frequently used image enlargement methods, such as bilinear and bicubic interpolation, by means of the Peak Signal-to-Noise Ratio (PSNR) value. The results obtained show that the proposed method outperforms the baseline methods in terms of PSNR and also mitigates the color bleeding at the edges, which improves the overall appearance.
    Comment: 6 pages
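    The following is a minimal sketch of the segment-wise enlargement idea described in the abstract, using scikit-image's SLIC implementation. The segment count, compactness, scale factor and the use of bicubic interpolation per segment are illustrative assumptions, and the paper's morphological handling of segment boundaries is omitted; this is not the authors' exact pipeline.

```python
import numpy as np
from skimage.segmentation import slic
from skimage.transform import resize
from skimage.metrics import peak_signal_noise_ratio

def slic_enlarge(image, scale=2, n_segments=200, compactness=10.0):
    """Enlarge an RGB float image in [0, 1] by interpolating each SLIC superpixel separately."""
    labels = slic(image, n_segments=n_segments, compactness=compactness)
    out_shape = (image.shape[0] * scale, image.shape[1] * scale, image.shape[2])
    output = np.zeros(out_shape, dtype=float)

    # Upsample the label map with nearest-neighbour so segment boundaries stay sharp.
    labels_up = resize(labels, out_shape[:2], order=0,
                       preserve_range=True, anti_aliasing=False).astype(int)

    for seg in np.unique(labels):
        mask = (labels == seg).astype(float)
        # Higher-order (bicubic) interpolation applied to the isolated segment only.
        seg_up = resize(image * mask[..., None], out_shape, order=3)
        mask_up = (labels_up == seg)
        output[mask_up] = seg_up[mask_up]
    return np.clip(output, 0.0, 1.0)

# Usage (assuming `lowres` is a float image in [0, 1] and `truth` the ground-truth image):
# enlarged = slic_enlarge(lowres, scale=2)
# baseline = resize(lowres, enlarged.shape, order=3)
# print(peak_signal_noise_ratio(truth, enlarged), peak_signal_noise_ratio(truth, baseline))
```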

    A PCA-based approach for subtracting thermal background emission in high-contrast imaging data

    Ground-based observations at thermal infrared wavelengths suffer from large background radiation due to the sky, telescope and warm surfaces in the instrument. This significantly limits the sensitivity of ground-based observations at wavelengths longer than 3 microns. We analyze this background emission in infrared high contrast imaging data, show how it can be modelled and subtracted, and demonstrate that it can improve the detection of faint sources, such as exoplanets. We applied principal component analysis (PCA) to model and subtract the thermal background emission in three archival high contrast angular differential imaging datasets in the M and L filters. We describe how the algorithm works and explain how it can be applied. The results of the background subtraction are compared to the results from a conventional mean background subtraction scheme. Finally, both methods for background subtraction are also compared by performing complete data reductions. We analyze the results from the M band dataset of HD100546 qualitatively. For the M band dataset of beta Pic and the L band dataset of HD169142, which was obtained with an annular groove phase mask vortex vector coronagraph, we also calculate and analyze the achieved signal-to-noise ratio (S/N). We show that applying PCA is an effective way to remove spatially and temporally varying thermal background emission down to close to the background limit. The procedure also proves to be very successful at reconstructing the background that is hidden behind the PSF. In the complete data reductions, we find at least qualitative improvements for HD100546 and HD169142; however, we fail to find a significant increase in the S/N of beta Pic b. We discuss these findings and argue that in particular datasets with strongly varying observing conditions or infrequently sampled sky background will benefit from the new approach.
    Comment: 12 pages, 17 figures, 1 table. Accepted for publication in A&A
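    Below is a minimal sketch of PCA-based background subtraction in the spirit of the approach described above: principal components are built from background-dominated frames and fitted to each science frame with the stellar PSF masked out. The frame shapes, the number of components and the simple boolean mask are illustrative assumptions, not the paper's actual implementation.

```python
import numpy as np

def pca_background_subtract(science, sky_frames, n_components=20, mask=None):
    """Model and subtract the thermal background of `science` using a library of sky frames.

    science    : 2-D array (ny, nx)
    sky_frames : 3-D array (n_frames, ny, nx) of background-dominated frames
    mask       : optional boolean array, True where astrophysical signal (PSF) sits
                 and should be excluded from the fit
    """
    nf, ny, nx = sky_frames.shape
    lib = sky_frames.reshape(nf, -1)
    mean_bg = lib.mean(axis=0)
    lib = lib - mean_bg

    # Principal components of the background library (orthonormal rows of Vt).
    _, _, vt = np.linalg.svd(lib, full_matrices=False)
    comps = vt[:n_components]                      # shape (k, ny*nx)

    target = science.ravel() - mean_bg
    if mask is not None:
        good = ~mask.ravel()
        # Fit component amplitudes only on pixels free of source signal,
        # so the reconstruction also recovers the background hidden behind the PSF.
        coeffs, *_ = np.linalg.lstsq(comps[:, good].T, target[good], rcond=None)
    else:
        coeffs = comps @ target

    background = mean_bg + coeffs @ comps
    return science - background.reshape(ny, nx)
```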

    Simulations of Strong Gravitational Lensing with Substructure

    Galaxy-sized gravitational lenses are simulated by combining a cosmological N-body simulation and models for the baryonic component of the galaxy. The lens caustics, critical curves, image locations and magnification ratios are calculated by ray-shooting on an adaptive grid. When the source is near a cusp in a smooth lens's caustic, the sum of the signed magnifications of the three closest images should be close to zero. It is found that in the observed cases this sum is generally too large to be consistent with the simulations, implying that there is not enough substructure in the simulations. This suggests that other factors play an important role. These may include limited numerical resolution, lensing by structure outside the halo, selection bias and the possibility that a randomly selected galaxy halo may be more irregular, for example due to recent mergers, than the isolated halo used in this study. It is also shown that, with the level of substructure computed from the N-body simulations, the image magnifications of the Einstein cross type lenses are very weak functions of source size up to ~1 kpc. This is also true for the magnification ratios of widely separated images in the fold and cusp caustic lenses. This means that the magnification ratios for the different emission regions of a lensed quasar should agree with each other, barring microlensing by stars. The source size dependence of the magnification ratio between the closest pair of images is more sensitive to substructure.
    Comment: 28 pages, 2 tables and 14 figures. Accepted to MNRAS
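    The following is a minimal sketch of the inverse ray-shooting idea underlying such simulations: rays on a regular image-plane grid are mapped to the source plane through the lens equation beta = theta - alpha(theta), and the magnification map follows from the density of rays collected in source-plane pixels. The point-mass deflection and the fixed (non-adaptive) grid used here are illustrative assumptions; the paper uses an N-body halo plus a baryonic model on an adaptive grid.

```python
import numpy as np

def ray_shoot_magnification(alpha_fn, half_width=2.0, n_rays=2000, n_src_bins=200):
    """Return a source-plane magnification map from uniformly spaced image-plane rays."""
    # Regular grid of rays in the image plane (angles in units of the Einstein radius).
    theta = np.linspace(-half_width, half_width, n_rays)
    tx, ty = np.meshgrid(theta, theta)

    # Lens equation: beta = theta - alpha(theta).
    ax, ay = alpha_fn(tx, ty)
    bx, by = tx - ax, ty - ay

    # Bin ray positions in the source plane; magnification = ray density relative to no lensing.
    counts, _, _ = np.histogram2d(bx.ravel(), by.ravel(), bins=n_src_bins,
                                  range=[[-half_width, half_width]] * 2)
    rays_per_pixel_unlensed = (n_rays / n_src_bins) ** 2
    return counts / rays_per_pixel_unlensed

def point_mass_deflection(tx, ty, theta_e=1.0):
    """Deflection angle of a point-mass lens with Einstein radius theta_e."""
    r2 = tx**2 + ty**2 + 1e-12
    return theta_e**2 * tx / r2, theta_e**2 * ty / r2

# Example: mu_map = ray_shoot_magnification(point_mass_deflection)
```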

    Small-scale structures of dark matter and flux anomalies in quasar gravitational lenses

    We investigate the statistics of flux anomalies in gravitationally lensed quasi-stellar objects as a function of dark matter halo properties such as substructure content and halo ellipticity. We do this by creating a very large number of simulated lenses with finite source sizes to compare with the data. After analysing these simulations, we conclude the following. (1) The finite size of the source is important. The point source approximation commonly used can cause biased results. (2) The widely used Rcusp statistic is sensitive to halo ellipticity as well as the lens's substructure content. (3) For compact substructure, we find new upper bounds on the amount of substructure from the fact that no simple single-galaxy lenses have been observed with a single source having more than four well separated images. (4) The frequency of image flux anomalies is largely dependent on the total surface mass density in substructures and the size-mass relation for the substructures, and not on the range of substructure masses. (5) Substructure models with the same size-mass relation produce similar numbers of flux anomalies even when their internal mass profiles are different. (6) The lack of high image multiplicity lenses puts a limit on a combination of the substructures' size-mass relation, surface density and mass. (7) Substructures with shallower mass profiles and/or larger sizes produce fewer extra images. (8) The constraints that we are able to measure here with current data are roughly consistent with Λ cold dark matter (ΛCDM) N-body simulations.
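    As a minimal sketch of the Rcusp statistic mentioned in point (2): for the three close images of a source near a cusp, the signed magnifications of a smooth lens should nearly cancel, so a significant deviation of Rcusp from zero flags a flux anomaly. The example magnifications below are purely illustrative.

```python
import numpy as np

def r_cusp(mu_a, mu_b, mu_c):
    """Rcusp = (mu_A + mu_B + mu_C) / (|mu_A| + |mu_B| + |mu_C|).

    Signed magnifications: the middle (saddle-point) image of the cusp triplet has
    opposite parity, so the numerator vanishes for an ideal smooth lens.
    """
    mags = np.array([mu_a, mu_b, mu_c], dtype=float)
    return mags.sum() / np.abs(mags).sum()

# Example: a nearly smooth configuration vs. an anomalous one.
print(r_cusp(9.8, -19.5, 10.0))   # ~0.008, consistent with a smooth lens
print(r_cusp(9.8, -12.0, 10.0))   # ~0.25, a possible substructure signature
```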

    Photo-z Performance for Precision Cosmology

    Current and future weak lensing surveys will rely on photometrically estimated redshifts of very large numbers of galaxies. In this paper, we address several different aspects of the demanding photo-z performance that will be required for future experiments, such as the proposed ESA Euclid mission. It is first shown that the proposed all-sky near-infrared photometry from Euclid, in combination with anticipated ground-based photometry (e.g. PanStarrs-2 or DES), should yield the required precision in individual photo-z of sigma(z) < 0.05(1+z) at I_AB < 24.5. Simple a priori rejection schemes based on the photometry alone can be tuned to recognise objects with wildly discrepant photo-z and to reduce the outlier fraction to < 0.25% with only modest loss of otherwise usable objects. Turning to the more challenging problem of determining the mean redshift of a set of galaxies to a precision of 0.002(1+z), we argue that, for many different reasons, this is best accomplished by relying on the photo-z themselves rather than on the direct measurement of the mean redshift from spectroscopic redshifts of a representative subset of the galaxies. A simple adaptive scheme based on the statistical properties of the photo-z likelihood functions is shown to meet this stringent systematic requirement. We also examine the effect of an imprecise correction for Galactic extinction and the effects of contamination by fainter overlapping objects in photo-z determination. The overall conclusion of this work is that the acquisition of photometrically estimated redshifts with the precision required for Euclid, or other similar experiments, will be challenging but possible. (abridged)
    Comment: 16 pages, 11 figures; submitted to MNRAS
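    Below is a minimal sketch of the two photo-z quality metrics discussed above, the scatter sigma(z)/(1+z) and the outlier fraction, computed from matched photometric and spectroscopic redshifts. The NMAD scatter estimator and the 0.15 outlier threshold are common conventions assumed here, not necessarily the exact choices of the paper.

```python
import numpy as np

def photoz_metrics(z_phot, z_spec, outlier_cut=0.15):
    """Return (sigma_nmad, outlier_fraction) for normalised residuals (z_phot - z_spec)/(1 + z_spec)."""
    z_phot = np.asarray(z_phot, dtype=float)
    z_spec = np.asarray(z_spec, dtype=float)
    dz = (z_phot - z_spec) / (1.0 + z_spec)

    # Normalised median absolute deviation: robust against catastrophic outliers.
    sigma_nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    outlier_fraction = np.mean(np.abs(dz) > outlier_cut)
    return sigma_nmad, outlier_fraction

# Example against a Euclid-style requirement of sigma(z) < 0.05(1+z)
# (assuming `catalog` holds matched columns "z_phot" and "z_spec"):
# sigma, f_out = photoz_metrics(catalog["z_phot"], catalog["z_spec"])
# print(sigma < 0.05, f_out)
```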