
    A PCA-based automated finder for galaxy-scale strong lenses

    We present an algorithm that uses Principal Component Analysis (PCA) to subtract galaxies from imaging data, together with two algorithms to find strong, galaxy-scale gravitational lenses in the resulting residual images. The combined method is optimized to find full or partial Einstein rings. Starting from a pre-selection of potential massive galaxies, we first perform a PCA to build a set of basis vectors. The galaxy images are reconstructed using the PCA basis and subtracted from the data. We then filter the residual image with two different methods. The first applies a curvelet (curved wavelet) filter to the residual images to enhance any curved or ring-like feature. The resulting image is transformed into polar coordinates, centered on the lens galaxy. In these coordinates a ring becomes a line, allowing us to detect very faint rings by taking advantage of the signal-to-noise integrated along the ring. The second method identifies structures in the PCA-subtracted residual images and assesses whether they are lensed images according to their orientation, multiplicity and elongation. We apply the two methods to a sample of simulated Einstein rings, as they would be observed with the ESA Euclid satellite in the VIS band. The polar-coordinate transform reaches a completeness of 90% and a purity of 86% once the signal-to-noise integrated over the ring exceeds 30, almost independently of the size of the Einstein ring. Finally, we show with real data that our PCA-based galaxy subtraction scheme performs better than traditional subtraction based on model fitting to the data. Our algorithm can be developed further using machine learning and dictionary learning methods, which would extend its capabilities to more complex and diverse galaxy shapes.
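    A minimal sketch, in Python, of the two steps the abstract describes: projecting a galaxy stamp onto a PCA basis built from a training sample and subtracting the reconstruction, then resampling the residual onto polar coordinates so that a ring becomes a line. This is not the authors' pipeline; the stamp shapes, component count and noise handling below are assumptions for illustration.

    import numpy as np
    from scipy.ndimage import map_coordinates

    def pca_subtract(stamps, target, n_components=20):
        """Subtract a PCA reconstruction of a galaxy from its image stamp.

        stamps : (N, npix*npix) array of flattened training galaxy images.
        target : (npix, npix) image of the galaxy to subtract.
        """
        mean = stamps.mean(axis=0)
        # Principal components of the centred training set (SVD is PCA here).
        _, _, vt = np.linalg.svd(stamps - mean, full_matrices=False)
        basis = vt[:n_components]                    # (k, npix*npix)
        vec = target.ravel() - mean
        model = mean + basis.T @ (basis @ vec)       # projection onto the basis
        return target - model.reshape(target.shape)  # residual image

    def to_polar(residual, n_r=64, n_theta=180):
        """Resample a residual stamp onto an (r, theta) grid centred on the
        galaxy; an Einstein ring becomes a horizontal line in these coordinates."""
        ny, nx = residual.shape
        cy, cx = (ny - 1) / 2.0, (nx - 1) / 2.0
        r = np.linspace(0.0, min(cx, cy), n_r)
        theta = np.linspace(0.0, 2 * np.pi, n_theta, endpoint=False)
        rr, tt = np.meshgrid(r, theta, indexing="ij")
        coords = np.array([cy + rr * np.sin(tt), cx + rr * np.cos(tt)])
        return map_coordinates(residual, coords, order=1)

    # A ring then shows up as a row with high summed signal-to-noise, e.g.
    # ring_snr = polar_image.sum(axis=1) / (noise_sigma * np.sqrt(n_theta))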

    Data Deluge in Astrophysics: Photometric Redshifts as a Template Use Case

    Astronomy has entered the big data era, and machine-learning-based methods have found widespread use in a large variety of astronomical applications, as demonstrated by the recent sharp increase in the number of publications adopting this approach. The use of machine learning methods is, however, still far from trivial, and many problems remain to be solved. Using the evaluation of photometric redshifts as a case study, we outline the main problems and some ongoing efforts to solve them. Comment: 13 pages, 3 figures, Springer's Communications in Computer and Information Science (CCIS), Vol. 82.
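    As a minimal illustration of the kind of machine-learning photometric-redshift estimate this case study refers to, the sketch below trains a random-forest regressor on synthetic magnitudes; the bands, sample sizes and quality metric are placeholders and assumptions, not values or choices from the paper.

    import numpy as np
    from sklearn.ensemble import RandomForestRegressor
    from sklearn.model_selection import train_test_split

    # Toy data: magnitudes in five hypothetical bands plus spectroscopic redshifts.
    rng = np.random.default_rng(0)
    mags = rng.normal(22.0, 1.5, size=(5000, 5))   # stand-in photometry
    z_spec = rng.uniform(0.0, 2.0, size=5000)      # stand-in "true" redshifts

    colors = mags[:, :-1] - mags[:, 1:]            # colours are typical features
    X = np.hstack([mags, colors])

    X_train, X_test, z_train, z_test = train_test_split(
        X, z_spec, test_size=0.25, random_state=0)
    model = RandomForestRegressor(n_estimators=200, random_state=0)
    model.fit(X_train, z_train)
    z_phot = model.predict(X_test)

    # A common quality metric: normalised median absolute deviation of
    # (z_phot - z_spec) / (1 + z_spec).
    dz = (z_phot - z_test) / (1.0 + z_test)
    nmad = 1.4826 * np.median(np.abs(dz - np.median(dz)))
    print(f"NMAD = {nmad:.3f}")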

    A test of the Suyama-Yamaguchi inequality from weak lensing

    We investigate the weak lensing signature of primordial non-Gaussianities of the local type by constraining the magnitude of the weak convergence bi- and trispectra expected for the EUCLID weak lensing survey. Starting from expressions for the weak convergence spectra, bispectra and trispectra, whose relative magnitudes we investigate as a function of scale, we compute their respective signal-to-noise ratios by relating the polyspectra's amplitudes to their Gaussian covariance, using a Monte-Carlo technique to carry out the configuration-space integrations. Computing the Fisher matrix for the non-Gaussianity parameters f_nl, g_nl and tau_nl with a very similar technique, we derive Bayesian evidences for a violation of the Suyama-Yamaguchi relation tau_nl >= (6 f_nl/5)^2 as a function of the true f_nl and tau_nl values, and show that the relation can be probed down to levels of f_nl ~ 10^2 and tau_nl ~ 10^5. In a related study, we derive analytical expressions for the probability density that the SY relation is exactly fulfilled, as required by models in which a single field generates the perturbations. We conclude with an outlook on the levels of non-Gaussianity that can be probed with tomographic lensing surveys.
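    A toy sketch of the Fisher-matrix step described above: given derivative vectors of the binned polyspectra with respect to (f_nl, g_nl, tau_nl) and a Gaussian covariance, the Fisher matrix and forecast errors follow from standard linear algebra. The numbers below are random placeholders, not the paper's spectra, and the final function simply restates the Suyama-Yamaguchi inequality.

    import numpy as np

    # Hypothetical derivative vectors of the binned bi-/trispectrum with respect
    # to (f_nl, g_nl, tau_nl), and a diagonal Gaussian covariance.
    n_bins = 50
    rng = np.random.default_rng(1)
    derivs = rng.normal(size=(3, n_bins))          # dD/dtheta_i, stand-in numbers
    cov = np.diag(rng.uniform(0.5, 2.0, n_bins))   # Gaussian covariance

    cov_inv = np.linalg.inv(cov)
    fisher = derivs @ cov_inv @ derivs.T           # F_ij = d_i^T C^-1 d_j
    param_cov = np.linalg.inv(fisher)              # forecast parameter covariance

    sigma = np.sqrt(np.diag(param_cov))
    print(dict(zip(["f_nl", "g_nl", "tau_nl"], sigma)))

    def sy_satisfied(f_nl, tau_nl):
        """Suyama-Yamaguchi relation: tau_nl >= (6 f_nl / 5)^2."""
        return tau_nl >= (6.0 * f_nl / 5.0) ** 2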

    Information-theoretic Physical Layer Security for Satellite Channels

    Shannon introduced the classic model of a cryptosystem in 1949, in which Eve has access to an identical copy of the ciphertext that Alice sends to Bob. Shannon defined perfect secrecy as the case where the mutual information between the plaintext and the ciphertext is zero. Perfect secrecy is motivated by error-free transmission and requires that Bob and Alice share a secret key. Wyner in 1975, and later I. Csiszár and J. Körner in 1978, modified the Shannon model by assuming that the channels are noisy, and proved that secrecy can be achieved without sharing a secret key. This model is called the wiretap channel model, and its secrecy capacity is known when Eve's channel is noisier than Bob's. In this paper we review the concept of wiretap coding from the satellite channel viewpoint. We also review subsequently introduced stronger secrecy levels, which can be numerically quantified and are keyless and unconditionally secure under certain assumptions. We introduce the general construction of wiretap coding and analyse its applicability for a typical satellite channel. From this analysis we discuss the potential of keyless information-theoretic physical layer security for satellite channels based on wiretap coding. We also identify system design implications for enabling simultaneous operation with additional information-theoretic security protocols.
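    For the degraded Gaussian wiretap channel, the secrecy capacity is the positive part of the difference between Bob's and Eve's channel capacities. The sketch below evaluates this standard expression; the SNR values are illustrative assumptions, not figures from the paper.

    import numpy as np

    def awgn_capacity(snr):
        """Shannon capacity of a real AWGN channel in bits per channel use."""
        return 0.5 * np.log2(1.0 + snr)

    def secrecy_capacity(snr_bob, snr_eve):
        """Secrecy capacity of a degraded Gaussian wiretap channel:
        the positive part of the gap between Bob's and Eve's capacities."""
        return max(awgn_capacity(snr_bob) - awgn_capacity(snr_eve), 0.0)

    # Illustrative link budget: Bob at 10 dB SNR, Eve at 3 dB.
    snr_b = 10 ** (10.0 / 10.0)
    snr_e = 10 ** (3.0 / 10.0)
    print(f"Secrecy capacity approx. {secrecy_capacity(snr_b, snr_e):.3f} bit/use")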