
    QUEST Hierarchy for Hyperspectral Face Recognition

    Face recognition is an attractive biometric due to the ease with which photographs of the human face can be acquired and processed. The non-intrusive nature of many surveillance systems permits face recognition applications to be used in a myriad of environments. Despite decades of impressive research in this area, face recognition still struggles with variations in illumination, pose, and expression, not to mention the larger challenge of willful circumvention. The integration of supporting contextual information in a fusion hierarchy known as QUalia Exploitation of Sensor Technology (QUEST) is a novel approach to hyperspectral face recognition that yields performance advantages and a robustness not seen in leading face recognition methodologies. This research demonstrates a method for the exploitation of hyperspectral imagery and the intelligent processing of contextual layers of spatial, spectral, and temporal information. The approach illustrates the benefit of integrating the spatial and spectral domains of imagery for the automatic extraction and integration of novel soft biometric features. The QUEST methodology for face recognition offers an engineering advantage in both performance and efficiency compared with leading and classical face recognition techniques. An interactive environment for testing and extending this recognition framework is also provided.

    Algorithm Development for Hyperspectral Anomaly Detection

    This dissertation proposes and evaluates a novel anomaly detection algorithm suite for ground-to-ground or air-to-ground applications requiring automatic target detection using hyperspectral (HS) data. Targets are manmade objects in natural background clutter under unknown illumination and atmospheric conditions. The use of statistical models herein is purely to motivate particular formulas for calculating anomaly output surfaces. In particular, formulas from semiparametrics are utilized to obtain novel forms for output surfaces, and alternative scoring algorithms are proposed to calculate output surfaces comparable to those of semiparametrics. Evaluation uses both simulated data and real HS data from a joint data collection effort between the Army Research Laboratory and the Army Armament Research Development & Engineering Center. A data transformation method is presented for use with the two-sample univariate semiparametric and nonparametric scoring algorithms, such that the two-sample data are mapped from their original multivariate space to a univariate domain, where the statistical power of the univariate scoring algorithms is shown to improve relative to existing multivariate scoring algorithms applied to the same two-sample data. An exhaustive simulation study is conducted to assess the performance of different HS anomaly detection techniques, where the null and alternative hypotheses are completely specified, including all parameters, using multivariate normal and mixtures of multivariate normal distributions. Finally, for ground-to-ground anomaly detection applications, where the unknown scales of targets add to the problem complexity, a novel global anomaly detection algorithm suite is introduced, featuring autonomous partial random sampling (PRS) of the data cube.
The PRS method is proposed to automatically sample the unknown background clutter in the test HS imagery; by repeating this process multiple times, one can achieve a desirably low cumulative probability of taking target samples by chance and using them as background samples. This probability is modeled by the binomial distribution family, where the only target-related parameter, the proportion of target pixels potentially covering the imagery, is shown to be robust. PRS requires a suitable scoring algorithm to compare samples, and applying PRS with the new two-step univariate detectors is shown to outperform existing multivariate detectors.
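The binomial contamination model that PRS relies on can be sketched in a few lines. The sample size, target fraction, and repetition count below are illustrative placeholders, not values from the dissertation:

```python
from math import comb

def prob_contaminated(n_pixels: int, target_fraction: float) -> float:
    """P(a random sample of n_pixels contains at least one target pixel),
    treating each pixel draw as an independent Bernoulli trial with
    success probability target_fraction."""
    return 1.0 - (1.0 - target_fraction) ** n_pixels

def prob_k_contaminated(n_samples: int, q: float, k: int) -> float:
    """Binomial P(exactly k of n_samples random samples are contaminated),
    where q is the per-sample contamination probability."""
    return comb(n_samples, k) * q ** k * (1.0 - q) ** (n_samples - k)

# Illustrative numbers: 0.1% target pixels, samples of 100 pixels,
# repeated 10 times.
q = prob_contaminated(100, 0.001)          # per-sample contamination
p_clean = prob_k_contaminated(10, q, 0)    # all 10 samples target-free
```

Because q depends only on the target-pixel proportion, the cumulative contamination probability across repetitions is controlled by that single parameter, consistent with the robustness claim above.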

    Weak lensing magnification in SpARCS

    The growth of large scale structure is one of the fundamental predictions of any cosmological model. Galaxy clusters are the highest peaks in the cosmological matter density field and therefore of prime importance in cosmology. The calibration of the high-redshift (z > 1) galaxy cluster mass-richness relation is particularly important, as it contains information about galaxy clusters in their assembly phase, when assumptions such as virial/hydrostatic equilibrium might not be valid. Measuring the mass-richness relation over a wide range in redshift will help to better understand the astrophysics of clusters over time and simultaneously provide cosmological structure growth constraints. An independent and novel method to acquire this information is the weak gravitational lensing magnification effect, which can accurately measure the masses of large samples of high-z clusters in a statistical way (i.e. through stacking) without the need to resolve background galaxies. This magnification effect leads to a change of source counts, which can then be analysed by measuring the angular cross-correlation function of optically selected Lyman-break galaxies (LBGs) and high-redshift clusters. We apply this method to the hundreds of new high-z galaxy clusters found in the SpARCS (Spitzer Adaptation of the Red-Sequence Cluster Survey) infrared survey, also observed with the CFHT in the optical ugrz-bands. We measure the cross-correlation between the positions of galaxy cluster candidates and LBGs and detect a weak lensing magnification signal for all bins at a detection significance of 2.6-5.5 sigma. In particular, the significance of the measurement for clusters with z > 1 is 4.1 sigma; for the entire cluster sample we obtain an average M200 of 1.28 (+0.23/-0.21) x 10^14 solar masses. Our measurements demonstrate the feasibility of weak lensing magnification as a viable tool for determining the average halo masses of samples of high-redshift galaxy clusters.
The results also establish the success of using galaxy over-densities to select massive clusters at z > 1. Additional studies are necessary to further model the various systematic effects discussed.
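The angular cross-correlation measurement described above can be illustrated with a toy flat-sky pair-count estimator. This is a simplified Davis-Peebles-style form on a small synthetic patch, not the estimator actually used in the thesis:

```python
import numpy as np

def angular_cross_correlation(pos_a, pos_b, rand_a, rand_b, bins):
    """Toy cross-correlation estimator w(theta) = norm * DaDb/RaRb - 1
    for two catalogs on a small flat-sky patch (positions in degrees).
    pos_* are data catalogs, rand_* are random catalogs, bins are
    angular separation bin edges."""
    def pair_counts(p, q):
        # All pairwise separations under the flat-sky approximation.
        d = np.sqrt(((p[:, None, :] - q[None, :, :]) ** 2).sum(-1))
        return np.histogram(d.ravel(), bins=bins)[0].astype(float)

    dd = pair_counts(pos_a, pos_b)
    rr = pair_counts(rand_a, rand_b)
    norm = (len(rand_a) * len(rand_b)) / (len(pos_a) * len(pos_b))
    with np.errstate(divide="ignore", invalid="ignore"):
        return np.where(rr > 0, norm * dd / rr - 1.0, 0.0)
```

A magnification signal would appear as a positive (or negative, depending on the source flux cut) w(theta) of background LBGs around foreground cluster positions; with identical data and random catalogs the estimator returns zero, as expected.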

    Theories of gravitation confronted with cosmological observations

    This thesis studies dark energy, a central topic of modern cosmology, from several points of view. It covers models and parametrizations as well as likelihood approximations for observations of the Large Scale Structure (LSS) of the Universe, which are vital both for constraining models and for our study of the S8 tension. The model studied is the α-attractors dark energy model, which reproduces current observations thanks to its ability to mimic a cosmological constant. It is inspired by the Planck-favored α-attractors class of inflationary models and links the inflationary and dark energy epochs through the same scalar field. Next-generation surveys might be able to distinguish it from ΛCDM at up to 3σ confidence. Nevertheless, in order to make the most of forthcoming data, we need a formalism that allows us to test models efficiently, such as an observationally and theoretically constrained parametrization. We show the case of w0-wa for thawing quintessence. This parametrization can reproduce the observables from recombination to the present when its parameters are chosen carefully. In addition, its probability distributions, built from purely theoretical considerations, allow us to restrict the observational bounds to the theoretically motivated region. This work shows how one can work with very general theories such as Horndeski. However, the observational constraints also depend on a correct estimation of the likelihood. We have therefore developed (and implemented in NaMaster, from LSST DESC) the Narrow Kernel Approximation (NKA) for the Gaussian (disconnected) part of the LSS pseudo-Cl covariance matrix. It reproduces the posterior distribution of the cosmological parameters accurately while reducing the computational cost from O(l_max^6) for the exact computation to O(l_max^3). Indeed, the exact computation will be intractable for next-generation LSS observations, which will resolve very small scales.
The NKA has allowed us to begin studying the origin of the S8 tension reported by KiDS-450. Using LSS data from DES and CMB data from Planck, we aim to obtain the temporal evolution of S8 and compare it with that inferred from the KiDS-450 data. In summary, this thesis has covered dark energy models and developed accurate methods to test their viability efficiently. Furthermore, it has moved closer to the data by developing the NKA, which makes accurate estimation of the LSS likelihood possible and enables our study of the origin of the S8 tension.
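As a rough illustration of the Gaussian covariance that the NKA approximates, the sketch below implements only the idealized diagonal (Knox-type) limit with an fsky scaling for a single auto power spectrum. The real NKA handles the full mode-coupled pseudo-Cl case and lives in NaMaster; none of this code reflects its actual implementation:

```python
import numpy as np

def knox_covariance(cl, nl, fsky):
    """Diagonal Gaussian (Knox) covariance for an auto power spectrum:
        Cov[C_l, C_l'] = delta_ll' * 2 * (C_l + N_l)^2 / ((2l + 1) * fsky)
    cl: signal spectrum, nl: noise spectrum, fsky: observed sky fraction.
    This is the O(l_max) idealized limit; mask-induced mode coupling is
    what makes the exact pseudo-Cl covariance cost O(l_max^6)."""
    cl = np.asarray(cl, dtype=float)
    nl = np.asarray(nl, dtype=float)
    ell = np.arange(len(cl))
    var = 2.0 * (cl + nl) ** 2 / ((2 * ell + 1) * fsky)
    return np.diag(var)
```

The per-multipole variance shrinks as 1/(2l+1): more m-modes are available at high l, which is why small-scale surveys carry so much statistical weight.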

    Cosmology intertwined: a review of the particle physics, astrophysics, and cosmology associated with the cosmological tensions and anomalies

    The standard Λ Cold Dark Matter (ΛCDM) cosmological model provides a good description of a wide range of astrophysical and cosmological data. However, there are a few big open questions that make the standard model look like an approximation to a more realistic scenario yet to be found. In this paper, we list a few important goals that need to be addressed in the next decade, taking into account the current discordances between the different cosmological probes, such as the disagreement in the value of the Hubble constant H0, the σ8-S8 tension, and other less statistically significant anomalies. While these discordances can still be in part the result of systematic errors, their persistence after several years of accurate analysis strongly hints at cracks in the standard cosmological scenario and the necessity for new physics or generalisations beyond the standard model. In this paper, we focus on the tension between the Planck CMB estimate of the Hubble constant and the SH0ES collaboration measurements. After showing the evaluations made by different teams using different methods and geometric calibrations, we list a few interesting new physics models that could alleviate this tension and discuss how the next decade's experiments will be crucial. Moreover, we focus on the tension of the Planck CMB data with weak lensing measurements and redshift surveys, about the value of the matter energy density Ωm, and the amplitude or rate of the growth of structure (σ8, fσ8). We list a few interesting models proposed for alleviating this tension, and we discuss the importance of trying to fit a full array of data with a single model and not just one parameter at a time. Additionally, we present a wide range of other less discussed anomalies at a statistical significance level lower than the H0 and σ8-S8 tensions, which may also constitute hints towards new physics, and we discuss possible generic theoretical approaches that can collectively explain the non-standard nature of these signals.
Finally, we give an overview of upgraded experiments and next-generation space missions and facilities on Earth that will be of crucial importance to address all these open questions.
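The significance of a tension such as the one between the Planck and SH0ES Hubble constant estimates is, in the simplest Gaussian approximation, the difference of the two central values in units of their combined uncertainty. The numbers below are illustrative round values close to the published Planck 2018 and SH0ES figures, not values taken from this review:

```python
from math import sqrt

def gaussian_tension_sigma(x1, s1, x2, s2):
    """Number of standard deviations separating two independent
    Gaussian measurements (x1 +/- s1) and (x2 +/- s2) of one quantity."""
    return abs(x1 - x2) / sqrt(s1 ** 2 + s2 ** 2)

# Illustrative round values in km/s/Mpc (roughly Planck CMB vs. SH0ES):
tension = gaussian_tension_sigma(67.4, 0.5, 73.0, 1.0)  # ~5 sigma
```

This simple metric ignores non-Gaussian posteriors and correlated systematics, which is part of why the review stresses comparisons across many independent teams and calibrations.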

    Camera systematics and three-point correlations in modern photometric galaxy surveys

    The goal of modern cosmology, broadly speaking, is to understand the behavior of the Universe at large scales, including the evolution of dark matter and dark energy over cosmic time. In the context of the modern paradigm of a universe dominated by dark energy and cold dark matter (LCDM), the goal is to detect deviations from LCDM predictions (new physics) and, in the absence of those, to infer the values of the LCDM parameters. Advances in this endeavor will require both improved constraints on systematic errors in raw astronomical data and improved statistical methods for extracting cosmological information from galaxy catalogs. Toward these ends, the first half of this thesis discusses methods for improving our ability to make precise and accurate measurements of galaxies using astronomical CCD imaging cameras. The second half discusses a novel application of a statistical probe of the cosmic web of dark matter, the galaxy three-point correlation function, to photometric galaxy surveys, which allows us to extract more information of cosmological interest from the observed galaxy distribution. Both lines of research will be useful in future analyses of data from upcoming optical galaxy surveys, including the Large Synoptic Survey Telescope.
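As a minimal illustration of the three-point statistic mentioned above, the sketch below evaluates the connected three-point function of a toy 1D periodic overdensity field at two lags. Real galaxy 3PCF analyses work with triplet counts over catalogs and random points; this is only the conceptual definition, not the thesis's estimator:

```python
import numpy as np

def three_point_1d(delta, r1, r2):
    """Connected three-point function <d(x) d(x+r1) d(x+r2)> of a 1D
    periodic overdensity field, averaged over all positions x.
    A toy analogue of the galaxy three-point correlation function."""
    d = np.asarray(delta, dtype=float)
    d = d - d.mean()                      # work with the fluctuation field
    return float(np.mean(d * np.roll(d, -r1) * np.roll(d, -r2)))
```

For a Gaussian field all odd moments vanish, so a nonzero 3PCF isolates the non-Gaussian information (e.g. from gravitational collapse) that the two-point function misses; this is the extra cosmological information the thesis targets.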

    Remote Sensing

    This dual conception of remote sensing brought us to the idea of preparing two books: in addition to the first book, which presents recent advances in remote sensing applications, this book is devoted to new techniques for data processing, sensors, and platforms. We do not intend this book to cover all aspects of remote sensing techniques and platforms, since that would be an impossible task for a single volume. Instead, we have collected a number of high-quality, original, and representative contributions in those areas.