107 research outputs found

    The expected performance of stellar parametrization with Gaia spectrophotometry

    Full text link
    Gaia will obtain astrometry and spectrophotometry for essentially all sources in the sky down to a broad-band magnitude limit of G=20, an expected yield of 10^9 stars. Its main scientific objective is to reveal the formation and evolution of our Galaxy through chemo-dynamical analysis. In addition to inferring positions, parallaxes and proper motions from the astrometry, we must also infer the astrophysical parameters of the stars from the spectrophotometry, the BP/RP spectrum. Here we investigate the performance of three different algorithms (SVM, ILIUM, Aeneas) for estimating the effective temperature, line-of-sight interstellar extinction, metallicity and surface gravity of A-M stars over a wide range of these parameters and over the full magnitude range Gaia will observe (G=6-20 mag). One of the algorithms, Aeneas, infers the posterior probability density function over all parameters, and can optionally take into account the parallax and the Hertzsprung-Russell diagram to improve the estimates. For all algorithms the accuracy of estimation depends on G and on the value of the parameters themselves, so a broad summary of performance is only approximate. For stars at G=15 with less than two magnitudes of extinction, we expect to be able to estimate Teff to within 1%, logg to 0.1-0.2 dex, and [Fe/H] (for FGKM stars) to 0.1-0.2 dex, just using the BP/RP spectrum (mean absolute error statistics are quoted). Performance degrades at larger extinctions, but not always by a large amount. Extinction can be estimated to an accuracy of 0.05-0.2 mag for stars across the full parameter range with a priori unknown extinction between 0 and 10 mag. Performance degrades at fainter magnitudes, but even at G=19 we can estimate logg to better than 0.2 dex for all spectral types, and [Fe/H] to within 0.35 dex for FGKM stars, for extinctions below 1 mag. Comment: MNRAS, in press. Minor corrections made in v
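
    As a loose illustration of the regression task described above, the hypothetical sketch below fits a support-vector regressor that maps simulated low-resolution spectra to effective temperature. The synthetic blackbody-like spectra, wavelength grid, train/test split and SVR hyperparameters are assumptions made for illustration only; they are not the Gaia BP/RP simulations or the paper's pipeline configuration.

        # Minimal sketch: estimate Teff from simulated low-resolution spectra with an
        # SVM regressor, loosely in the spirit of the SVM baseline discussed above.
        # All data below are synthetic stand-ins for real BP/RP spectra.
        import numpy as np
        from sklearn.pipeline import make_pipeline
        from sklearn.preprocessing import StandardScaler
        from sklearn.svm import SVR

        rng = np.random.default_rng(0)
        n_stars, n_pix = 2000, 120                      # assumed training-set size / spectral pixels

        teff = rng.uniform(4000.0, 10000.0, n_stars)    # effective temperatures in K
        wave = np.linspace(330.0, 1050.0, n_pix)        # nm, roughly BP/RP wavelength coverage
        # Crude blackbody-shaped fluxes plus noise stand in for real spectra.
        flux = wave[None, :] ** -5 / np.expm1(1.44e7 / (wave[None, :] * teff[:, None]))
        flux /= flux.sum(axis=1, keepdims=True)         # normalise each spectrum
        flux += rng.normal(0.0, 2e-5, flux.shape)       # add photometric noise

        model = make_pipeline(StandardScaler(), SVR(C=10.0, epsilon=0.01))
        model.fit(flux[:1500], np.log10(teff[:1500]))
        pred = 10 ** model.predict(flux[1500:])
        print("mean absolute error on Teff: %.1f K" % np.abs(pred - teff[1500:]).mean())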

    DeepSphere: Efficient spherical Convolutional Neural Network with HEALPix sampling for cosmological applications

    Full text link
    Convolutional Neural Networks (CNNs) are a cornerstone of the Deep Learning toolbox and have led to many breakthroughs in Artificial Intelligence. These networks have mostly been developed for regular Euclidean domains such as those supporting images, audio, or video. Because of their success, CNN-based methods are becoming increasingly popular in Cosmology. Cosmological data often come as spherical maps, which make the use of traditional CNNs more complicated. The commonly used pixelization scheme for spherical maps is the Hierarchical Equal Area isoLatitude Pixelisation (HEALPix). We present a spherical CNN for the analysis of full and partial HEALPix maps, which we call DeepSphere. The spherical CNN is constructed by representing the sphere as a graph. Graphs are versatile data structures that can act as a discrete representation of a continuous manifold. Using the graph-based representation, we define many of the standard CNN operations, such as convolution and pooling. With filters restricted to being radial, our convolutions are equivariant to rotation on the sphere, and DeepSphere can be made invariant or equivariant to rotation. In this way, DeepSphere is a special case of a graph CNN, tailored to the HEALPix sampling of the sphere. This approach is computationally more efficient than using spherical harmonics to perform convolutions. We demonstrate the method on a classification problem of weak lensing mass maps from two cosmological models and compare the performance of the CNN with that of two baseline classifiers. The results show that the performance of DeepSphere is always superior or equal to that of both baselines. For high noise levels and for data covering only a small fraction of the sphere, DeepSphere typically achieves 10% better classification accuracy than those baselines. Finally, we show how the learned filters can be visualized to introspect the neural network. Comment: arXiv admin note: text overlap with arXiv:astro-ph/0409513 by other authors
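
    The core idea above (treat the HEALPix map as a graph and convolve with radial filters, i.e. polynomials of the graph Laplacian) can be sketched in a few lines. The code below is an illustrative toy reimplementation, not the published DeepSphere package; the nside, filter order and filter coefficients are arbitrary assumptions.

        # Toy sketch: build a graph from HEALPix pixel neighbours and apply one
        # polynomial filter of the graph Laplacian (an isotropic/radial convolution).
        import numpy as np
        import healpy as hp
        import scipy.sparse as sp

        nside = 16
        npix = hp.nside2npix(nside)

        # Adjacency matrix from the (up to) 8 HEALPix neighbours of every pixel.
        neigh = hp.get_all_neighbours(nside, np.arange(npix))   # shape (8, npix); -1 = missing
        rows = np.repeat(np.arange(npix), 8)
        cols = neigh.T.ravel()
        ok = cols >= 0
        A = sp.coo_matrix((np.ones(ok.sum()), (rows[ok], cols[ok])), shape=(npix, npix))
        A = ((A + A.T) > 0).astype(float)                       # symmetrise

        # Normalised graph Laplacian L = I - D^{-1/2} A D^{-1/2}.
        deg = np.asarray(A.sum(axis=1)).ravel()
        d_inv = sp.diags(1.0 / np.sqrt(deg))
        L = sp.eye(npix) - d_inv @ A @ d_inv

        # One "convolution": a degree-2 polynomial in L with assumed filter weights.
        theta = np.array([0.5, -0.3, 0.1])
        x = np.random.default_rng(0).normal(size=npix)          # toy map (e.g. a mass map)
        y = theta[0] * x + theta[1] * (L @ x) + theta[2] * (L @ (L @ x))
        print(y.shape)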

    Fast emulation of cosmological density fields based on dimensionality reduction and supervised machine-learning

    Full text link
    N-body simulations are the most powerful method to study the non-linear evolution of large-scale structure. However, they require large amounts of computational resources, making their direct adoption unfeasible in scenarios that require broad explorations of parameter spaces. In this work, we show that it is possible to perform fast dark matter density field emulations with competitive accuracy using simple machine-learning approaches. We build an emulator based on dimensionality reduction and machine-learning regression, combining a simple Principal Component Analysis with supervised learning methods. For estimations with a single free parameter, we train on the dark matter density parameter, Ω_m, while for emulations with two free parameters, we train on a range of Ω_m and redshift. The method first projects a grid of simulations onto a given basis; then, a machine-learning regression is trained on this projected grid. Finally, new density cubes for different cosmological parameters can be estimated without relying directly on new N-body simulations by predicting and de-projecting the basis coefficients. We show that the proposed emulator can generate density cubes at non-linear cosmological scales with density distributions within a few percent of the corresponding N-body simulations. The method yields gains of three orders of magnitude in CPU run time compared to performing a full N-body simulation, while reproducing the power spectrum and bispectrum within ~1% and ~3%, respectively, for the single-free-parameter emulation, and ~5% and ~15% for two free parameters. This can significantly accelerate the generation of density cubes for a wide variety of cosmological models, opening the doors to previously unfeasible applications, such as parameter and model inference at the full survey scale of the ESA/NASA Euclid mission. Comment: 10 pages, 6 figures. To be submitted to A&A. Comments are welcome
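
    A minimal sketch of the projection-regression-deprojection scheme described above is given below, with random arrays standing in for the N-body density cubes. The number of components, the gradient-boosting regressor and the Omega_m training grid are illustrative assumptions, not the paper's actual configuration.

        # Sketch: PCA dimensionality reduction + supervised regression on the PCA
        # coefficients, then de-projection to emulate a density cube for a new Omega_m.
        import numpy as np
        from sklearn.decomposition import PCA
        from sklearn.ensemble import GradientBoostingRegressor
        from sklearn.multioutput import MultiOutputRegressor

        rng = np.random.default_rng(1)
        n_sims, n_cells = 40, 32 ** 3                   # assumed training set / flattened 32^3 cube
        omega_m = np.linspace(0.2, 0.4, n_sims)         # training values of Omega_m
        fields = rng.normal(size=(n_sims, n_cells)) * omega_m[:, None]   # toy stand-in fields

        # 1) Project the grid of simulations onto a PCA basis.
        pca = PCA(n_components=10)
        coeffs = pca.fit_transform(fields)

        # 2) Train a regression from the cosmological parameter to the coefficients.
        reg = MultiOutputRegressor(GradientBoostingRegressor())
        reg.fit(omega_m[:, None], coeffs)

        # 3) Emulate: predict coefficients for a new cosmology and de-project.
        new_coeffs = reg.predict(np.array([[0.31]]))
        emulated_field = pca.inverse_transform(new_coeffs)      # shape (1, n_cells)
        print(emulated_field.shape)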

    Aspects of kernel based learning algorithms

    Get PDF

    Motion-Augmented Inference and Joint Kernels in Structured Learning for Object Tracking and Integration with Object Segmentation

    Get PDF
    Video object tracking is a fundamental task of continuously following an object of interest in a video sequence. It has attracted considerable attention in both academia and industry due to its diverse applications, such as automated video surveillance, augmented and virtual reality, medical imaging, automated vehicle navigation and tracking, and smart devices. Challenges in video object tracking arise from occlusion, deformation, background clutter, illumination variation, fast object motion, scale variation, low resolution, rotation, out-of-view objects, and motion blur. Object tracking therefore remains an active research field. This thesis explores improving object tracking by employing 1) advanced techniques in machine learning theory to account for intrinsic changes in the object appearance under those challenging conditions, and 2) object segmentation. More specifically, we propose a fast and competitive method for object tracking by modeling target dynamics as a stochastic process and using structured support vector machines. First, we predict target dynamics by harmonic means and a particle filter, in which we exploit kernel machines to derive a new entropy-based observation likelihood distribution. Second, we employ online structured support vector machines to model object appearance, where we analyze the responses of several kernel functions for various feature descriptors and study how such kernels can be optimally combined to formulate a single joint kernel function. During learning, we develop a probability formulation to determine model updates and use a sequential minimal optimization step to solve the structured optimization problem. We gain efficiency improvements in the proposed object tracking by 1) exploiting a particle filter to sample the search space instead of the commonly adopted dense sampling strategies, and 2) introducing a motion-augmented regularization term during inference to constrain the output search space. We then extend our baseline tracker to detect tracking failures or inaccuracies and reinitialize itself when needed. To that end, we integrate object segmentation into tracking. First, we use binary support vector machines to develop a technique that detects tracking failures (or inaccuracies) by monitoring internal variables of our baseline tracker. We leverage learned examples from our baseline tracker to train the employed binary support vector machines. Second, we propose an automated method to re-initialize the tracker and recover from tracking failures by integrating active-contour-based object segmentation and using a particle filter to sample bounding boxes for segmentation. Through extensive experiments on standard video datasets, we demonstrate subjectively and objectively that both our baseline and extended methods compete strongly against state-of-the-art object tracking methods under challenging video conditions.
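
    As a rough sketch of the particle-filter sampling of the search space mentioned above, the toy code below propagates bounding-box particles (x, y, w, h) with a random-walk motion model, weights them by an appearance score, and resamples. The score_fn here is a hypothetical stand-in for the thesis's structured-SVM / joint-kernel response, and the motion noise and particle count are assumed values.

        # Toy particle-filter step for tracking: sample candidate boxes instead of
        # densely scanning the frame, then concentrate particles on high-scoring boxes.
        import numpy as np

        rng = np.random.default_rng(0)

        def score_fn(frame, box):
            # Hypothetical appearance score; in the thesis this would be the learned
            # structured-SVM response for the candidate box.
            x, y, w, h = box
            return -abs(x - 120.0) - abs(y - 80.0)      # toy score peaking at (120, 80)

        def particle_filter_step(frame, particles, weights, sigma=(4.0, 4.0, 1.0, 1.0)):
            # Propagate: random-walk motion model on box position and size.
            particles = particles + rng.normal(0.0, sigma, particles.shape)
            # Weight each candidate box by its appearance score (times prior weight).
            scores = np.array([score_fn(frame, p) for p in particles])
            weights = weights * np.exp(scores - scores.max())
            weights /= weights.sum()
            # Resample to concentrate particles on high-scoring regions.
            idx = rng.choice(len(particles), size=len(particles), p=weights)
            return particles[idx], np.full(len(particles), 1.0 / len(particles))

        # Usage: 200 particles initialised around a starting box (x, y, w, h).
        particles = np.tile([100.0, 100.0, 40.0, 60.0], (200, 1))
        weights = np.full(200, 1.0 / 200)
        for frame in range(5):                          # frames are dummies in this sketch
            particles, weights = particle_filter_step(frame, particles, weights)
        print("estimated box:", particles.mean(axis=0))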

    Fault Diagnosis and Failure Prognostics of Lithium-ion Battery based on Least Squares Support Vector Machine and Memory Particle Filter Framework

    Get PDF
    A novel data-driven approach is developed for fault diagnosis and remaining useful life (RUL) prognostics for lithium-ion batteries using a Least Squares Support Vector Machine (LS-SVM) and a Memory Particle Filter (M-PF). Unlike traditional data-driven models for capacity fault diagnosis and failure prognosis, which require multidimensional physical characteristics, the proposed algorithm uses only two variables: energy efficiency (EE) and working temperature. The aim of this framework is to improve the accuracy of incipient and abrupt fault diagnosis and failure prognosis. First, the LS-SVM is used to generate a residual signal based on the capacity fade trends of the Li-ion batteries. Second, an adaptive threshold model is developed based on several factors, including the input, output model error, disturbance, and a drift parameter. The adaptive threshold is used to overcome the shortcomings of a fixed threshold. Third, the M-PF is proposed as a new method for failure prognostics to determine the remaining useful life. The M-PF is based on the assumption of the availability of real-time observations and historical data, where the historical failure data can be used instead of a physical failure model within the particle filter. The feasibility of the framework is validated using Li-ion battery prognostic data obtained from the National Aeronautics and Space Administration (NASA) Ames Prognostics Center of Excellence (PCoE). The experimental results show the following: (1) fewer data dimensions are required for the input data compared to traditional empirical models; (2) the proposed diagnostic approach provides an effective way of diagnosing Li-ion battery faults; (3) the proposed prognostic approach can predict the RUL of Li-ion batteries with small error and high prediction accuracy; and (4) the proposed prognostic approach shows that historical failure data can be used instead of a physical failure model in the particle filter.
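
    The residual-plus-adaptive-threshold idea above can be sketched as follows. Kernel ridge regression stands in for LS-SVM regression (the two are closely related), the capacity data and injected fault are synthetic, and the threshold constants are illustrative assumptions rather than the paper's exact threshold model, which also accounts for input, disturbance and drift terms.

        # Sketch: residual generation from a nominal kernel model of capacity fade,
        # plus a simplified adaptive threshold that tracks the recent residual spread.
        import numpy as np
        from sklearn.kernel_ridge import KernelRidge

        rng = np.random.default_rng(2)
        cycles = np.arange(300, dtype=float)

        # Reference (healthy) capacity-fade trend used to train the nominal model.
        healthy = 2.0 - 0.002 * cycles + rng.normal(0.0, 0.01, cycles.size)
        model = KernelRidge(kernel="rbf", alpha=1e-2, gamma=1e-4)
        model.fit(cycles[:, None], healthy)

        # Monitored battery with an injected abrupt capacity fault at cycle 220.
        measured = 2.0 - 0.002 * cycles + rng.normal(0.0, 0.01, cycles.size)
        measured[220:] -= 0.15

        # Residual signal: measurement minus the nominal model prediction.
        residual = measured - model.predict(cycles[:, None])

        # Simplified adaptive threshold: fixed margin plus a multiple of the residual
        # spread over a trailing window (constants are assumed for illustration).
        window, k = 30, 3.0
        rolling_std = np.array([residual[max(0, i - window):i].std() if i > 1 else 0.0
                                for i in range(residual.size)])
        threshold = 0.05 + k * rolling_std

        flagged = cycles[np.abs(residual) > threshold]
        print("first flagged cycle:", flagged[0] if flagged.size else None)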