Deep Learning Applied to the Asteroseismic Modeling of Stars with Coherent Oscillation Modes
We develop a novel method based on machine learning principles to achieve
optimal initiation of CPU-intensive computations for forward asteroseismic
modeling in a multi-D parameter space. A deep neural network is trained on a
precomputed asteroseismology grid containing about 62 million coherent
oscillation-mode frequencies derived from stellar evolution models. These
models are representative of the core-hydrogen burning stage of
intermediate-mass and high-mass stars. The evolution models constitute a 6D
parameter space and their predicted low-degree pressure- and gravity-mode
oscillations are scanned using a genetic algorithm. A software pipeline is
created to find the best-fitting stellar parameters for a given set of observed
oscillation frequencies. The proposed method finds the optimal regions in the
6D parameter space in less than a minute, hence providing the optimal starting
point for further and more detailed forward asteroseismic modeling in a
high-dimensional context. We test and apply the method to seven pulsating stars
that were previously modeled asteroseismically by classical grid-based forward
modeling based on a statistic and obtain good agreement with past
results. Our deep learning methodology opens up the application of
asteroseismic modeling in +6D parameter space for thousands of stars pulsating
in coherent modes with long lifetimes observed by the Kepler space telescope
and to be discovered with the TESS and PLATO space missions, while applications
so far were done star by star for only a handful of cases. Our method is open
source and can be used freely by anyone.
Comment: Accepted for publication in the PASP Special Volume on Machine Learning
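The core idea, training a neural network to map observed mode frequencies back to a promising starting region of the model grid, can be illustrated with a minimal numpy sketch. The 2-parameter toy grid, the frequency function, and the network size are all hypothetical stand-ins for the real 6D grid of ~62 million mode frequencies:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy stand-in for the precomputed grid: map 2 hypothetical stellar
# parameters to 5 "oscillation frequencies" via a smooth nonlinear map.
def toy_frequencies(params):
    k = np.arange(1, 6)
    return params[..., :1] * k + np.sin(params[..., 1:] * k)

params = rng.uniform(0.5, 2.0, size=(2000, 2))
freqs = toy_frequencies(params)
mu, sd = freqs.mean(0), freqs.std(0)
X = (freqs - mu) / sd            # normalised network inputs

# One-hidden-layer network trained to invert the grid:
# observed frequencies -> starting point in parameter space.
W1 = rng.normal(0, 0.3, (5, 32)); b1 = np.zeros(32)
W2 = rng.normal(0, 0.3, (32, 2)); b2 = np.zeros(2)
lr = 0.05
for _ in range(3000):
    h = np.tanh(X @ W1 + b1)
    pred = h @ W2 + b2
    err = pred - params                      # seed of the MSE gradient
    dh = (err @ W2.T) * (1 - h**2)           # backprop through tanh
    W2 -= lr * h.T @ err / len(X); b2 -= lr * err.mean(0)
    W1 -= lr * X.T @ dh / len(X); b1 -= lr * dh.mean(0)

# "Observed" star: the network's answer seeds detailed forward modeling.
true = np.array([1.3, 0.9])
obs = (toy_frequencies(true[None]) - mu) / sd
guess = np.tanh(obs @ W1 + b1) @ W2 + b2
```

In the pipeline described above the network's output would seed a genetic-algorithm scan of the full grid rather than serve as the final answer.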
Do GANs leave artificial fingerprints?
In the last few years, generative adversarial networks (GANs) have shown
tremendous potential for a number of applications in computer vision and
related fields. With the current pace of progress, it is a sure bet they will
soon be able to generate high-quality images and videos, virtually
indistinguishable from real ones. Unfortunately, realistic GAN-generated images
pose serious security threats, beginning with a possible flood of fake
multimedia, and multimedia forensic countermeasures are urgently needed. In this
work, we show that each GAN leaves its specific fingerprint in the images it
generates, just like real-world cameras mark acquired images with traces of
their photo-response non-uniformity pattern. Source identification experiments
with several popular GANs show such fingerprints to represent a precious asset
for forensic analyses.
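The fingerprint idea parallels PRNU-based camera attribution: average the noise residuals of many images from one source to estimate its fingerprint, then attribute a new image to the source whose fingerprint correlates best with its residual. A toy numpy sketch; the box-filter denoiser and the synthetic "GANs" are illustrative stand-ins, not the paper's actual method:

```python
import numpy as np

rng = np.random.default_rng(1)

def residual(img):
    # Crude denoiser: 3x3 box blur with edge padding; residual = image - smooth.
    p = np.pad(img, 1, mode="edge")
    smooth = sum(p[i:i + img.shape[0], j:j + img.shape[1]]
                 for i in range(3) for j in range(3)) / 9.0
    return img - smooth

# Toy "GANs": each stamps a fixed high-frequency pattern onto smooth content.
H = W = 32
fingerprints_true = {g: rng.normal(0, 1, (H, W)) for g in ("gan_a", "gan_b")}

def generate(gan, n):
    imgs = []
    for _ in range(n):
        content = rng.normal(0, 1, (H, W)).cumsum(0).cumsum(1) / 50  # smooth
        imgs.append(content + 0.3 * fingerprints_true[gan])
    return imgs

# Estimate each GAN's fingerprint by averaging residuals over its images.
est = {g: np.mean([residual(im) for im in generate(g, 40)], axis=0)
       for g in fingerprints_true}

def attribute(img):
    # Source identification: highest normalised correlation wins.
    r = residual(img)
    corr = {g: float(np.sum(r * f) / (np.linalg.norm(r) * np.linalg.norm(f)))
            for g, f in est.items()}
    return max(corr, key=corr.get)

test_img = generate("gan_a", 1)[0]
```

Real forensic pipelines use far stronger denoisers and larger image sets, but the averaging-and-correlation structure is the same.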
Photometric Confirmation of MACHO Large Magellanic Cloud Microlensing Events
We present previously unpublished photometry of three Large Magellanic Cloud
(LMC) microlensing events and show that the new photometry confirms the
microlensing interpretation of these events. These events were discovered by
the MACHO Project alert system and were also recovered by the analysis of the
5.7-year MACHO data set. The new photometry provides a substantial increase in
signal-to-noise ratio over the previously published photometry, and in all
three cases the gravitational microlensing interpretation of these events is
strengthened. The new data consist of MACHO-Global Microlensing Alert Network
(GMAN) follow-up images from the CTIO 0.9m telescope plus difference-imaging
photometry of the original MACHO data from the 1.3m "Great Melbourne" telescope
at Mt. Stromlo. We also combine microlensing light curve fitting with
photometry from high resolution HST images of the source stars to provide
further confirmation of these events, and to show that the microlensing
interpretation of a fourth event, MACHO-LMC-23, is questionable. Finally, we compare our
results with the analysis of Belokurov, Evans & Le Du who have attempted to
classify candidate microlensing events with a neural network method, and we
find that their results are contradicted by the new data and more powerful
light curve fitting analysis for each of the four events considered in this
paper. The failure of the Belokurov, Evans & Le Du method is likely due
to their use of a set of insensitive statistics to feed their neural networks.
Comment: 29 pages with 8 included postscript figures, accepted by the Astrophysical Journal
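The light-curve fitting referred to above rests on the standard point-lens (Paczyński) magnification curve. A small sketch, assuming scipy is available; the parameter values and simple blend model are illustrative, not those of the actual MACHO events:

```python
import numpy as np
from scipy.optimize import curve_fit

def paczynski(t, t0, tE, u0, f_s):
    # Point-lens magnification A(u) for impact parameter u(t),
    # with source flux fraction f_s and baseline flux normalised to 1.
    u = np.sqrt(u0**2 + ((t - t0) / tE) ** 2)
    A = (u**2 + 2) / (u * np.sqrt(u**2 + 4))
    return f_s * A + (1.0 - f_s)

rng = np.random.default_rng(2)
t = np.linspace(-100, 100, 400)                       # days around the peak
flux = paczynski(t, 5.0, 30.0, 0.3, 0.8) + rng.normal(0, 0.01, t.size)

# Fit peak time t0, Einstein timescale tE, impact parameter u0, blend f_s.
popt, _ = curve_fit(paczynski, t, flux, p0=(0.0, 20.0, 0.5, 0.9))
t0_fit, tE_fit, u0_fit, fs_fit = popt
```

A genuine microlensing signature is achromatic and follows this single-peaked symmetric shape, which is why improved photometry can cleanly confirm or reject candidate events.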
StarGO: A New Method to Identify the Galactic Origins of Halo Stars
We develop a new method StarGO (Stars' Galactic Origin) to identify the
galactic origins of halo stars using their kinematics. Our method is based on
self-organizing map (SOM), which is one of the most popular unsupervised
learning algorithms. StarGO combines SOM with a novel adaptive group
identification algorithm with essentially no free parameters. In order to
evaluate our model, we build a synthetic stellar halo from mergers of nine
satellites in the Milky Way. We construct the mock catalogue by extracting a
heliocentric volume of 10 kpc from our simulations and assigning expected
observational uncertainties corresponding to bright stars from Gaia DR2 and
LAMOST DR5. We compare the results from StarGO against those from a
Friends-of-Friends (FoF) based method in the space of orbital energy and
angular momentum. We show that StarGO systematically identifies more
satellites and achieves a higher number fraction of identified stars for most of
the satellites within the extracted volume. When applied to data from Gaia DR2,
StarGO will enable us to reveal the origins of the inner stellar halo in
unprecedented detail.
Comment: 11 pages, 7 figures, accepted for publication in ApJ
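A self-organizing map of the kind StarGO builds on can be written in a few lines of numpy. This toy version trains a 6x6 map on two mock "satellite" clumps in a 2D kinematic space; the real method works on a larger map and higher-dimensional kinematics, and adds its adaptive group identification on top:

```python
import numpy as np

rng = np.random.default_rng(3)

# Toy "halo stars": two accreted satellites, clumped in an (E, L_z)-like plane.
sat_a = rng.normal([-1.0, 0.8], 0.1, (150, 2))
sat_b = rng.normal([0.9, -0.5], 0.1, (150, 2))
X = np.vstack([sat_a, sat_b])

# Minimal self-organizing map: a 6x6 grid of weight vectors.
grid = 6
coords = np.array([(i, j) for i in range(grid) for j in range(grid)], float)
W = rng.normal(0, 0.5, (grid * grid, 2))

for step in range(3000):
    x = X[rng.integers(len(X))]
    bmu = np.argmin(((W - x) ** 2).sum(1))          # best-matching unit
    # Neighbourhood width and learning rate both decay over training.
    frac = step / 3000
    sigma = 2.0 * (1 - frac) + 0.3
    lr = 0.5 * (1 - frac) + 0.01
    d2 = ((coords - coords[bmu]) ** 2).sum(1)       # grid distance to the BMU
    W += (lr * np.exp(-d2 / (2 * sigma**2)))[:, None] * (x - W)

# Stars mapping to well-separated units trace distinct progenitors.
bmu_a = np.argmin(((W - sat_a.mean(0)) ** 2).sum(1))
bmu_b = np.argmin(((W - sat_b.mean(0)) ** 2).sum(1))
```

After training, stars from different progenitors land on different patches of the map, which is what the group identification step then segments.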
A Learning Algorithm based on High School Teaching Wisdom
A learning algorithm based on primary school teaching and learning is
presented. The methodology is to continuously evaluate a student and to train
them on the examples they repeatedly fail, until they can
correctly answer all types of questions. This incremental learning procedure
produces better learning curves by requiring the student to dedicate
their learning time to the failed examples. When used in machine learning, the
algorithm trains a machine on the data with maximum variance in the
feature space, so that the generalization ability of the network improves. The
algorithm has interesting applications in data mining, model evaluation, and
rare-object discovery.
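The evaluate-then-drill loop can be sketched with a perceptron on separable toy data: test the student on everything, retrain only on the failures, and stop once no question type is missed. The data set and the classifier are illustrative choices, not the paper's:

```python
import numpy as np

rng = np.random.default_rng(4)

# Linearly separable toy "exam": label = sign of a fixed linear rule,
# kept with a margin so the incremental procedure provably terminates.
X = rng.normal(0, 1, (400, 2))
score = X @ np.array([1.5, -1.0])
keep = np.abs(score) > 0.3
X, score = X[keep], score[keep]
y = np.where(score > 0, 1, -1)

w = np.zeros(2)
b = 0.0
for rounds in range(500):
    # "Evaluate the student" on everything, then drill only the failures.
    failed = np.flatnonzero(np.sign(X @ w + b) != y)
    if failed.size == 0:
        break                          # all question types answered correctly
    for i in failed:                   # perceptron update per failed example
        w += y[i] * X[i]
        b += y[i]
```

Because updates are spent only on failed examples, training effort concentrates on the hardest, highest-variance regions of the feature space, which is the intuition the abstract appeals to.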
Classifying the unknown: discovering novel gravitational-wave detector glitches using similarity learning
The observation of gravitational waves from compact binary coalescences by
LIGO and Virgo has begun a new era in astronomy. A critical challenge in making
detections is determining whether loud transient features in the data are
caused by gravitational waves or by instrumental or environmental sources. The
citizen-science project Gravity Spy has been demonstrated as an
efficient infrastructure for classifying known types of noise transients
(glitches) through a combination of data analysis performed by both citizen
volunteers and machine learning. We present the next iteration of this project,
using similarity indices to empower citizen scientists to create large data
sets of unknown transients, which can then be used to facilitate supervised
machine-learning characterization. This new evolution aims to alleviate a
persistent challenge that plagues both citizen-science and instrumental
detector work: the ability to build large samples of relatively rare events.
Using two families of transient noise that appeared unexpectedly during LIGO's
second observing run (O2), we demonstrate the impact that the similarity
indices could have had on finding these new glitch types in the Gravity Spy
program.
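The similarity-index workflow can be sketched as nearest-neighbour retrieval in an embedding space: a volunteer flags one example of an unfamiliar morphology, and cosine similarity pulls related transients out of the unlabeled bank to seed a training set. The embeddings below are random toy vectors, not Gravity Spy's learned features:

```python
import numpy as np

rng = np.random.default_rng(5)

# Toy embedding space: known glitch classes plus an unexpected new family.
known = rng.normal(0, 1, (100, 8))
new_center = rng.normal(0, 1, 8)
new_family = new_center + rng.normal(0, 0.2, (20, 8))
bank = np.vstack([known, new_family])        # unlabeled transient bank

def cosine_similarity(a, B):
    # Similarity of one query vector against every row of B.
    a = a / np.linalg.norm(a)
    B = B / np.linalg.norm(B, axis=1, keepdims=True)
    return B @ a

# A volunteer flags one example of the unknown morphology; similarity
# search retrieves the rest of the family to build a labeled sample.
query = new_family[0]
sims = cosine_similarity(query, bank)
top = np.argsort(sims)[::-1][:20]
```

Retrieved candidates would then be vetted by volunteers before being fed to a supervised classifier, which is how the approach turns a single flagged event into a usable training set for a rare glitch class.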