Machine Learning Techniques for Stellar Light Curve Classification
We apply machine learning techniques in an attempt to predict and classify
stellar properties from noisy and sparse time series data. We preprocessed over
94 GB of Kepler light curves from MAST to classify according to ten distinct
physical properties using both representation learning and feature engineering
approaches. Studies using machine learning in the field have been primarily
done on simulated data, making our study one of the first to use real light
curve data for machine learning approaches. We tuned our data using previous
work with simulated data as a template and achieved mixed results between the
two approaches. Representation learning using a Long Short-Term Memory (LSTM)
Recurrent Neural Network (RNN) produced no successful predictions, but our work
with feature engineering was successful for both classification and regression.
In particular, we were able to predict stellar density, stellar radius, and
effective temperature with low error (~2-4%), and to classify the number of
transits for a given star with good accuracy (~75%). The results suggest that
both approaches could improve given larger datasets with a larger minority
class. This work has the potential to provide a
foundation for future tools and techniques to aid in the analysis of
astrophysical data.

Comment: Accepted to The Astronomical Journal
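The feature-engineering approach described in the abstract can be sketched in a few lines. The features below (mean flux, scatter, and a crude transit count from dips below a sigma threshold) are illustrative assumptions, not the paper's actual feature set:

```python
import statistics

def light_curve_features(flux):
    """Summarise a light curve as a small feature vector: mean flux,
    standard deviation, and a crude transit count (contiguous runs of
    points more than 3 sigma below the median). Stand-ins for the
    engineered features used in the paper."""
    mu = statistics.fmean(flux)
    sigma = statistics.stdev(flux)
    threshold = statistics.median(flux) - 3 * sigma
    transits, in_dip = 0, False
    for f in flux:
        if f < threshold and not in_dip:
            transits += 1       # entering a new dip: count one transit
            in_dip = True
        elif f >= threshold:
            in_dip = False      # back above threshold: dip has ended
    return [mu, sigma, transits]

# Toy flat light curve with two injected box-shaped transits.
flux = [1.0] * 100
for i in list(range(20, 25)) + list(range(70, 75)):
    flux[i] = 0.9
print(light_curve_features(flux))  # crude transit count should be 2
```

Feature vectors like this would then feed a standard classifier or regressor; the paper's point is that such hand-built summaries outperformed the LSTM representation on this data.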
Visualising Basins of Attraction for the Cross-Entropy and the Squared Error Neural Network Loss Functions
Quantification of the stationary points and the associated basins of
attraction of neural network loss surfaces is an important step towards a
better understanding of neural network loss surfaces at large. This work
proposes a novel method to visualise basins of attraction together with the
associated stationary points via gradient-based random sampling. The proposed
technique is used to perform an empirical study of the loss surfaces generated
by two different error metrics: quadratic loss and entropic loss. The empirical
observations confirm the theoretical hypothesis regarding the nature of neural
network attraction basins. Entropic loss is shown to exhibit stronger gradients
and fewer stationary points than quadratic loss, indicating that entropic loss
has a more searchable landscape. Quadratic loss is shown to be more resilient
to overfitting than entropic loss. Both losses are shown to exhibit local
minima, but the number of local minima is shown to decrease with an increase in
dimensionality. Thus, the proposed visualisation technique successfully
captures the local minima properties exhibited by the neural network loss
surfaces, and can be used for the purpose of fitness landscape analysis of
neural networks.

Comment: Preprint submitted to the Neural Networks journal
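The gradient-based random sampling idea can be illustrated on a toy surface. The double-well function below is an assumed stand-in for a neural network loss; the sketch samples random starting points, descends the gradient, and records which attractor each walk settles into:

```python
import random

def grad(x):
    # Gradient of a toy double-well "loss" f(x) = (x^2 - 1)^2,
    # standing in for a neural network loss surface.
    return 4 * x * (x * x - 1)

def basin_of(x, lr=0.01, steps=2000):
    """Follow the negative gradient from a random starting point and
    report which attractor (local minimum) the walk converges to."""
    for _ in range(steps):
        x -= lr * grad(x)
    return round(x)  # the two minima sit at x = -1 and x = +1

random.seed(0)
samples = [random.uniform(-2, 2) for _ in range(200)]
basins = [basin_of(x) for x in samples]
# Tally how many sampled points fall into each basin of attraction.
print({b: basins.count(b) for b in sorted(set(basins))})
```

In the paper this kind of sampling is done in the high-dimensional weight space of a network, where the relative basin sizes and gradient magnitudes distinguish the entropic and quadratic losses.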