deep-REMAP: Parameterization of Stellar Spectra Using Regularized Multi-Task Learning
Traditional spectral analysis methods are increasingly challenged by the
exploding volumes of data produced by contemporary astronomical surveys. In
response, we develop deep-Regularized Ensemble-based Multi-task Learning with
Asymmetric Loss for Probabilistic Inference (deep-REMAP), a novel
framework that utilizes the rich synthetic spectra from the PHOENIX library and
observational data from the MARVELS survey to accurately predict stellar
atmospheric parameters. By harnessing advanced machine learning techniques,
including multi-task learning and an innovative asymmetric loss function,
deep-REMAP demonstrates superior predictive capability in determining
effective temperature, surface gravity, and metallicity from observed spectra.
Our results reveal the framework's effectiveness in extending to other stellar
libraries and properties, paving the way for more sophisticated and automated
techniques in stellar characterization.
Comment: 5 main pages + 2 figures. Accepted to the ML4PS workshop at NeurIPS 202
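The abstract does not specify the form of the asymmetric loss. As a minimal sketch only, one common way to make a regression loss asymmetric is to weight positive and negative residuals differently; the function and parameter names below are illustrative, not deep-REMAP's actual formulation:

```python
import numpy as np

def asymmetric_loss(y_pred, y_true, alpha=0.7):
    """Illustrative asymmetric squared-error loss.

    Under-predictions (positive residuals) are weighted by alpha,
    over-predictions by (1 - alpha), so the model is pushed to err
    preferentially on one side. This is a generic sketch, not the
    loss used in the paper.
    """
    residual = y_true - y_pred
    weight = np.where(residual >= 0, alpha, 1.0 - alpha)
    return float(np.mean(weight * residual ** 2))

# With alpha = 0.7, a residual of +1 costs more than a residual of -1.
loss_under = asymmetric_loss(np.array([4.0]), np.array([5.0]))  # residual +1
loss_over = asymmetric_loss(np.array([6.0]), np.array([5.0]))   # residual -1
```

Choosing `alpha` away from 0.5 is what introduces the asymmetry; at `alpha = 0.5` this reduces to half the ordinary mean squared error.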
Self-Paced Multitask Learning with Shared Knowledge
This paper introduces self-paced task selection to multitask learning, where
instances from more closely related tasks are selected in a progression from
easier to harder tasks, emulating an effective human education strategy in the
multitask machine learning setting. We develop the mathematical foundation
for the approach based on iterative selection of the most appropriate task,
learning the task parameters, and updating the shared knowledge, optimizing a
new bi-convex loss function. The proposed method applies quite generally,
including to multitask feature learning and to multitask learning with
alternating structure optimization. Results show that in each of these
formulations, self-paced (easier-to-harder) task selection outperforms the
baseline version of the method in all experiments.
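The iterative selection step can be sketched with the standard self-paced learning device of a growing pace threshold: tasks whose current loss falls below the threshold are admitted, and raising the threshold over iterations admits progressively harder tasks. This is a generic illustration; the helper name and the bi-convex optimization details are assumptions, not the paper's implementation:

```python
def self_paced_selection(task_losses, lam):
    """Return indices of tasks whose current loss is at most the pace
    threshold lam. As lam grows across training iterations, harder
    (higher-loss) tasks are admitted into the curriculum.
    Illustrative helper; not the paper's actual optimizer."""
    return [t for t, loss in enumerate(task_losses) if loss <= lam]

losses = [0.9, 0.2, 0.5]                        # current per-task losses
early = self_paced_selection(losses, lam=0.3)   # early: only the easiest task
later = self_paced_selection(losses, lam=1.0)   # later: all tasks admitted
```

In a full self-paced loop, each round would alternate between fitting the selected tasks' parameters, updating the shared knowledge, and increasing `lam` before reselecting.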