TreeGrad: Transferring Tree Ensembles to Neural Networks
Gradient Boosting Decision Trees (GBDT) are popular machine learning
algorithms with dedicated implementations such as LightGBM and others in popular machine
learning toolkits like Scikit-Learn. Many implementations can only produce
trees in an offline, greedy manner. We explore ways to convert
existing GBDT implementations to known neural network architectures with
minimal performance loss in order to allow decision splits to be updated in an
online manner, and provide extensions to allow split points to be altered as a
neural architecture search problem. We provide learning bounds for our neural
network.
Comment: Technical Report on Implementation of Deep Neural Decision Forests
Algorithm. To accompany implementation here:
https://github.com/chappers/TreeGrad. Update: Please cite as: Siu, C. (2019).
"Transferring Tree Ensembles to Neural Networks". International Conference on
Neural Information Processing. Springer, 2019. arXiv admin note: text overlap
with arXiv:1909.1179
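The core idea above, mapping a tree ensemble's hard decision splits onto a neural network so that the split points can then be updated by gradient descent, can be illustrated with a small sketch. This is not the authors' TreeGrad code; the class name SoftSplit, the temperature parameter, and the two-leaf tree below are illustrative assumptions.

```python
# Hedged sketch: one axis-aligned decision split relaxed into a differentiable
# "soft" split, so its threshold could be tuned online by gradient descent.
import numpy as np

class SoftSplit:
    def __init__(self, feature_index, threshold, temperature=0.1):
        self.feature_index = feature_index   # which input feature the split tests
        self.threshold = threshold           # the split point (would be learnable)
        self.temperature = temperature       # controls how sharp the soft routing is

    def route_right(self, X):
        # Sigmoid relaxation of the hard rule x[feature] > threshold.
        z = (X[:, self.feature_index] - self.threshold) / self.temperature
        return 1.0 / (1.0 + np.exp(-z))

# A two-leaf "tree": prediction = p_right * leaf_right + (1 - p_right) * leaf_left
split = SoftSplit(feature_index=0, threshold=0.5)
leaf_left, leaf_right = -1.0, 1.0
X = np.array([[0.2], [0.9]])
p_right = split.route_right(X)
print(p_right * leaf_right + (1 - p_right) * leaf_left)
```

As the temperature shrinks, the soft routing approaches the original hard split, which is what allows an existing tree to be transferred with minimal performance loss before fine-tuning.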
Stacking for machine learning redshifts applied to SDSS galaxies
We present an analysis of a general machine learning technique called
'stacking' for the estimation of photometric redshifts. Stacking techniques can
feed the photometric redshift estimate, as output by a base algorithm, back
into the same algorithm as an additional input feature in a subsequent learning
round. We shown how all tested base algorithms benefit from at least one
additional stacking round (or layer). To demonstrate the benefit of stacking,
we apply the method to both unsupervised machine learning techniques based on
self-organising maps (SOMs), and supervised machine learning methods based on
decision trees. We explore a range of stacking architectures, such as the
number of layers and the number of base learners per layer. Finally, we explore
the effectiveness of stacking even when using a successful algorithm such as
AdaBoost. We observe a significant improvement of between 1.9% and 21% on all
computed metrics when stacking is applied to weak learners (such as SOMs and
decision trees). When applied to strong learning algorithms (such as AdaBoost)
the ratio of improvement shrinks, but still remains positive and is between
0.4% and 2.5% for the explored metrics and comes at almost no additional
computational cost.
Comment: 13 pages, 3 tables, 7 figures; version accepted by MNRAS, minor text
updates. Results and conclusions unchanged.
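Below is a minimal sketch of the stacking loop described in the abstract, assuming a scikit-learn DecisionTreeRegressor as the base learner and synthetic stand-in data: the layer-0 redshift estimate is appended to the inputs and the same algorithm is retrained in a second round. The paper's actual photometric features, SOM and AdaBoost learners, and metrics are not reproduced here, and in practice the layer-0 predictions would come from held-out or cross-validated fits to avoid leakage.

```python
# Hedged sketch of one stacking round: feed the base learner's estimate back
# in as an extra input feature for a second learning round.
import numpy as np
from sklearn.tree import DecisionTreeRegressor

rng = np.random.default_rng(0)
X = rng.normal(size=(1000, 5))                                   # stand-in for photometric inputs
z_true = X @ rng.normal(size=5) + 0.1 * rng.normal(size=1000)    # stand-in redshifts

# Layer 0: base learner on the original features.
base = DecisionTreeRegressor(max_depth=6).fit(X, z_true)
z_hat0 = base.predict(X)

# Layer 1: same algorithm, with the previous estimate appended as a feature.
X_stacked = np.column_stack([X, z_hat0])
stacked = DecisionTreeRegressor(max_depth=6).fit(X_stacked, z_true)
z_hat1 = stacked.predict(X_stacked)

print("layer 0 RMSE:", np.sqrt(np.mean((z_hat0 - z_true) ** 2)))
print("layer 1 RMSE:", np.sqrt(np.mean((z_hat1 - z_true) ** 2)))
```

Additional stacking layers repeat the same step, each time appending the previous layer's estimate to the feature set, which is why the extra computational cost is small.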
- …