TreeGrad: Transferring Tree Ensembles to Neural Networks
Gradient Boosted Decision Trees (GBDTs) are popular machine learning
algorithms with implementations such as LightGBM and in popular machine
learning toolkits like Scikit-Learn. Many implementations can only produce
trees in an offline and greedy manner. We explore ways to convert existing
GBDT implementations to known neural network architectures with minimal
performance loss, in order to allow decision splits to be updated in an
online manner, and provide extensions that allow split points to be altered
as a neural architecture search problem. We provide learning bounds for our
neural network.

Comment: Technical report on an implementation of the Deep Neural Decision
Forests algorithm; accompanying code is at
https://github.com/chappers/TreeGrad. Update: Please cite as: Siu, C. (2019).
"Transferring Tree Ensembles to Neural Networks". International Conference on
Neural Information Processing. Springer, 2019. arXiv admin note: text overlap
with arXiv:1909.1179
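The core transfer step can be pictured with a toy example. Below is a
minimal sketch (my illustration, not the authors' TreeGrad code) of how a
single hard GBDT split can be re-expressed as a differentiable neural unit,
so the split becomes trainable by gradient descent; the soft_stump function
and the temperature value are illustrative assumptions.

    # Minimal sketch (not the authors' TreeGrad code) of the core idea:
    # a depth-1 decision "stump" re-expressed as a differentiable neural
    # layer, so the split can be updated by gradient descent. All names
    # and the smoothing temperature are illustrative assumptions.
    import numpy as np

    def soft_stump(x, w, b, leaf_values, temperature=0.1):
        """Soft routing: sigmoid((w.x + b)/T) replaces the hard split
        1[x_j > t] (with w one-hot and b = -t it mimics an axis-aligned
        GBDT split). Output is a convex combination of the leaf values."""
        p_right = 1.0 / (1.0 + np.exp(-(x @ w + b) / temperature))
        return p_right * leaf_values[1] + (1.0 - p_right) * leaf_values[0]

    # Hard GBDT split "x_0 > 0.5" with leaves (-1, +1), transferred:
    w = np.array([1.0, 0.0])          # one-hot feature selector
    b = -0.5                          # negative split threshold
    leaves = np.array([-1.0, 1.0])
    x = np.array([0.9, 0.3])
    print(soft_stump(x, w, b, leaves))  # close to +1, differentiable in w, b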
Three-phase modular permanent magnet brushless machine for torque boosting on a downsized ICE vehicle
The paper describes a relatively new topology of 3-phase permanent magnet (PM) brushless machine, which offers a number of significant advantages over conventional PM brushless machines for automotive applications, such as electrical torque boosting at low engine speeds for vehicles equipped with downsized internal combustion engines (ICEs). The relative merits of feasible slot/pole number combinations for the proposed 3-phase modular PM brushless ac machine are discussed, and an analytical method is presented for establishing the open-circuit and armature reaction magnetic field distributions when such a machine is equipped with a surface-mounted magnet rotor. The results allow the torque, the phase emf, and the self- and mutual winding inductances to be predicted in closed form, and provide a basis for comparative studies, design optimization and machine dynamic modeling. However, a more robust machine, in terms of improved containment of the magnets, results when the magnets are buried inside the rotor; since this introduces a reluctance torque, it also serves to reduce the back-emf, the iron loss and the inverter voltage rating. The performance of a modular PM brushless machine equipped with an interior magnet rotor is demonstrated by measurements on a 22-pole/24-slot prototype torque-boosting machine.
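For readers less familiar with interior-magnet machines, the reluctance
torque mentioned above can be illustrated with the standard dq-frame torque
expression; the sketch below uses made-up parameter values and is not data
from the prototype.

    # Hedged numerical sketch (not from the paper): the standard dq-frame
    # torque expression for an interior-magnet machine, which makes the
    # reluctance-torque contribution explicit. All parameter values are
    # illustrative assumptions, not measured data.
    def pm_torque(pole_pairs, psi_m, L_d, L_q, i_d, i_q):
        """T = (3/2) * p * (psi_m * i_q + (L_d - L_q) * i_d * i_q).
        For an interior-magnet rotor L_d < L_q, so a negative i_d adds
        reluctance torque on top of the magnet-alignment torque."""
        return 1.5 * pole_pairs * (psi_m * i_q + (L_d - L_q) * i_d * i_q)

    # 22-pole machine => 11 pole pairs; example (assumed) operating point:
    print(pm_torque(pole_pairs=11, psi_m=0.05, L_d=1e-3, L_q=2e-3,
                    i_d=-20.0, i_q=40.0))   # torque in Nm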
Generating Compact Tree Ensembles via Annealing
Tree ensembles are flexible predictive models that can capture relevant
variables, and to some extent their interactions, in a compact and
interpretable manner. Most algorithms for obtaining tree ensembles are based
on versions of boosting or Random Forest. Previous work showed that boosting
algorithms exhibit a cyclic behavior, selecting the same tree again and
again, due to the way the loss is optimized. Random Forest, by contrast, is
not based on loss optimization and yields a more complex and less
interpretable model. In this paper we present a novel method for obtaining
compact tree ensembles by growing a large pool of trees in parallel with
many independent boosting threads, then selecting a small subset and
updating their leaf weights by loss optimization. We allow the trees in the
initial pool to have different depths, which further helps generalization.
Experiments on real datasets show that the obtained model usually has a
smaller loss than boosting, which is also reflected in a lower
misclassification error on the test set.

Comment: Comparison with Random Forest included in the results section.
Ensemble learning of linear perceptron; Online learning theory
Within the framework of on-line learning, we study the generalization error
of an ensemble learning machine that learns from a linear teacher
perceptron. The generalization error achieved by an ensemble of linear
perceptrons with homogeneous or inhomogeneous initial weight vectors is
calculated exactly in the thermodynamic limit of a large number of input
elements, and shows rich behavior. Our main findings are as follows. For
learning with homogeneous initial weight vectors, the generalization error
using an infinite number of linear student perceptrons is only half that of
a single linear perceptron, and for a finite number K of linear perceptrons
it converges to the infinite case as O(1/K). For learning with inhomogeneous
initial weight vectors, it is advantageous to take a weighted average over
the outputs of the linear perceptrons, and we show the conditions under
which the optimal weights are constant during the learning process. The
optimal weights depend only on the correlations of the initial weight
vectors.

Comment: 14 pages, 3 figures, submitted to Physical Review
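The O(1/K) effect described above can be reproduced numerically. The
following toy simulation (my sketch, not the paper's analytical
calculation) trains K linear students on-line from a linear teacher and
compares the generalization error of a single student with that of the
plain ensemble average; all sizes and the learning rate are illustrative
assumptions.

    # Toy simulation (a sketch, not the paper's analytic calculation):
    # K linear students learn on-line from a linear teacher; the ensemble
    # output is the plain average of student outputs. With independent
    # random initial weights, averaging reduces the error component due
    # to initialization, echoing the O(1/K) behavior described above.
    import numpy as np

    rng = np.random.default_rng(1)
    N, K, steps, eta = 100, 8, 2000, 0.05
    teacher = rng.normal(size=N) / np.sqrt(N)
    students = rng.normal(size=(K, N)) / np.sqrt(N)  # inhomogeneous inits

    for _ in range(steps):
        x = rng.normal(size=N)
        target = teacher @ x
        # on-line gradient step on the squared error, per student
        students += eta * np.outer(target - students @ x, x) / N

    # generalization error of one student vs. the averaged ensemble,
    # estimated on fresh examples
    X = rng.normal(size=(5000, N))
    t = X @ teacher
    err_single = np.mean((X @ students[0] - t) ** 2) / 2
    err_ensemble = np.mean((X @ students.mean(axis=0) - t) ** 2) / 2
    print(err_single, err_ensemble)   # ensemble error is smaller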
Ensemble Learning for Free with Evolutionary Algorithms?
Evolutionary Learning proceeds by evolving a population of classifiers, from
which it generally returns (with some notable exceptions) the single
best-of-run classifier as the final result. Meanwhile, Ensemble Learning,
one of the most effective approaches in supervised Machine Learning over
the last decade, proceeds by building a population of diverse classifiers.
Combining Ensemble Learning with Evolutionary Computation has thus received
increasing attention. The Evolutionary Ensemble Learning (EEL) approach
presented in this paper features two contributions. First, a new fitness
function, inspired by co-evolution and enforcing classifier diversity, is
presented. Second, a new selection criterion based on the classification
margin is proposed; this criterion is used to extract the classifier
ensemble either from the final population only (off-line) or incrementally
along the evolution (on-line). Experiments on a set of benchmark problems
show that the off-line variant outperforms single-hypothesis evolutionary
learning and state-of-the-art boosting, and generates smaller classifier
ensembles.
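To make the margin-based extraction concrete, here is an illustrative
reconstruction (not the authors' EEL code): classifiers from the final
population are kept greedily while the mean voting margin on a labeled set
improves. The function name and the greedy scheme are assumptions for
illustration.

    # Illustrative reconstruction (not the authors' EEL code) of
    # margin-based ensemble extraction: keep classifiers greedily while
    # the mean voting margin improves. Predictions and labels are in
    # {-1, +1}.
    import numpy as np

    def extract_ensemble(preds, y):
        """preds: (n_classifiers, n_samples) votes in {-1, +1};
        y: (n_samples,) true labels. Returns indices of the ensemble."""
        order = np.argsort(-(preds @ y))   # best individual margins first
        votes = np.zeros_like(y, dtype=float)
        chosen, best = [], -np.inf
        for i in order:
            margin = np.mean((votes + preds[i]) * y) / (len(chosen) + 1)
            if margin > best:              # keep only if margin improves
                chosen.append(int(i))
                votes += preds[i]
                best = margin
            # otherwise skip this classifier
        return chosen

    rng = np.random.default_rng(2)
    y = rng.choice([-1, 1], size=200)
    pop = np.where(rng.random((30, 200)) < 0.35, -y, y)  # 65%-accurate voters
    print(len(extract_ensemble(pop, y)))  # typically much smaller than 30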