On Machine-Learned Classification of Variable Stars with Sparse and Noisy Time-Series Data
With the coming data deluge from synoptic surveys, there is a growing need
for frameworks that can quickly and automatically produce calibrated
classification probabilities for newly-observed variables based on a small
number of time-series measurements. In this paper, we introduce a methodology
for variable-star classification, drawing from modern machine-learning
techniques. We describe how to homogenize the information gleaned from light
curves by selection and computation of real-numbered metrics ("features"),
detail methods to robustly estimate periodic light-curve features, introduce
tree-ensemble methods for accurate variable star classification, and show how
to rigorously evaluate the classification results using cross validation. On a
25-class data set of 1542 well-studied variable stars, we achieve a 22.8%
overall classification error using the random forest classifier; this
represents a 24% improvement over the best previous classifier on these data.
This methodology is effective for identifying samples of specific science
classes: for pulsational variables used in Milky Way tomography we obtain a
discovery efficiency of 98.2% and for eclipsing systems we find an efficiency
of 99.1%, both at 95% purity. We show that the random forest (RF) classifier is
superior to other machine-learned methods in terms of accuracy, speed, and
relative immunity to features with no useful class information; the RF
classifier can also be used to estimate the importance of each feature in
classification. Additionally, we present the first astronomical use of
hierarchical classification methods to incorporate a known class taxonomy in
the classifier, which further reduces the catastrophic error rate to 7.8%.
Excluding low-amplitude sources, our overall error rate improves to 14%, with a
catastrophic error rate of 3.5%.
Comment: 23 pages, 9 figures
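As a rough sketch of the kind of pipeline the abstract describes (not the authors' code), the following assumes scikit-learn, placeholder light-curve features, and random labels; it shows random-forest classification with cross-validation and the per-feature importance estimates mentioned above.

    # Minimal sketch (not the authors' code): random-forest classification of
    # variable stars from pre-computed light-curve features, with cross-validation
    # and per-feature importance estimates; data and feature count are placeholders.
    import numpy as np
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.model_selection import cross_val_score

    rng = np.random.default_rng(0)
    X = rng.normal(size=(1542, 10))        # rows: stars, columns: light-curve features
    y = rng.integers(0, 25, size=1542)     # 25 variability classes (random stand-ins)

    clf = RandomForestClassifier(n_estimators=500, random_state=0)
    scores = cross_val_score(clf, X, y, cv=10)            # 10-fold cross-validation
    print("cross-validated error rate: %.3f" % (1.0 - scores.mean()))

    clf.fit(X, y)
    print("feature importances:", clf.feature_importances_)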
High-Resolution Road Vehicle Collision Prediction for the City of Montreal
Road accidents are a major issue in modern societies, responsible for
millions of deaths and injuries worldwide every year. In Quebec alone, road
accidents caused 359 deaths and some 33,000 injuries in 2018. In this paper,
we show how one can leverage open datasets of a city
like Montreal, Canada, to create high-resolution accident prediction models,
using big data analytics. Compared to other studies in road accident
prediction, we have a much higher prediction resolution, i.e., our models
predict the occurrence of an accident within an hour, on road segments defined
by intersections. Such models could be used in the context of road accident
prevention, but also to identify key factors that can lead to a road accident
and, consequently, to help develop new policies.
We tested various machine learning methods to deal with the severe class
imbalance inherent to accident prediction problems. In particular, we
implemented the Balanced Random Forest algorithm, a variant of the Random
Forest machine learning algorithm, in Apache Spark. Interestingly, we found that
in our case, Balanced Random Forest does not perform significantly better than
Random Forest.
Experimental results show that 85% of road vehicle collisions are detected by
our model with a false positive rate of 13%. The examples identified as
positive are likely to correspond to high-risk situations. In addition, we
identify the most important predictors of vehicle collisions for the Montreal
area: the count of accidents on the same road segment during previous years,
the temperature, the day of the year, the hour, and the visibility.
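The abstract's comparison of Random Forest with Balanced Random Forest was implemented in Apache Spark; purely as an illustration of that comparison on an imbalanced problem (an assumption, not the authors' setup), the sketch below uses scikit-learn and imbalanced-learn with synthetic data.

    # Illustration only (the paper's implementation runs on Apache Spark):
    # plain random forest vs. balanced random forest on a heavily imbalanced
    # binary problem, using scikit-learn and imbalanced-learn with synthetic data.
    from imblearn.ensemble import BalancedRandomForestClassifier
    from sklearn.datasets import make_classification
    from sklearn.ensemble import RandomForestClassifier
    from sklearn.metrics import recall_score
    from sklearn.model_selection import train_test_split

    # Synthetic stand-in for the accident data: about 1% positive (collision) rows.
    X, y = make_classification(n_samples=50_000, n_features=20,
                               weights=[0.99, 0.01], random_state=0)
    X_tr, X_te, y_tr, y_te = train_test_split(X, y, stratify=y, random_state=0)

    models = [("Random Forest", RandomForestClassifier(n_estimators=100, random_state=0)),
              ("Balanced Random Forest", BalancedRandomForestClassifier(n_estimators=100,
                                                                        random_state=0))]
    for name, clf in models:
        clf.fit(X_tr, y_tr)
        print(name, "detection rate:", recall_score(y_te, clf.predict(X_te)))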
Predicting time to graduation at a large enrollment American university
The time it takes a student to graduate with a university degree is influenced
by a variety of factors, such as their background, their academic performance
at university, and their integration into the social communities of the
university they attend. Different universities have different populations,
student services, instruction styles, and degree programs; however, they all collect
institutional data. This study presents data for 160,933 students attending a
large American research university. The data includes performance, enrollment,
demographics, and preparation features. Discrete time hazard models for the
time-to-graduation are presented in the context of Tinto's Theory of Drop Out.
Additionally, a novel machine-learning method, gradient boosted trees, is
applied and compared to the typical maximum likelihood method. We demonstrate
that enrollment factors (such as changing a major) lead to greater improvements
in the model's ability to predict when a student graduates than performance
factors (such as grades) or preparation factors (such as high school GPA).
Comment: 28 pages, 11 figures
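A discrete-time hazard model treats each student term as a separate observation with a binary "graduated this term" outcome; the sketch below, with hypothetical column names and made-up records rather than the study's data, shows this person-period expansion and a gradient-boosted-trees fit on the expanded table.

    # Illustrative person-period expansion for a discrete-time hazard model,
    # followed by a gradient-boosted-trees fit; columns and values are hypothetical.
    import pandas as pd
    from sklearn.ensemble import GradientBoostingClassifier

    # One row per student: covariates, terms observed, and graduation indicator.
    students = pd.DataFrame({
        "hs_gpa":         [3.4, 2.9, 3.8],
        "changed_major":  [1, 0, 1],
        "terms_observed": [12, 9, 8],
        "graduated":      [1, 0, 1],   # 0 = censored (left or still enrolled)
    })

    # Person-period form: one row per student per term; event = 1 only in the
    # term in which the student graduates.
    rows = []
    for _, s in students.iterrows():
        for term in range(1, int(s.terms_observed) + 1):
            rows.append({"term": term, "hs_gpa": s.hs_gpa,
                         "changed_major": s.changed_major,
                         "event": int(s.graduated == 1 and term == s.terms_observed)})
    pp = pd.DataFrame(rows)

    # Gradient boosted trees estimate the per-term graduation hazard.
    features = ["term", "hs_gpa", "changed_major"]
    gbt = GradientBoostingClassifier().fit(pp[features], pp["event"])
    print(gbt.predict_proba(pp[features])[:5, 1])   # estimated hazard per term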
Induction of Non-Monotonic Logic Programs to Explain Boosted Tree Models Using LIME
We present a heuristic-based algorithm to induce non-monotonic logic
programs that explain the behavior of XGBoost-trained classifiers. We use
a technique based on the LIME approach to locally select the most important
features contributing to the classification decision. Then, in order to explain
the model's global behavior, we propose the LIME-FOLD algorithm, a
heuristic-based inductive logic programming (ILP) algorithm capable of learning
non-monotonic logic programs, which we apply to a transformed dataset produced
by LIME. Our proposed approach is agnostic to the choice of the ILP algorithm.
Our experiments with UCI standard benchmarks suggest a significant improvement
in terms of classification evaluation metrics, while the number of induced
rules decreases dramatically compared to ALEPH, a state-of-the-art ILP system.
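As background for the local-explanation step (not the LIME-FOLD algorithm itself), the sketch below shows how LIME can rank the features behind one prediction of an XGBoost classifier, assuming the Python xgboost and lime packages and a standard scikit-learn dataset.

    # Sketch of the local-explanation step only (not LIME-FOLD): train an XGBoost
    # classifier, then use LIME to rank the features driving a single prediction.
    from lime.lime_tabular import LimeTabularExplainer
    from sklearn.datasets import load_breast_cancer
    from xgboost import XGBClassifier

    data = load_breast_cancer()
    X, y = data.data, data.target

    model = XGBClassifier(n_estimators=100).fit(X, y)

    explainer = LimeTabularExplainer(X, feature_names=list(data.feature_names),
                                     class_names=["malignant", "benign"],
                                     mode="classification")
    # Top-5 locally important features, with their weights, for the first instance.
    explanation = explainer.explain_instance(X[0], model.predict_proba, num_features=5)
    print(explanation.as_list())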
Binarized support vector machines
The widely used Support Vector Machine (SVM) method has been shown to yield very good results in
Supervised Classification problems. Other methods such as Classification Trees have become
more popular among practitioners than SVM thanks to their interpretability, which is an important
issue in Data Mining.
In this work, we propose an SVM-based method that automatically detects the most important
predictor variables, and the role they play in the classifier. In particular, the proposed method is
able to detect those values and intervals which are critical for the classification. The method
involves the optimization of a Linear Programming problem, with a large number of decision
variables. The numerical experience reported shows that a rather direct use of the standard
Column-Generation strategy leads to a classification method which, in terms of classification
ability, is competitive against the standard linear SVM and Classification Trees. Moreover, the
proposed method is robust, i.e., it is stable in the presence of outliers and invariant to change of
scale or measurement units of the predictor variables.
When the complexity of the classifier is an important issue, a wrapper feature selection method is
applied, yielding simpler, yet still competitive, classifiers.
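The paper's classifier comes from a Linear Programming formulation solved by column generation, which is not reproduced here; purely to illustrate the underlying idea (cutting each predictor into intervals and letting a linear classifier's weights flag the critical ones), the sketch below assumes scikit-learn's discretizer and linear SVM.

    # Illustration of the idea only (the paper solves a Linear Programming problem
    # with column generation): bin each predictor into intervals, one-hot encode the
    # intervals, and fit a linear SVM whose large-magnitude weights indicate which
    # intervals are critical for the classification.
    from sklearn.datasets import load_breast_cancer
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import KBinsDiscretizer
    from sklearn.svm import LinearSVC

    X, y = load_breast_cancer(return_X_y=True)

    binarize = KBinsDiscretizer(n_bins=5, encode="onehot-dense", strategy="quantile")
    clf = make_pipeline(binarize, LinearSVC(C=1.0, max_iter=10_000)).fit(X, y)

    # One coefficient per (feature, interval) indicator; large absolute values
    # point to the value intervals that matter most for the classifier.
    weights = clf.named_steps["linearsvc"].coef_.ravel()
    print("interval indicators:", weights.size)
    print("largest-magnitude weights:", sorted(abs(weights))[-5:])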