Conditional probability estimation
This paper studies an aspect of the maximum-likelihood estimation of conditional probability distributions that seems to have been overlooked in the literature on Bayesian networks: the information conveyed by the conditioning event should be included in the likelihood function as well.
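One way to make this concrete (our reading; the notation below is standard and not taken from the paper): for data pairs $(x_i, y_i)$, maximum likelihood usually maximizes only the conditional factor, but the conditioning events $x_i$ are data too.

```latex
% Conditional vs. full likelihood for pairs (x_i, y_i), i = 1, ..., n.
\[
  L_{\mathrm{cond}}(\theta) = \prod_{i=1}^{n} p(y_i \mid x_i; \theta),
  \qquad
  L_{\mathrm{full}}(\theta) = \prod_{i=1}^{n} p(y_i \mid x_i; \theta)\, p(x_i; \theta).
\]
% Dropping the p(x_i; theta) factors is harmless only when they do not
% depend on theta, which the abstract suggests is not guaranteed in
% Bayesian-network parameter estimation.
```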
Approximating Likelihood Ratios with Calibrated Discriminative Classifiers
In many fields of science, generalized likelihood ratio tests are established tools for statistical inference. At the same time, it has become increasingly common that a simulator (or generative model) is used to describe complex processes that tie parameters of an underlying theory and measurement apparatus to high-dimensional observations. However, simulators often do not provide a way to evaluate the likelihood function for a given observation, which motivates a new class of likelihood-free inference algorithms. In this paper, we show that likelihood ratios are invariant under a specific class of dimensionality reduction maps. As a direct consequence, we show that discriminative classifiers can be used to approximate the generalized likelihood ratio statistic when only a generative model for the data is available. This leads to a new machine learning-based approach to likelihood-free inference that is complementary to Approximate Bayesian Computation and does not require a prior on the model parameters. Experimental results on artificial problems with known exact likelihoods illustrate the potential of the proposed method.
Comment: 35 pages, 5 figures
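The classifier-based ratio trick this abstract describes can be made concrete in a few lines. Below is a minimal sketch, assuming a hypothetical simulator `simulate(theta, n)` that returns an array of samples; the logistic model and the calibration setup are illustrative choices, not the paper's exact pipeline:

```python
import numpy as np
from sklearn.calibration import CalibratedClassifierCV
from sklearn.linear_model import LogisticRegression

def likelihood_ratio_estimator(simulate, theta0, theta1, n=10_000):
    """Approximate r(x) = p(x|theta0) / p(x|theta1) with a calibrated
    classifier trained to separate samples drawn at the two parameter
    points. `simulate(theta, n)` is a hypothetical stand-in for the
    user's generative model."""
    X = np.vstack([simulate(theta0, n), simulate(theta1, n)])
    y = np.concatenate([np.ones(n), np.zeros(n)])  # 1 = theta0, 0 = theta1

    # Calibration matters: s/(1 - s) equals the likelihood ratio only
    # if s(x) is a good estimate of p(y=1 | x).
    clf = CalibratedClassifierCV(LogisticRegression(max_iter=1000), cv=5)
    clf.fit(X, y)

    def ratio(x):
        s = clf.predict_proba(np.atleast_2d(x))[:, 1]
        return s / (1.0 - s)

    return ratio
```

With equal sample counts from each parameter point, p(y=1 | x) = p(x|theta0) / (p(x|theta0) + p(x|theta1)), so s/(1 - s) recovers the ratio.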
The Overlooked Potential of Generalized Linear Models in Astronomy - I: Binomial Regression
Revealing hidden patterns in astronomical data is often the path to fundamental scientific breakthroughs; meanwhile, the complexity of scientific inquiry increases as more subtle relationships are sought. Contemporary data analysis problems often elude the capabilities of classical statistical techniques, suggesting the use of cutting-edge statistical methods. In this light, astronomers have overlooked a whole family of statistical techniques for exploratory data analysis and robust regression, the so-called Generalized Linear Models (GLMs). In this paper -- the first in a series aimed at illustrating the power of these methods in astronomical applications -- we elucidate the potential of a particular class of GLMs for handling binary/binomial data, the so-called logit and probit regression techniques, from both a maximum likelihood and a Bayesian perspective. As a case in point, we present the use of these GLMs to explore the conditions of star formation activity and metal enrichment in primordial minihaloes from cosmological hydro-simulations including detailed chemistry, gas physics, and stellar feedback. We predict that for a dark mini-halo of given metallicity, an increase in the gas molecular fraction increases the probability of star formation occurrence by a factor of 75%. Finally, we highlight the use of receiver operating characteristic curves as a diagnostic for binary classifiers, and ultimately we use these to demonstrate the competitive predictive performance of GLMs against the popular technique of artificial neural networks.
Comment: 20 pages, 10 figures, 3 tables, accepted for publication in Astronomy and Computing
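As an illustration of the logit/probit machinery the abstract refers to, here is a minimal sketch using synthetic data in place of the simulation catalogue (the single predictor standing in for the gas molecular fraction is invented for the example):

```python
import numpy as np
import statsmodels.api as sm
from sklearn.metrics import roc_auc_score

rng = np.random.default_rng(0)

# Synthetic stand-in for the simulation catalogue: one predictor
# (playing the role of the gas molecular fraction) and a binary
# outcome (star formation occurred or not).
x = rng.uniform(0.0, 1.0, 500)
y = rng.binomial(1, 1.0 / (1.0 + np.exp(-(4.0 * x - 2.0))))

X = sm.add_constant(x)

# Binomial GLMs fitted by maximum likelihood: logit and probit links.
logit = sm.GLM(y, X, family=sm.families.Binomial()).fit()  # logit is default
probit = sm.GLM(y, X,
                family=sm.families.Binomial(sm.families.links.Probit())).fit()

# ROC AUC as the kind of binary-classifier diagnostic the abstract highlights.
print("logit  AUC:", roc_auc_score(y, logit.predict(X)))
print("probit AUC:", roc_auc_score(y, probit.predict(X)))
```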
Improving Accuracy and Performance of Customer Churn Prediction Using Feature Reduction Algorithms
Prediction of customer churn is one of the most essential activities in Customer Relationship Management (CRM). However, state-of-the-art customer churn prediction approaches focus only on classifier selection to improve the accuracy and performance of churn prediction, and rarely contemplate feature reduction algorithms. Furthermore, numerous attributes contribute to customer churn, and it is crucial to determine the most substantial features in order to achieve the highest prediction accuracy and to improve prediction performance. Feature reduction decreases the dimensionality of the data, can allow learning algorithms to run faster and more effectively, and can produce predictive models that deliver the highest rate of accuracy. In this research, we investigated two feature reduction algorithms, Correlation-based Feature Selection (CFS) and Information Gain (IG), and built classification models based on three classifiers: Bayes Net, Simple Logistic, and Decision Table. Experimental results demonstrate that the performance of the classifiers improves with the application of feature reduction to the customer churn data set. The CFS feature reduction algorithm with the Decision Table classifier yields the highest accuracy of 92.08% and the lowest RMSE of 0.2554. This study recommends the use of feature reduction algorithms in the context of CRM churn prediction to improve the accuracy and performance of customer churn prediction.
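A minimal sketch of the pattern the abstract describes, on synthetic data: scikit-learn's `mutual_info_classif` stands in for Information Gain and a logistic model stands in for Simple Logistic (CFS and Decision Table have no direct scikit-learn equivalent):

```python
from sklearn.datasets import make_classification
from sklearn.feature_selection import SelectKBest, mutual_info_classif
from sklearn.linear_model import LogisticRegression
from sklearn.model_selection import cross_val_score
from sklearn.pipeline import make_pipeline

# Synthetic stand-in for a churn data set: many attributes, few informative.
X, y = make_classification(n_samples=2000, n_features=40, n_informative=8,
                           random_state=0)

for k in (40, 8):  # all attributes vs. an information-gain-reduced subset
    model = make_pipeline(
        SelectKBest(mutual_info_classif, k=k),  # IG-style feature ranking
        LogisticRegression(max_iter=1000),      # stands in for Simple Logistic
    )
    acc = cross_val_score(model, X, y, cv=5).mean()
    print(f"{k:2d} features -> cross-validated accuracy {acc:.3f}")
```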
Using online linear classifiers to filter spam Emails
The performance of two online linear classifiers, the Perceptron and Littlestone's Winnow, is explored on two anti-spam filtering benchmark corpora, PU1 and Ling-Spam. We study performance for varying numbers of features, along with three different feature selection methods: Information Gain (IG), Document Frequency (DF), and Odds Ratio. The size of the training set and the number of training iterations are also investigated for both classifiers. The experimental results show that both the Perceptron and Winnow perform much better when using IG or DF than when using Odds Ratio. It is further demonstrated that when using IG or DF, the classifiers are insensitive to the number of features and the number of training iterations, and not greatly sensitive to the size of the training set. Winnow is shown to slightly outperform the Perceptron. It is also demonstrated that both of these online classifiers perform much better than a standard Naïve Bayes method. The theoretical and implementation complexity of these two classifiers is very low, and they are very easily updated adaptively. They outperform most published results while being significantly easier to train and adapt. The analysis and promising experimental results indicate that the Perceptron and Winnow are two very competitive classifiers for anti-spam filtering.
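For concreteness, here is a minimal sketch of the two online update rules the abstract compares (the Winnow2 variant for the multiplicative rule); the toy data and feature extraction are invented for the example:

```python
import numpy as np

def perceptron_update(w, x, y, lr=1.0):
    """Perceptron: additive, mistake-driven update with y in {-1, +1}."""
    if y * np.dot(w, x) <= 0:            # misclassified (or on the boundary)
        w = w + lr * y * x
    return w

def winnow_update(w, x, y, alpha=2.0, theta=None):
    """Winnow: multiplicative update with binary features x and y in {0, 1}.
    Promotes active features on false negatives, demotes them on false
    positives (the Winnow2 variant)."""
    if theta is None:
        theta = len(x) / 2.0             # conventional threshold
    y_hat = int(np.dot(w, x) >= theta)
    if y_hat != y:
        w = w * alpha ** ((y - y_hat) * x)
    return w

# Usage sketch on toy binary bag-of-words features.
rng = np.random.default_rng(0)
n = 100
w = np.ones(n)                           # Winnow starts from all-ones weights
for _ in range(1000):
    x = rng.integers(0, 2, n)
    y = int(x[:10].sum() > 5)            # toy target: first 10 "words" decide
    w = winnow_update(w, x, y)
```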
The Theory Behind Overfitting, Cross Validation, Regularization, Bagging, and Boosting: Tutorial
In this tutorial paper, we first define the mean squared error, variance, covariance, and bias of both random variables and classification/prediction models. Then, we formulate the true and generalization errors of the model for both training and validation/test instances, where we make use of Stein's Unbiased Risk Estimator (SURE). We define overfitting, underfitting, and generalization using the obtained true and generalization errors. We introduce cross validation and two well-known examples, $k$-fold and leave-one-out cross validation. We briefly introduce generalized cross validation and then move on to regularization, where we use SURE again. We work on both $\ell_2$ and $\ell_1$ norm regularizations. Then, we show that bootstrap aggregating (bagging) reduces the variance of estimation. Boosting, specifically AdaBoost, is introduced and explained as both an additive model and a maximum margin model, i.e., a Support Vector Machine (SVM). The upper bound on the generalization error of boosting is also provided to show why boosting prevents overfitting. As examples of regularization, the theory of ridge and lasso regressions, weight decay, noise injection to the input/weights, and early stopping are explained. Random forest, dropout, histogram of oriented gradients, and the single shot multi-box detector are explained as examples of bagging in machine learning and computer vision. Finally, boosting tree and SVM models are mentioned as examples of boosting.
Comment: 23 pages, 9 figures
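As a small illustration of two of the tutorial's themes, the sketch below estimates generalization error with $k$-fold cross validation and shows bagging's variance reduction on a high-variance base learner (synthetic data and illustrative settings, not the tutorial's experiments):

```python
from sklearn.datasets import make_regression
from sklearn.ensemble import BaggingRegressor
from sklearn.model_selection import KFold, cross_val_score
from sklearn.tree import DecisionTreeRegressor

X, y = make_regression(n_samples=500, n_features=10, noise=10.0,
                       random_state=0)

# k-fold cross validation estimates the generalization error of each model.
cv = KFold(n_splits=5, shuffle=True, random_state=0)
tree = DecisionTreeRegressor(random_state=0)          # high-variance learner
bag = BaggingRegressor(tree, n_estimators=100, random_state=0)

for name, model in [("single tree", tree), ("bagged trees", bag)]:
    mse = -cross_val_score(model, X, y, cv=cv,
                           scoring="neg_mean_squared_error").mean()
    print(f"{name}: cross-validated MSE = {mse:.1f}")
```

The bagged ensemble should show a markedly lower cross-validated error, consistent with the variance-reduction argument in the tutorial.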