Efficient Intrusion Detection Model Using Ensemble Methods
Ensemble methods train multiple learners to solve classification or regression problems: instead of constructing a single learner from the training data, as ordinary learning approaches do, they construct a set of learners and combine them. Boosting is one of the most important recent developments in classification methodology. It belongs to a family of algorithms that can convert a group of weak learners into a strong learner. Boosting works sequentially: each new classifier is trained on sample weights updated from the previous round, and the sequence of classifiers is combined by weighted majority voting. By combining weak models into a powerful one, boosting reduces the bias of the combined model. AdaBoost is the most influential boosting algorithm; it efficiently combines weak learners into a strong classifier that can classify the training data with better accuracy. AdaBoost differs from existing boosting methods in detection accuracy, error-cost minimization, computational time, and detection rate. Detection accuracy and computational cost are the two main metrics used to analyze the performance of the AdaBoost classification algorithm. The simulation results show that AdaBoost achieves higher detection accuracy with less computational time and lower cost than a single classifier. We propose a predictive model that classifies traffic into a normal class and an attack class, together with an online inference engine that either allows or denies access to the network.
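The sequential reweighting and weighted majority voting described above can be sketched as follows. This is a minimal from-scratch AdaBoost with decision stumps as weak learners, given for illustration only; the paper's intrusion-detection model (its dataset, features, and inference engine) is not shown, and all function names here are our own:

```python
import numpy as np

def stump_predict(stump, X):
    """Predict +/-1 with a one-feature threshold rule."""
    feat, thresh, sign = stump
    return sign * np.where(X[:, feat] <= thresh, 1.0, -1.0)

def best_stump(X, y, w):
    """Exhaustively pick the stump with the lowest weighted error."""
    best, best_err = None, np.inf
    for feat in range(X.shape[1]):
        for thresh in np.unique(X[:, feat]):
            for sign in (1.0, -1.0):
                pred = sign * np.where(X[:, feat] <= thresh, 1.0, -1.0)
                err = w[pred != y].sum()
                if err < best_err:
                    best, best_err = (feat, thresh, sign), err
    return best

def train_adaboost(X, y, n_rounds=20):
    """AdaBoost: reweight the samples each round and fit one stump per round."""
    n = len(y)
    w = np.full(n, 1.0 / n)                      # start with uniform sample weights
    stumps, alphas = [], []
    for _ in range(n_rounds):
        stump = best_stump(X, y, w)
        pred = stump_predict(stump, X)
        err = np.clip(w[pred != y].sum(), 1e-10, 1 - 1e-10)
        alpha = 0.5 * np.log((1 - err) / err)    # weight of this weak learner
        w = w * np.exp(-alpha * y * pred)        # up-weight misclassified samples
        w /= w.sum()
        stumps.append(stump)
        alphas.append(alpha)
    return stumps, alphas

def adaboost_predict(stumps, alphas, X):
    """Combine the weak learners by weighted majority vote."""
    votes = sum(a * stump_predict(s, X) for s, a in zip(stumps, alphas))
    return np.where(votes >= 0, 1.0, -1.0)
```

On a toy two-class problem the boosted ensemble typically reaches a much lower training error than any single stump, which is the bias reduction the abstract refers to.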
Boosting Simple Learners
Boosting is a celebrated machine learning approach which is based on the idea of combining weak and moderately inaccurate hypotheses to a strong and accurate one. We study boosting under the assumption that the weak hypotheses belong to a class of bounded capacity. This assumption is inspired by the common convention that weak hypotheses are "rules-of-thumb" from an "easy-to-learn class" (Schapire and Freund '12, Shalev-Shwartz and Ben-David '14). Formally, we assume the class of weak hypotheses has a bounded VC dimension. We focus on two main questions: (i) Oracle Complexity: How many weak hypotheses are needed in order to produce an accurate hypothesis? We design a novel boosting algorithm and demonstrate that it circumvents a classical lower bound by Freund and Schapire ('95, '12). Whereas the lower bound shows that $\Omega(1/\gamma^2)$ weak hypotheses with $\gamma$-margin are sometimes necessary, our new method requires only $\tilde{O}(1/\gamma)$ weak hypotheses, provided that they belong to a class of bounded VC dimension. Unlike previous boosting algorithms, which aggregate the weak hypotheses by majority votes, the new boosting algorithm uses more complex ("deeper") aggregation rules. We complement this result by showing that complex aggregation rules are in fact necessary to circumvent the aforementioned lower bound. (ii) Expressivity: Which tasks can be learned by boosting weak hypotheses from a bounded VC class? Can complex concepts that are "far away" from the class be learned? Towards answering the first question we identify a combinatorial-geometric parameter which captures the expressivity of base classes in boosting. As a corollary we provide an affirmative answer to the second question for many well-studied classes, including half-spaces and decision stumps. Along the way, we establish and exploit connections with Discrepancy Theory.
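To see what a "deeper" aggregation rule means structurally, here is a toy contrast of our own (not the paper's algorithm): a two-level "majority of majorities" over grouped weak predictions can output a verdict that the flat majority vote over the same predictions does not.

```python
def majority(preds):
    """Flat aggregation: sign of the sum of +/-1 weak predictions."""
    return 1 if sum(preds) >= 0 else -1

def majority_of_majorities(groups):
    """A 'deeper' two-level rule: take the majority within each group of
    weak predictions, then a majority over the group verdicts."""
    return majority([majority(g) for g in groups])
```

For example, with groups [[1, -1, -1], [1, -1, -1], [1, 1, 1]] the flat vote over all nine predictions is +1, while the two-level rule outputs -1, since two of the three group verdicts are negative.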
Private Learning Implies Online Learning: An Efficient Reduction
We study the relationship between the notions of differentially private learning and online learning in games. Several recent works have shown that differentially private learning implies online learning, but an open problem of Neel, Roth, and Wu (2018) asks whether this implication is efficient. Specifically, does an efficient differentially private learner imply an efficient online learner? In this paper we resolve this open question in the context of pure differential privacy. We derive an efficient black-box reduction from differentially private learning to online learning from expert advice.
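"Online learning from expert advice", the target model of this reduction, is typically instantiated by multiplicative-weights (Hedge-style) algorithms. The following is a generic Hedge sketch, not the paper's reduction; the step size eta and the function name are our own assumptions:

```python
import numpy as np

def hedge(loss_rounds, eta=0.5):
    """Learn from expert advice: keep one weight per expert, play the
    normalized weight vector each round, and exponentially down-weight
    experts in proportion to their observed loss."""
    n_experts = len(loss_rounds[0])
    w = np.ones(n_experts)
    cumulative_loss = 0.0
    for losses in loss_rounds:
        losses = np.asarray(losses, dtype=float)  # per-expert losses in [0, 1]
        p = w / w.sum()                           # distribution played this round
        cumulative_loss += float(p @ losses)      # learner's expected loss
        w = w * np.exp(-eta * losses)             # multiplicative-weights update
    return cumulative_loss, w / w.sum()
```

If one expert is consistently better, the played distribution concentrates on it and the learner's cumulative loss stays within a small regret of that best expert.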
A Confidence-Based Approach for Balancing Fairness and Accuracy
We study three classical machine learning algorithms in the context of
algorithmic fairness: adaptive boosting, support vector machines, and logistic
regression. Our goal is to maintain the high accuracy of these learning
algorithms while reducing the degree to which they discriminate against
individuals because of their membership in a protected group.
Our first contribution is a method for achieving fairness by shifting the
decision boundary for the protected group. The method is based on the theory of
margins for boosting. Our method performs comparably to or outperforms previous
algorithms in the fairness literature in terms of accuracy and low
discrimination, while simultaneously allowing for a fast and transparent
quantification of the trade-off between bias and error.
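The boundary-shift idea can be illustrated with a simplified generic sketch: add a constant offset to the protected group's real-valued margin scores before thresholding, and search for the smallest shift that reaches a target positive rate. This is our own illustration, not the paper's margin-theoretic procedure; the names shifted_predict and choose_shift and the grid search are hypothetical.

```python
import numpy as np

def shifted_predict(scores, group, shift):
    """Shift the decision boundary for the protected group: add `shift`
    to that group's margin scores, then threshold at zero."""
    scores = np.asarray(scores, dtype=float)
    adjusted = scores + shift * np.asarray(group)  # group: 1 if protected, else 0
    return np.where(adjusted >= 0, 1, -1)

def choose_shift(scores, group, target_rate, grid):
    """Return the first shift in `grid` whose protected-group positive
    rate reaches the target rate (last grid value as a fallback)."""
    protected = np.asarray(group).astype(bool)
    for s in grid:
        pos_rate = (shifted_predict(scores, group, s)[protected] == 1).mean()
        if pos_rate >= target_rate:
            return s
    return grid[-1]
```

The shift directly and transparently trades classification error for a higher positive rate on the protected group, which is the kind of fast bias-error quantification the abstract describes.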
Our second contribution addresses the shortcomings of the bias-error
trade-off studied in most of the algorithmic fairness literature. We
demonstrate that even hopelessly naive modifications of a biased algorithm,
which cannot be reasonably said to be fair, can still achieve low bias and high
accuracy. To help distinguish between these naive algorithms and more sensible ones, we propose a new measure of fairness, called resilience to random bias (RRB). We demonstrate that RRB distinguishes well between our naive and sensible fairness algorithms. RRB, together with bias and accuracy, provides a more complete picture of the fairness of an algorithm.
- …