14 research outputs found

    Network Anomaly Classification by Support Vector Classifiers Ensemble and Non-linear Projection Techniques

    Network anomaly detection is currently a challenge due to the growing number of different attacks and potential attackers. Intrusion detection systems aim to detect misuse or network anomalies in order to block ports or connections, whereas firewalls act according to a predefined set of rules. However, identifying the specific anomaly provides valuable information about the attacker that may be used to further protect the system, or to react accordingly. In this paper we present an intrusion detection technique that uses an ensemble of support vector classifiers together with dimensionality reduction techniques to generate a set of discriminant features. The results obtained on the NSL-KDD dataset outperform previously reported classification rates.
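The combination step of such an ensemble can be illustrated with a minimal majority-vote sketch. The base classifiers, class labels, and predictions below are hypothetical stand-ins (the paper's actual support vector classifiers and NSL-KDD attack categories are not reproduced here):

```python
from collections import Counter

def ensemble_vote(predictions):
    """Majority vote over per-classifier label predictions.

    predictions: list of lists, one inner list per base classifier,
    each containing the predicted class label for every sample.
    """
    n_samples = len(predictions[0])
    voted = []
    for i in range(n_samples):
        votes = Counter(clf[i] for clf in predictions)
        voted.append(votes.most_common(1)[0][0])
    return voted

# Three hypothetical base classifiers labelling five connections.
preds = [
    ["normal", "dos", "probe", "normal", "r2l"],
    ["normal", "dos", "normal", "normal", "r2l"],
    ["dos",    "dos", "probe", "normal", "u2r"],
]
print(ensemble_vote(preds))  # -> ['normal', 'dos', 'probe', 'normal', 'r2l']
```

In practice each inner list would come from a support vector classifier trained on a dimensionality-reduced feature set.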

    Maximizing upgrading and downgrading margins for ordinal regression

    In ordinal regression, a score function and threshold values are sought to classify a set of objects into a set of ranked classes. Classifying an individual in a class with higher (respectively lower) rank than its actual rank is called an upgrading (respectively downgrading) error. Since upgrading and downgrading errors may not have the same importance, they should be considered as two different criteria to be taken into account when measuring the quality of a classifier. In Support Vector Machines, margin maximization is used as an effective and computationally tractable surrogate of the minimization of misclassification errors. As an extension, we consider in this paper the maximization of upgrading and downgrading margins as a surrogate of the minimization of upgrading and downgrading errors, and we address the biobjective problem of finding a classifier that maximizes the two margins simultaneously. The whole set of Pareto-optimal solutions of this biobjective problem is described as translations of the optimal solutions of a scalar optimization problem. For the most popular case, in which the Euclidean norm is considered, the scalar problem has a unique solution, so all the Pareto-optimal solutions of the biobjective problem are translations of each other. Hence, the Pareto-optimal solutions can easily be provided to the analyst, who, after inspecting the misclassification errors caused, should choose the most convenient classifier at a later stage. This analysis provides a theoretical foundation for a popular strategy among practitioners, based on the so-called ROC curve, which is shown here to equal the set of Pareto-optimal solutions obtained by simultaneously maximizing the downgrading and upgrading margins.
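The two error types can be made concrete with a small sketch: a score function plus thresholds assigns ranks, and errors are split by direction. The scores, classes, and thresholds below are invented for illustration and do not come from the paper:

```python
def classify(score, thresholds):
    """Assign an ordinal class: the index of the first threshold
    the score falls below, or the top class if it exceeds them all."""
    for k, t in enumerate(thresholds):
        if score < t:
            return k
    return len(thresholds)

def up_down_errors(scores, true_classes, thresholds):
    """Count upgrading (predicted rank > actual) and downgrading
    (predicted rank < actual) errors separately."""
    up = down = 0
    for s, y in zip(scores, true_classes):
        pred = classify(s, thresholds)
        if pred > y:
            up += 1
        elif pred < y:
            down += 1
    return up, down

scores = [0.2, 1.4, 2.9, 0.8, 2.1]
true_classes = [0, 1, 2, 1, 1]
print(up_down_errors(scores, true_classes, thresholds=[1.0, 2.0]))  # -> (1, 1)
```

Shifting the thresholds (equivalently, translating the classifier) trades one error count against the other, which is the tradeoff the Pareto-optimal set describes.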

    G92-1066 Agricultural Retirement Packages

    This NebGuide discusses Individual Retirement Accounts (IRAs), Simplified Employee Pension (SEP) plans and Keogh plans as employee benefits provided by agricultural employers. Various retirement packages allow pre-tax dollars to be saved until retirement age. The most familiar and easiest to use retirement account is the IRA; other options are SEP and Keogh plans.
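The value of saving pre-tax dollars over many years comes from compounding, which a short arithmetic sketch can illustrate. The contribution amount, return rate, and horizon below are hypothetical round numbers, not figures from the NebGuide, and this is not tax advice:

```python
def future_value(annual_contribution, rate, years):
    """Future value of level annual contributions, each compounding
    at `rate` from the year it is deposited."""
    total = 0.0
    for _ in range(years):
        total = (total + annual_contribution) * (1 + rate)
    return total

# Hypothetical example: $2,000 per year of pre-tax dollars,
# 6% annual return, 30 years until retirement.
print(round(future_value(2000, 0.06, 30), 2))
```

Because contributions are pre-tax, the full amount compounds each year; tax is deferred until withdrawal.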

    Evaluation of Error-Sensitive Attributes

    Numerous attribute selection frameworks have been developed to improve performance and results in machine learning and data classification (Guyon & Elisseeff 2003; Saeys, Inza & Larranaga 2007). The majority of this effort has focused on performance and cost factors, with the primary aim of examining and enhancing the logic and sophistication of the underlying components and methods of specific classification models, such as the wrapper, filter and cluster algorithms for feature selection that work as a data pre-processing step or are embedded as an integral part of a specific classification process. Taking a different approach, our research studies the relationship between classification errors and data attributes not before or during, but after classification, in order to evaluate the risk levels of attributes and identify those that may be more prone to errors, based on such a post-classification analysis and a proposed attribute-risk evaluation routine. This research may help develop error reduction measures and investigate the specific relationship between attributes and errors more efficiently and effectively. Initial experiments have shown some supportive results, and the unsupportive results can also be explained by a hypothesis extended from this evaluation proposal.
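One simple form of such post-classification analysis is to compare error rates conditioned on each attribute's values. The risk score below (max minus min conditional error rate per attribute) and the sample data are a hypothetical illustration, not the paper's actual evaluation routine:

```python
def attribute_error_spread(samples, errors):
    """Score each attribute by how strongly its values separate
    misclassified samples: the spread (max - min) of the error
    rate conditional on each of the attribute's values.

    samples: list of dicts mapping attribute name -> value
    errors:  parallel list of booleans (True = misclassified)
    """
    risk = {}
    for a in samples[0].keys():
        by_value = {}
        for sample, err in zip(samples, errors):
            v = sample[a]
            hits, total = by_value.get(v, (0, 0))
            by_value[v] = (hits + err, total + 1)
        rates = [h / t for h, t in by_value.values()]
        risk[a] = max(rates) - min(rates)
    return risk

samples = [
    {"proto": "tcp", "flag": "SF"},
    {"proto": "tcp", "flag": "S0"},
    {"proto": "udp", "flag": "SF"},
    {"proto": "udp", "flag": "S0"},
]
errors = [False, True, False, True]  # both "S0" rows misclassified
print(attribute_error_spread(samples, errors))  # -> {'proto': 0.0, 'flag': 1.0}
```

Here `flag` gets the maximal risk score because its values perfectly separate the errors, while `proto` carries no error signal at all.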

    Machine Learning Techniques—Reductions Between Prediction Quality Metrics

    Machine learning involves optimizing a loss function on unlabeled data points given examples of labeled data points, where the loss function measures the performance of a learning algorithm. We give an overview of techniques, called reductions, for converting a problem of minimizing one loss function into a problem of minimizing another, simpler loss function. This tutorial discusses how to create robust reductions that perform well in practice. The reductions discussed here can be used to solve any supervised learning problem with a standard binary classification or regression algorithm available in any machine learning toolkit. We also discuss common design flaws in folklore reductions.
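A classic example of such a reduction is one-vs-all, which converts multiclass classification into one binary problem per class. The sketch below shows the reduction wrapper around an arbitrary binary learner; the interface, the toy 1-D learner, and the data are assumptions for illustration, not a specific construction from the tutorial:

```python
def one_vs_all_train(train_binary, xs, ys):
    """Reduce multiclass learning to binary: train one binary
    problem per class, labelling that class +1 and all others -1.

    train_binary: any binary learner that returns a scoring function
    (a hypothetical interface chosen for this sketch).
    """
    labels = sorted(set(ys))
    scorers = {c: train_binary(xs, [1 if y == c else -1 for y in ys])
               for c in labels}
    def predict(x):
        # Predict the class whose binary scorer is most confident.
        return max(scorers, key=lambda c: scorers[c](x))
    return predict

# A toy binary "learner" for 1-D inputs: score a point by its
# (negated) distance to the mean of the positive examples.
def toy_binary(xs, ys):
    pos = [x for x, y in zip(xs, ys) if y == 1]
    mean = sum(pos) / len(pos)
    return lambda x: -abs(x - mean)

xs = [0.1, 0.2, 1.0, 1.1, 2.0, 2.2]
ys = ["a", "a", "b", "b", "c", "c"]
predict = one_vs_all_train(toy_binary, xs, ys)
print(predict(1.05), predict(2.1))  # -> b c
```

The point of the reduction is that `toy_binary` could be swapped for any off-the-shelf binary classifier without changing the multiclass wrapper.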