5,027 research outputs found

    An Intelligent Agent Based Intrusion Detection System Using Fuzzy Rough Set Based Outlier Detection

    Existing Intrusion Detection Systems (IDS), whether based on misuse detection or anomaly detection, are generally incapable of detecting new types of attacks, and the intrusions they do detect come with a high false alarm rate. There is therefore an urgent need for an IDS with a very high detection rate and a low false alarm rate. To meet this need we propose a new intelligent agent based IDS using fuzzy rough set based outlier detection and a fuzzy rough set based SVM. The proposed model introduces two intelligent agents: a feature selection agent that selects the required feature set using fuzzy rough sets, and a decision making agent that makes the final decision. We further introduce a fuzzy rough set based outlier detection algorithm to detect outliers, and adopt a fuzzy rough set based SVM to classify and detect anomalies efficiently. Experiments on the KDD Cup 99 data set show that the proposed intelligent agent based model improves overall accuracy and reduces the false alarm rate.
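
    A minimal sketch of the detect-then-classify pipeline this abstract describes, assuming scikit-learn; mutual-information feature selection and a One-Class SVM stand in for the fuzzy rough set based feature selection agent and outlier detector, and X_train, y_train are assumed to be KDD Cup 99-style feature and label arrays.

    # Illustrative two-stage intrusion detection pipeline (not the paper's code):
    # feature selection, outlier screening, then supervised classification.
    import numpy as np
    from sklearn.feature_selection import SelectKBest, mutual_info_classif
    from sklearn.pipeline import make_pipeline
    from sklearn.preprocessing import StandardScaler
    from sklearn.svm import OneClassSVM, SVC

    def build_ids(X_train, y_train, k_features=10):
        # "Feature selection agent": keep the k most informative features.
        selector = SelectKBest(mutual_info_classif, k=k_features).fit(X_train, y_train)
        X_sel = selector.transform(X_train)
        # Outlier screen: flag points that sit far from the bulk of the training data.
        scaler = StandardScaler().fit(X_sel)
        outlier_detector = OneClassSVM(nu=0.05).fit(scaler.transform(X_sel))
        # "Decision making agent": a supervised classifier on the selected features.
        classifier = make_pipeline(StandardScaler(), SVC(kernel="rbf")).fit(X_sel, y_train)
        return selector, scaler, outlier_detector, classifier

    def detect(selector, scaler, outlier_detector, classifier, X):
        X_sel = selector.transform(X)
        is_outlier = outlier_detector.predict(scaler.transform(X_sel)) == -1
        labels = classifier.predict(X_sel)
        # Final decision: anything flagged by either stage is reported as an attack.
        return np.where(is_outlier, "anomaly", labels)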

    A Modified Distance Dynamics Model for Improvement of Community Detection

    © 2018 IEEE. Community detection is a key technique for identifying the intrinsic community structures of complex networks. The distance dynamics model has been proven effective in finding communities with arbitrary size and shape and identifying outliers. However, to simulate distance dynamics, the model requires manual parameter specification and is sensitive to the cohesion threshold parameter, which is difficult to determine. Furthermore, it has difficulty handling rough outliers and ignores hubs (nodes that bridge communities). In this paper, we propose a robust distance dynamics model, namely, Attractor++, which uses a dynamic membership degree. In Attractor++, the dynamic membership degree is used to determine the influence of exclusive neighbors on the distance instead of setting the cohesion threshold. In addition, considering its inefficiency and low accuracy in handling outliers and identifying hubs, we design an outlier optimization model that is based on triangle adjacency. By using optimization rules, a postprocessing method further judges whether a singleton node should be merged into the same community as its triangles or regarded as a hub or an outlier. Extensive experiments on both real-world and synthetic networks demonstrate that our algorithm more accurately identifies nodes that have special roles (hubs and outliers) and more effectively identifies community structures.
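
    As a rough illustration of the distance dynamics starting point, the sketch below (Python with networkx, not the authors' implementation) initializes edge distances from the Jaccard similarity of the endpoints' neighborhoods and applies the kind of triangle adjacency test that Attractor++ uses to post-process singleton nodes; the interaction dynamics and the dynamic membership degree themselves are omitted.

    import networkx as nx

    def initial_distances(G):
        # Distance dynamics models start from d(u, v) = 1 - Jaccard similarity of the
        # closed neighborhoods, so tightly knit pairs start close together.
        d = {}
        for u, v in G.edges():
            Nu, Nv = set(G[u]) | {u}, set(G[v]) | {v}
            d[(u, v)] = 1.0 - len(Nu & Nv) / len(Nu | Nv)
        return d

    def triangle_neighbours(G, node):
        # Neighbours that close at least one triangle with `node`; a post-processing
        # rule in the spirit of Attractor++ would merge a singleton node into the
        # community its triangles point to, or else label it a hub or an outlier.
        result = set()
        for u in G[node]:
            for v in G[node]:
                if u < v and G.has_edge(u, v):
                    result.update({u, v})
        return result

    G = nx.karate_club_graph()
    print(list(initial_distances(G).items())[:3])
    print(triangle_neighbours(G, 0))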

    Associations of Dwarf Galaxies

    The Hubble Space Telescope Advanced Camera for Surveys has been used to determine accurate distances to 20 galaxies from measurements of the luminosity of the brightest red giant branch stars. Five associations of dwarf galaxies that had originally been identified based on strong correlations on the plane of the sky and in velocity are shown to be equally well correlated in distance. Two more associations with similar properties have been discovered. Another association is identified that is suggested to be unbound through tidal disruption. The associations have the spatial and kinematic properties expected of bound structures of 1-10 x 10^11 solar masses. However, these entities contain little light, with the consequence that mass-to-light ratios are in the range 100-1000 in solar units. Within a well surveyed volume extending to 3 Mpc, all but one known galaxy lies within one of the groups or associations that have been identified. Comment: 50 pages, 2 tables, 15 encapsulated figures, 1 (3 part) jpg figure. Submitted to the Astronomical Journal.
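
    For orientation only, the two numbers quoted above follow from standard relations; the sketch below assumes the common I-band tip-of-the-red-giant-branch calibration M_TRGB of about -4.05 and a representative association luminosity of about 10^9 solar luminosities, neither of which is taken from the paper.

    % Distance from the tip of the red giant branch (TRGB); M_TRGB ~ -4.05 in the
    % I band is an assumed calibration, not a value quoted in the paper.
    \[
      \mu = m_{\mathrm{TRGB}} - M_{\mathrm{TRGB}}, \qquad d = 10^{(\mu + 5)/5}\ \mathrm{pc}
    \]
    % e.g. an observed tip at m_TRGB = 23.0 gives mu ~ 27.05 and d ~ 2.6 Mpc.
    % Mass-to-light ratio of an association with ~10^{11}-10^{12} solar masses of
    % bound mass but only ~10^{9} solar luminosities of starlight:
    \[
      \frac{M}{L} \sim \frac{10^{11}\text{--}10^{12}\,M_{\odot}}{10^{9}\,L_{\odot}}
      \approx 100\text{--}1000 \ \text{(solar units)}
    \]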

    Robust PCA as Bilinear Decomposition with Outlier-Sparsity Regularization

    Principal component analysis (PCA) is widely used for dimensionality reduction, with well-documented merits in various applications involving high-dimensional data, including computer vision, preference measurement, and bioinformatics. In this context, the fresh look advocated here permeates benefits from variable selection and compressive sampling, to robustify PCA against outliers. A least-trimmed squares estimator of a low-rank bilinear factor analysis model is shown closely related to that obtained from an $\ell_0$-(pseudo)norm-regularized criterion encouraging sparsity in a matrix explicitly modeling the outliers. This connection suggests robust PCA schemes based on convex relaxation, which lead naturally to a family of robust estimators encompassing Huber's optimal M-class as a special case. Outliers are identified by tuning a regularization parameter, which amounts to controlling sparsity of the outlier matrix along the whole robustification path of (group) least-absolute shrinkage and selection operator (Lasso) solutions. Beyond its neat ties to robust statistics, the developed outlier-aware PCA framework is versatile to accommodate novel and scalable algorithms to: i) track the low-rank signal subspace robustly, as new data are acquired in real time; and ii) determine principal components robustly in (possibly) infinite-dimensional feature spaces. Synthetic and real data tests corroborate the effectiveness of the proposed robust PCA schemes, when used to identify aberrant responses in personality assessment surveys, as well as unveil communities in social networks, and intruders from video surveillance data. Comment: 30 pages, submitted to IEEE Transactions on Signal Processing.
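
    A compact numerical sketch of the outlier-sparsity idea, assuming numpy only: the data matrix is modeled as a low-rank part plus a row-sparse outlier matrix and estimated by alternating a truncated SVD with row-wise soft-thresholding (the group-Lasso step); the parameter values and stopping rule are illustrative rather than the paper's.

    # Illustrative outlier-aware robust PCA: model X ~ L + O with L low rank and
    # O row-sparse, solved by alternating a rank-r SVD for L with row-wise
    # (group) soft-thresholding for O.
    import numpy as np

    def robust_pca(X, rank=2, lam=1.0, n_iter=100):
        O = np.zeros_like(X)                      # outlier matrix (row sparse)
        for _ in range(n_iter):
            # L-step: best rank-r approximation of the outlier-cleaned data.
            U, s, Vt = np.linalg.svd(X - O, full_matrices=False)
            L = (U[:, :rank] * s[:rank]) @ Vt[:rank]
            # O-step: row-wise soft-thresholding of the residual, the group-Lasso
            # update that zeroes the rows deemed outlier-free.
            R = X - L
            norms = np.linalg.norm(R, axis=1, keepdims=True)
            shrink = np.maximum(1.0 - lam / np.maximum(norms, 1e-12), 0.0)
            O = shrink * R
        return L, O

    # Tiny demo: a rank-1 matrix with two corrupted rows.
    rng = np.random.default_rng(0)
    X = np.outer(rng.normal(size=50), rng.normal(size=8))
    X[[3, 17]] += 5.0                             # inject row outliers
    L, O = robust_pca(X, rank=1, lam=2.0)
    print(np.nonzero(np.linalg.norm(O, axis=1) > 1e-6)[0])   # rows with nonzero outlier part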

    Unsupervised Contact Learning for Humanoid Estimation and Control

    This work presents a method for contact state estimation using fuzzy clustering to learn contact probability for full, six-dimensional humanoid contacts. The data required for training come solely from proprioceptive sensors - end-effector contact wrench sensors and inertial measurement units (IMUs) - and the method is completely unsupervised. The resulting cluster means are used to efficiently compute the probability of contact in each of the six end-effector degrees of freedom (DoFs) independently. This clustering-based contact probability estimator is validated in a kinematics-based base state estimator in a simulation environment with realistic added sensor noise for locomotion over rough, low-friction terrain on which the robot is subject to foot slip and rotation. The proposed base state estimator, which utilizes these six DoF contact probability estimates, is shown to perform considerably better than one that determines kinematic contact constraints purely from measured normal force. Comment: Submitted to the IEEE International Conference on Robotics and Automation (ICRA) 2018.
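
    A small numpy-only sketch of the clustering step: a plain two-cluster fuzzy c-means on a simulated one-dimensional wrench signal stands in for the paper's estimator, and the membership degree of the "contact" cluster is read off as the per-sample contact probability; the feature construction and noise levels are assumptions.

    # Illustrative fuzzy c-means with two clusters ("contact" / "no contact") for
    # one end-effector DoF; the membership degree serves as a contact probability.
    import numpy as np

    def fuzzy_cmeans(X, c=2, m=2.0, n_iter=100, seed=0):
        rng = np.random.default_rng(seed)
        U = rng.dirichlet(np.ones(c), size=len(X))          # memberships, shape (N, c)
        for _ in range(n_iter):
            W = U ** m
            centers = (W.T @ X) / W.sum(axis=0)[:, None]    # weighted cluster means
            dist = np.linalg.norm(X[:, None, :] - centers[None], axis=2) + 1e-12
            U = 1.0 / (dist ** (2.0 / (m - 1.0)))           # standard FCM membership update
            U /= U.sum(axis=1, keepdims=True)
        return centers, U

    # Simulated wrench magnitude for one DoF: no-contact samples around 0 N,
    # contact samples around 40 N, both with sensor noise (assumed values).
    rng = np.random.default_rng(1)
    forces = np.concatenate([rng.normal(0, 2, 300), rng.normal(40, 5, 300)])
    centers, U = fuzzy_cmeans(forces[:, None])
    contact_cluster = int(np.argmax(centers[:, 0]))          # cluster with larger mean force
    p_contact = U[:, contact_cluster]                        # per-sample contact probability
    print(p_contact[:5], p_contact[-5:])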

    A Distributed Clustering Approach for Heterogeneous Environments Using Fuzzy Rough Set Theory

    The vast majority of data mining algorithms have been designed to work on centralized data; unfortunately, however, almost all of today's data sets are distributed both geographically and conceptually. Because of privacy concerns and computation cost, centralizing distributed data sets before analyzing them is impractical. In this paper, we present a framework for clustering distributed data that takes both privacy and computation cost into account: we remove uncertain instances and send only the labels of the remaining instances to the central location. To identify the uncertain instances, we develop a new instance weighting method based on fuzzy and rough set theory. Results on well-known data sets verify the effectiveness of the proposed method compared to previous work.
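
    As a rough illustration of the "discard uncertain instances, send only labels" idea, the sketch below (numpy and scikit-learn, not the authors' fuzzy rough weighting) clusters a site's data locally, scores how decisively each instance belongs to its nearest cluster, and transmits only the labels of the confident instances.

    # Illustrative local-site step for distributed clustering: cluster locally,
    # score each instance's certainty, and send only confident labels (never raw
    # data) to the central site. The margin between the nearest and second-nearest
    # cluster centres stands in for the paper's fuzzy rough instance weighting.
    import numpy as np
    from sklearn.cluster import KMeans

    def local_site_summary(X_local, n_clusters=3, keep_fraction=0.8, seed=0):
        km = KMeans(n_clusters=n_clusters, n_init=10, random_state=seed).fit(X_local)
        dist = km.transform(X_local)                 # distances to each centre, shape (N, k)
        nearest, second = np.sort(dist, axis=1)[:, :2].T
        certainty = (second - nearest) / (second + 1e-12)   # 0 = ambiguous, 1 = clear-cut
        keep = certainty >= np.quantile(certainty, 1.0 - keep_fraction)
        # Only the cluster labels (and certainty scores) of confident instances leave
        # the site; uncertain instances are dropped and raw features are never sent.
        return km.labels_[keep], certainty[keep]

    rng = np.random.default_rng(2)
    X_site = np.vstack([rng.normal(c, 1.0, size=(100, 2)) for c in (0, 5, 10)])
    labels, scores = local_site_summary(X_site)
    print(len(labels), "of", len(X_site), "instances summarized for the central site")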