High Performance Data Mining Techniques For Intrusion Detection
The rapid growth of computing has transformed the way information and data are stored. With this new paradigm of data access comes the threat of this information being exposed to unauthorized and unintended users. Many systems have been developed that scrutinize data for deviations from the normal behavior of a user or system, or that search for known signatures within the data. These systems are termed Intrusion Detection Systems (IDS). They employ techniques ranging from statistical methods to machine learning algorithms. Intrusion detection systems use audit data generated by operating systems, application software, or network devices. These sources produce huge datasets with tens of millions of records. To analyze this data, data mining is used: a process that extracts useful patterns from large bodies of information. A major obstacle in the process is that traditional data mining and learning algorithms are overwhelmed by the volume and complexity of the available data, which makes them impractical for time-critical tasks like intrusion detection because of their long execution times. Our approach to this issue uses high performance data mining techniques to expedite the process by exploiting the parallelism in existing data mining algorithms and in the underlying hardware. We show how high performance and parallel computing can be used to scale data mining algorithms to handle large datasets, allowing the data mining component to search a much larger set of patterns and models than traditional computational platforms and algorithms would allow. We develop parallel data mining algorithms by parallelizing existing machine learning techniques using cluster computing. These algorithms include parallel backpropagation and parallel fuzzy ARTMAP neural networks.
We evaluate the performance of the developed models in terms of speedup over the traditional algorithms, prediction rate, and false alarm rate. Our results show that the traditional backpropagation and fuzzy ARTMAP algorithms can benefit from high performance computing techniques, making them well suited for time-critical tasks like intrusion detection.
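The core idea behind parallelizing backpropagation over a cluster can be illustrated on a single machine: each worker computes gradients on its own data shard, and the shards' gradients are averaged before the weight update. The sketch below is a minimal, illustrative stand-in (toy data, NumPy only, a sequential loop in place of real cluster workers), not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy two-class data standing in for audit records.
X = rng.normal(size=(400, 2))
y = (X[:, 0] + X[:, 1] > 0).astype(float).reshape(-1, 1)

# One-hidden-layer network trained by backpropagation.
W1 = rng.normal(scale=0.5, size=(2, 8)); b1 = np.zeros(8)
W2 = rng.normal(scale=0.5, size=(8, 1)); b2 = np.zeros(1)

def forward(Xs):
    h = np.tanh(Xs @ W1 + b1)
    p = 1.0 / (1.0 + np.exp(-(h @ W2 + b2)))  # sigmoid output
    return h, p

def shard_gradients(Xs, ys):
    """Backpropagation on one data shard; on a cluster, each worker runs this."""
    h, p = forward(Xs)
    d2 = (p - ys) / len(Xs)                   # cross-entropy gradient at the logit
    gW2, gb2 = h.T @ d2, d2.sum(0)
    dh = (d2 @ W2.T) * (1.0 - h**2)           # backprop through tanh
    gW1, gb1 = Xs.T @ dh, dh.sum(0)
    return gW1, gb1, gW2, gb2

def loss():
    _, p = forward(X)
    return float(-np.mean(y*np.log(p+1e-9) + (1-y)*np.log(1-p+1e-9)))

n_workers, lr = 4, 0.5
loss_before = loss()
for _ in range(300):
    shards = zip(np.array_split(X, n_workers), np.array_split(y, n_workers))
    grads = [shard_gradients(Xs, ys) for Xs, ys in shards]  # parallel in principle
    avg = [sum(g[i] for g in grads) / n_workers for i in range(4)]
    W1 -= lr*avg[0]; b1 -= lr*avg[1]; W2 -= lr*avg[2]; b2 -= lr*avg[3]
loss_after = loss()
print(loss_before, loss_after)  # training loss drops under the averaged updates
```

Because each shard gradient is a per-sample mean, averaging the workers' gradients reproduces the full-batch gradient, which is why this decomposition scales to large audit datasets.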
Design for novel enhanced weightless neural network and multi-classifier
Weightless neural systems have often struggled with speed, performance, and memory issues. There is also a lack of sufficient interfacing of weightless neural systems to other systems. Addressing these issues motivates and forms the aims and objectives of this thesis. In addressing them, algorithms are formulated, classifiers and multi-classifiers are designed, and a hardware design of the classifier is also reported. Specifically, the purpose of this thesis is to report on the algorithms and designs of weightless neural systems.
The background material for the research is a weightless neural network known as the Probabilistic Convergent Network (PCN). By introducing two new and different interfacing methods, the word "Enhanced" is added to PCN, giving it the name Enhanced Probabilistic Convergent Network (EPCN). To solve the problems of speed and performance when large-class databases are employed in data analysis, multi-classifiers are designed whose composition varies with problem complexity. This also leads to the introduction of a novel gating function with the application of EPCN as an intelligent combiner. For databases which are not very large, single classifiers suffice. Speed and ease of application in adverse conditions were the improvements sought, which led to the design of EPCN in hardware. A novel hashing function is implemented and tested on the hardware-based EPCN.
The results obtained indicate the utility of employing weightless neural systems. They also indicate significant new possible areas of application for weightless neural systems.
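The weightless principle that PCN and EPCN build on can be illustrated with a generic RAM-based n-tuple classifier in the WiSARD tradition: learning writes addresses into node memories rather than adjusting weights. The sketch below is a minimal illustration of that idea, not the EPCN design itself; all names and parameters are hypothetical:

```python
import random

class Discriminator:
    """One class's set of n-tuple RAM nodes. Training writes addresses into
    node memories; no weights are adjusted, which is what makes the system
    'weightless'."""
    def __init__(self, mapping):
        self.mapping = mapping                # fixed random partition of bit positions
        self.rams = [set() for _ in mapping]

    def _addr(self, bits, tup):
        return tuple(bits[i] for i in tup)

    def train(self, bits):
        for ram, tup in zip(self.rams, self.mapping):
            ram.add(self._addr(bits, tup))

    def score(self, bits):
        # Number of RAM nodes that recognise their sub-pattern.
        return sum(self._addr(bits, tup) in ram
                   for ram, tup in zip(self.rams, self.mapping))

rng = random.Random(0)
INPUT_BITS, N = 16, 4
idx = list(range(INPUT_BITS)); rng.shuffle(idx)
mapping = [tuple(idx[i:i+N]) for i in range(0, INPUT_BITS, N)]
disc = {c: Discriminator(mapping) for c in ("A", "B")}

# Two toy binary prototypes plus noisy variants for training.
proto = {"A": [1]*8 + [0]*8, "B": [0]*8 + [1]*8}

def noisy(p, flips):
    q = list(p)
    for i in rng.sample(range(len(q)), flips):
        q[i] ^= 1
    return q

for c, p in proto.items():
    for _ in range(20):
        disc[c].train(noisy(p, 2))

def classify(bits):
    return max(disc, key=lambda c: disc[c].score(bits))

print(classify(proto["A"]), classify(proto["B"]))
```

Because learning is a set insertion and recall is a lookup, training and classification are both fast and map naturally onto hardware, which is one motivation for the hardware EPCN described above.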
Neuroengineering of Clustering Algorithms
Cluster analysis can be broadly divided into multivariate data visualization, clustering algorithms, and cluster validation. This dissertation contributes neural network-based techniques to perform all three unsupervised learning tasks. Particularly, the first paper provides a comprehensive review on adaptive resonance theory (ART) models for engineering applications and provides context for the four subsequent papers. These papers are devoted to enhancements of ART-based clustering algorithms from (a) a practical perspective by exploiting the visual assessment of cluster tendency (VAT) sorting algorithm as a preprocessor for ART offline training, thus mitigating ordering effects; and (b) an engineering perspective by designing a family of multi-criteria ART models: dual vigilance fuzzy ART and distributed dual vigilance fuzzy ART (both of which are capable of detecting complex cluster structures), merge ART (aggregates partitions and lessens ordering effects in online learning), and cluster validity index vigilance in fuzzy ART (features a robust vigilance parameter selection and alleviates ordering effects in offline learning). The sixth paper consists of enhancements to data visualization using self-organizing maps (SOMs) by depicting in the reduced dimension and topology-preserving SOM grid information-theoretic similarity measures between neighboring neurons. This visualization's parameters are estimated using samples selected via a single-linkage procedure, thereby generating heatmaps that portray more homogeneous within-cluster similarities and crisper between-cluster boundaries. The seventh paper presents incremental cluster validity indices (iCVIs) realized by (a) incorporating existing formulations of online computations for clusters' descriptors, or (b) modifying an existing ART-based model and incrementally updating local density counts between prototypes.
Moreover, this last paper provides the first comprehensive comparison of iCVIs in the computational intelligence literature --Abstract, page iv
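The use of VAT as a preprocessor, mentioned above, can be sketched as follows: VAT reorders the data with a Prim-like traversal of the pairwise dissimilarity matrix, so similar samples become contiguous in the presentation order, mitigating ordering effects in ART offline training. A minimal version, assuming Euclidean dissimilarities on toy data:

```python
import numpy as np

def vat_order(D):
    """VAT ordering: start at one end of the largest dissimilarity, then
    repeatedly append the unselected object closest to any selected one
    (a Prim-like minimum-spanning-tree traversal)."""
    n = D.shape[0]
    i = np.unravel_index(np.argmax(D), D.shape)[0]
    order, rest = [i], set(range(n)) - {i}
    while rest:
        j = min(rest, key=lambda r: min(D[p, r] for p in order))
        order.append(j); rest.remove(j)
    return order

rng = np.random.default_rng(1)
# Two well-separated blobs, deliberately shuffled to simulate ordering effects.
X = np.vstack([rng.normal(0, 0.3, (10, 2)), rng.normal(5, 0.3, (10, 2))])
perm = rng.permutation(20); X = X[perm]
D = np.linalg.norm(X[:, None] - X[None, :], axis=-1)
order = vat_order(D)
labels = (perm >= 10).astype(int)[order]  # blob id of each point in VAT order
print(labels)  # points from the same blob end up contiguous
```

Feeding samples to an ART module in this order, instead of the shuffled order, is the preprocessing idea the first enhancement above exploits.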
Multi-tier framework for the inferential measurement and data-driven modeling
A framework for inferential measurement and data-driven modeling has been proposed and assessed in several real-world application domains. The architecture of the framework is structured in multiple tiers to facilitate extensibility and the integration of new components. Each of the proposed four tiers has been assessed independently to verify its suitability. The first tier, dealing with exploratory data analysis, has been assessed through the characterization of the chemical space related to the biodegradation of organic chemicals. This analysis established relationships between physicochemical variables and biodegradation rates that were then used for model development. At the preprocessing level, a novel method for feature selection based on dissimilarity measures between Self-Organizing Maps (SOM) has been developed and assessed. The proposed method selects more features than others published in the literature but leads to models with improved predictive power. Single and multiple data imputation techniques based on the SOM have also been used to recover missing data in a Waste Water Treatment Plant benchmark. A new dynamic method to adjust the centers and widths in Radial Basis Function networks has been proposed to predict water quality; the proposed method outperformed other neural networks. The proposed modeling components have also been assessed in the development of prediction and classification models for biodegradation rates in different media. The results obtained proved the suitability of this approach for developing data-driven models when the complex dynamics of the process prevent the formulation of mechanistic models. The use of rule generation algorithms and Bayesian dependency models has been preliminarily screened to provide the framework with interpretation capabilities.
Preliminary results obtained from the classification of Modes of Toxic Action (MOA) indicate that this could be a promising approach to using MOAs as proxy indicators of the human health effects of chemicals. Finally, the complete framework has been applied to three different modeling scenarios. A virtual sensor system, capable of inferring product quality indices from primary process variables, has been developed and assessed. The system was integrated with the control system in a real chemical plant, outperforming the multi-linear correlation models usually adopted by chemical manufacturers. A model to predict carcinogenicity from molecular structure for a set of aromatic compounds has been developed and tested. Results obtained after the application of the SOM-dissimilarity feature selection method were better than those of models published in the literature. Finally, the framework has been used to facilitate a new approach to environmental modeling and risk management within geographical information systems (GIS). The SOM has been successfully used to characterize exposure scenarios and to provide estimations of missing data through geographic interpolation. The combination of SOM and Gaussian Mixture models facilitated the formulation of a new probabilistic risk assessment approach.
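One of the framework's components, SOM-based imputation of missing data, can be sketched as follows: train the map on complete records, then, for an incomplete record, find the best-matching unit using only the observed variables and copy the missing ones from its prototype. This is an illustrative single-file sketch on synthetic data, not the thesis's implementation:

```python
import numpy as np

rng = np.random.default_rng(2)

# Toy complete data: two correlated variables (a stand-in for plant sensor data).
n = 300
x1 = rng.uniform(0, 1, n)
data = np.column_stack([x1, 2*x1 + rng.normal(0, 0.05, n)])

# Train a small 1-D SOM on the complete records.
m = 10
proto = data[rng.choice(n, m)]            # prototypes initialised from samples
for epoch in range(30):
    lr = 0.5 * (1 - epoch/30)
    sigma = max(3 * (1 - epoch/30), 0.5)  # shrinking neighbourhood radius
    for x in data[rng.permutation(n)]:
        bmu = np.argmin(((proto - x)**2).sum(1))
        h = np.exp(-((np.arange(m) - bmu)**2) / (2*sigma**2))
        proto += lr * h[:, None] * (x - proto)

def impute(x):
    """Find the best-matching unit over the observed components only,
    then fill the missing components from that prototype."""
    obs = ~np.isnan(x)
    bmu = np.argmin(((proto[:, obs] - x[obs])**2).sum(1))
    filled = x.copy()
    filled[~obs] = proto[bmu, ~obs]
    return filled

filled = impute(np.array([0.4, np.nan]))
print(filled)  # second component recovered near the true relation 2 * 0.4
```

Because the prototypes settle on the data manifold, the same lookup also supports the geographic interpolation role the SOM plays in the GIS scenario.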
Computational intelligence techniques for missing data imputation
Despite considerable advances in missing data imputation techniques over the last three decades, the
problem of missing data remains largely unsolved. Many techniques have emerged in the literature
as candidate solutions, including the Expectation Maximisation (EM), and the combination of autoassociative
neural networks and genetic algorithms (NN-GA). The merits of both these techniques
have been discussed at length in the literature, but they have never been compared to each other. This
thesis contributes to knowledge by, firstly, conducting a comparative study of these two techniques.
The significance of the difference in performance of the methods is presented. Secondly, predictive
analysis methods suitable for the missing data problem are presented. The predictive analysis in
this problem is aimed at determining if data in question are predictable and hence, to help in
choosing the estimation techniques accordingly. Thirdly, a novel treatment of missing data for online
condition monitoring problems is presented. An ensemble of three autoencoders together with
hybrid Genetic Algorithms (GA) and fast simulated annealing was used to approximate missing
data. Several significant insights were deduced from the simulation results. It was deduced that for
the problem of missing data, when computational intelligence approaches are used, the choice of
optimisation method plays a significant role in prediction. Although hybrid GA and Fast Simulated
Annealing (FSA) were observed to converge to the same search space and to almost the same values,
they differ significantly in duration. This unique contribution has demonstrated that particular
attention has to be paid to the choice of optimisation techniques and their decision boundaries.
Another unique contribution of this work was not only to demonstrate that dynamic programming
is applicable to the problem of missing data, but also to show that it is efficient in addressing it.
An NN-GA model was built to impute missing data using the principle of dynamic programming.
This approach makes it possible to modularise the problem of missing data for maximum efficiency.
With the advancements in parallel computing, the various modules of the problem could be solved
by different processors working together in parallel. Furthermore, a
method for imputing missing data in non-stationary time series that learns incrementally, even
in the presence of concept drift, is proposed. This method works by measuring heteroskedasticity
to detect concept drift, and it explores an online learning technique. New directions for research,
in which missing data can be estimated for non-stationary applications, are opened by the
introduction of this novel method. Thus, this thesis has uniquely opened this area to research.
Many other methods need to be developed so that they can be compared to the approach
proposed in this thesis.
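The NN-GA idea described above, searching for the missing entries that minimise a trained model's reconstruction error, can be sketched compactly. In this illustrative version a PCA subspace stands in for the trained auto-associative network, and a simple elitist GA performs the search; FSA would plug into the same fitness function. All parameters here are assumptions for illustration:

```python
import numpy as np

rng = np.random.default_rng(3)

# Complete training records lying near a 1-D linear relation (toy data).
n = 200
t = rng.uniform(-1, 1, n)
X = np.column_stack([t,
                     2*t + rng.normal(0, 0.02, n),
                     -t + rng.normal(0, 0.02, n)])

# Stand-in for the trained auto-associative network: reconstruction through
# the top principal subspace; its error is the GA's fitness function.
mu = X.mean(0)
V = np.linalg.svd(X - mu, full_matrices=False)[2][:1].T   # top direction, (3, 1)

def recon_error(x):
    z = (x - mu) @ V
    return float(((x - (mu + z @ V.T))**2).sum())

def ga_impute(x, missing, pop=40, gens=60):
    """Elitist GA over the missing entries, minimising reconstruction error."""
    k = int(missing.sum())
    P = rng.uniform(-3, 3, (pop, k))
    for _ in range(gens):
        cand = np.tile(x, (pop, 1)); cand[:, missing] = P
        fit = np.array([recon_error(c) for c in cand])
        elite = P[np.argsort(fit)[:pop // 2]]
        # Crossover: average random parent pairs; mutation: Gaussian noise.
        pairs = rng.integers(0, len(elite), (pop - len(elite), 2))
        P = np.vstack([elite,
                       elite[pairs].mean(1)
                       + rng.normal(0, 0.1, (pop - len(elite), k))])
    cand = np.tile(x, (pop, 1)); cand[:, missing] = P
    best = P[np.argmin([recon_error(c) for c in cand])]
    filled = x.copy(); filled[missing] = best
    return filled

x = np.array([0.5, np.nan, np.nan])
filled = ga_impute(x, np.isnan(x))
print(filled)  # the learned relation suggests values near [0.5, 1.0, -0.5]
```

The optimiser is the interchangeable part here, which is the point made above: GA and FSA reach essentially the same minima of this fitness surface but differ in how long they take.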
Another novel technique for dealing with missing data in the on-line condition monitoring problem was
also presented and studied. The problem of classifying in the presence of missing data was addressed,
where no attempts are made to recover the missing values. The problem domain was then extended
to regression. The proposed technique performs better than the NN-GA approach, both in accuracy
and time efficiency during testing. The advantage of the proposed technique is that it eliminates
the need for finding the best estimate of the data, and hence, saves time. Lastly, instead of using
complicated techniques to estimate missing values, an imputation approach based on rough sets is
explored. Empirical results obtained using both real and synthetic data are given, and they provide
valuable and promising insight into the problem of missing data. The work has significantly confirmed
that rough sets can be reliable for missing data estimation in large, real databases.
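The rough-set imputation explored in the final part can be sketched via a tolerance (indiscernibility) relation that treats a missing value as compatible with anything: a missing entry is filled by the majority value within the record's tolerance class. A minimal illustration on a toy decision table (the thesis's actual formulation may differ):

```python
from collections import Counter

# Toy decision table with a missing value ('?') in the last record.
records = [
    {"temp": "high", "humidity": "high", "flu": "yes"},
    {"temp": "high", "humidity": "high", "flu": "yes"},
    {"temp": "low",  "humidity": "low",  "flu": "no"},
    {"temp": "high", "humidity": "?",    "flu": "yes"},
]

def indiscernible(r, s, attrs):
    """Tolerance relation for incomplete tables: records are indiscernible
    when they agree on every attribute, with '?' compatible with anything."""
    return all(r[a] == s[a] or "?" in (r[a], s[a]) for a in attrs)

def impute(records, attrs):
    filled = [dict(r) for r in records]
    for r in filled:
        for a in attrs:
            if r[a] == "?":
                # Candidate values: known values of a among the records in
                # r's tolerance class; take the majority vote.
                votes = Counter(
                    s[a] for s in records
                    if s[a] != "?" and indiscernible(r, s, attrs)
                )
                if votes:
                    r[a] = votes.most_common(1)[0][0]
    return filled

out = impute(records, ["temp", "humidity", "flu"])
print(out[3]["humidity"])  # the tolerance class of record 3 votes "high"
```

No model fitting or optimisation is involved, which is the simplicity advantage the abstract contrasts with the more complicated estimation techniques.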
The impact of missing data imputation on HIV classification
Missing data are a part of research and data analysis that often cannot be ignored. Although a
number of methods have been developed in handling and imputing missing data, this problem
is, for the most part, still unsolved with many researchers still struggling with its existence.
Due to the availability of software and the advancement of computational power, maximum
likelihood and multiple imputation, as well as auto-associative neural networks combined with
genetic algorithms (AANN-GA), have been introduced as solutions to the missing data problem.
Although these methods have given considerable results in this domain, the impact that missing
data and missing data imputation have on decision making has, until recently, not been assessed. This
dissertation contributes to this knowledge by first introducing a new computational intelligence
model that integrates Neuro-Fuzzy (N-F) modeling, Principal Component Analysis and
genetic algorithms to impute missing data. The performance of this model is then compared
to that of the AANN-GA as well as the independent use of the N-F architecture. In order to
determine if the data are predictable and also to assist in processing the data for training, an
analysis on the HIV sero-prevalence data is performed.
Two classification decision making frameworks are then presented in order to assess the
effect of missing data. These decision frameworks are trained to classify between two
conditions when presented with a set of data variables. The first is the use of a Bayesian
neural network which is statistical in nature and the second is based on the fuzzy ARTMAP
(FAM) classifier, which has incremental learning abilities. The two methods are used and compared
in order to assess the degree to which missing data, and the imputation thereof, affect decision
making. The effect of missing data differs for the two frameworks; while the Bayesian neural
network fails in the presence of missing data, the FAM classifier attempts to classify with a
decreased accuracy. This work has shown that although missing data and the imputation
thereof have an effect on decision making, the degree of that effect depends on the
decision-making framework and on the model used for data imputation.
Machine learning based data pre-processing for the purpose of medical data mining and decision support
Building an accurate and reliable prediction model for different application domains is one of the most significant challenges in knowledge discovery and data mining. Sometimes improved data quality is itself the goal of the analysis, usually to improve processes in a production database and the design of decision support. As medicine moves forward, there is a need for sophisticated decision support systems that make use of data mining to support more orthodox knowledge engineering and Health Informatics practice. However, real-life medical data rarely comply with the requirements of various data mining tools. They are often inconsistent and noisy, contain redundant attributes, come in an unsuitable format, contain missing values, and are imbalanced with regard to the outcome class label. Many real-life data sets are incomplete, with missing values. In medical data mining the problem of missing values has become a challenging issue. In many clinical trials, the medical report pro-forma allows some attributes to be left blank, because they are inappropriate for some class of illness or because the person providing the information feels it is not appropriate to record the values of some attributes. The research reported in this thesis has explored the use of machine learning techniques as missing value imputation methods. The thesis also proposes a new way of imputing missing values by supervised learning. A classifier is used to learn the data patterns from a complete data subset, and the model is later used to predict the missing values for the full dataset. The proposed machine learning based missing value imputation was applied to the thesis data and the results are compared with traditional Mean/Mode imputation. Experimental results show that all the machine learning methods we explored outperformed the statistical method (Mean/Mode). The class imbalance problem has been found to hinder the performance of learning systems.
In fact, most medical datasets are found to be highly imbalanced in their class labels. The solution to this problem is to reduce the gap between the minority class samples and the majority class samples. Over-sampling can be applied to increase the number of minority class samples to balance the data. The alternative to over-sampling is under-sampling, where the size of the majority class sample is reduced. The thesis proposes a cluster based under-sampling technique to reduce the gap between the majority and minority samples. Different under-sampling and over-sampling techniques were explored as ways to balance the data. The experimental results show that, for the thesis data, the newly proposed modified cluster based under-sampling technique performed better than other class balancing techniques. In further research it was found that the class imbalance problem not only affects classification performance but also has an adverse effect on feature selection. The thesis proposes a new framework for feature selection for class imbalanced datasets. The research found that, using the proposed framework, the classifier needs fewer attributes to show high accuracy, and more attributes are needed if the data are highly imbalanced. The research described in the thesis contains the following four novel main contributions: a) an improved data mining methodology for mining medical data; b) a machine learning based missing value imputation method; c) a cluster based semi-supervised class balancing method; d) a feature selection framework for class imbalanced datasets. The performance analysis and comparative study show that the proposed missing value imputation method, class balancing technique and feature selection framework can provide an effective approach to data preparation for building medical decision support.
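The cluster based under-sampling idea can be sketched as follows: cluster the majority class, then draw samples from each cluster in proportion to its size, so the reduced majority set still reflects the original structure. This is an illustrative sketch with plain k-means standing in for the thesis's modified clustering step:

```python
import numpy as np

rng = np.random.default_rng(4)

def kmeans_labels(X, k, iters=25):
    """Plain Lloyd's k-means; returns each point's cluster label."""
    C = X[rng.choice(len(X), k, replace=False)]
    for _ in range(iters):
        lab = np.argmin(((X[:, None] - C[None])**2).sum(-1), axis=1)
        for j in range(k):
            if (lab == j).any():
                C[j] = X[lab == j].mean(0)
    return lab

# Imbalanced toy data: 300 majority vs 30 minority samples.
maj = rng.normal(0, 1, (300, 2))
mino = rng.normal(4, 1, (30, 2))

def cluster_undersample(maj, target, k=5):
    """Cluster the majority class, then sample from each cluster in
    proportion to its size, preserving the majority's structure."""
    lab = kmeans_labels(maj, k)
    keep = []
    for j in range(k):
        idx = np.where(lab == j)[0]
        if len(idx) == 0:
            continue
        m = max(1, round(target * len(idx) / len(maj)))
        keep.extend(rng.choice(idx, min(m, len(idx)), replace=False))
    return maj[keep]

reduced = cluster_undersample(maj, target=len(mino))
print(len(reduced), len(mino))  # roughly balanced class sizes
```

Unlike random under-sampling, each region of the majority class keeps proportional representation, so the classifier trained on the balanced set still sees the full majority-class structure.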