108 research outputs found

    Nuevos retos en clasificación asociativa: Big Data y aplicaciones

    Get PDF
    Associative classification arises from the union of two important areas of machine learning: on the one hand, the descriptive task of association rule mining, a mechanism for obtaining previously unknown and interesting information from a dataset; and on the other, a predictive task, classification, which uses a set of previously known explanatory variables to predict a variable of interest. The objectives of this doctoral thesis are: 1) to study and analyze the state of the art of both association rule mining and associative classification; 2) to propose new associative classification and association rule mining models that are accurate, interpretable and efficient, as well as flexible enough to incorporate subjective knowledge; and 3) given the huge amount of data generated every day in recent decades, to pay special attention to the processing of very large amounts of data, also known as Big Data.
    First, the state of the art of both associative classification and association rule mining was analyzed. An exhaustive study of the related literature was carried out to gain a detailed picture of the field. This laid the foundations for the remaining objectives and revealed that associative classification lacked a mechanism to unify comparisons and make them as complete as possible. To that end, a software tool has been proposed that includes at least one algorithm from every category of the current taxonomy. This enables more diverse and complete comparisons within the field, a task that until now was at best very arduous, since many of the algorithms were not available in executable form, let alone as open source. The tool also provides a very diverse set of metrics that quantify the quality of the results from different perspectives, which helps to obtain classifiers that are as complete as possible and to unify future comparisons with other proposals.
    Second, as a result of the previous analysis, it was found that existing proposals do not scale, either horizontally or vertically, to relatively large datasets. Given the growing interest of both academia and industry in applying computation to huge amounts of data, this thesis continues with an analysis of different Big Data approaches, starting with a detailed review of recent advances in handling such volumes of data. Particular attention has been paid to distributed computing, since it has proved to be the only approach that can process very large amounts of data without resorting to sampling.
    In particular, special attention has been paid to MapReduce-based methodologies, which decompose complex problems into divisible, parallelizable fractions that can later be aggregated to obtain the final result. As a result of this objective, several algorithms have been proposed that process very large amounts of data without losing accuracy or interpretability. All the proposed algorithms were designed to run on the best-known open-source implementations of MapReduce.
    Third and last, a proposal that improves the state of the art of associative classification has been developed. Since association rules are the basis and determining factor of associative classifiers, a new proposal for association rule mining was developed first. It combines the latest advances in distributed computing, such as MapReduce, with evolutionary algorithms, which have proved to obtain excellent results in this area. In particular, grammar-guided genetic programming has been used because of its flexibility to encode solutions and to introduce subjective knowledge into the search process, while also easing the computational and memory requirements. This new algorithm represents a significant improvement in association rule mining, since it obtains better results than existing proposals on different kinds of data and on different metrics of interest; that is, it not only performs better on Big Data, but its sequential version has also been compared with existing algorithms. Once this algorithm for extracting excellent association rules was available, it was adapted to obtain class association rules and to build a classifier from them. Again, grammar-guided genetic programming was used to build the classifier, allowing the user to introduce subjective knowledge not only in the form of the rules but also in the final form of the classifier. This new proposal has also been compared, in its sequential form, with existing associative classification algorithms to guarantee that it achieves significant differences with respect to them in terms of accuracy, interpretability and efficiency. It has also been compared with other Big Data-specific proposals, obtaining excellent results while maintaining a trade-off between the conflicting objectives of interpretability, accuracy and efficiency.
    This doctoral thesis has been developed within an appropriate experimental framework, using a varied collection of datasets ranging from low-dimensional data to Big Data. All the datasets used are freely available and cover a range of dimensionalities, numbers of instances and numbers of classes. All the results obtained have been compared with the corresponding state of the art, and non-parametric statistical tests have been used to verify that the differences found are statistically significant and not due to chance.
    Additionally, all comparisons consider different perspectives; that is, performance, efficiency, accuracy and interpretability have been analyzed in each of the studies.
    This Doctoral Thesis aims at solving the challenging problem of associative classification and its application to very large datasets. First, the associative classification state of the art has been studied and analyzed, and a new tool covering the whole taxonomy of algorithms and providing many different measures has been proposed. The goal of this tool is two-fold: 1) unification of comparisons, since existing works compare against very different measures; 2) providing a single tool that includes at least one algorithm of each category forming the taxonomy. This tool is an important advancement in the field, since until now the whole taxonomy had not been covered, as many algorithms had been released neither as open source nor in a runnable form. Second, associative classification has been analyzed on very large quantities of data. In this regard, many different platforms for distributed computing have been studied and different proposals have been developed on them. These proposals make it possible to deal with very large data efficiently, scaling the load across many compute nodes. Third, as extracting high-quality rules is one of the most important parts of associative classification, a novel grammar-guided genetic programming algorithm has been proposed that obtains interesting association rules with regard to different metrics and on different kinds of data, including truly Big Data datasets. This proposal has proved to obtain very good results in terms of both quality and interpretability, while providing a very flexible way of representing the solutions and enabling subjective knowledge to be introduced into the search process. Then, a novel algorithm has been proposed for associative classification, using a non-trivial adaptation of the aforementioned algorithm to obtain the rules forming the classifier. This methodology is also based on grammar-guided genetic programming, enabling the user to constrain not only the form of the rules but also the final form of the classifier. Results have shown that this algorithm obtains very accurate classifiers while maintaining a good level of interpretability. All the methodologies proposed in this Thesis have been evaluated using a proper experimental framework, with a varied set of datasets including both classical and Big Data datasets, and analyzing different metrics to quantify the quality of the algorithms from different perspectives. Results have been compared with the state of the art and verified by means of non-parametric statistical tests, proving that the proposed methods outperform existing approaches.
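    As an illustration of the MapReduce decomposition mentioned above, the sketch below counts, in map/reduce style, how often the antecedent of a class association rule occurs alone and together with its class label, from which support and confidence follow. It is a minimal sketch under assumed data structures, not the thesis algorithm.

        from collections import defaultdict

        # Hypothetical transactions: a set of items plus a class label.
        def map_phase(transaction, antecedent, label):
            """Emit (key, 1) pairs when the antecedent (and the full rule) matches."""
            pairs = []
            if antecedent <= transaction["items"]:
                pairs.append(("antecedent", 1))
                if transaction["class"] == label:
                    pairs.append(("rule", 1))
            return pairs

        def reduce_phase(pairs):
            """Sum the emitted counts per key, as a MapReduce reducer would."""
            counts = defaultdict(int)
            for key, value in pairs:
                counts[key] += value
            return counts

        data = [
            {"items": {"a", "b", "c"}, "class": "yes"},
            {"items": {"a", "b"},      "class": "no"},
            {"items": {"a", "b", "d"}, "class": "yes"},
        ]
        antecedent, label = {"a", "b"}, "yes"
        counts = reduce_phase(p for t in data for p in map_phase(t, antecedent, label))
        support = counts["rule"] / len(data)                # fraction matching antecedent and class
        confidence = counts["rule"] / counts["antecedent"]  # accuracy of the rule where it fires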

    An Improved Associative Classification Algorithm based on Incremental Rules

    Get PDF
    In associative classification (AC), the rule generation step is necessarily exhaustive because of the search problems inherited from association rule mining. In addition, the entire rule set must be induced prior to constructing the classifier. This article proposes a new AC algorithm called Dynamic Covering Associative Classification (DCAC) that learns each rule from a training dataset, removes the instances it classifies, and then learns the next rule from the remaining unclassified data rather than from the original training dataset. This ensures that the exhaustive steps of rule evaluation and candidate generation are no longer needed, thereby maintaining a real-time rule generation process. The proposed algorithm constantly amends the support and confidence of each rule rather than restricting itself to the support and confidence computed from the original dataset. Experiments on 20 datasets from different domains showed that the proposed algorithm generates higher-quality and more accurate classifiers than other AC rule induction approaches.
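    The sketch below illustrates the dynamic covering idea described above, assuming a hypothetical rule representation as an (antecedent item set, class label) pair; it is not the DCAC implementation. Each candidate rule is scored on the data that is still unclassified, the instances covered by the chosen rule are removed, and the next rule is selected from what remains.

        def score(rule, data):
            """Recompute a rule's support and confidence on the remaining data."""
            antecedent, label = rule
            covered = [t for t in data if antecedent <= t["items"]]
            correct = [t for t in covered if t["class"] == label]
            if not covered:
                return 0.0, 0.0
            return len(correct) / len(data), len(correct) / len(covered)

        def dynamic_covering(candidate_rules, data, min_conf=0.6):
            classifier, remaining = [], list(data)
            while remaining and candidate_rules:
                # Rank candidates by support and confidence recomputed on the remaining data.
                best = max(candidate_rules, key=lambda rule: score(rule, remaining))
                support, confidence = score(best, remaining)
                if confidence < min_conf:
                    break
                classifier.append((best, support, confidence))
                antecedent, _ = best
                remaining = [t for t in remaining if not antecedent <= t["items"]]
                candidate_rules.remove(best)
            return classifier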

    Dynamic Rule Covering Classification in Data Mining with Cyber Security Phishing Application

    Get PDF
    Data mining is the process of discovering useful patterns from datasets using intelligent techniques to help users make certain decisions. A typical data mining task is classification, which involves predicting a target variable, known as the class, in previously unseen data based on models learnt from an input dataset. Covering is a well-known classification approach that derives models with If-Then rules. Covering methods, such as PRISM, have a predictive performance competitive with other classical classification techniques such as greedy, decision tree and associative classification. Therefore, Covering models are appropriate decision-making tools and users favour them when making decisions. Despite the use of the Covering approach in data processing for different classification applications, it is also acknowledged that this approach suffers from the noticeable drawback of inducing massive numbers of rules, making the resulting model large and unmanageable by users. This issue is attributed to the way Covering techniques induce the rules: they keep adding items to the rule's body, despite the limited data coverage (the number of training instances the rule classifies), until the rule reaches zero error. This excessive learning overfits the training dataset and also limits the applicability of Covering models in decision making, because managers normally prefer a summarised set of knowledge that they are able to control and comprehend rather than a high-maintenance model. In practice, there should be a trade-off between the number of rules offered by a classification model and its predictive performance. Another issue associated with Covering models is the overlapping of training data among the rules, which happens when a rule's classified data are discarded during the rule discovery phase. Unfortunately, the impact of a rule's removed data on other potential rules is not considered by this approach. However, when the training data linked with a rule are removed, both the frequency and the rank of other rules' items that appeared in the removed data change. The impacted rules should maintain their true rank and frequency dynamically during the rule discovery phase rather than just keeping the frequency initially computed from the original input dataset. In response to the aforementioned issues, a new dynamic learning technique based on Covering and rule induction, which we call Enhanced Dynamic Rule Induction (eDRI), is developed. eDRI has been implemented in Java and embedded in the WEKA machine learning tool. The developed algorithm incrementally discovers the rules using primarily frequency and rule strength thresholds. In practice, these thresholds limit the search space for both items and potential rules by discarding any with insufficient data representation as early as possible, resulting in an efficient training phase. More importantly, eDRI substantially cuts down the number of training example scans by continuously updating potential rules' frequency and strength parameters in a dynamic manner whenever a rule gets inserted into the classifier. In particular, for each derived rule, eDRI adjusts on the fly the frequencies and ranks of the remaining potential rules' items, specifically those that appeared within the deleted training instances of the derived rule. This gives a more realistic model with minimal rule redundancy, and makes the process of rule induction efficient and dynamic rather than static.
    Moreover, the proposed technique minimises the classifier's number of rules at preliminary stages by stopping learning when a rule does not meet the rule strength threshold, thereby minimising overfitting and ensuring a manageable classifier. Lastly, the eDRI prediction procedure not only prioritises using the best-ranked rule for class forecasting of test data but also restricts the use of the default class rule, thus reducing the number of misclassifications. The aforementioned improvements guarantee classification models of smaller size that do not overfit the training dataset, while maintaining their predictive performance. The eDRI-derived models particularly benefit users taking key business decisions, since they provide a rich knowledge base to support their decision making. This is because these models' predictive accuracies are high and the models are easy to understand, controllable and robust, i.e. flexible enough to be amended without drastic change. eDRI's applicability has been evaluated on the hard problem of phishing detection. Phishing normally involves creating a fake, well-designed website that closely resembles an existing trusted business website, aiming to trick users and illegally obtain credentials such as login information in order to access their financial assets. The experimental results against large phishing datasets revealed that eDRI is highly useful as an anti-phishing tool, since it derived models of manageable size when compared with other traditional techniques without hindering classification performance. Further evaluation results using several other classification datasets from different domains, obtained from the University of California data repository, have corroborated eDRI's competitive performance with respect to accuracy, size of the knowledge representation, training time and item space reduction. This makes the proposed technique not only efficient in inducing rules but also effective.
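    The sketch below illustrates the dynamic bookkeeping described above, not the eDRI code shipped with WEKA: when a rule is accepted and its covered training instances are removed, the counts of the items that co-occurred in those instances are adjusted on the fly, and items whose updated frequency falls below the threshold leave the search space. Names and data structures are assumed for illustration.

        from collections import Counter

        def item_frequencies(data):
            """Count how many training instances contain each item."""
            freq = Counter()
            for instance in data:
                freq.update(instance["items"])
            return freq

        def remove_covered(data, freq, antecedent, min_freq):
            """Drop instances covered by `antecedent`, updating item counts in place."""
            remaining = []
            for instance in data:
                if antecedent <= instance["items"]:
                    freq.subtract(instance["items"])  # items seen in the deleted instances lose count
                else:
                    remaining.append(instance)
            # Items whose updated frequency dropped below the threshold are pruned early.
            surviving_items = {item for item, count in freq.items() if count >= min_freq}
            return remaining, surviving_items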

    Feature Extraction and Duplicate Detection for Text Mining: A Survey

    Get PDF
    Text mining, also known as intelligent text analysis, is an important research area. It is very difficult to focus on the most appropriate information due to the high dimensionality of the data. Feature extraction is one of the important techniques in data reduction for discovering the most important features. Processing massive amounts of data stored in an unstructured form is a challenging task, and several pre-processing methods and algorithms are needed to extract useful features from such data. The survey covers different text summarization, classification and clustering methods for discovering useful features, as well as the discovery of query facets, which are multiple groups of words or phrases that explain and summarize the content covered by a query, thereby reducing the time taken by the user. When dealing with collections of text documents, it is also very important to filter out duplicate data. Once duplicates are deleted, it is recommended to replace the removed duplicates. Hence we also review the literature on duplicate detection and data fusion (removing and replacing duplicates). The survey presents existing text mining techniques to extract relevant features, detect duplicates and replace duplicate data in order to provide fine-grained knowledge to the user.
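    As a small illustration of the duplicate detection step discussed in the survey (not an algorithm from the survey itself), the sketch below compares documents by the Jaccard similarity of their word shingles and flags pairs above an assumed threshold as near-duplicates.

        import re

        def shingles(text, k=3):
            """Break a document into overlapping k-word shingles."""
            words = re.findall(r"\w+", text.lower())
            return {tuple(words[i:i + k]) for i in range(max(len(words) - k + 1, 1))}

        def jaccard(a, b):
            """Jaccard similarity of two shingle sets."""
            return len(a & b) / len(a | b) if (a | b) else 1.0

        def find_duplicates(docs, threshold=0.8):
            """Return index pairs of documents that look like near-duplicates."""
            sigs = [shingles(doc) for doc in docs]
            return [(i, j)
                    for i in range(len(docs))
                    for j in range(i + 1, len(docs))
                    if jaccard(sigs[i], sigs[j]) >= threshold]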

    Sequential pattern mining with uncertain data

    Get PDF
    In recent years, a number of emerging applications, such as sensor monitoring systems, RFID networks and location-based services, have led to the proliferation of uncertain data. However, traditional data mining algorithms are usually inapplicable to uncertain data because of its probabilistic nature. Uncertainty has to be handled carefully; otherwise, it might significantly degrade the quality of the underlying data mining applications. Therefore, we extend traditional data mining algorithms into uncertain versions so that they still produce accurate results. In particular, we use the motivating example of sequential pattern mining to illustrate how to incorporate uncertain information into the data mining process. We use possible world semantics to interpret two typical types of uncertainty: tuple-level existential uncertainty and attribute-level temporal uncertainty. In an uncertain database, whether a pattern is frequent is itself probabilistic; thus, we define the concept of probabilistic frequent sequential patterns, and various algorithms are designed to mine them efficiently in uncertain databases. We also implement our algorithms on distributed computing platforms, such as MapReduce and Spark, so that they can be applied to large-scale databases. Our work also includes uncertainty computation in supervised machine learning algorithms: we develop an artificial neural network to classify numeric uncertain data, and a Naive Bayesian classifier is designed for classifying categorical uncertain data streams. We also propose a discretization algorithm to pre-process numerical uncertain data, since many classifiers work with categorical data only. Experimental results on both synthetic and real-world uncertain datasets demonstrate that our methods are effective and efficient.
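    The hypothetical sketch below illustrates the probabilistic frequent notion under tuple-level existential uncertainty, assuming independent sequences (it is not one of the thesis algorithms): given the probability that each uncertain sequence contains a pattern, it builds the distribution of the pattern's support and checks whether the support reaches the minimum threshold with sufficiently high probability.

        def support_distribution(containment_probs):
            """dist[k] = probability that exactly k sequences contain the pattern."""
            dist = [1.0]
            for p in containment_probs:
                new = [0.0] * (len(dist) + 1)
                for k, mass in enumerate(dist):
                    new[k] += mass * (1 - p)   # this sequence does not contain the pattern
                    new[k + 1] += mass * p     # this sequence does contain the pattern
                dist = new
            return dist

        def is_probabilistic_frequent(containment_probs, min_sup, tau=0.9):
            """Frequent if P(support >= min_sup) is at least the confidence level tau."""
            dist = support_distribution(containment_probs)
            return sum(dist[min_sup:]) >= tau

        # Three uncertain sequences contain the pattern with these probabilities:
        print(is_probabilistic_frequent([0.9, 0.8, 0.4], min_sup=2))  # -> False (P ~ 0.82 < 0.9)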

    On-the-fly tracing for data-centric computing : parallelization, workflow and applications

    Get PDF
    As data-centric computing becomes the trend in science and engineering, more and more hardware systems, as well as middleware frameworks, are emerging to handle the intensive computations associated with big data. At the programming level, it is crucial to have corresponding programming paradigms for dealing with big data. Although MapReduce is now a well-known programming model for data-centric computing, where parallelization is completely replaced by partitioning the computing task through data, not all programs, particularly those using statistical computing and data mining algorithms with interdependence, can be refactored in such a fashion. On the other hand, many traditional automatic parallelization methods put an emphasis on formalism and may not achieve optimal performance with the given limited computing resources. In this work we propose a cross-platform programming paradigm, called on-the-fly data tracing, to provide source-to-source transformation, where the same framework also provides the functionality of workflow optimization on larger applications. Using a big-data approximation, computations related to large-scale data input are identified in the code and workflow, and a simplified core dependence graph is built based on the computational load, taking big data into account. The code can then be partitioned into sections for efficient parallelization; at the workflow level, optimization can be performed by adjusting the scheduling for big-data considerations, including the I/O performance of the machine. Regarding each unit in both the source code and the workflow as a model, this framework enables model-based parallel programming that matches the available computing resources. The dissertation presents the techniques used in model-based parallel programming, the design of the software framework for both parallelization and workflow optimization, and its implementations in multiple programming languages. The following experiments are then performed to validate the framework: i) benchmarking of parallelization speed-up using typical examples in data analysis and machine learning (e.g. naive Bayes, k-means), and ii) three real-world applications in data-centric computing that illustrate the framework's efficiency: pattern detection from hurricane and storm surge simulations, road traffic flow prediction, and text mining from social media data. The applications show how to build scalable workflows with the framework, along with the performance enhancements obtained.
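    The sketch below is only an illustration of the workflow-level idea, with hypothetical unit names, and not the dissertation's framework: each workflow unit is a node in a simplified dependence graph, and every unit whose prerequisites have completed is run in parallel, level by level.

        from concurrent.futures import ThreadPoolExecutor

        def schedule(units, deps, run):
            """units: unit names; deps: unit -> set of prerequisites; run: unit -> result."""
            done, results = set(), {}
            with ThreadPoolExecutor() as pool:
                while len(done) < len(units):
                    ready = [u for u in units if u not in done and deps.get(u, set()) <= done]
                    if not ready:
                        raise ValueError("cyclic or unsatisfiable dependencies")
                    # Independent units at the same level run concurrently.
                    for unit, result in zip(ready, pool.map(run, ready)):
                        results[unit] = result
                    done.update(ready)
            return results

        # Hypothetical workflow: two independent data-loading units feed one analysis unit.
        results = schedule(
            units=["load_a", "load_b", "analyze"],
            deps={"analyze": {"load_a", "load_b"}},
            run=lambda unit: f"{unit} finished",
        )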