8 research outputs found

    Using the symmetrical Tau criterion for feature selection decision tree and neural network learning

    The data collected for various domain purposes usually contains some features irrelevant to the concept being learned. The presence of these features interferes with the learning mechanism, and as a result the predicted models tend to be more complex and less accurate. It is important to employ an effective feature selection strategy so that only the necessary and significant features are used to learn the concept at hand. The Symmetrical Tau (τ) [13] is a statistical-heuristic measure of the capability of an attribute to predict the class of another attribute, and it has successfully been used as a feature selection criterion during decision tree construction. In this paper we aim to demonstrate some other ways of effectively using the τ criterion to filter out irrelevant features prior to learning (pre-pruning) and after the learning process (post-pruning). For the pre-pruning approach we perform two experiments: one where the irrelevant features are filtered out according to their τ value, and one where we calculate the τ criterion for Boolean combinations of features and use the highest τ-valued combination. In the post-pruning approach we use the τ criterion to prune a trained neural network and thereby obtain a simpler and more accurate rule set. The experiments are performed on data characterized by continuous and categorical attributes, and the effectiveness of the proposed techniques is demonstrated by comparing the derived knowledge models in terms of complexity and accuracy.
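    The abstract does not reproduce the measure itself, but the Symmetrical Tau it refers to (usually attributed to Zhou and Dillon) is computed from a contingency table of attribute values against classes. The function below is a plain-Python sketch of the measure as it is usually defined, not the authors' implementation:

    ```python
    def symmetrical_tau(table):
        """Symmetrical Tau for a contingency table whose rows are attribute
        values and whose columns are classes -- a sketch of the measure as
        usually defined, not the paper's implementation."""
        total = sum(sum(row) for row in table)
        P = [[cell / total for cell in row] for row in table]
        row_m = [sum(row) for row in P]        # P(i+), row marginals
        col_m = [sum(col) for col in zip(*P)]  # P(+j), column marginals
        acc = 0.0
        for i, row in enumerate(P):
            for j, p in enumerate(row):
                if col_m[j] > 0:
                    acc += p * p / col_m[j]    # predicting rows from columns
                if row_m[i] > 0:
                    acc += p * p / row_m[i]    # predicting columns from rows
        num = acc - sum(p * p for p in row_m) - sum(p * p for p in col_m)
        den = 2 - sum(p * p for p in row_m) - sum(p * p for p in col_m)
        return num / den
    ```

    A perfectly predictive attribute gives τ = 1, while an attribute independent of the class gives τ = 0, which is what makes the measure usable as a filter threshold.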

    Data mining using fuzzy theory for customer relationship management

    Customer Relationship Management (CRM) initiatives have gained much attention over the past few years. Although CRM involves technology, the key success factor is strategy: the strategy of building the business around the customer. With the aid of data mining techniques, however, a business can formulate this strategy more easily. Fuzzy theory allows human expertise and decisions to be modelled more closely, so it is suggested that it can be used in the CRM model. A case study presented in this paper examines two areas of a typical CRM model where the decision-making process can be improved by using fuzzy theory.
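    The abstract does not specify the fuzzy model used. As a minimal illustration of how fuzzy theory encodes expert judgement as graded categories rather than hard cut-offs, a triangular membership function for a hypothetical "high annual spend" customer segment might look like this (the thresholds are invented for the example):

    ```python
    def triangular(x, a, b, c):
        """Triangular fuzzy membership: 0 at a, rising to 1 at the peak b,
        falling back to 0 at c."""
        if x <= a or x >= c:
            return 0.0
        if x <= b:
            return (x - a) / (b - a)
        return (c - x) / (c - b)

    # Hypothetical fuzzy set "high annual spend", peaking at 10,000
    high_spend = lambda amount: triangular(amount, 5000, 10000, 15000)
    ```

    A customer spending 7,500 then belongs to the segment to degree 0.5 instead of being excluded outright, which is the kind of gradation a fuzzy CRM model exploits.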

    Data abstractions for decision tree induction

    When descriptions of data values in a database are too concrete or too detailed, the computational complexity needed to discover useful knowledge from the database generally increases. Furthermore, the discovered knowledge tends to become complicated. A notion of data abstraction seems useful in resolving this kind of problem, as we obtain a smaller and more general database after the abstraction, from which we can quickly extract more abstract knowledge that is expected to be easier to understand. In general, however, since several abstractions are possible, we have to carefully select the one according to which the original database is generalized. An inadequate selection would worsen the accuracy of the extracted knowledge. From this point of view, we propose in this paper a method of selecting an appropriate abstraction from the possible ones, assuming that our task is to construct a decision tree from a relational database. Suppose that, for each attribute in a relational database, we have a class of possible abstractions of the attribute values. As an appropriate abstraction for each attribute, we prefer one such that, even after the abstraction, the distribution of target classes necessary to perform our classification task is preserved within an acceptable error range given by the user. Using the selected abstractions, the original database can be transformed into a small generalized database written in abstract values. It would therefore be expected that, from the generalized database, we can construct a decision tree whose size is much smaller than one constructed from the original database. Furthermore, such a size reduction can be justified under some theoretical assumptions. The appropriateness of an abstraction is precisely defined in terms of standard information theory; therefore, we call our abstraction framework Information Theoretical Abstraction. We show some experimental results obtained with ITA, a system that implements our abstraction method. From those results, it is verified that our method is very effective in reducing the size of the constructed decision tree without making classification errors much worse.
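    The selection criterion described above — accept an abstraction only if the class distribution survives within a user-given error range — can be sketched as follows. This is a deliberate simplification using L1 distance between distributions; ITA's actual criterion is information-theoretic, and the data layout here (per-value class counts, groupings as lists of value sets) is an assumption for illustration:

    ```python
    def preserves_distribution(value_class_counts, grouping, eps):
        """Accept a candidate abstraction (a grouping of attribute values)
        only if every original value's class distribution stays within eps
        (L1 distance) of its abstract group's pooled distribution.
        A sketch of ITA's idea, not its exact information-theoretic test."""
        for group in grouping:
            pooled = {}                       # class counts for the whole group
            for v in group:
                for cls, n in value_class_counts[v].items():
                    pooled[cls] = pooled.get(cls, 0) + n
            ptot = sum(pooled.values())
            for v in group:
                vtot = sum(value_class_counts[v].values())
                dist = sum(abs(value_class_counts[v].get(c, 0) / vtot
                               - pooled[c] / ptot) for c in pooled)
                if dist > eps:                # abstraction distorts this value
                    return False
        return True
    ```

    Merging two values with similar class distributions passes the test, while merging dissimilar ones fails it, which is exactly why an adequate abstraction shrinks the tree without hurting accuracy.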

    Growing Simpler Decision Trees to Facilitate Knowledge Discovery

    When using machine learning techniques for knowledge discovery, output that is comprehensible to a human is as important as predictive accuracy. We introduce a new algorithm, SET-Gen, that improves the comprehensibility of decision trees grown by standard C4.5 without reducing accuracy. It does this by using genetic search to select the set of input features C4.5 is allowed to use to build its tree. We test SET-Gen on a wide variety of real-world datasets and show that SET-Gen trees are significantly smaller and reference significantly fewer features than trees grown by C4.5 without using SET-Gen. Statistical significance tests show that the accuracies of SET-Gen's trees are either indistinguishable from or better than those of the original C4.5 trees on all ten datasets tested. Introduction: One approach to knowledge discovery in databases (DBs) is to apply inductive learning algorithms to derive models of interesting aspects of the data. The predictive accuracy of su..
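    The genetic search over feature subsets that the abstract describes can be sketched as a search over bitmasks. This toy version is an assumption-laden illustration: the population size, operators, and especially the stand-in fitness function are invented here, whereas SET-Gen's real fitness grows a C4.5 tree and scores its accuracy and size:

    ```python
    import random

    def genetic_feature_search(n_features, fitness, pop_size=20,
                               generations=30, seed=0):
        """Toy genetic search over feature-subset bitmasks, in the spirit of
        SET-Gen; the real algorithm evaluates subsets with C4.5."""
        rng = random.Random(seed)
        # Start from the empty subset plus random subsets.
        pop = [[0] * n_features] + [[rng.randint(0, 1) for _ in range(n_features)]
                                    for _ in range(pop_size - 1)]
        for _ in range(generations):
            pop.sort(key=fitness, reverse=True)
            survivors = pop[:pop_size // 2]       # elitist selection
            children = []
            while len(survivors) + len(children) < pop_size:
                a, b = rng.sample(survivors, 2)
                cut = rng.randrange(1, n_features)
                child = a[:cut] + b[cut:]         # one-point crossover
                if rng.random() < 0.1:
                    child[rng.randrange(n_features)] ^= 1   # point mutation
                children.append(child)
            pop = survivors + children
        return max(pop, key=fitness)

    # Stand-in fitness: reward (hypothetical) relevant features 0 and 2,
    # penalise subset size -- a crude proxy for "small, accurate tree".
    def toy_fitness(mask):
        return 2 * (mask[0] + mask[2]) - sum(mask)
    ```

    Because the best individual always survives to the next generation, the returned subset is never worse than the empty subset the search starts from.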

    Métodos de aprendizagem automática: um estudo baseado na avaliação e previsão de clientes bancários

    Project Work presented as the partial requirement for obtaining a Master's degree in Information Management, specialization in Knowledge Management and Business Intelligence. Data Mining has emerged as an important and crucial tool for business success. The considerable volume of data available today brings no added value by itself; Data Mining tools, capable of transforming data into knowledge, fill this gap and are therefore an asset no one wants to lose. This work focuses on the use of Data Mining techniques in banking, specifically in telemarketing. Fourteen algorithms are applied to a database from the call centre of a Portuguese bank, resulting from a campaign to attract clients to term deposits with favourable interest rates. The fourteen algorithms applied in the practical case of this project can be grouped into seven families: Decision Trees, Neural Networks, Support Vector Machines, Voted Perceptron, Ensemble methods, Bayesian learning and Regressions. To benefit even further from what the Data Mining field has to offer, this work also addresses reducing the dimensionality of the database through two attribute selection strategies: Best First and Genetic Search. One objective of this work is to compare the results obtained with those in the study by Sérgio Moro, Raul Laureano and Paulo Cortez (Sérgio Moro, Laureano, & Cortez, 2011). Additionally, it aims to identify the most relevant variables for identifying potential clients for this financial product. The main conclusions are that the results obtained are comparable with those published by the aforementioned authors, being consistent and of good quality. The Bagging algorithm presents the best results, and the variable referring to the duration of the telephone call is the one that most influences the success of similar campaigns.

    Predictive Modelling Approach to Data-driven Computational Psychiatry

    This dissertation contributes novel predictive modelling approaches to data-driven computational psychiatry and offers alternative analysis frameworks to the standard statistical analyses in psychiatric research. In particular, this document advances research in medical data mining, especially psychiatry, in two phases. In the first phase, it proposes synergistic machine learning and statistical approaches for detecting patterns and developing predictive models in clinical psychiatry data to classify diseases, predict treatment outcomes, or improve treatment selection. These data-driven approaches are built upon several machine learning techniques whose predictive models have been pre-processed, trained, optimised, post-processed and tested in novel computationally intensive frameworks. In the second phase, it advances research in medical data mining by proposing several novel extensions in the area of data classification: a novel decision tree algorithm, which we call PIDT, based on parameterised impurities and statistical pruning approaches, toward building more accurate decision tree classifiers and developing new ensemble-based classification methods. The experimental results show that predictive models built with the novel PIDT algorithm generally performed better, in terms of accuracy and tree size, than those built with traditional decision trees. The contributions of this dissertation can be summarised as follows. Firstly, several statistical and machine learning algorithms, plus techniques to improve these algorithms, are explored. Secondly, prediction modelling and pattern detection approaches for first-episode psychosis associated with cannabis use are developed. Thirdly, a new computationally intensive machine learning framework for understanding the link between cannabis use and first-episode psychosis is introduced. Then, complementary and equally sophisticated prediction models for first-episode psychosis associated with cannabis use are developed using artificial neural networks and deep learning within the proposed novel computationally intensive framework. Lastly, an efficient novel decision tree algorithm (PIDT), based on novel parameterised impurities and statistical pruning approaches, is proposed and tested with several medical datasets. These contributions can be used to guide future theory, experiment, and treatment development in medical data mining, especially psychiatry.
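    The abstract does not define PIDT's parameterised impurity family. One well-known family with a single parameter that interpolates between the two standard split criteria is the Tsallis entropy, shown below purely as an illustration of what "parameterised impurity" can mean; PIDT's exact parameterisation may differ:

    ```python
    import math

    def tsallis_impurity(probs, q=2.0):
        """Parameterised impurity family (Tsallis entropy): q = 2 gives the
        Gini index, q -> 1 recovers Shannon entropy. An illustration of a
        parameterised impurity, not necessarily PIDT's own family."""
        if abs(q - 1.0) < 1e-9:
            # Limit q -> 1: Shannon entropy
            return -sum(p * math.log(p) for p in probs if p > 0)
        return (1.0 - sum(p ** q for p in probs)) / (q - 1.0)
    ```

    Tuning such a parameter per dataset, and pruning with a statistical test, is one plausible route to the smaller, more accurate trees the abstract reports.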