    Heuristic Approaches for Generating Local Process Models through Log Projections

    Local Process Model (LPM) discovery mines a set of process models, each of which describes the behavior represented in the event log only partially, i.e. only subsets of the possible activities are taken into account to create so-called local process models. Such smaller models often provide valuable insights into the behavior of the process, especially when no adequate and comprehensible single overall process model exists that can describe the traces of the process from start to end. The practical application of LPM discovery is, however, hindered by computational issues for logs with many activities (problems may already occur with more than 17 unique activities). In this paper, we explore three heuristics for discovering subsets of activities that lead to useful log projections, with the goal of speeding up LPM discovery considerably while still finding high-quality LPMs. We found that a Markov clustering approach to creating projection sets yields the largest improvement in execution time, with the discovered LPMs still being better than those obtained from randomly generated activity sets of the same size. Another heuristic, based on log entropy, yields a more moderate speedup but enables the discovery of higher-quality LPMs. The third heuristic, based on relative information gain, shows unstable performance: for some data sets the speedup and LPM quality are higher than with the log-entropy-based method, while for other data sets there is no speedup at all.
    Comment: Paper accepted and to appear in the proceedings of the IEEE Symposium on Computational Intelligence and Data Mining (CIDM), special session on Process Mining, part of the Symposium Series on Computational Intelligence (SSCI).
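    To make the entropy-based heuristic concrete, the sketch below scores candidate activity subsets by the Shannon entropy of the projected log's activity distribution and keeps the most regular (lowest-entropy) projections for subsequent LPM mining. This is a minimal illustration under assumed details: the function names, the brute-force subset enumeration, and the scoring direction are not taken from the paper, whose heuristics are designed precisely to avoid exhaustive enumeration.

```python
import math
from collections import Counter
from itertools import combinations

def project(log, activities):
    """Project each trace onto an activity subset, dropping all other events."""
    keep = set(activities)
    return [[a for a in trace if a in keep] for trace in log]

def log_entropy(log):
    """Shannon entropy of the activity frequency distribution of a log."""
    counts = Counter(a for trace in log for a in trace)
    total = sum(counts.values())
    return -sum((c / total) * math.log2(c / total)
                for c in counts.values()) if total else 0.0

def rank_projections(log, subset_size, top_k):
    """Enumerate activity subsets (toy-sized logs only!) and rank the
    resulting projections by ascending entropy."""
    alphabet = sorted({a for trace in log for a in trace})
    scored = [(log_entropy(project(log, s)), s)
              for s in combinations(alphabet, subset_size)]
    return sorted(scored)[:top_k]

# Toy event log: each trace is a sequence of activity labels.
log = [list("abcd"), list("abce"), list("abdc"), list("aed")]
for h, subset in rank_projections(log, subset_size=2, top_k=3):
    print(f"{subset}: entropy={h:.3f}")
```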

    Data mining approaches in business intelligence: postgraduate data analytics

    Over recent years, there has been tremendous growth of interest in business intelligence (BI) for higher education. BI analysis solutions are used to extract useful information from multi-dimensional datasets. However, higher-education business intelligence is complex to build and maintain, and it faces knowledge constraints. Data mining techniques therefore provide effective computational methods for higher-education business intelligence. The main purpose of using data mining approaches in business intelligence is to support decision making by higher education management. This paper presents the implementation of data mining approaches in business intelligence using data on a total of 13,508 postgraduates (PG). The PG data allow the research to identify, via a business intelligence process that integrates data mining approaches, the postgraduates who Graduate On Time (GOT). Four layers are discussed in this paper: the data source layer (Layer 1), the data integration layer (Layer 2), the logic layer (Layer 3), and the reporting layer (Layer 4). The main scope of this paper is to identify a suitable data mining approach that enables decision making on GOT, and thereby an appropriate analysis for education management. The results show that the Support Vector Machine (SVM) classifier achieves the best accuracy, at 99%. Hence, the contribution of data mining to business intelligence allows accurate decision making in higher education.
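    As an illustration of the classification step in the logic layer, here is a minimal scikit-learn sketch that trains an SVM to predict GOT. The feature set, the synthetic data, and the label rule are assumptions for demonstration; the paper's actual PG attributes and preprocessing are not reproduced here.

```python
import numpy as np
from sklearn.model_selection import train_test_split
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler
from sklearn.svm import SVC
from sklearn.metrics import accuracy_score

rng = np.random.default_rng(0)
n = 1000
# Hypothetical postgraduate features: CGPA, semesters enrolled, full-time flag.
X = np.column_stack([
    rng.normal(3.2, 0.4, n),   # CGPA
    rng.integers(2, 10, n),    # semesters enrolled
    rng.integers(0, 2, n),     # full-time (1) vs part-time (0)
])
# Hypothetical label: Graduate On Time (GOT) given few semesters and a decent CGPA.
y = ((X[:, 1] <= 6) & (X[:, 0] >= 3.0)).astype(int)

X_tr, X_te, y_tr, y_te = train_test_split(X, y, test_size=0.3, random_state=0)
model = make_pipeline(StandardScaler(), SVC(kernel="rbf"))
model.fit(X_tr, y_tr)
print("GOT accuracy:", accuracy_score(y_te, model.predict(X_te)))
```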

    Fuzzy Modeling of Client Preference in Data-Rich Marketing Environments

    Advances in computational methods have led, in the world of financial services, to huge databases of client and market information. In the past decade, various computational intelligence (CI) techniques have been applied in mining this data for obtaining knowledge and in-depth information about the clients and the markets. This paper discusses the application of fuzzy clustering in target selection from large databases for direct marketing (DM) purposes. Actual data from the campaigns of a large financial services provider are used as a test case. The results obtained with the fuzzy clustering approach are compared with those resulting from the current practice of using statistical tools for target selection.
    Keywords: fuzzy clustering; direct marketing; client segmentation; fuzzy systems
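    A short sketch of the core technique may help: fuzzy c-means assigns each client a graded membership in every segment, and targets can then be selected by thresholding membership in the most responsive segment. The implementation below is a plain NumPy version of standard fuzzy c-means; the client features, the threshold, and the selection rule are illustrative assumptions, not the provider's actual campaign model.

```python
import numpy as np

def fuzzy_c_means(X, c, m=2.0, iters=100, tol=1e-5, seed=0):
    """Standard fuzzy c-means: returns cluster centers and membership matrix U."""
    rng = np.random.default_rng(seed)
    U = rng.random((len(X), c))
    U /= U.sum(axis=1, keepdims=True)            # memberships sum to 1 per client
    for _ in range(iters):
        W = U ** m
        centers = (W.T @ X) / W.sum(axis=0)[:, None]
        d = np.linalg.norm(X[:, None, :] - centers[None, :, :], axis=2) + 1e-12
        U_new = 1.0 / d ** (2.0 / (m - 1.0))
        U_new /= U_new.sum(axis=1, keepdims=True)
        if np.abs(U_new - U).max() < tol:
            return centers, U_new
        U = U_new
    return centers, U

# Hypothetical client features: age, income (kEUR), past responses to mailings.
rng = np.random.default_rng(1)
X = np.vstack([rng.normal([30, 40, 1], 3, (50, 3)),
               rng.normal([55, 90, 5], 3, (50, 3))])
centers, U = fuzzy_c_means(X, c=2)
responsive = centers[:, 2].argmax()              # segment with most past responses
targets = np.where(U[:, responsive] > 0.7)[0]    # threshold graded membership
print(len(targets), "clients selected for the mailing")
```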

    Viable system architecture applied to maintenance 4.0

    Câmara, R. A., Mamede, H. S., & Dos Santos, V. D. (2019). Viable system architecture applied to maintenance 4.0. In A. P. Abraham, J. Roth, & L. Rodrigues (Eds.), Multi Conference on Computer Science and Information Systems, MCCSIS 2019 - Proceedings of the International Conferences on Big Data Analytics, Data Mining and Computational Intelligence 2019 and Theory and Practice in Modern Computing 2019 (pp. 127-134). IADIS Press.
    The disruptive requirements that currently drive the so-called Industry 4.0 (I4.0) are increasingly present in today's industries, where factories are forced to innovate in pursuit of better product manufacturing quality together with reductions in manufacturing time, environmental impact, and the costs of the manufacturing process. To this end, an Information Systems (IS) architecture is proposed to reduce the negative impacts on industrial operations caused by manual configuration failures in manufacturing systems, machines worn out in the production process, and unstable integrations between industrial subsystems. The suggested IS model applies the Viable System Model to Maintenance 4.0 technologies (Cyber-Physical Systems (CPS), Manufacturing Execution Systems (MES), and Data Mining and Digital Manufacturing concepts/technologies), with the goal of creating an automatic purchase flow that replaces parts by mitigating impending failures in industrial equipment through data mining and predictive analysis.
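    The automatic purchase flow can be pictured with a small sketch: a predicted remaining useful life (RUL) below a threshold triggers a purchase order for the worn part. Everything here is hypothetical scaffolding (the toy degradation model, the names predict_rul and create_purchase_order, the threshold); in the proposed architecture these roles fall to the CPS, MES, and data mining layers.

```python
from dataclasses import dataclass

@dataclass
class SensorReading:
    machine_id: str
    part_id: str
    vibration_rms: float   # mm/s, reported by the cyber-physical system
    hours_run: float

def predict_rul(r: SensorReading) -> float:
    """Toy degradation model: RUL shrinks as vibration and runtime grow."""
    return max(0.0, 500.0 - 0.05 * r.hours_run - 40.0 * r.vibration_rms)

def create_purchase_order(part_id: str, machine_id: str) -> dict:
    """Stand-in for the ERP/MES integration that orders a replacement part."""
    return {"part": part_id, "machine": machine_id, "status": "ordered"}

RUL_THRESHOLD_HOURS = 72.0

readings = [
    SensorReading("press-01", "bearing-6204", vibration_rms=9.5, hours_run=4200),
    SensorReading("press-02", "bearing-6204", vibration_rms=2.1, hours_run=800),
]
for r in readings:
    rul = predict_rul(r)
    if rul < RUL_THRESHOLD_HOURS:   # impending failure: trigger the purchase flow
        print(create_purchase_order(r.part_id, r.machine_id), f"(RUL={rul:.0f}h)")
```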

    Intelligent Financial Fraud Detection Practices: An Investigation

    Financial fraud is an issue with far-reaching consequences in the finance industry, government, corporate sectors, and for ordinary consumers. Increasing dependence on new technologies such as cloud and mobile computing in recent years has compounded the problem. Traditional methods of detection involve extensive use of auditing, where a trained individual manually inspects reports or transactions in an attempt to discover fraudulent behaviour. This method is not only time-consuming, expensive, and inaccurate, but in the age of big data it is also impractical. Not surprisingly, financial institutions have turned to automated processes using statistical and computational methods. This paper presents a comprehensive investigation of financial fraud detection practices using such data mining methods, with a particular focus on computational intelligence-based techniques. The practices are classified by key aspects such as the detection algorithm used, the fraud type investigated, and the success rate. Issues and challenges associated with current practices and potential future directions of research have also been identified.
    Comment: Proceedings of the 10th International Conference on Security and Privacy in Communication Networks (SecureComm 2014).
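    As a flavor of the automated approaches surveyed, here is a hedged sketch of one common CI-style technique: unsupervised anomaly scoring of transactions with an Isolation Forest. The features, the synthetic data, and the contamination rate are illustrative assumptions and do not come from the paper.

```python
import numpy as np
from sklearn.ensemble import IsolationForest

rng = np.random.default_rng(42)
# Hypothetical transaction features: amount, hour of day, merchant risk score.
normal = np.column_stack([rng.lognormal(3, 0.5, 980),
                          rng.integers(8, 22, 980),
                          rng.random(980)])
fraud = np.column_stack([rng.lognormal(6, 0.8, 20),    # unusually large amounts
                         rng.integers(0, 5, 20),       # odd hours
                         0.7 + 0.3 * rng.random(20)])  # risky merchants
X = np.vstack([normal, fraud])

clf = IsolationForest(contamination=0.02, random_state=0).fit(X)
flags = clf.predict(X)   # -1 marks transactions the forest isolates easily
print("transactions flagged for audit:", int((flags == -1).sum()))
```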

    Redefining biomaterial biocompatibility: challenges for artificial intelligence and text mining

    The surge in ‘Big data’ has significantly influenced biomaterials research and development, with vast data volumes emerging from clinical trials, scientific literature, electronic health records, and other sources. Biocompatibility is essential in developing safe medical devices and biomaterials that perform as intended without provoking adverse reactions. Therefore, establishing an artificial intelligence (AI)-driven biocompatibility definition has become decisive for automating data extraction and profiling safety and effectiveness. This definition should both reflect the attributes related to biocompatibility and be compatible with computational data-mining methods. Here, we discuss the need for a comprehensive and contemporary definition of biocompatibility and the challenges in developing one. We also identify the key elements that comprise biocompatibility, and propose an integrated biocompatibility definition that enables data-mining approaches.
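    A definition that is compatible with data-mining methods could, for instance, support a rule-based first pass over the literature. The sketch below counts mentions of candidate biocompatibility attributes in an abstract; the attribute lexicon is an illustrative assumption, not the definition the authors propose.

```python
import re
from collections import Counter

# Hypothetical attribute lexicon; the paper's actual key elements may differ.
ATTRIBUTES = ["cytotoxicity", "inflammation", "hemocompatibility",
              "degradation", "immune response", "fibrosis"]

def profile(text: str) -> Counter:
    """Count attribute mentions (case-insensitive, whole phrases)."""
    counts = Counter()
    for term in ATTRIBUTES:
        counts[term] = len(re.findall(rf"\b{re.escape(term)}\b", text, re.I))
    return counts

abstract = ("The coating reduced cytotoxicity and inflammation in vitro, "
            "while degradation products showed no adverse immune response.")
print({k: v for k, v in profile(abstract).items() if v})
```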