
    Computing fuzzy rough approximations in large scale information systems

    Rough set theory is a popular and powerful machine learning tool. It is especially suitable for dealing with information systems that exhibit inconsistencies, i.e. objects that have the same values for the conditional attributes but a different value for the decision attribute. In line with the emerging granular computing paradigm, rough set theory groups objects together based on the indiscernibility of their attribute values. Fuzzy rough set theory extends rough set theory to data with continuous attributes and detects degrees of inconsistency in the data. Key to this is turning the indiscernibility relation into a gradual relation, acknowledging that objects can be similar to a certain extent. In very large datasets with millions of objects, computing the gradual indiscernibility relation (in other words, the soft granules) is very demanding, both in terms of runtime and memory. It is, however, required for the computation of the lower and upper approximations of concepts in the fuzzy rough set analysis pipeline. Current non-distributed implementations in R are limited by memory capacity: we found that a state-of-the-art non-distributed implementation in R could not handle 30,000 rows and 10 attributes on a node with 62 GB of memory. This is clearly insufficient to scale fuzzy rough set analysis to massive datasets. In this paper we present a parallel and distributed solution based on the Message Passing Interface (MPI) to compute fuzzy rough approximations in very large information systems. Our results show that our parallel approach scales with problem size to information systems with millions of objects. To the best of our knowledge, no other parallel and distributed solutions have been proposed in the literature for this problem.
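    The pipeline the abstract describes (gradual indiscernibility relation, then lower/upper approximations) can be sketched in a few lines. The block below is an illustrative, memory-bounded NumPy version that processes the n x n similarity relation in row blocks, not the authors' MPI implementation; the similarity measure (1 minus mean absolute attribute difference) and the Łukasiewicz implicator/t-norm are common textbook choices, assumed here.

```python
import numpy as np

def fuzzy_lower_upper(X, concept, chunk=1024):
    """Chunked fuzzy rough lower/upper approximations (sketch).

    X       : (n, m) array of attribute values scaled to [0, 1]
    concept : (n,) membership degrees of the decision concept in [0, 1]
    Row blocks keep memory bounded so the full n x n relation is
    never materialised -- the memory problem the paper's distributed
    solution addresses at much larger scale.
    """
    n = X.shape[0]
    lower = np.empty(n)
    upper = np.empty(n)
    for start in range(0, n, chunk):
        block = X[start:start + chunk]
        # gradual indiscernibility: 1 - mean absolute attribute difference
        sim = 1.0 - np.abs(block[:, None, :] - X[None, :, :]).mean(axis=2)
        # lower(x) = inf_y I(R(x,y), A(y)) with the Lukasiewicz implicator
        lower[start:start + chunk] = np.minimum(1.0, 1.0 - sim + concept).min(axis=1)
        # upper(x) = sup_y T(R(x,y), A(y)) with the Lukasiewicz t-norm
        upper[start:start + chunk] = np.maximum(0.0, sim + concept - 1.0).max(axis=1)
    return lower, upper
```

    On an inconsistent toy system (two identical objects with opposite decisions), the inconsistent objects get lower approximation 0, which is exactly the degree-of-inconsistency detection the abstract refers to.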

    NMGRS: Neighborhood-based multigranulation rough sets

    Recently, the multigranulation rough set (MGRS) has become a new direction in rough set theory, based on multiple binary relations on the universe. However, it is worth noticing that the original MGRS cannot be used to discover knowledge from information systems with various domains of attributes. In order to extend the theory of MGRS, the objective of this study is to develop a so-called neighborhood-based multigranulation rough set (NMGRS) in the framework of multigranulation rough sets. Furthermore, by using two different approximating strategies, i.e., seeking common reserving difference and seeking common rejecting difference, we first present optimistic and pessimistic 1-type neighborhood-based multigranulation rough sets and optimistic and pessimistic 2-type neighborhood-based multigranulation rough sets, respectively. Through analyzing several important properties of neighborhood-based multigranulation rough sets, we find that the new rough sets degenerate to the original MGRS when the size of the neighborhood equals zero. To obtain covering reducts under neighborhood-based multigranulation rough sets, we then propose a new definition of covering reduct to describe the smallest attribute subset that preserves the consistency of the neighborhood decision system, which can be calculated by Chen's discernibility matrix approach. These results show that the proposed NMGRS largely extends the theory and application of classical MGRS in the context of multiple granulations.
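    The optimistic/pessimistic distinction can be made concrete with a small sketch. Assuming a Chebyshev-distance neighborhood of radius delta per granulation (one plausible choice; the paper's exact neighborhood operator may differ), the optimistic lower approximation requires containment in the target under at least one granulation, the pessimistic one under all of them:

```python
import numpy as np

def neighborhood(X, i, attrs, delta):
    """Indices of objects within delta of object i under the given attributes."""
    d = np.abs(X[:, attrs] - X[i, attrs]).max(axis=1)  # Chebyshev distance
    return set(np.nonzero(d <= delta)[0])

def mgrs_lower(X, target, granulations, delta, mode="optimistic"):
    """Neighborhood-based multigranulation lower approximation (sketch).

    target       : set of object indices (the concept)
    granulations : list of attribute-index lists, one per granulation
    optimistic   : neighborhood inside target under AT LEAST ONE granulation
    pessimistic  : neighborhood inside target under ALL granulations
    With delta = 0 the neighborhoods collapse to equivalence classes and
    the definitions degenerate to the original MGRS, as the paper notes.
    """
    agg = any if mode == "optimistic" else all
    return {i for i in range(len(X))
            if agg(neighborhood(X, i, a, delta) <= target for a in granulations)}
```

    The `delta = 0` case in the test below illustrates the degeneration property mentioned in the abstract.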

    A Fuzzy Association Rule Mining Expert-Driven (FARME-D) approach to Knowledge Acquisition

    A Fuzzy Association Rule Mining Expert-Driven (FARME-D) approach to knowledge acquisition is proposed in this paper as a viable solution to the challenges of rule-base unwieldiness and the sharp boundary problem in building a fuzzy rule-based expert system. The fuzzy models were based on domain experts' opinion about the data description. The proposed approach is committed to the modelling of compact fuzzy rule-based expert systems. It is also aimed at providing a platform for instant update of the knowledge base when new knowledge is discovered. Insights into the new approach's strategies and underlying assumptions, the structure of FARME-D, and its practical application in the medical domain are discussed, along with the modalities for validating the FARME-D approach.

    Cloud Service Provider Evaluation System using Fuzzy Rough Set Technique

    Cloud Service Providers (CSPs) offer a wide variety of scalable, flexible, and cost-efficient services to cloud users on demand and on a pay-per-utilization basis. However, the vast diversity of available cloud service providers leads to numerous challenges for users in determining and selecting the best suitable service. Also, users sometimes need to hire the required services from multiple CSPs, which introduces difficulties in managing interfaces, accounts, security, support, and Service Level Agreements (SLAs). To circumvent such problems, having a Cloud Service Broker (CSB) aware of service offerings and users' Quality of Service (QoS) requirements will benefit both the CSPs and the users. In this work, we propose a fuzzy rough set based Cloud Service Brokerage Architecture, which is responsible for ranking and selecting services based on users' QoS requirements, and finally monitors the service execution. We use the fuzzy rough set technique for dimension reduction, and a weighted Euclidean distance to rank the CSPs. To prioritize user QoS requests, we use user-assigned weights, and also incorporate system-assigned weights to give relative importance to QoS attributes. We compared the proposed ranking technique with an existing method based on system response time. The case study experiment results show that the proposed approach is scalable, resilient, and produces better results with less searching time.
    Comment: 12 pages, 7 figures, and 8 tables
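    The ranking step (weighted Euclidean distance between each provider's QoS vector and the user's request) is straightforward to sketch. How the paper combines user-assigned and system-assigned weights is not stated in the abstract; averaging the two vectors below is an assumption for illustration only:

```python
import numpy as np

def rank_csps(qos, request, user_w, system_w):
    """Rank providers by weighted Euclidean distance to a QoS request (sketch).

    qos      : (p, q) matrix, one row of normalised QoS values per provider
    request  : (q,) user's desired QoS values, same normalisation
    user_w   : (q,) user-assigned priorities
    system_w : (q,) system-assigned weights
    The two weight vectors are averaged and normalised here -- an
    assumption, not the paper's exact combination rule.
    """
    w = (np.asarray(user_w, float) + np.asarray(system_w, float)) / 2.0
    w = w / w.sum()
    dist = np.sqrt(((qos - request) ** 2 * w).sum(axis=1))
    return np.argsort(dist)  # closest (best-matching) provider first
```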

    Positive region: An enhancement of partitioning attribute based rough set for categorical data

    Datasets containing multi-value attributes arise in several domains, such as pattern recognition, machine learning and data mining, and data partitioning is required in such cases. Partitioning attribute selection is the clustering process that splits the whole dataset for further processing. Prominent rough set-based approaches already exist for grouping objects and handling uncertain data; they use indiscernibility of attributes and a mean roughness measure to select the partitioning attribute. Nevertheless, most partitioning attribute selection algorithms for categorical data are incapable of optimal partitioning. These indiscernibility and mean roughness measures, moreover, require the calculation of the lower approximation, which is less accurate and expensive to compute; this limits the growth of the attribute set and neglects the data in the boundary region. This paper presents a new concept called "Positive Region Based Mean Dependency (PRD)" that calculates attribute dependency. PRD determines the mean dependency of attributes using a positive region-based measure suitable for categorical datasets. By avoiding the lower approximation, PRD is an optimal substitute for the conventional dependency measure in partitioning attribute selection. Contrary to traditional RST partitioning methods, the proposed method can be employed as a measure of data output uncertainty and as a tailback for larger and multiple data clustering. The performance of the presented method is evaluated and compared with the Information-Theoretical Dependence Roughness (ITDR) and Maximum Indiscernible Attribute (MIA) algorithms.
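    For context, the classical positive-region dependency degree that PRD builds on can be computed directly from equivalence classes of a categorical attribute. This is a sketch of the standard measure gamma = |POS| / |U|; the paper's mean-dependency formula over candidate partitioning attributes is not reproduced here:

```python
from collections import defaultdict

def dependency(rows, cond, dec):
    """Classical rough-set dependency degree gamma(cond -> dec) (sketch).

    rows : list of dicts holding categorical records
    cond : conditioning attribute used to partition the universe
    dec  : decision attribute
    The positive region collects objects whose indiscernibility class
    (same value of cond) is pure with respect to the decision.
    """
    classes = defaultdict(list)
    for r in rows:
        classes[r[cond]].append(r[dec])
    pos = sum(len(v) for v in classes.values() if len(set(v)) == 1)
    return pos / len(rows)
```

    A partitioning attribute with higher dependency induces more decision-consistent groups, which is the intuition behind using such measures for attribute selection.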

    A comprehensive study of implicator-conjunctor based and noise-tolerant fuzzy rough sets: definitions, properties and robustness analysis

    © 2014 Elsevier B.V. Both rough and fuzzy set theories offer interesting tools for dealing with imperfect data: while the former allows us to work with uncertain and incomplete information, the latter provides a formal setting for vague concepts. The two theories are highly compatible, and since the late 1980s many researchers have studied their hybridization. In this paper, we critically evaluate the most relevant fuzzy rough set models proposed in the literature. To this end, we establish a formally correct and unified mathematical framework for them. Both implicator-conjunctor-based definitions and noise-tolerant models are studied. We evaluate these models on two different fronts: firstly, we discuss which properties of the original rough set model can be maintained, and secondly, we examine how robust they are against both class and attribute noise. By highlighting the benefits and drawbacks of the different fuzzy rough set models, this study is a necessary first step toward proposing and developing new models in future research.
    Lynn D'eer has been supported by the Ghent University Special Research Fund. Chris Cornelis was partially supported by the Spanish Ministry of Science and Technology under the project TIN2011-28488, the Andalusian Research Plans P11-TIC-7765 and P10-TIC-6858, and project PYR-2014-8 of the Genil Program of CEI BioTic GRANADA. Lluis Godo has been partially supported by the Spanish MINECO project EdeTRI TIN2012-39348-C02-01. Peer reviewed.
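    The implicator-conjunctor-based definitions the survey unifies take the form lower(x) = inf_y I(R(x,y), A(y)) and upper(x) = sup_y C(R(x,y), A(y)) for an implicator I and conjunctor C. A minimal sketch, parameterised by the (I, C) pair, with two standard pairs from the literature:

```python
import numpy as np

# Two classical implicator/conjunctor pairs (Lukasiewicz; Kleene-Dienes
# implicator with the minimum t-norm as conjunctor)
lukasiewicz = (lambda a, b: np.minimum(1.0, 1.0 - a + b),
               lambda a, b: np.maximum(0.0, a + b - 1.0))
kleene_dienes = (lambda a, b: np.maximum(1.0 - a, b),
                 lambda a, b: np.minimum(a, b))

def approximations(R, A, pair):
    """Implicator-conjunctor-based approximations of a fuzzy set (sketch).

    R    : (n, n) fuzzy relation, A : (n,) fuzzy set, pair : (I, C)
    lower(x) = inf_y I(R(x, y), A(y));  upper(x) = sup_y C(R(x, y), A(y))
    """
    I, C = pair
    return I(R, A[None, :]).min(axis=1), C(R, A[None, :]).max(axis=1)
```

    For a reflexive relation R, every such pair satisfies lower <= A <= upper, one of the original rough set properties whose preservation the paper examines model by model.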

    A combined data mining approach using rough set theory and case-based reasoning in medical datasets

    Case-based reasoning (CBR) is the process of solving new cases by retrieving the most relevant ones from an existing knowledge base. Since irrelevant or redundant features not only remarkably increase memory requirements but also increase the time complexity of case retrieval, reducing the number of dimensions is worth considering. This paper uses rough set theory (RST) to reduce the number of dimensions in a CBR classifier with the aim of increasing accuracy and efficiency. CBR exploits a co-occurrence-based distance for categorical data to measure the similarity of cases; this distance is based on the proportional distribution of the different categorical values of features, and the weight used for a feature is the average of the co-occurrence values of the features. The combination of RST and CBR has been applied to the real categorical datasets Wisconsin Breast Cancer, Lymphography, and Primary cancer. The 5-fold cross-validation method is used to evaluate the performance of the proposed approach. The results show that this combined approach lowers computational costs and improves performance metrics, including accuracy and interpretability, compared to other approaches developed in the literature.
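    The retrieval step can be sketched as weighted nearest-case matching over categorical features. The simple weighted-overlap similarity below is a stand-in for the paper's proportional-distribution distance, and the weights are assumed to come from the co-occurrence statistics described in the abstract (not reproduced here):

```python
def retrieve(cases, query, weights):
    """Nearest-case retrieval for categorical data (illustrative sketch).

    cases   : list of (feature-dict, label) pairs in the case base
    query   : feature-dict of the new case
    weights : per-feature weights, e.g. derived from co-occurrence values
    Similarity is the weighted count of matching feature values -- a
    simplification of the paper's co-occurrence-based distance.
    """
    def sim(features):
        return sum(w for f, w in weights.items() if features.get(f) == query.get(f))
    return max(cases, key=lambda case: sim(case[0]))
```

    RST-based dimension reduction would shrink the `weights` dictionary to a reduct before retrieval, which is where the accuracy and efficiency gains reported in the paper come from.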