Covering rough sets based on neighborhoods: An approach without using neighborhoods
Rough set theory, a mathematical tool for dealing with inexact or uncertain
knowledge in information systems, originally described the indiscernibility
of elements by equivalence relations. Covering rough sets are a natural
extension of classical rough sets by relaxing the partitions arising from
equivalence relations to coverings. Recently, some topological concepts such as
neighborhood have been applied to covering rough sets. In this paper, we
further investigate the covering rough sets based on neighborhoods by
approximation operations. We show that the upper approximation based on
neighborhoods can be defined equivalently without using neighborhoods. To
analyze the coverings themselves, we introduce unary and composition operations
on coverings. A notion of homomorphism is provided to relate two covering
approximation spaces. We also examine the properties of approximations
preserved by the operations and homomorphisms, respectively.
Comment: 13 pages; to appear in International Journal of Approximate Reasoning
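The neighborhood-based approximations mentioned above can be illustrated with a short sketch. The covering, universe, and target set below are illustrative assumptions, not examples from the paper; the neighborhood of x is taken as the intersection of all covering blocks containing x, a common definition in this literature.

```python
# Minimal sketch of neighborhood-based covering rough set approximations.
# The covering C and target set X are illustrative assumptions.

def neighborhood(x, covering):
    """N(x): intersection of all covering blocks that contain x."""
    blocks = [k for k in covering if x in k]
    return set.intersection(*blocks)

def lower_upper(target, universe, covering):
    """Lower: N(x) inside target; upper: N(x) meets target."""
    lower = {x for x in universe if neighborhood(x, covering) <= target}
    upper = {x for x in universe if neighborhood(x, covering) & target}
    return lower, upper

U = {1, 2, 3, 4}
C = [{1, 2}, {2, 3}, {3, 4}]   # a covering of U (not a partition)
X = {1, 3}
low, up = lower_upper(X, U, C)  # low == {3}, up == {1, 3, 4}
```

Here the boundary region {1, 4} consists of elements whose neighborhoods overlap X without being contained in it.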
Extended Tolerance Relation to Define a New Rough Set Model in Incomplete Information Systems
This paper proposes a rough set model for incomplete information systems that defines an extended tolerance relation using the frequencies of attribute values in such a system. It first reviews some rough set extensions for incomplete information systems. Next, a "probability of matching" is defined from the data in an information system and used to measure the degree of tolerance. A rough set model is then developed from a tolerance relation defined with a threshold. The paper discusses the mathematical properties of the newly developed rough set model and introduces a method to derive reducts and the core.
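The frequency-based tolerance idea above can be sketched as follows. The table, the uniform-frequency estimate for missing values, and the threshold are my assumptions for illustration, not the paper's exact definitions.

```python
# Hedged sketch of a threshold tolerance relation on an incomplete table.
# None marks a missing value; probabilities come from value frequencies.
from collections import Counter

def match_prob(v1, v2, known_values):
    """Probability that two attribute values match."""
    freq = Counter(known_values)
    total = len(known_values)
    if v1 is not None and v2 is not None:
        return 1.0 if v1 == v2 else 0.0
    if v1 is None and v2 is None:
        # chance that two independent frequency-weighted draws agree
        return sum((c / total) ** 2 for c in freq.values())
    known = v1 if v1 is not None else v2
    return freq[known] / total      # chance the missing value equals it

table = [('a', 'x'), ('a', None), ('b', 'x')]
# pool of known (non-missing) values per attribute
cols = [[r[i] for r in table if r[i] is not None] for i in range(2)]

def tolerant(r1, r2, threshold=0.5):
    """Two cases tolerate each other if their joint match probability
    meets the threshold (independence across attributes assumed)."""
    p = 1.0
    for i, (u, v) in enumerate(zip(r1, r2)):
        p *= match_prob(u, v, cols[i])
    return p >= threshold
```

With these data, the first two cases are tolerant (the missing value is likely 'x'), while the first and third differ on a known attribute and are not.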
Rough Set on Concept Lattice
A new formulation of rough set theory can be developed from a binary relation by associating the elements of one finite universe of objects with the elements of another finite universe of properties. This paper presents a generalization of Pawlak's rough set approximation operators on concept lattices. The notion of rough set approximation is to approximate an undefinable set or concept through two definable sets. We analyze these operators, and from the results one can obtain a better understanding of data analysis using formal concept analysis and rough set theory. Key Words: Rough Set, Lattice, Formal Concept, Approximation Operators
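The object–property association underlying concept lattices can be sketched with the two standard FCA derivation operators; the tiny formal context below is an assumption for illustration.

```python
# Sketch of the two FCA derivation operators that form formal concepts.
# The context (objects, attributes, incidence) is illustrative.

objects = {'o1', 'o2', 'o3'}
attrs = {'p', 'q', 'r'}
incidence = {('o1', 'p'), ('o1', 'q'), ('o2', 'q'), ('o3', 'r')}

def intent(objs):
    """Attributes shared by every object in objs."""
    return {a for a in attrs if all((o, a) in incidence for o in objs)}

def extent(attributes):
    """Objects possessing every attribute in the set."""
    return {o for o in objects if all((o, a) in incidence for a in attributes)}

# A formal concept is a pair (A, B) with extent(B) == A and intent(A) == B.
A = extent({'q'})   # {'o1', 'o2'}
B = intent(A)       # {'q'}
```

The pair (A, B) above is a formal concept: its extension and intension determine each other, which is exactly the kind of definable unit a rough approximation on the lattice is built from.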
A Logic Approach to Granular Computing
This article was originally published by the International Journal of Cognitive Informatics and Natural Intelligence.
Granular computing is an emerging field of study that attempts to formalize and explore methods and
heuristics of human problem solving with multiple levels of granularity and abstraction. A fundamental
issue of granular computing is the representation and utilization of granular structures. The main objective
of this article is to examine a logic approach to address this issue. Following the classical interpretation
of a concept as a pair of intension and extension, we interpret a granule as a pair of a set of objects and a
logic formula describing the granule. The building blocks of granular structures are basic granules representing an elementary concept or a piece of knowledge. They are treated as atomic formulas of a logic
language. Different types of granular structures can be constructed by using logic connectives. Within
this logic framework, we show that rough set analysis (RSA) and formal concept analysis (FCA) can be
interpreted uniformly. The two theories use multilevel granular structures but differ in their choices of
definable granules and granular structures.
NSERC Canada Discovery grant
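The view of a granule as a pair of extension and intension can be rendered as a toy sketch; the string formulas, universe, and connective names below are my assumptions, not the article's formal language.

```python
# Toy rendering of granules as (intension, extension) pairs, with logic
# connectives acting on both components. Formulas are plain strings.

U = {1, 2, 3, 4, 5, 6}

def granule(formula, extension):
    return (formula, frozenset(extension))

def conj(g1, g2):
    """Conjunction: AND the formulas, intersect the extensions."""
    return (f"({g1[0]} AND {g2[0]})", g1[1] & g2[1])

def disj(g1, g2):
    """Disjunction: OR the formulas, unite the extensions."""
    return (f"({g1[0]} OR {g2[0]})", g1[1] | g2[1])

even = granule("even", {2, 4, 6})    # a basic granule
small = granule("small", {1, 2, 3})  # another basic granule
g = conj(even, small)                # ("(even AND small)", {2})
```

Basic granules play the role of atomic formulas; compound granules built with the connectives give the multilevel granular structures the article discusses.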
Experiments on Incomplete Data Sets Using Modifications to Characteristic Relation
Rough set theory is a useful approach for decision rule induction and is applied to large real-life data sets. Lower and upper approximations of concept values are used to induce rules for incomplete data sets. In our research we study the validity of modifications suggested to the characteristic relation. We discuss the implementation of modifications to the characteristic relation and the local definability of each modified set. We show that none of the suggested modification sets is locally definable except for maximal consistent blocks, which are restricted to data sets with "do not care" conditions. A comparative analysis was conducted for characteristic sets and modifications in terms of the cardinality of the lower and upper approximations of each concept and the decision rules induced by each modification. Experiments were conducted on four incomplete data sets with lost and "do not care" conditions. The LEM2 algorithm was implemented to induce certain and possible rules from the incomplete data sets. To measure the average classification error rate for the induced rules, ten-fold cross-validation was used. Our results show that there is no significant difference between the quality of the rules induced by each modification.
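The characteristic sets underlying this line of work can be sketched as follows, using the common convention that '?' marks a lost value and '*' a "do not care" condition; the tiny table is an illustrative assumption.

```python
# Sketch of characteristic sets on a small incomplete decision table.
# '?' = lost value, '*' = "do not care"; the data are illustrative.

table = {
    'c1': ['a', '?'],
    'c2': ['a', 'x'],
    'c3': ['*', 'x'],
    'c4': ['b', 'y'],
}

def characteristic_set(case):
    """K(case): all cases that could be indistinguishable from it.
    A specified value matches the same value or '*'; lost and
    do-not-care values on the reference case impose no restriction."""
    ks = set(table)
    for i, v in enumerate(table[case]):
        if v in ('?', '*'):
            continue
        ks &= {y for y in table
               if table[y][i] == v or table[y][i] == '*'}
    return ks
```

For instance, 'c1' has a lost second value, so its characteristic set is determined by the first attribute alone, while 'c4' is distinguishable from every other case.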
Three-valued logics, uncertainty management and rough sets
This paper is a survey of the connections between three-valued logics and rough sets from the point of view of incomplete information management. Based on the fact that many three-valued logics can be put under a unique algebraic umbrella, we show how to translate three-valued conjunctions and implications into operations on ill-known sets such as rough sets. We then show that while such translations may provide mathematically elegant algebraic settings for rough sets, the interpretability of these connectives in terms of an original set approximated via an equivalence relation is very limited, thus casting doubt on the practical relevance of truth-functional logical renderings of rough sets.
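One standard way to connect the two frameworks is to read rough membership three-valuedly: true in the lower approximation, unknown in the boundary, false outside the upper approximation. The sketch below uses Kleene's strong connectives on the values {0, 1/2, 1}; the example sets are my assumptions.

```python
# Minimal sketch linking a rough set to Kleene's strong three-valued
# logic. Membership: 1 in the lower approximation, 1/2 in the boundary,
# 0 outside the upper approximation. Example sets are illustrative.

def three_valued_membership(x, lower, upper):
    if x in lower:
        return 1.0
    if x in upper:
        return 0.5        # boundary region: truth value "unknown"
    return 0.0

def kleene_and(u, v):
    return min(u, v)

def kleene_implies(u, v):
    return max(1.0 - u, v)   # Kleene implication: not-u or v

lower, upper = {1}, {1, 2}   # e.g. approximations of some target set
```

As the survey notes, such truth-functional combinations are algebraically neat but need not correspond to the approximations of any actual combined set, which is the interpretability gap it analyzes.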
On Probability of Matching in Probability Based Rough Set Definitions
The original rough set theory deals with precise and complete data, while real applications frequently contain imperfect information. A typical type of imperfect data studied in rough set research is missing values. Although many ideas have been proposed in the literature to address this issue, this paper adopts a probabilistic approach because it can incorporate other types of imperfect data, including imprecise and uncertain values, in a single framework. The paper first discusses probabilities of attribute values for the different types of attributes found in real applications, and proposes a generalized method for the probability of matching. It also discusses the case of continuous data as well as discrete data. The proposed probability of matching could be used for defining valued tolerance/similarity relations in rough set approaches.
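For the discrete case, one simple instance of a matching probability handles imprecise (set-valued) attribute values; the uniform-likelihood assumption and the function name below are mine, for illustration only.

```python
# Hedged sketch of a probability of matching for imprecise (set-valued)
# discrete attribute values, assuming each candidate value in a set is
# equally likely. Names and the uniform assumption are illustrative.
from fractions import Fraction

def prob_match(a, b):
    """P(two imprecise values agree), a and b being candidate-value sets.
    Under uniform likelihood this is |a & b| / (|a| * |b|)."""
    if not a or not b:
        return Fraction(0)
    return Fraction(len(a & b), len(a) * len(b))

p = prob_match({'x', 'y'}, {'y', 'z'})   # Fraction(1, 4)
```

A precise value is the singleton case, where the probability collapses to 1 (equal) or 0 (different), recovering ordinary matching.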