Covering rough sets based on neighborhoods: An approach without using neighborhoods
Rough set theory, a mathematical tool to deal with inexact or uncertain
knowledge in information systems, has originally described the indiscernibility
of elements by equivalence relations. Covering rough sets are a natural
extension of classical rough sets by relaxing the partitions arising from
equivalence relations to coverings. Recently, some topological concepts such as
neighborhood have been applied to covering rough sets. In this paper, we
further investigate the covering rough sets based on neighborhoods by
approximation operations. We show that the upper approximation based on
neighborhoods can be defined equivalently without using neighborhoods. To
analyze the coverings themselves, we introduce unary and composition operations
on coverings. A notion of homomorphism is provided to relate two covering
approximation spaces. We also examine the properties of approximations
preserved by the operations and homomorphisms, respectively. Comment: 13 pages; to appear in International Journal of Approximate Reasoning
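The neighborhood-based approximations discussed above can be made concrete in a few lines. This is a minimal sketch assuming the usual definitions from the covering rough set literature (the neighborhood N(x) as the intersection of all covering blocks containing x); the function names are illustrative, not taken from the paper:

```python
from functools import reduce

def neighborhood(x, cover):
    """N(x): intersection of all covering blocks containing x.
    Assumes the cover actually covers x (at least one block contains it)."""
    blocks = [K for K in cover if x in K]
    return reduce(lambda a, b: a & b, blocks)

def lower_approx(X, U, cover):
    """Objects whose neighborhood lies entirely inside X."""
    return {x for x in U if neighborhood(x, cover) <= X}

def upper_approx(X, U, cover):
    """Objects whose neighborhood meets X."""
    return {x for x in U if neighborhood(x, cover) & X}

U = {1, 2, 3, 4}
cover = [frozenset({1, 2}), frozenset({2, 3}), frozenset({3, 4})]
# Neighborhoods: N(1) = {1,2}, N(2) = {2}, N(3) = {3}, N(4) = {3,4}
```

With X = {2, 3}, for instance, the lower approximation is {2, 3} while the upper approximation grows to all of U, since every neighborhood meets X.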
Information completeness in Nelson algebras of rough sets induced by quasiorders
In this paper, we give an algebraic completeness theorem for constructive
logic with strong negation in terms of finite rough set-based Nelson algebras
determined by quasiorders. We show how, for a quasiorder, its rough
set-based Nelson algebra can be obtained by applying the well-known
construction by Sendlewski. We prove that if the set of all closed
elements, which may be viewed as the set of completely defined objects, is
cofinal, then the rough set-based Nelson algebra determined by a quasiorder
forms an effective lattice, that is, an algebraic model of the logic,
which is characterised by a modal operator grasping the notion of "to be
classically valid". We present a necessary and sufficient condition under which
a Nelson algebra is isomorphic to a rough set-based effective lattice
determined by a quasiorder. Comment: 15 pages
Autonomous clustering using rough set theory
This paper proposes a clustering technique that minimises the need for subjective
human intervention and is based on elements of rough set theory. The proposed algorithm is
unified in its approach to clustering and makes use of both local and global data properties to
obtain clustering solutions. It handles single-type and mixed attribute data sets with ease and
results from three data sets of single and mixed attribute types are used to illustrate the
technique and establish its efficiency.
Rough sets based on Galois connections
Rough set theory is an important tool to extract knowledge from relational databases. The original definitions of the approximation operators are based on an indiscernibility relation, which is an equivalence relation. Lately, different papers have motivated the possibility of considering arbitrary relations. Nevertheless, when those are taken into account, the original definitions given by Pawlak may lose fundamental properties. This paper proposes a possible solution to the arising problems by presenting an alternative definition of approximation operators based on the closure and interior operators obtained from an isotone Galois connection. We prove that the proposed definition satisfies interesting properties and that it also improves object classification tasks.
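The loss of fundamental properties mentioned in this abstract is easy to see concretely. The sketch below uses the standard successor-neighborhood generalization of Pawlak's operators to an arbitrary relation (not the paper's Galois-connection-based definition; names are mine) and shows that the inclusion property lower(X) ⊆ X fails once the relation is not reflexive:

```python
def R_set(x, R):
    """Successor neighborhood R(x) = {y : (x, y) in R}."""
    return {y for (a, y) in R if a == x}

def lower(X, U, R):
    """Relation-based lower approximation: objects whose successors all lie in X."""
    return {x for x in U if R_set(x, R) <= X}

def upper(X, U, R):
    """Relation-based upper approximation: objects with some successor in X."""
    return {x for x in U if R_set(x, R) & X}

U = {1, 2, 3}
# A non-reflexive relation: Pawlak's property lower(X) <= X no longer holds.
R = {(1, 2), (2, 3), (3, 3)}
X = {2}
# lower(X) = {1}, since R(1) = {2} is contained in X -- yet 1 is not in X.
```

Recovering such properties without demanding reflexivity is precisely the kind of problem the closure/interior operators of an isotone Galois connection are meant to address.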
Dominance-based Rough Set Approach, basic ideas and main trends
The Dominance-based Rough Set Approach (DRSA) has been proposed as a machine learning
and knowledge discovery methodology to handle Multiple Criteria Decision Aiding
(MCDA). Due to its capacity of asking the decision maker (DM) for simple
preference information and supplying easily understandable and explainable
recommendations, DRSA has gained much interest over the years and is now one
of the most appreciated MCDA approaches. In fact, it has also been applied
beyond the MCDA domain, as a general knowledge discovery and data mining
methodology for the analysis of monotonic (and also non-monotonic) data. In
this contribution, we recall the basic principles and the main concepts of
DRSA, with a general overview of its developments and software. We also present
a historical reconstruction of the genesis of the methodology, with a specific
focus on the contribution of Roman Słowiński. Comment: This research was partially supported by TAILOR, a project funded by
European Union (EU) Horizon 2020 research and innovation programme under GA
No 952215. This submission is a preprint of a book chapter accepted by
Springer, with very few minor differences of a purely technical nature.
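The basic DRSA construction can be sketched briefly. Assuming gain-type criteria (larger is better) and the standard dominance definitions, the lower approximation of an upward union of classes collects the objects that belong to the union consistently, i.e. everything dominating them also belongs to it; the function names here are illustrative, not from the chapter:

```python
def dominates(x, y):
    """x dominates y: x is at least as good as y on every gain-type criterion."""
    return all(a >= b for a, b in zip(x, y))

def lower_approx_upward(objects, labels, t):
    """Lower approximation of the upward union Cl>=t: objects of class >= t
    whose dominating set (everything that dominates them) is entirely of class >= t."""
    result = set()
    for i, (xi, li) in enumerate(zip(objects, labels)):
        if li < t:
            continue
        dominating = [j for j, xj in enumerate(objects) if dominates(xj, xi)]
        if all(labels[j] >= t for j in dominating):
            result.add(i)
    return result

# Three objects on two gain criteria, with ordinal class labels 1 and 2.
objects = [(3, 3), (2, 2), (3, 1)]
labels = [2, 1, 2]
```

Here lower_approx_upward(objects, labels, 2) yields {0, 2}: both class-2 objects are consistent. If two objects had identical evaluations but different classes, the better-labelled one would drop out of the lower approximation, which is exactly the inconsistency DRSA is designed to expose.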
Handling missing attribute values in decision tables using valued tolerance approach
Rule induction is one of the key areas in data mining, as it is applied to a large number of real-life data sets. However, in such real-life data, the information is often incompletely specified. To induce rules from these incomplete data, more powerful algorithms are necessary. This research work focuses on a probabilistic approach based on the valued tolerance relation. The thesis is divided into two parts. The first part describes the implementation of the valued tolerance relation. The induced rules are then evaluated based on the error rate due to incorrectly classified and unclassified examples. The second part compares the rules induced by the previously implemented MLEM2 algorithm with the rules induced by the valued tolerance based approach implemented as part of this research. Hence, through this thesis, the error rates of the MLEM2 algorithm and the valued tolerance based approach are compared and the results are documented.
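The core idea of the valued tolerance relation can be illustrated with one common probabilistic formulation: the degree to which two objects are indiscernible is the probability that they agree on every attribute, treating a missing value as uniformly distributed over the attribute's domain. This is a sketch under that assumption, not necessarily the exact formula used in the thesis:

```python
def valued_tolerance(x, y, domains):
    """Probability that objects x and y take the same value on every attribute.
    A missing value (None) is assumed uniformly distributed over the domain."""
    r = 1.0
    for xi, yi, dom in zip(x, y, domains):
        if xi is None or yi is None:
            # One or both values unknown: a uniform random value matches
            # a fixed (or independent uniform) value with probability 1/|dom|.
            r *= 1.0 / len(dom)
        else:
            r *= 1.0 if xi == yi else 0.0
    return r

# Two attributes with domains {1,2} and {1,2,3}; the second value of x is missing.
x = (1, None)
y = (1, 2)
domains = [{1, 2}, {1, 2, 3}]
```

For these objects the tolerance degree is 1 × 1/3 = 1/3; a definite disagreement on any attribute drives the degree to 0, recovering the crisp tolerance relation on fully specified data.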