Typicality, graded membership, and vagueness
This paper addresses theoretical problems arising from the vagueness of language terms, and intuitions about the vagueness of the concepts to which they refer. It is argued that the central intuitions of prototype theory are sufficient to account for both typicality phenomena and psychological intuitions about degrees of membership in vaguely defined classes. The first section explains the importance of the relation between degrees of membership and typicality (or goodness of example) in conceptual categorization. The second and third sections address arguments advanced by Osherson and Smith (1997) and Kamp and Partee (1995) that the two notions of degree of membership and typicality must relate to fundamentally different aspects of conceptual representations. A version of prototype theory, the Threshold Model, is proposed to counter these arguments, and three possible solutions to the problems of logical self-contradiction and tautology for vague categorizations are outlined. In the final section, graded membership is related to the social construction of conceptual boundaries maintained through language use.
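As a concrete illustration, the sketch below gives one possible reading of how a threshold account can link typicality to graded membership: typicality is similarity to a prototype, and graded membership is the probability that this similarity clears a criterion whose placement varies across judges and occasions. The similarity scale, the normally distributed criterion, and the function names are assumptions for illustration, not the paper's own formalization.

```python
# Illustrative reading of a threshold-style account (assumed formalization):
# membership degree = P(similarity > threshold) for a variable criterion.
from statistics import NormalDist

def membership_degree(similarity, threshold_mean=0.5, threshold_sd=0.1):
    """Probability that `similarity` exceeds a normally distributed threshold."""
    return NormalDist(threshold_mean, threshold_sd).cdf(similarity)

# Highly typical items (similarity near 1) get membership near 1, while items
# near the criterion fall in the vague region with intermediate degrees.
print(round(membership_degree(0.9), 3))  # ~1.0: clear member
print(round(membership_degree(0.5), 3))  # 0.5: borderline case
```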
Knowledge Engineering from Data Perspective: Granular Computing Approach
Rough set theory is a mathematical approach to uncertainty and vagueness in data analysis, introduced by Zdzislaw Pawlak in the 1980s. Rough set theory assumes that the underlying structure of knowledge is a partition. We have extended Pawlak's concept of knowledge to coverings, taking a soft approach that regards any generalized subset as basic knowledge. We regard a covering as basic knowledge, from which the theory of knowledge approximations and learning, knowledge dependency, and reducts is developed.
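As an illustration of the covering-based generalization, the sketch below computes lower and upper approximations from a covering, assuming the usual definitions (a block contributes to the lower approximation of X when it lies wholly inside X, and to the upper approximation when it intersects X); the exact definitions adopted in this work may differ.

```python
# Minimal sketch of covering-based approximations (assumed standard definitions).

def lower_approx(covering, X):
    """Union of the covering blocks wholly contained in X."""
    return set().union(*(B for B in covering if B <= X))

def upper_approx(covering, X):
    """Union of the covering blocks that intersect X."""
    return set().union(*(B for B in covering if B & X))

# Example: overlapping blocks, so this covering is not a partition.
covering = [frozenset({1, 2}), frozenset({2, 3}), frozenset({4, 5})]
X = {1, 2}
print(lower_approx(covering, X))  # {1, 2}
print(upper_approx(covering, X))  # {1, 2, 3}: the boundary contains 3
```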
The SP theory of intelligence: benefits and applications
This article describes existing and expected benefits of the "SP theory of
intelligence", and some potential applications. The theory aims to simplify and
integrate ideas across artificial intelligence, mainstream computing, and human
perception and cognition, with information compression as a unifying theme. It
combines conceptual simplicity with descriptive and explanatory power across
several areas of computing and cognition. In the "SP machine" -- an expression
of the SP theory which is currently realized in the form of a computer model --
there is potential for an overall simplification of computing systems,
including software. The SP theory promises deeper insights and better solutions
in several areas of application including, most notably, unsupervised learning,
natural language processing, autonomous robots, computer vision, intelligent
databases, software engineering, information compression, medical diagnosis and
big data. There is also potential in areas such as the semantic web,
bioinformatics, structuring of documents, the detection of computer viruses,
data fusion, new kinds of computer, and the development of scientific theories.
The theory promises seamless integration of structures and functions within and
between different areas of application. The potential value, worldwide, of
these benefits and applications is at least $190 billion each year. Further
development would be facilitated by the creation of a high-parallel,
open-source version of the SP machine, available to researchers everywhere.
New Learning Models for Generating Classification Rules Based on Rough Set Approach
Data sets, static or dynamic, are very important and useful for representing real-life features in different sectors such as industry, medicine, and economics. Recently, different models have been used to generate knowledge from vague and uncertain data sets, such as induction decision trees, neural networks, fuzzy logic, genetic algorithms, and rough set theory. All of these models take a long time to learn from a huge and dynamic data set. Thus, the challenge is how to develop an efficient model that can decrease the learning time without affecting the quality of the generated classification rules. Huge information systems or data sets usually have some missing values due to unavailable data, which affect the quality of the generated classification rules. Missing values make it difficult to extract useful information from the data set, so a second challenge is how to solve the problem of missing data. Rough set theory is a mathematical tool for dealing with vagueness and uncertainty. It is a useful approach for uncovering classificatory knowledge and building classification rules. The application of the theory as part of the learning models is therefore proposed in this thesis.
Two different models for learning in data sets were proposed, based on two different reduction algorithms. The split-condition-merge-reduct algorithm (SCMR) consists of three modules: partitioning the data set vertically into subsets, applying rough set concepts of reduction to each subset, and merging the reducts of all subsets. The enhanced-split-condition-merge-reduct algorithm (ESCMR) performs the above three modules followed by another module that applies rough set reduction again to the reduct generated by SCMR, in order to generate the best reduct, which plays the same role as if all attributes in the data set were present. Classification rules were generated based on the best reduct, as sketched below.
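The following Python sketch illustrates the SCMR/ESCMR control flow just described. The greedy discernibility-based `reduct` routine is a simplified stand-in for the thesis's actual reduction modules, and all names and the list-of-dicts data layout are illustrative assumptions.

```python
# Schematic sketch of SCMR/ESCMR (simplified stand-in, not the thesis's algorithm).
# `rows` is a list of dicts; `attrs` lists condition attributes; `decision`
# names the class attribute.
from itertools import combinations

def discerns(rows, attrs, decision):
    """True if `attrs` distinguish every pair of rows with different decisions."""
    return all(any(r1[a] != r2[a] for a in attrs)
               for r1, r2 in combinations(rows, 2)
               if r1[decision] != r2[decision])

def reduct(rows, attrs, decision):
    """Greedily drop attributes while the remainder still discerns all decisions."""
    kept = list(attrs)
    for a in list(attrs):
        trial = [x for x in kept if x != a]
        if trial and discerns(rows, trial, decision):
            kept = trial
    return kept

def scmr(rows, attrs, decision, n_parts=2):
    """Split attributes vertically, reduce each part, merge the partial reducts."""
    size = -(-len(attrs) // n_parts)  # ceiling division
    parts = [attrs[i:i + size] for i in range(0, len(attrs), size)]
    return [a for part in parts for a in reduct(rows, part, decision)]

def escmr(rows, attrs, decision, n_parts=2):
    """ESCMR's extra module: reduce the merged reduct once more."""
    return reduct(rows, scmr(rows, attrs, decision, n_parts), decision)
```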
For the problem of missing data, a new approach was proposed based on data partitioning and the mode function. In this new approach, the data set was partitioned horizontally into subsets such that all objects in each subset share a single classification value. The mode function was then applied to each attribute with missing values in each subset in order to find its most frequently occurring value, and the missing values in that attribute were replaced by this mode value.
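A minimal sketch of this imputation scheme, assuming a list-of-dicts table in which None marks a missing entry; the function name and data layout are illustrative.

```python
# Partition rows by class value, then fill missing entries with the
# per-class, per-attribute mode (most frequently occurring value).
from collections import Counter, defaultdict

def impute_by_class_mode(rows, decision):
    by_class = defaultdict(list)
    for row in rows:
        by_class[row[decision]].append(row)
    for subset in by_class.values():
        attrs = {a for row in subset for a in row if a != decision}
        for a in attrs:
            observed = [r[a] for r in subset if r.get(a) is not None]
            if observed:
                mode = Counter(observed).most_common(1)[0][0]
                for r in subset:
                    if r.get(a) is None:
                        r[a] = mode  # replace the missing value with the mode
    return rows
```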
The proposed approach for missing values produced better results than other approaches. The proposed models for learning in data sets also generated classification rules faster than other methods, and the accuracy of the classification rules generated by the proposed models was high compared to that of other models.
Generics as reflecting conceptual knowledge
Generics are proposed to reflect the content of the conceptual system, whose prototype structure and vague boundaries make it an unreliable basis for traditional treatments of truth and logic. Examples from the psychological literature are used to illustrate the relation between generics, similarity-based reasoning, and concepts.
Rough matroids based on coverings
The introduction of covering-based rough sets has made a substantial
contribution to classical rough set theory. However, many vital problems in rough
sets, including attribute reduction, are NP-hard, and the algorithms for
solving them are therefore usually greedy. Matroids, as a generalization of linear
independence in vector spaces, have a variety of applications in many fields
such as algorithm design and combinatorial optimization. An excellent
introduction to the topic of rough matroids is due to Zhu and Wang. On the
basis of their work, we study the rough matroids based on coverings in this
paper. First, we investigate some properties of the definable sets with respect
to a covering. Specifically, it is interesting that the set of all definable
sets with respect to a covering, equipped with the binary relation of
inclusion, forms a lattice. Second, we propose the rough matroids based
on coverings, which are a generalization of the rough matroids based on
relations. Finally, some properties of rough matroids based on coverings are
explored. Moreover, an equivalent formulation of rough matroids based on
coverings is presented. These interesting and important results exhibit many
potential connections between rough sets and matroids.
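To make the lattice claim concrete, the sketch below takes the definable sets with respect to a covering to be the unions of its blocks (one common definition; the paper's precise notion may differ). Join is plain union, while meet is the largest definable set contained in the intersection, which need not be the intersection itself.

```python
# Illustrative lattice of definable sets, assuming "definable" means
# "a union of covering blocks" (the paper's definition may differ).
from itertools import combinations

def definable_sets(covering):
    """All unions of covering blocks (exponential enumeration; fine for small examples)."""
    family = {frozenset()}
    for r in range(1, len(covering) + 1):
        for blocks in combinations(covering, r):
            family.add(frozenset().union(*blocks))
    return family

def join(A, B):
    """Lattice join: a union of blocks united with a union of blocks is definable."""
    return A | B

def meet(family, A, B):
    """Lattice meet: the union of all definable sets inside A & B is itself
    a union of blocks, hence the largest definable set contained in A & B."""
    return frozenset().union(*(D for D in family if D <= A & B))

cov = [frozenset({1, 2}), frozenset({2, 3})]
fam = definable_sets(cov)
# The intersection {2} is not definable, so the meet is the empty set:
print(meet(fam, frozenset({1, 2}), frozenset({2, 3})))  # frozenset()
```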