Information Flow Model for Commercial Security
Information flow in Discretionary Access Control (DAC) is a well-known difficult problem. This paper formalizes the fundamental concepts and establishes a theory of information flow security. A DAC system is information flow secure (IFS) if data never flows into the hands of an owner's enemies (i.e., subjects on the owner's explicit denial access list).
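The IFS condition above can be sketched in a few lines. The names and the write-graph representation below are illustrative assumptions, not the paper's formalism; the key point is that the check must be transitive, since data reaches anyone the current readers can re-share it with.

```python
from collections import deque

def reachable_readers(direct_readers, can_write_to):
    """Transitive closure of readers: anyone a reader can (re)write
    data to becomes a potential reader as well."""
    seen = set(direct_readers)
    queue = deque(direct_readers)
    while queue:
        s = queue.popleft()
        for t in can_write_to.get(s, ()):
            if t not in seen:
                seen.add(t)
                queue.append(t)
    return seen

def is_flow_secure(direct_readers, can_write_to, denial_list):
    """IFS holds for this object if no reachable reader is on the
    owner's explicit denial list."""
    return not (reachable_readers(direct_readers, can_write_to) & set(denial_list))
```

A grant that looks safe directly can still violate IFS if an authorized reader can pass the data on to a denied subject.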
Optimal Categorical Attribute Transformation for Granularity Change in Relational Databases for Binary Decision Problems in Educational Data Mining
This paper presents an approach for transforming data granularity in
hierarchical databases for binary decision problems by applying regression to
categorical attributes at the lower grain levels. Attributes from a lower
hierarchy entity in the relational database have their information content
optimized through regression on the categories histogram trained on a small
exclusive labelled sample, instead of the usual mode category of the
distribution. The paper validates the approach on a binary decision task for
assessing the quality of secondary schools, focusing on how logistic regression
transforms students' and teachers' attributes into school attributes.
Experiments were carried out on public datasets of Brazilian schools via a
10-fold cross-validation comparison of the ranking score, itself produced by
logistic regression. The proposed approach achieved higher performance than the
usual distribution-mode transformation, and performance equal to the expert
weighting approach, as measured by the maximum Kolmogorov-Smirnov distance and
the area under the ROC curve at the 0.01 significance level.
Comment: 5 pages, 2 figures, 2 tables
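As a rough illustration of the transformation described above (the data and attribute names are hypothetical, and the paper's actual pipeline may differ), each school's lower-grain categorical attribute is summarized as a normalized category histogram, which a logistic regression trained on a small labelled sample would then map to a single school-level score:

```python
import numpy as np
from collections import Counter

def category_histogram(values, categories):
    """Normalized histogram of one school's lower-grain categorical values."""
    counts = Counter(values)
    total = len(values)
    return np.array([counts.get(c, 0) / total for c in categories])

# Lower-grain records (e.g., one categorical answer per student per school):
categories = ["A", "B", "C"]
school_records = {
    "school_1": ["A", "A", "B"],
    "school_2": ["C", "C", "B"],
}
X = np.array([category_histogram(v, categories) for v in school_records.values()])

# A logistic regression fitted on a small exclusive labelled sample would then
# collapse each histogram into the transformed school-level attribute, e.g.:
# from sklearn.linear_model import LogisticRegression
# score = LogisticRegression().fit(X_train, y_train).predict_proba(X)[:, 1]
```

This replaces the usual mode-category aggregation with a learned weighting of the whole category distribution.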
A Method to Construct an Extension of Fuzzy Information Granularity Based on Fuzzy Distance
In fuzzy granular computing, a fuzzy granular structure is the collection of
fuzzy information granules and fuzzy information granularity is used to
measure the granulation degree of a fuzzy granular structure.
In general, the fuzzy information granularity characterizes discernibility ability
among fuzzy information granules in a fuzzy granular structure. In recent years,
researchers have proposed some concepts of fuzzy information granularity based
on partial order relations. However, the existing forms of fuzzy information granularity
have some limitations when evaluating the fineness/coarseness between two fuzzy
granular structures. In this paper, we propose an extension of fuzzy information
granularity based on a fuzzy distance measure.
We show, both theoretically and experimentally, that the proposed fuzzy
information granularity distinguishes fuzzy granular structures better than the existing measures.
ACM Computing Classification System (1998): I.5.2, I.2.6
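As a toy illustration of the kind of measure discussed above (a generic sketch, not the paper's proposed granularity), one can define a fuzzy distance between granules as the mean absolute difference of membership degrees, and score a fuzzy granular structure by its average distance from the coarsest granule:

```python
import numpy as np

def fuzzy_distance(g1, g2):
    """One common fuzzy distance: mean absolute difference of memberships."""
    return float(np.mean(np.abs(np.asarray(g1) - np.asarray(g2))))

def granularity(structure):
    """Illustrative granularity score: average fuzzy distance of each
    granule from the coarsest granule (all memberships equal to 1)."""
    universe = np.ones(len(structure[0]))
    return float(np.mean([fuzzy_distance(g, universe) for g in structure]))
```

Finer structures (granules with lower memberships) score higher under this toy measure, mirroring the fineness/coarseness comparison the paper formalizes.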
Granular computing approach for intelligent classifier design
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.
Granular computing facilitates dealing with information by providing a theoretical framework for treating information as granules at different levels of granularity (different levels of specificity/abstraction). It aims to provide an abstract, explainable description of the data by forming granules that represent the features or the
underlying structure of corresponding subsets of the data. In this thesis, a granular computing approach to the design of intelligent classification systems is proposed. The proposed approach is employed for different
classification systems to investigate its efficiency. Fuzzy inference systems, neural networks, neuro-fuzzy systems and classifier ensembles are considered to evaluate the efficiency of the proposed approach. Each of the considered systems is designed using the proposed approach, and its classification performance is evaluated and compared to that of the standard system. The proposed approach is based on constructing information granules from data at multiple levels of granularity. The granulation process is performed using a modified fuzzy c-means algorithm that takes the classification problem into account. Clustering is followed by a coarsening process that merges small clusters into larger ones to form a lower granularity level. The resulting granules are used to build each of the considered binary classifiers in different settings and approaches.
Granules produced by the proposed granulation method are used to build a fuzzy classifier for each granulation level or set of levels. The performance of the classifiers is evaluated using real-life data sets and measured by two classification performance measures: accuracy and area under the receiver operating characteristic curve. Experimental results show that fuzzy systems constructed using the proposed method achieved better classification performance. In addition, the proposed approach is used for the design of neural network classifiers. Resulting granules from one or more granulation levels are used to train the classifiers at different levels of specificity/abstraction. Using this approach, the classification problem is broken down into the modelling of classification rules represented by the information granules, resulting in a more interpretable system. Experimental results show that neural network classifiers trained using the proposed approach have better classification performance for most of the data sets. In a similar manner, the proposed approach is used for the training of neuro-fuzzy systems, resulting in a similar improvement in classification performance. Lastly, neural networks built using the proposed approach are used to construct a classifier ensemble. Information granules are used to generate and train the base classifiers. The final ensemble output is produced by a weighted sum combiner. Based on the experimental results, the proposed approach has improved the classification performance of the base classifiers for most of the data sets. Furthermore, a genetic algorithm is used to determine the combiner weights automatically.
Funded by the Higher Committee for Education Development in Iraq (HCED).
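The coarsening step described above (merging small clusters into larger ones to form a lower granularity level) might look roughly like the following sketch; the size threshold and the weighted re-centering are illustrative assumptions, not the thesis's exact procedure:

```python
import numpy as np

def coarsen(centers, sizes, min_size):
    """Merge each small cluster into its nearest large cluster and
    recompute the surviving centers as size-weighted means, producing
    the next (lower-granularity) level."""
    centers = np.asarray(centers, dtype=float)
    large = [i for i, s in enumerate(sizes) if s >= min_size]
    absorbed = {i: [] for i in large}
    for i, s in enumerate(sizes):
        if s < min_size:
            nearest = min(large, key=lambda j: np.linalg.norm(centers[i] - centers[j]))
            absorbed[nearest].append(i)
    new_centers = []
    for j in large:
        members = [j] + absorbed[j]
        w = np.array([sizes[m] for m in members], dtype=float)
        new_centers.append((centers[members] * w[:, None]).sum(axis=0) / w.sum())
    return np.array(new_centers)
```

Applying this repeatedly yields the hierarchy of granulation levels from which the classifiers are built.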
GBG++: A Fast and Stable Granular Ball Generation Method for Classification
Granular ball computing (GBC), as an efficient, robust, and scalable learning
method, has become a popular research topic of granular computing. GBC includes
two stages: granular ball generation (GBG) and multi-granularity learning based
on the granular ball (GB). However, the stability and efficiency of existing
GBG methods need to be further improved due to their strong dependence on
k-means or k-division. In addition, GB-based classifiers only unilaterally
consider the GB's geometric characteristics to construct classification rules,
but the GB's quality is ignored. Therefore, in this paper, based on the
attention mechanism, a fast and stable GBG (GBG++) method is proposed first.
Specifically, the proposed GBG++ method only needs to calculate the distances
from the data-driven center to the undivided samples when splitting each GB
instead of randomly selecting the center and calculating the distances between
it and all samples. Moreover, an outlier detection method is introduced to
identify local outliers. Consequently, the GBG++ method can significantly
improve effectiveness, robustness, and efficiency while being absolutely
stable. Second, considering the influence of the sample size within the GB on
the GB's quality, based on the GBG++ method, an improved GB-based k-nearest
neighbors algorithm (GBkNN++) is presented, which can reduce
misclassification at the class boundary. Finally, the experimental results
indicate that the proposed method outperforms several existing GB-based
classifiers and classical machine learning classifiers on public benchmark
datasets.
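A minimal sketch of the ball-splitting idea, assuming a purity threshold and a majority-class mean as the data-driven center (both illustrative; the paper's GBG++ procedure differs in detail):

```python
import numpy as np

def split_ball(X, y, purity_threshold=1.0):
    """Recursively split an impure granular ball (GB): take the
    majority-class mean as a data-driven center, compute distances
    from it to the as-yet-undivided samples, peel the nearest half
    into a child ball, and recurse on both halves."""
    X, y = np.asarray(X, dtype=float), np.asarray(y)
    labels, counts = np.unique(y, return_counts=True)
    purity = counts.max() / len(y)
    if purity >= purity_threshold or len(y) <= 1:
        return [(X, y)]                       # ball is pure enough: keep it
    center = X[y == labels[counts.argmax()]].mean(axis=0)
    d = np.linalg.norm(X - center, axis=1)
    inside = d <= np.median(d)
    if inside.all() or (~inside).all():       # degenerate split: stop
        return [(X, y)]
    return (split_ball(X[inside], y[inside], purity_threshold)
            + split_ball(X[~inside], y[~inside], purity_threshold))
```

Because the center is derived from the data rather than sampled at random, repeated runs on the same data give the same balls, which is the stability property the abstract emphasizes.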
HIERARCHICAL-GRANULARITY HOLONIC MODELLING
This thesis aims to introduce an agent-based system engineering approach,
named Hierarchical-Granularity Holonic Modelling, to support intelligent
information processing at multiple granularity levels. The focus is especially
on complex hierarchical systems.
Nowadays, due to ever growing complexity of information systems and
processes, there is an increasing need of a simple self-modular computational
model able to manage data and perform information granulation at different
resolutions (i.e., both spatial and temporal). The current literature fails to
provide such a methodology. To cite a relevant example, the object-oriented
paradigm is suitable for describing a system at a given representation level;
notwithstanding, further design effort is needed if a more synthetic or more
analytical view of the same system is required.
In the literature, the agent paradigm represents a viable solution in complex
systems modelling; in particular, Multi-Agent Systems have been applied with
success in a countless variety of distributed intelligence settings. Current
agent-oriented implementations however suffer from an apparent dichotomy
between agents as intelligent entities and agents' structures as superimposed
hierarchies of roles within a given organization. The agents' architectures are
often rigid and require intense re-engineering when the underpinning ontology
is updated to cast new design criteria.
The latest stage in the evolution of modelling frameworks is represented by
Holonic Systems, based on the notion of 'holon' and 'holarchy' (i.e.,
hierarchy of holons). A holon, just like an agent, is an intelligent entity able to
interact with the environment and to take decisions to solve a specific
problem. Contrary to an agent, a holon has the noteworthy property of playing
the role of a whole and a part at the same time. This is reflected at the
organizational level: holons in a holarchy function first as autonomous wholes
in supra-ordination to their parts, second as dependent parts in sub-ordination
to controls on higher levels, and third in coordination with their local
environment.
These ideas were originally devised by Arthur Koestler in 1967. Since then,
Holonic Systems have gained more and more credit in various fields such as
Biology, Ecology, Theory of Emergence and Intelligent Manufacturing.
Notwithstanding, with respect to these disciplines, fewer works on Holonic
Systems can be found in the general framework of Artificial and
Computational Intelligence. Moreover, the distance between theoretic models
and actual implementation is still wide open.
In this thesis, starting from Koestler's original idea, we devise a novel
agent-inspired model that merges intelligence with the holonic structure at
multiple hierarchical-granularity levels. This is made possible thanks to a rule-based
knowledge recursive representation, which allows the holonic agent to
carry out both operating and learning tasks in a hierarchy of granularity levels.
The proposed model can be directly used in terms of hardware/software
applications. This endows systems and software engineers with a modular and
scalable approach when dealing with complex hierarchical systems. In order
to support our claims, exemplar experiments of our proposal are shown and
prospective implications are discussed.
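The whole/part duality described above can be sketched as a recursive structure in which every holon carries its own rule base and delegates unresolved decisions to its sub-holons (a toy illustration, not the thesis's implementation; the rule and holon names are hypothetical):

```python
class Holon:
    """A holon is simultaneously a whole (it owns rules and decides) and
    a part (it sits inside a parent holon's list of sub-holons)."""

    def __init__(self, name, rules=None, parts=None):
        self.name = name
        self.rules = rules or []   # (condition, action) pairs at this level
        self.parts = parts or []   # sub-holons at the next finer granularity

    def decide(self, facts):
        """Try this level's rules first; otherwise delegate downward."""
        for condition, action in self.rules:
            if condition(facts):
                return action
        for part in self.parts:
            result = part.decide(facts)
            if result is not None:
                return result
        return None

# Coarse level handles plant-wide alarms; the finer level handles local smoke.
cell = Holon("cell", rules=[(lambda f: f.get("smoke"), "ventilate")])
plant = Holon("plant", rules=[(lambda f: f.get("alarm"), "evacuate")], parts=[cell])
```

The same `decide` interface works at every granularity level, which is the self-modularity the thesis argues object-oriented designs lack.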
Relaxed Dissimilarity-based Symbolic Histogram Variants for Granular Graph Embedding
Graph embedding is an established and popular approach when designing graph-based pattern recognition systems. Amongst the several strategies, in the last ten years, Granular Computing emerged as a promising framework for structural pattern recognition. In the late 2000s, symbolic histograms were proposed as the driving force of the graph embedding procedure: one counts the number of times each granule of information appears in the graph to be embedded. Similarly to a bag-of-words representation of a text corpus, symbolic histograms were originally conceived as integer-valued vectorial representations of the graphs. In this paper, we propose six 'relaxed' versions of symbolic histograms, in which the proper dissimilarity values between the information granules and the constituent parts of the graph to be embedded are taken into account; this information is discarded in the original symbolic histogram formulation due to the hard-limited nature of the counting procedure. Experimental results on six open-access datasets of fully-labelled graphs show comparable performance in terms of classification accuracy with respect to the original symbolic histograms (average accuracy shift ranging from -7% to +2%), counterbalanced by a great improvement in the number of resulting information granules, hence the number of features in the embedding space (up to 75% fewer features, on average).
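The contrast between the hard-limited counting and one possible 'relaxed' variant can be sketched as follows; the triangular relaxation below is an illustrative choice under a generic dissimilarity function, not necessarily one of the paper's six variants:

```python
def symbolic_histogram(parts, granules, dissim, tau):
    """Hard-limited version: count the parts of the graph whose
    dissimilarity to each granule falls below the threshold tau."""
    return [sum(1 for p in parts if dissim(p, g) <= tau) for g in granules]

def relaxed_histogram(parts, granules, dissim, tau):
    """A 'relaxed' variant: accumulate graded similarity values instead
    of 0/1 counts, so near-misses still contribute to the embedding."""
    return [sum(max(0.0, 1.0 - dissim(p, g) / tau) for p in parts)
            for g in granules]

# Toy example with scalar 'parts' and absolute difference as dissimilarity:
dissim = lambda a, b: abs(a - b)
hard = symbolic_histogram([0.0, 0.5, 2.0], [0.0], dissim, tau=1.0)
soft = relaxed_histogram([0.0, 0.5, 2.0], [0.0], dissim, tau=1.0)
```

The relaxed vector is real-valued, retaining the dissimilarity information that the 0/1 counting throws away.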
A GIS-based multi-criteria evaluation framework for uncertainty reduction in earthquake disaster management using granular computing
One of the most important steps in earthquake disaster management is the prediction of probable damages, which is called earthquake vulnerability assessment. Earthquake vulnerability assessment is a multi-criteria problem, and a number of multi-criteria decision making models have been proposed for it. Two main sources of uncertainty exist in the earthquake vulnerability assessment problem: uncertainty associated with the experts' points of view and uncertainty associated with attribute values. If the uncertainty from these two sources is not handled properly, the resulting seismic vulnerability map will be unreliable. The main objective of this research is to propose a reliable model for earthquake vulnerability assessment which is able to manage the uncertainty associated with the experts' opinions. Granular Computing (GrC) is able to extract a set of if-then rules with minimum incompatibility from an information table. An integration of Dempster-Shafer Theory (DST) and GrC is applied in the current research to minimize the entropy in the experts' opinions. The accuracy of the model based on the integration of DST and GrC is 83%, while the accuracy of the single-expert model is 62%, which indicates the importance of uncertainty management in the seismic vulnerability assessment problem. Due to limited accessibility to current data, only six criteria are used in this model. However, the model is able to take into account both qualitative and quantitative criteria.
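The DST fusion step can be illustrated with Dempster's rule of combination, which merges two experts' basic probability assignments over sets of vulnerability classes and renormalizes away the conflicting mass (the class names below are hypothetical):

```python
from itertools import product

def combine(m1, m2):
    """Dempster's rule: multiply the masses of every pair of focal sets,
    assign the product to their intersection, and renormalize by the
    total mass of conflict (pairs with empty intersection)."""
    fused, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            fused[inter] = fused.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb
    k = 1.0 - conflict
    return {s: w / k for s, w in fused.items()}

# Two experts' mass assignments over vulnerability classes {low, high}:
m1 = {frozenset({"low"}): 0.6, frozenset({"low", "high"}): 0.4}
m2 = {frozenset({"low"}): 0.5, frozenset({"high"}): 0.3,
      frozenset({"low", "high"}): 0.2}
fused = combine(m1, m2)
```

The fused assignment concentrates belief where the experts agree, which is how combining multiple experts reduces the entropy relative to the single-expert model.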