Median evidential c-means algorithm and its application to community detection
Median clustering is of great value for partitioning relational data. In this
paper, we propose a new prototype-based clustering method, called Median
Evidential C-Means (MECM), which extends median c-means and median fuzzy
c-means to the theoretical framework of belief functions. The
median variant relaxes the restriction of a metric-space embedding for the
objects but constrains the prototypes to lie in the original data set. Thanks to
these properties, MECM can be applied to graph clustering problems. A
community detection scheme for social networks based on MECM is investigated
and the obtained credal partitions of graphs, which are more refined than crisp
and fuzzy ones, enable us to have a better understanding of the graph
structures. An initial prototype-selection scheme based on evidential
semi-centrality is presented to avoid local premature convergence and an
evidential modularity function is defined to choose the optimal number of
communities. Finally, experiments on synthetic and real data sets illustrate
the performance of MECM and show how it differs from other methods.
Evidential relational clustering using medoids
In real clustering applications, proximity data, in which only pairwise
similarities or dissimilarities are known, is more general than object data, in
which each pattern is described explicitly by a list of attributes.
Medoid-based clustering algorithms, which assume the prototypes of classes are
objects, are of great value for partitioning relational data sets. In this
paper, we propose a new prototype-based clustering method, named Evidential
C-Medoids (ECMdd), which extends Fuzzy C-Medoids (FCMdd) to the theoretical
framework of belief functions. In ECMdd, medoids are utilized as
the prototypes to represent the detected classes, including specific classes
and imprecise classes. Specific classes are for the data which are distinctly
far from the prototypes of other classes, while imprecise classes accept the
objects that may be close to the prototypes of more than one class. This soft
decision mechanism could make the clustering results more cautious and reduce
the misclassification rates. Experiments on synthetic and real data sets are
used to illustrate the performance of ECMdd. The results show that ECMdd
captures well the uncertainty in the internal data structure. Moreover, it is
more robust to initialization compared with FCMdd.
Comment: in The 18th International Conference on Information Fusion, Jul 2015, Washington, DC, USA
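The medoid idea underlying FCMdd and ECMdd, that prototypes must be actual objects so only a dissimilarity matrix is needed, can be illustrated with a minimal crisp k-medoids sketch (this is plain k-medoids on made-up toy data, not ECMdd itself, which additionally produces evidential memberships):

```python
def k_medoids(D, k, max_iter=50):
    """Crisp k-medoids on a dissimilarity matrix D.

    Prototypes (medoids) are always objects of the data set, so no metric-space
    embedding is required. D is a square list-of-lists of pairwise
    dissimilarities. Returns (medoids, assign).
    """
    n = len(D)
    medoids = list(range(k))  # naive initialisation: first k objects
    for _ in range(max_iter):
        # assignment step: each object joins its nearest medoid
        assign = [min(range(k), key=lambda j: D[i][medoids[j]]) for i in range(n)]
        # update step: the new medoid of a cluster minimises the total
        # dissimilarity to the cluster's members
        new_medoids = []
        for j in range(k):
            members = [i for i in range(n) if assign[i] == j]
            if not members:
                new_medoids.append(medoids[j])
                continue
            new_medoids.append(min(members, key=lambda m: sum(D[i][m] for i in members)))
        if new_medoids == medoids:
            break
        medoids = new_medoids
    return medoids, assign

# toy dissimilarity matrix with two well-separated groups of three objects
D = [[0, 1, 1, 9, 9, 9],
     [1, 0, 1, 9, 9, 9],
     [1, 1, 0, 9, 9, 9],
     [9, 9, 9, 0, 1, 1],
     [9, 9, 9, 1, 0, 1],
     [9, 9, 9, 1, 1, 0]]
medoids, assign = k_medoids(D, 2)
```

ECMdd replaces the hard assignment step with mass functions over specific and imprecise classes, but the alternation between assignment and medoid update follows the same pattern.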
Evidential Label Propagation Algorithm for Graphs
Community detection has attracted considerable attention crossing many areas
as it can be used for discovering the structure and features of complex
networks. With the increasing size of social networks in real world, community
detection approaches should be fast and accurate. The Label Propagation
Algorithm (LPA) is known to be one of the near-linear solutions and benefits
from easy implementation; it thus forms a good basis for efficient community
detection methods. In this paper, we extend the update rule and propagation
criterion of LPA in the framework of belief functions. A new community
detection approach, called Evidential Label Propagation (ELP), is proposed as
an enhanced version of conventional LPA. The node influence is first defined to
guide the propagation process. The plausibility is used to determine the domain
label of each node. The update order of nodes is discussed to improve the
robustness of the method. The ELP algorithm converges once the domain labels
of all nodes stop changing. The mass assignments are finally calculated
as node memberships. Overlapping nodes and outliers can be detected
simultaneously by the proposed method. The experimental results
demonstrate the effectiveness of ELP.
Comment: 19th International Conference on Information Fusion, Jul 2016, Heidelberg, France
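For reference, the conventional LPA that ELP extends can be sketched in a few lines (this is plain LPA on a toy graph, not the evidential version, which replaces the majority vote with belief-function combination):

```python
import random
from collections import Counter

def label_propagation(adj, max_iter=100, seed=0):
    """Plain Label Propagation (LPA): each node repeatedly adopts the label
    most common among its neighbours until no label changes.

    adj: dict mapping each node to a list of neighbour nodes.
    """
    rng = random.Random(seed)
    labels = {v: v for v in adj}  # every node starts in its own community
    nodes = list(adj)
    for _ in range(max_iter):
        rng.shuffle(nodes)  # asynchronous updates in random order
        changed = False
        for v in nodes:
            if not adj[v]:
                continue
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            # break ties randomly among the most frequent neighbour labels
            choice = rng.choice([l for l, c in counts.items() if c == best])
            if choice != labels[v]:
                labels[v] = choice
                changed = True
        if not changed:  # converged: all labels stable
            break
    return labels

# two triangles joined by a single bridge edge
adj = {0: [1, 2], 1: [0, 2], 2: [0, 1, 3], 3: [2, 4, 5], 4: [3, 5], 5: [3, 4]}
labels = label_propagation(adj)
```

ELP keeps this propagation skeleton but carries mass assignments instead of crisp labels, orders updates by node influence, and reads the dominant label off the plausibility.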
A similarity-based community detection method with multiple prototype representation
Communities are of great importance for understanding graph structures in
social networks. Some existing community detection algorithms use a single
prototype to represent each group. In real applications, this may not
adequately model the different types of communities and hence limits the
clustering performance on social networks. To address this problem, a
Similarity-based Multi-Prototype (SMP) community detection approach is proposed
in this paper. In SMP, vertices in each community carry various weights to
describe their degree of representativeness. This mechanism enables each
community to be represented by more than one node. The centrality of nodes is
used to calculate prototype weights, while similarity is utilized to guide
the partitioning of the graph. Experimental results on computer-generated and
real-world networks clearly show that SMP performs well for detecting
communities. Moreover, with the help of prototype weights, the method provides
richer information about the inner structure of the detected communities than
existing community detection models.
A reliability-based approach for influence maximization using the evidence theory
The influence maximization is the problem of finding a set of social network
users, called influencers, that can trigger a large cascade of propagation.
Influencers are very valuable for making a marketing campaign go viral through
social networks, for example. In this paper, we propose an influence measure
that combines several influence indicators. Besides, we consider the reliability
of each influence indicator and present a distance-based process to estimate
it. The proposed measure is defined
under the framework of the theory of belief functions. Furthermore, the
reliability-based influence measure is used with an influence maximization
model to select a set of users that are able to maximize the influence in the
network. Finally, we present a set of experiments on a dataset collected from
Twitter. These experiments show that the proposed solution detects social
influencers with good quality.
Comment: 14 pages, 8 figures, DaWaK 2017 conference
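Weighting a source of evidence by its reliability is classically done in belief function theory via discounting. The sketch below shows standard Shafer discounting on a made-up two-class frame; the paper's distance-based estimation of the reliability coefficients themselves is not reproduced here:

```python
def discount(mass, reliability, frame):
    """Shafer discounting of a mass function by a source's reliability.

    mass: dict mapping frozenset focal elements to masses (summing to 1).
    A fraction (1 - reliability) of the belief is transferred to the whole
    frame, i.e. to total ignorance.
    """
    discounted = {focal: reliability * m for focal, m in mass.items()}
    discounted[frame] = discounted.get(frame, 0.0) + (1.0 - reliability)
    return discounted

# hypothetical frame: is a given user an influencer or not?
frame = frozenset({"influencer", "ordinary"})
m = {frozenset({"influencer"}): 0.8, frame: 0.2}
m_disc = discount(m, 0.5, frame)  # a half-reliable indicator
```

After discounting, an indicator that is judged unreliable contributes mostly ignorance, so it cannot dominate the combined influence measure.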
The belief noisy-or model applied to network reliability analysis
One difficulty faced in knowledge engineering for Bayesian Networks (BN) is
the quantification step, where the Conditional Probability Tables (CPTs) are
determined. The number of parameters included in CPTs increases exponentially
with the number of parent variables. The most common solution is the
application of the so-called canonical gates. The Noisy-OR (NOR) gate, which
takes advantage of the independence of causal interactions, provides a
logarithmic reduction of the number of parameters required to specify a CPT. In
this paper, an extension of the NOR model based on the theory of belief
functions, named Belief Noisy-OR (BNOR), is proposed. BNOR is capable of
dealing with both aleatory and epistemic uncertainty in the network. Compared
with NOR, richer information, which is of great value for making decisions,
can be obtained when the available knowledge is uncertain. In particular, when
there is no epistemic uncertainty, BNOR degenerates into NOR. Additionally,
different structures of BNOR
are presented in this paper to meet the various needs of engineers. The
application of the BNOR model to the reliability evaluation of networked
systems demonstrates its effectiveness.
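The parameter reduction offered by the standard Noisy-OR gate (which BNOR extends) is easy to see in code: n per-parent link probabilities determine all 2^n rows of the CPT. A minimal sketch with made-up probabilities:

```python
from itertools import product

def noisy_or_cpt(link_probs):
    """Build the full CPT of a standard Noisy-OR gate.

    link_probs[i] is P(Y=1 | only parent i active). Thanks to the independence
    of causal interactions, these n numbers determine P(Y=1 | config) for all
    2^n parent configurations.
    """
    n = len(link_probs)
    cpt = {}
    for config in product([0, 1], repeat=n):
        # each active parent independently fails to cause Y with prob 1 - p_i
        p_all_fail = 1.0
        for active, p in zip(config, link_probs):
            if active:
                p_all_fail *= (1.0 - p)
        cpt[config] = 1.0 - p_all_fail  # P(Y=1 | config)
    return cpt

# 3 link probabilities specify all 2**3 = 8 rows of the CPT
cpt = noisy_or_cpt([0.9, 0.8, 0.7])
```

BNOR keeps this causal-independence structure but replaces the point probabilities with mass functions, so epistemic uncertainty about the link parameters is carried through to the output.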
k-EVCLUS: Clustering Large Dissimilarity Data in the Belief Function Framework
In evidential clustering, the membership of objects to clusters is considered to be uncertain and is represented by mass functions, forming a credal partition. The EVCLUS algorithm constructs a credal partition in such a way that larger dissimilarities between objects correspond to higher degrees of conflict between the associated mass functions. In this paper, we propose to replace the gradient-based optimization procedure in the original EVCLUS algorithm by a much faster iterative row-wise quadratic programming method. We also show that EVCLUS can be provided with only a random sample of the dissimilarities, reducing the time and space complexity from quadratic to linear. These improvements make EVCLUS suitable for clustering large dissimilarity datasets.
Adaptive imputation of missing values for incomplete pattern classification
In the classification of incomplete patterns, the missing values can either play
a crucial role in the class determination, or have only little influence (or
eventually none) on the classification results, according to the context. We
propose a credal classification method for incomplete patterns with adaptive
imputation of missing values based on belief function theory. At first, we try
to classify the object (incomplete pattern) based only on the available
attribute values. As an underlying principle, we assume that the missing
information is not crucial for the classification if a specific class for the
object can be found using only the available information. In this case, the
object is committed to this particular class. However, if the object cannot be
classified without ambiguity, it means that the missing values play a major role
in achieving an accurate classification. In this case, the missing values will
be imputed based on the K-nearest neighbor (K-NN) and self-organizing map (SOM)
techniques, and the edited pattern with the imputation is then classified. The
(original or edited) pattern is respectively classified according to each
training class, and the classification results represented by basic belief
assignments are fused with proper combination rules for making the credal
classification. The object is allowed to belong with different masses of belief
to the specific classes and meta-classes (which are particular disjunctions of
several single classes). The credal classification captures well the
uncertainty and imprecision of the classification, and effectively reduces the
rate of misclassification thanks to the introduction of meta-classes. The
effectiveness of the proposed method with respect to other classical methods is
demonstrated through several experiments using artificial and real data sets.
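The K-NN imputation step triggered for ambiguous objects can be sketched as follows. This is only the plain K-NN part on made-up data; the paper additionally uses SOM prototypes and fuses the per-class results with credal combination rules:

```python
def knn_impute(x, training, k=3):
    """Impute missing attributes (None) of pattern x from its k nearest
    complete training patterns.

    Distances are computed only over the attributes observed in x; each
    missing attribute is filled with the neighbours' mean value.
    """
    observed = [j for j, v in enumerate(x) if v is not None]

    def dist(t):
        # Euclidean distance restricted to the observed attributes of x
        return sum((x[j] - t[j]) ** 2 for j in observed) ** 0.5

    neighbours = sorted(training, key=dist)[:k]
    filled = list(x)
    for j, v in enumerate(x):
        if v is None:
            filled[j] = sum(t[j] for t in neighbours) / k
    return filled

# toy complete training patterns; the last one is a distant outlier
train = [[1.0, 2.0], [1.2, 2.2], [0.8, 1.8], [10.0, 20.0]]
x = [1.1, None]          # incomplete pattern with one missing attribute
x_filled = knn_impute(x, train, k=3)
```

Restricting the distance to the observed attributes is what lets the method defer imputation: when the available attributes alone already single out a class, this step is skipped entirely.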
Evidential Clustering: A Review
In evidential clustering, uncertainty about the assignment of objects to clusters is represented by Dempster-Shafer mass functions. The resulting clustering structure, called a credal partition, is shown to be more general than hard, fuzzy, possibilistic and rough partitions, which are recovered as special cases. Three algorithms to generate a credal partition are reviewed. Each of these algorithms is shown to implement a decision-directed clustering strategy. Their relative merits are discussed.
A systematic review of data quality issues in knowledge discovery tasks
The volume of data is growing rapidly because organizations continuously capture large amounts of data to support better decision making. The most fundamental challenge is to explore these large volumes of data and extract useful knowledge for future actions through knowledge discovery tasks; nevertheless, much of this data is of poor quality. We present a systematic review of data quality issues in knowledge discovery tasks and a case study applied to the agricultural disease known as coffee rust.