Paradox Elimination in Dempster–Shafer Combination Rule with Novel Entropy Function: Application in Decision-Level Multi-Sensor Fusion
Multi-sensor data fusion technology is an important tool in building decision-making applications. Modified Dempster–Shafer (DS) evidence theory can handle conflicting sensor inputs and can be applied without any prior information. As a result, DS-based information fusion is very popular in decision-making applications, but the original DS theory produces counterintuitive results when combining highly conflicting evidence from multiple sensors. An effective algorithm for fusing highly conflicting information in the spatial domain is not widely reported in the literature. In this paper, a fusion algorithm is proposed which addresses these limitations of the original Dempster–Shafer (DS) framework. A novel entropy function, based on Shannon entropy, is proposed, which captures uncertainty better than Shannon and Deng entropy. An 8-step algorithm is developed which eliminates the inherent paradoxes of classical DS theory. Multiple examples are presented to show that the proposed method is effective in handling conflicting information in the spatial domain. Simulation results show that the proposed algorithm has a competitive convergence rate and accuracy compared to other methods presented in the literature.
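The paradox this abstract alludes to is easy to reproduce with the classical combination rule. Below is a minimal Python sketch of standard Dempster's rule (not the paper's 8-step algorithm) showing Zadeh's classic counterintuitive case; the frame elements 'a', 'b', 'c' are illustrative.

```python
def dempster_combine(m1, m2):
    """Dempster's rule of combination for two mass functions,
    represented as dicts mapping frozenset -> mass. K is the
    total conflict (mass landing on empty intersections); the
    surviving mass is renormalized by 1 - K."""
    K = 0.0
    combined = {}
    for B, mB in m1.items():
        for C, mC in m2.items():
            inter = B & C
            if inter:
                combined[inter] = combined.get(inter, 0.0) + mB * mC
            else:
                K += mB * mC
    if 1.0 - K <= 0.0:
        raise ValueError("total conflict: combination undefined")
    return {A: v / (1.0 - K) for A, v in combined.items()}

# Zadeh-style paradox: both sources give hypothesis 'b' almost no
# support, yet after combination 'b' carries essentially all mass.
A, B, C = frozenset("a"), frozenset("b"), frozenset("c")
m1 = {A: 0.99, B: 0.01}
m2 = {C: 0.99, B: 0.01}
fused = dempster_combine(m1, m2)
```

Because the two sources disagree almost completely (K ≈ 0.9999), the renormalization inflates the tiny shared mass on 'b' to 1 — exactly the counterintuitive behavior the paper's modified framework is designed to avoid.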
Generalized Evidence Theory
Conflict management is still an open issue in the application of Dempster–Shafer evidence theory, and many works have been presented to address it. In this paper, a new theory, called generalized evidence theory (GET), is proposed. In contrast with existing methods, GET assumes that the general situation is an open world, owing to uncertainty and incomplete knowledge. Conflicting evidence is handled under the framework of GET. It is shown that the new theory can explain and deal with conflicting evidence in a more reasonable way.
Comment: 39 pages, 5 figures
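As a rough illustration of the open-world idea only (not the paper's formal GET machinery), a generalized basic probability assignment simply drops the closed-world constraint m(∅) = 0:

```python
def is_generalized_bpa(m, tol=1e-9):
    """A generalized BPA (open-world setting) only requires
    non-negative masses summing to 1. Unlike a classical BPA,
    mass on the empty set is allowed and is read as evidence
    that the true hypothesis lies outside the frame."""
    return (all(v >= 0.0 for v in m.values())
            and abs(sum(m.values()) - 1.0) <= tol)

# m(emptyset) = 0.2 would be illegal in classical DS theory:
m_open = {frozenset(): 0.2, frozenset("a"): 0.5, frozenset("ab"): 0.3}
```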
A Distance-Based Decision in the Credal Level
Belief function theory provides a flexible way to combine information provided by different sources. This combination is usually followed by a decision-making step, which can be handled by a range of decision rules. Some rules help to choose the most likely hypothesis; others allow a decision to be made on a set of hypotheses. In [6], we proposed a decision rule based on a distance measure. In this paper, we first aim to demonstrate that our proposed decision rule is a particular case of the rule proposed in [4]. Second, we present experiments showing that our rule is able to decide on a set of hypotheses. Some experiments are carried out on a set of randomly generated mass functions, others on real databases.
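One concrete way to realize a distance-based decision, sketched here with the widely used Jousselme distance (an assumption — the paper's own distance measure may differ), is to decide for the candidate subset whose categorical mass function lies closest to the combined mass:

```python
import math

def jousselme_distance(m1, m2):
    """Jousselme distance between two mass functions (dicts:
    frozenset -> mass): d = sqrt(0.5 * (m1-m2)^T J (m1-m2)),
    where J holds the Jaccard similarity between focal sets
    (assumes non-empty focal sets)."""
    focals = sorted(set(m1) | set(m2), key=sorted)
    diff = [m1.get(F, 0.0) - m2.get(F, 0.0) for F in focals]
    acc = 0.0
    for i, Fi in enumerate(focals):
        for j, Fj in enumerate(focals):
            acc += diff[i] * diff[j] * len(Fi & Fj) / len(Fi | Fj)
    return math.sqrt(0.5 * acc)

def distance_decision(m, candidates):
    """Distance-based decision: pick the candidate subset whose
    categorical mass function ({A: 1.0}) is closest to m. Allowing
    non-singleton candidates lets the rule decide on a *set* of
    hypotheses, as in the abstract."""
    return min(candidates, key=lambda A: jousselme_distance(m, {A: 1.0}))

m = {frozenset("a"): 0.6, frozenset("ab"): 0.4}
best = distance_decision(m, [frozenset("a"), frozenset("b"), frozenset("ab")])
```

Here the evidence mostly supports 'a' (with some ambiguity toward {a, b}), so the singleton {a} is the closest categorical assignment.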
Trolls Identification within an Uncertain Framework
The web has played an important role in people's social lives since the emergence of Web 2.0. It facilitates interaction between users, giving them the possibility to freely interact, share, and collaborate through social networks, online community forums, blogs, wikis, and other online collaborative media. However, the web also has a negative side, such as the posting of inflammatory messages. Thus, when dealing with online community forums, managers seek to continually enhance the performance of these platforms. In fact, to preserve serenity and prevent disturbance of the normal atmosphere, managers often try to warn novice users against such malicious persons by posting messages such as 'DO NOT FEED THE TROLLS'. But this kind of warning is not enough to reduce the phenomenon. In this context, we propose a new approach for detecting malicious people, also called 'trolls', in order to allow community managers to revoke their ability to post online. To be more realistic, our proposal is defined within an uncertain framework. Based on the assumption that trolls insert themselves into successful discussion threads, we try to detect the presence of such malicious users. This method is based on a conflict measure from belief function theory applied between the different messages of a thread. In order to show the feasibility and the results of our approach, we test it on different simulated data.
Comment: International Conference on Tools with Artificial Intelligence (ICTAI), Nov 2014, Limassol, Cyprus
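A conflict measure between two messages can be sketched with the standard Dempster conflict coefficient K (the paper's exact measure may differ); the thread data and the {support, attack} frame below are purely hypothetical.

```python
def conflict(m1, m2):
    """Dempster conflict K: the total mass the pair of sources
    jointly assigns to focal elements with empty intersection."""
    return sum(v1 * v2
               for B, v1 in m1.items()
               for C, v2 in m2.items()
               if not (B & C))

# Purely hypothetical thread over the frame {support, attack}:
S, A = frozenset(["support"]), frozenset(["attack"])
thread = [{S: 0.8, S | A: 0.2},   # messages agreeing with the thread
          {S: 0.7, S | A: 0.3}]
suspect = {A: 0.9, S | A: 0.1}    # a message opposing everyone else
avg_conflict = sum(conflict(suspect, m) for m in thread) / len(thread)
# a high average conflict with the rest of the thread flags a troll
```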
A reliability-based approach for influence maximization using the evidence theory
Influence maximization is the problem of finding a set of social network users, called influencers, who can trigger a large cascade of propagation. Influencers are very valuable for making a marketing campaign go viral through social networks, for example. In this paper, we propose an influence measure that combines several influence indicators. In addition, we consider the reliability of each influence indicator and present a distance-based process for estimating that reliability. The proposed measure is defined within the framework of the theory of belief functions. Furthermore, the reliability-based influence measure is used with an influence maximization model to select a set of users able to maximize influence in the network. Finally, we present a set of experiments on a dataset collected from Twitter. These experiments show the performance of the proposed solution in detecting social influencers of good quality.
Comment: 14 pages, 8 figures, DaWaK 2017 conference
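A standard way to fold an indicator's reliability into a belief-function model is Shafer's discounting, sketched below; the indicator contents and reliability value are illustrative assumptions, not the paper's data.

```python
def discount(m, alpha, frame):
    """Shafer's discounting: each focal mass is scaled by the
    source reliability alpha, and the remaining 1 - alpha is
    transferred to total ignorance (the whole frame)."""
    out = {A: alpha * v for A, v in m.items() if A != frame}
    out[frame] = alpha * m.get(frame, 0.0) + (1.0 - alpha)
    return out

# Hypothetical indicator: strong evidence that a user is an
# influencer, supplied by an indicator judged 60% reliable.
frame = frozenset(["influencer", "ordinary"])
m_ind = {frozenset(["influencer"]): 0.9, frame: 0.1}
m_disc = discount(m_ind, 0.6, frame)   # weakened toward ignorance
```

Discounting each indicator's mass function by its estimated reliability before combining them keeps an unreliable indicator from dominating the fused influence measure.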
Median evidential c-means algorithm and its application to community detection
Median clustering is of great value for partitioning relational data. In this paper, a new prototype-based clustering method, called Median Evidential C-Means (MECM), is proposed, which extends median c-means and median fuzzy c-means to the theoretical framework of belief functions. The median variant relaxes the requirement that the objects be embedded in a metric space, but constrains the prototypes to lie in the original data set. Thanks to these properties, MECM can be applied to graph clustering problems. A community detection scheme for social networks based on MECM is investigated, and the resulting credal partitions of graphs, which are more refined than crisp and fuzzy ones, enable a better understanding of graph structures. An initial prototype-selection scheme based on evidential semi-centrality is presented to avoid premature local convergence, and an evidential modularity function is defined to choose the optimal number of communities. Finally, experiments on synthetic and real data sets illustrate the performance of MECM and show how it differs from other methods.
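The "prototypes stay in the original data set" constraint can be seen in a crisp median c-means skeleton, sketched below; the credal (evidential) assignment that makes MECM evidential is deliberately omitted, and the toy dissimilarity data is hypothetical.

```python
def median_c_means(D, prototypes, n_iter=20):
    """Hard median c-means on a dissimilarity matrix D (list of
    lists). Prototypes are *indices of data objects*, mirroring
    MECM's constraint that prototypes lie in the data set; only
    pairwise dissimilarities are needed, no metric embedding."""
    for _ in range(n_iter):
        # assign each object to its nearest prototype
        clusters = {p: [] for p in prototypes}
        for i in range(len(D)):
            nearest = min(prototypes, key=lambda q: D[i][q])
            clusters[nearest].append(i)
        # new prototype of a cluster = the member minimizing the
        # summed dissimilarity to the rest of that cluster
        new = []
        for p, members in clusters.items():
            if not members:
                new.append(p)          # keep a prototype with no members
                continue
            new.append(min(members, key=lambda c: sum(D[c][j] for j in members)))
        if set(new) == set(prototypes):
            break                      # converged
        prototypes = new
    return prototypes, clusters

# Toy relational data: two well-separated groups on a line.
xs = [0, 1, 2, 10, 11, 12]
D = [[abs(a - b) for b in xs] for a in xs]
protos, clusters = median_c_means(D, [0, 3])
```

On this toy input the prototypes move to the medoids of the two groups; the full MECM additionally distributes each object's mass over sets of clusters to form a credal partition.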