To P or not to P: on the evidential nature of P-values and their place in scientific inference
The customary use of P-values in scientific research has been attacked as
being ill-conceived, and the utility of P-values has been derided. This paper
reviews common misconceptions about P-values and their alleged deficits as
indices of experimental evidence and, using an empirical exploration of the
properties of P-values, documents the intimate relationship between P-values
and likelihood functions. It is shown that P-values quantify experimental
evidence not by their numerical value, but through the likelihood functions
that they index. Many arguments against the utility of P-values are refuted and
the conclusion is drawn that P-values are useful indices of experimental
evidence. The widespread use of P-values in scientific research is well
justified by the actual properties of P-values, but those properties need to be
more widely understood.
Comment: 31 pages, 9 figures and R code
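The claim that a P-value indexes a likelihood function can be illustrated with a minimal sketch (not taken from the paper; a one-sample z-test with known standard deviation is assumed): the two-sided P-value is a one-to-one function of |z|, so recovering |z| from p recovers the entire likelihood function for the mean, up to the sign of the effect.

```python
# Minimal sketch, assuming a one-sample z-test with known sigma:
# the p-value indexes the normal likelihood function via |z|.
import math

def two_sided_p(z):
    """Two-sided p-value for a standard-normal test statistic z."""
    return math.erfc(abs(z) / math.sqrt(2.0))

def z_from_p(p):
    """Recover |z| from a two-sided p-value by bisection."""
    lo, hi = 0.0, 40.0
    for _ in range(200):
        mid = (lo + hi) / 2.0
        if two_sided_p(mid) > p:   # p decreases as |z| grows
            lo = mid
        else:
            hi = mid
    return (lo + hi) / 2.0

def likelihood(mu, xbar, sigma, n):
    """Normal likelihood for the mean, up to a constant factor."""
    z = (xbar - mu) * math.sqrt(n) / sigma
    return math.exp(-0.5 * z * z)

# The same p-value always points back to the same |z|, hence to the
# same likelihood function (up to the sign of the effect).
z = 1.96
p = two_sided_p(z)
assert abs(z_from_p(p) - z) < 1e-6
```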
A method of classification for multisource data in remote sensing based on interval-valued probabilities
An axiomatic approach to interval-valued (IV) probabilities is presented, in which an IV probability is defined by a pair of set-theoretic functions satisfying pre-specified axioms. On this basis, the representation of statistical evidence and the combination of multiple bodies of evidence are emphasized. Although IV probabilities provide an innovative means of representing and combining evidential information, they make the decision process rather complicated and call for more intelligent decision-making strategies. The development of decision rules over IV probabilities is discussed from the viewpoint of statistical pattern recognition. The proposed method, the so-called evidential reasoning method, is applied to the ground-cover classification of a multisource data set consisting of Multispectral Scanner (MSS) data, Synthetic Aperture Radar (SAR) data, and digital terrain data such as elevation, slope, and aspect. By treating the data sources separately, the method is able to capture both parametric and nonparametric information and to combine them. The method is then applied to two separate cases of classifying multiband data obtained by a single sensor. In each case a set of multiple sources is obtained by dividing the high-dimensional data into smaller, more manageable pieces based on global statistical correlation information. Through this divide-and-combine process, the method is able to utilize more features than the conventional maximum likelihood method.
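The combination step can be sketched with the classical (point-valued) Dempster's rule, which the interval-valued formalism generalizes; the sensor names and mass values below are purely illustrative, not taken from the paper.

```python
# Hedged illustration: classical Dempster's rule of combination for two
# independent sources over the same frame of ground-cover classes.
# Focal sets are frozensets of class labels; all values are made up.

def dempster_combine(m1, m2):
    """Combine two mass functions by Dempster's rule of combination."""
    combined = {}
    conflict = 0.0
    for a, ma in m1.items():
        for b, mb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + ma * mb
            else:
                conflict += ma * mb      # mass falling on the empty set
    k = 1.0 - conflict                   # normalize by agreeing mass
    return {s: v / k for s, v in combined.items()}

# Source 1 (e.g. MSS) and source 2 (e.g. SAR) over {water, forest, urban}.
m_mss = {frozenset({"water"}): 0.6, frozenset({"water", "forest"}): 0.4}
m_sar = {frozenset({"water"}): 0.5,
         frozenset({"water", "forest", "urban"}): 0.5}
m = dempster_combine(m_mss, m_sar)
# Here every focal intersection is non-empty, so there is no conflict
# and the combined masses sum to one.
```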
Evidential Label Propagation Algorithm for Graphs
Community detection has attracted considerable attention across many areas
as it can be used for discovering the structure and features of complex
networks. With the increasing size of social networks in real world, community
detection approaches should be fast and accurate. The Label Propagation
Algorithm (LPA) is known to be one of the near-linear solutions and benefits
from easy implementation; it thus forms a good basis for efficient community
detection methods. In this paper, we extend the update rule and propagation
criterion of LPA in the framework of belief functions. A new community
detection approach, called Evidential Label Propagation (ELP), is proposed as
an enhanced version of conventional LPA. The node influence is first defined to
guide the propagation process. The plausibility is used to determine the domain
label of each node. The update order of nodes is discussed to improve the
robustness of the method. The ELP algorithm converges once the domain labels
of all the nodes stop changing. Finally, the mass assignments are computed as
the memberships of nodes. Overlapping nodes and outliers can be detected
simultaneously through the proposed method. The experimental results
demonstrate the effectiveness of ELP.
Comment: 19th International Conference on Information Fusion, Jul 2016, Heidelberg, France
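For context, the conventional LPA update rule that ELP extends can be sketched as follows; this is a plain illustration with a deterministic tie-break, and the belief-function machinery of ELP (node influence, plausibility, mass assignments) is not shown.

```python
# Sketch of conventional LPA: each node repeatedly adopts the label
# most common among its neighbours until no label changes.
from collections import Counter

def label_propagation(adj, max_iter=100):
    """adj: dict mapping node -> list of neighbour nodes."""
    labels = {v: v for v in adj}              # start with unique labels
    for _ in range(max_iter):
        changed = False
        for v in adj:
            counts = Counter(labels[u] for u in adj[v])
            best = max(counts.values())
            top = {lab for lab, c in counts.items() if c == best}
            if labels[v] not in top:          # keep current label on ties
                labels[v] = min(top)          # deterministic tie-break
                changed = True
        if not changed:                       # labels unchanged: converged
            break
    return labels

# Two 4-cliques joined by a single edge settle into two communities.
adj = {0: [1, 2, 3, 7], 1: [0, 2, 3], 2: [0, 1, 3], 3: [0, 1, 2],
       4: [5, 6, 7], 5: [4, 6, 7], 6: [4, 5, 7], 7: [4, 5, 6, 0]}
communities = label_propagation(adj)
```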
Modeling of Phenomena and Dynamic Logic of Phenomena
Modeling of complex phenomena such as the mind presents tremendous
computational complexity challenges. Modeling field theory (MFT) addresses
these challenges in a non-traditional way. The main idea behind MFT is to match
levels of uncertainty of the model (also, problem or theory) with levels of
uncertainty of the evaluation criterion used to identify that model. When a
model becomes more certain, then the evaluation criterion is adjusted
dynamically to match that change to the model. This process is called the
Dynamic Logic of Phenomena (DLP) for model construction and it mimics processes
of the mind and natural evolution. This paper provides a formal description of
DLP by specifying its syntax, semantics, and reasoning system. We also outline
links between DLP and other logical approaches. Computational complexity issues
that motivate this work are presented using an example of polynomial models.
Land cover classification using fuzzy rules and aggregation of contextual information through evidence theory
Land cover classification using multispectral satellite image is a very
challenging task with numerous practical applications. We propose a multi-stage
classifier that involves fuzzy rule extraction from the training data and then
generation of a possibilistic label vector for each pixel using the fuzzy rule
base. To exploit the spatial correlation of land cover types we propose four
different information aggregation methods which use the possibilistic class
label of a pixel and those of its eight spatial neighbors for making the final
classification decision. Three of the aggregation methods use Dempster-Shafer
theory of evidence while the remaining one is modeled after the fuzzy k-NN
rule. The proposed methods are tested with two benchmark seven-channel
satellite images and the results are found to be quite satisfactory. They are
also compared with a Markov random field (MRF) model-based contextual
classification method and found to perform consistently better.
Comment: 14 pages, 2 figures
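The fuzzy k-NN-style aggregation over the eight spatial neighbours can be sketched roughly as follows; this is a hedged illustration, not the paper's exact formulation, and the weighting scheme and values are assumptions.

```python
# Sketch (assumed weighting, not the paper's): a pixel's final class
# scores mix its own possibilistic label vector with those of its
# eight neighbours; the highest aggregated score decides the class.

def aggregate_fuzzy(center, neighbours, self_weight=0.5):
    """center: possibilistic label vector; neighbours: list of vectors."""
    n_classes = len(center)
    w = (1.0 - self_weight) / len(neighbours)   # equal neighbour weights
    scores = [self_weight * center[c] +
              w * sum(nb[c] for nb in neighbours)
              for c in range(n_classes)]
    return scores.index(max(scores))            # decided class index

# A noisy pixel leaning towards class 0 is corrected by eight
# neighbours that consistently support class 1.
center = [0.55, 0.45, 0.0]
neighbours = [[0.2, 0.8, 0.0]] * 8
decided = aggregate_fuzzy(center, neighbours)   # class 1 wins
```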
BPEC: Belief-Peaks Evidential Clustering
This paper introduces a new evidential clustering method based on the notion of "belief peaks" in the framework of belief functions. The basic idea is that all data objects in the neighborhood of each sample provide pieces of evidence inducing belief in the possibility that this sample is a cluster center. A sample having higher belief than its neighbors and located far away from other local maxima is then characterized as a cluster center. Finally, a credal partition is created by minimizing an objective function with the cluster centers fixed. An adaptive distance metric is used to fit the unknown shapes of the data structures. We show that the proposed evidential clustering procedure performs very well and can reveal the data structure in the form of a credal partition, from which hard, fuzzy, possibilistic and rough partitions can be derived. Simulations on synthetic and real-world datasets validate our conclusions.
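The center-selection idea can be sketched in the spirit of density peaks; BPEC's actual belief computation and credal-partition optimization use belief functions and are not reproduced here, so the neighbourhood score and thresholds below are assumptions.

```python
# Hedged sketch of peak-based centre selection: a point is a candidate
# centre when its neighbourhood score is high and it lies far from any
# point with a higher score (ties broken by index for determinism).
import math

def select_centers(points, radius=1.0, min_sep=3.0):
    def dist(p, q):
        return math.dist(p, q)
    # Neighbourhood score: number of other points within `radius`.
    score = [sum(1 for q in points if 0 < dist(p, q) <= radius)
             for p in points]
    centers = []
    for i, p in enumerate(points):
        # Distance to the nearest point with a higher score
        # (or an equal score and a smaller index, to break ties).
        higher = [dist(p, points[j]) for j in range(len(points))
                  if score[j] > score[i]
                  or (score[j] == score[i] and j < i)]
        delta = min(higher) if higher else float("inf")
        if delta >= min_sep:                 # local peak, well separated
            centers.append(i)
    return centers

# Two tight groups of points yield one centre per group.
pts = [(0, 0), (0.3, 0), (0, 0.3), (0.3, 0.3),
       (5, 5), (5.3, 5), (5, 5.3)]
centers = select_centers(pts)
```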