42 research outputs found

    Time-Sliced Temporal Evidential Networks: the case of evidential HMM with application to dynamical system analysis.

    No full text
    Diagnostics and prognostics of health states are important activities in the maintenance strategy of dynamical systems. Many approaches have been developed for this purpose, and we focus in particular on data-driven methods, which are increasingly applied thanks to the availability of various cheap sensors. Most data-driven methods proposed in the literature rely on probability density estimation. However, when the training data are limited, the estimated parameters are no longer reliable. This is particularly true for data from faulty states, which are generally expensive and difficult to obtain. To address this problem, we propose to use the theory of belief functions as described by Dempster and Shafer (Theory of Evidence) and by Smets (Transferable Belief Model). A few methods based on belief functions have been proposed for diagnostics and prognostics of dynamical systems. Among these methods, the Evidential Hidden Markov Model (EvHMM) seems promising and extends the usual HMM to belief functions. Inference tools for EvHMM have already been developed, but parameter training has so far been considered only partially or under strong assumptions. In this paper, we complete the generalization of HMM to belief functions with a method for automatic parameter training. The generalization of this training procedure to the more general Time-Sliced Temporal Evidential Network (TSTEN) is discussed, paving the way for a further generalization of Dynamic Bayesian Networks to belief functions, with potential applications to diagnostics and prognostics. An application to time series classification is proposed.
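
    To give a concrete feel for the machinery this line of work builds on, here is a minimal Python sketch of Dempster's rule of combination, the basic fusion operator of the theory of belief functions cited above. The frame of health states and the mass values are invented for illustration; this is not the EvHMM training procedure from the paper.

```python
def dempster_combine(m1, m2):
    """Combine two mass functions whose focal elements are frozensets
    over the same frame of discernment (Dempster's rule)."""
    combined, conflict = {}, 0.0
    for a, wa in m1.items():
        for b, wb in m2.items():
            inter = a & b
            if inter:
                combined[inter] = combined.get(inter, 0.0) + wa * wb
            else:
                conflict += wa * wb  # mass assigned to the empty set
    if conflict >= 1.0:
        raise ValueError("sources are in total conflict")
    # Normalize by the non-conflicting mass
    return {s: w / (1.0 - conflict) for s, w in combined.items()}

# Two hypothetical sources reporting on the states {ok, degraded, faulty}
m1 = {frozenset({"ok"}): 0.6, frozenset({"ok", "degraded"}): 0.4}
m2 = {frozenset({"degraded"}): 0.3,
      frozenset({"ok", "degraded", "faulty"}): 0.7}
print(dempster_combine(m1, m2))
```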

    Evidential Evolving Gustafson-Kessel Algorithm (E2GK) and its application to PRONOSTIA's Data Streams Partitioning.

    No full text
    Condition-based maintenance (CBM) appears to be a key element in modern maintenance practice. Research in diagnosis and prognosis, two important aspects of a CBM program, is growing rapidly, and many studies are conducted in research laboratories to develop models, algorithms and technologies for data processing. In this context, we present a new evolving clustering algorithm developed for prognostics purposes. E2GK (Evidential Evolving Gustafson-Kessel) is an online clustering method in the theoretical framework of belief functions. The algorithm enables online partitioning of data streams and builds on two existing and efficient algorithms: Evidential c-Means (ECM) and Evolving Gustafson-Kessel (EGK). To validate and illustrate the results of E2GK, we use a dataset provided by an original platform called PRONOSTIA dedicated to prognostics applications.
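
    The full E2GK algorithm is not reproduced here, but the cluster-adaptive distance at the heart of Gustafson-Kessel clustering, which both EGK and E2GK rely on, is easy to sketch. The function and data below are illustrative assumptions, not code from the paper.

```python
import numpy as np

def gk_distance_sq(x, center, cov, rho=1.0):
    """Squared Gustafson-Kessel distance: a Mahalanobis-like norm whose
    inducing matrix is scaled so each cluster keeps a fixed volume rho,
    letting clusters adapt their shape to the local covariance."""
    n = len(center)
    a = (rho * np.linalg.det(cov)) ** (1.0 / n) * np.linalg.inv(cov)
    diff = np.asarray(x) - np.asarray(center)
    return float(diff @ a @ diff)

# Hypothetical 2-D cluster elongated along the first axis
center = np.array([0.0, 0.0])
cov = np.array([[2.0, 0.0],
                [0.0, 0.5]])
print(gk_distance_sq([1.0, 1.0], center, cov))  # 2.5
```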

    Data reliability assessment in a data warehouse opened on the Web

    Get PDF
    This paper presents an ontology-driven workflow that feeds and queries a data warehouse opened on the Web. Data are extracted from data tables in Web documents. As Web documents are highly heterogeneous, a key issue in this workflow is the ability to assess the reliability of the retrieved data. We first recall the main steps of our method for annotating and querying Web data tables driven by a domain ontology. We then propose an original method to assess the reliability of Web data tables from a set of criteria by means of evidence theory. Finally, we show how we extend the workflow to integrate the reliability assessment step.
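
    The paper's criterion-fusion method is not reproduced here, but evidential discounting, the standard belief-function tool for injecting source reliability of the kind this workflow assesses, can be sketched briefly. Frame, masses and the reliability factor below are invented for illustration.

```python
def discount(m, alpha, frame):
    """Shafer discounting: weaken a source by factor (1 - alpha) and
    transfer the removed mass to the whole frame (total ignorance)."""
    theta = frozenset(frame)
    out = {a: (1.0 - alpha) * w for a, w in m.items()}
    out[theta] = out.get(theta, 0.0) + alpha
    return out

# A Web table judged 80% reliable: its verdict is partially weakened
frame = {"consistent", "inconsistent"}
m = {frozenset({"consistent"}): 0.9, frozenset(frame): 0.1}
print(discount(m, alpha=0.2, frame=frame))
# {consistent}: 0.72, whole frame: 0.28
```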

    Contradiction measures and specificity degrees of basic belief assignments

    Get PDF
    In the theory of belief functions, many measures of uncertainty have been introduced. However, it is not always easy to understand what these measures really represent. In this paper, we re-interpret some measures of uncertainty in the theory of belief functions. We discuss the merits and drawbacks of the existing measures and, building on these observations, introduce a measure of contradiction. We then present degrees of non-specificity and Bayesianity of a mass, and propose a degree of specificity based on the distance between a mass and its most specific associated mass. We also show how to use the degree of specificity to measure the specificity of a fusion rule. Illustrations on simple examples are given.
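
    The exact degrees proposed in the paper are not reproduced here, but the classical non-specificity measure such work builds on is simple to state: a Bayesian mass (all focal elements singletons) scores 0, total ignorance scores log2 of the frame size. A minimal sketch:

```python
import math

def nonspecificity(m):
    """Dubois-Prade non-specificity: sum over focal elements A of
    m(A) * log2|A|."""
    return sum(w * math.log2(len(a)) for a, w in m.items())

m_bayesian = {frozenset({"a"}): 0.5, frozenset({"b"}): 0.5}
m_vacuous = {frozenset({"a", "b", "c", "d"}): 1.0}
print(nonspecificity(m_bayesian))  # 0.0
print(nonspecificity(m_vacuous))   # 2.0
```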

    A Hierarchical Flexible Coarsening Method to Combine BBAs in Probabilities

    Get PDF
    In many applications involving epistemic uncertainties usually modeled by belief functions, it is often necessary to approximate general (non-Bayesian) basic belief assignments (BBAs) by subjective probabilities (called Bayesian BBAs). This necessity occurs if one needs to embed the fusion result in a system based on the probabilistic framework and Bayesian inference (e.g. tracking systems), or if one wants to use classical decision theory to make a decision. There already exist several methods (probabilistic transforms) to approximate any general BBA by a Bayesian BBA.
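
    One of the probabilistic transforms alluded to above is Smets' pignistic transform; the paper's own hierarchical flexible coarsening method is not reproduced here. A minimal sketch with invented masses:

```python
def pignistic(m):
    """Pignistic transform BetP: split each focal element's mass equally
    among its singletons, yielding a Bayesian (probability-like) BBA."""
    betp = {}
    for a, w in m.items():
        share = w / len(a)
        for x in a:
            betp[x] = betp.get(x, 0.0) + share
    return betp

m = {frozenset({"a"}): 0.5,
     frozenset({"a", "b"}): 0.3,
     frozenset({"a", "b", "c"}): 0.2}
print(pignistic(m))  # a ~ 0.717, b ~ 0.217, c ~ 0.067
```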

    Rough Set Classifier Based on DSmT

    Get PDF
    The classifier based on rough sets is widely used in pattern recognition. However, implementations of rough set-based classifiers always face problems of uncertainty. In Rough Set Theory (RST), an information decision table generally contains many attributes, and the classification performance of each attribute is different, so it is necessary to determine which attributes to use for a given problem. In RST, this is addressed as an attribute reduction problem, which aims to select proper candidates. A first uncertainty therefore arises in the classification from the choice of attributes. In addition, a voting strategy is usually adopted to determine the category of the target concept in the final decision making, but some targets cannot be classified when multiple categories cannot be distinguished (for example, when different classes receive the same number of votes). A second uncertainty thus arises from the choice of classes. In this paper, we use the theory of belief functions to handle these two uncertainties in rough set classification, and a rough set classifier based on Dezert-Smarandache Theory (DSmT) is proposed. Experiments verify that the proposed approach deals efficiently with uncertainty in rough set classifiers.
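
    The full classifier is not reproduced here, but the kind of DSmT fusion step it relies on can be illustrated with PCR5, DSmT's proportional conflict redistribution rule, restricted for brevity to Bayesian BBAs over exclusive classes. All values are invented.

```python
def pcr5_bayesian(m1, m2):
    """PCR5 for Bayesian BBAs (singleton focal elements): the conflicting
    mass m1(A)*m2(B), A != B, is redistributed back to A and B in
    proportion to the masses that produced it."""
    out = {x: m1.get(x, 0.0) * m2.get(x, 0.0) for x in set(m1) | set(m2)}
    for a, wa in m1.items():
        for b, wb in m2.items():
            if a != b and wa + wb > 0:
                c = wa * wb  # partial conflict between a and b
                out[a] += c * wa / (wa + wb)
                out[b] += c * wb / (wa + wb)
    return out

# Two attribute-based sources that plain voting would weigh equally
m1 = {"class1": 0.6, "class2": 0.4}
m2 = {"class1": 0.7, "class2": 0.3}
print(pcr5_bayesian(m1, m2))  # class1 ~ 0.718, class2 ~ 0.282
```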

    Flow-based reputation with uncertainty: Evidence-Based Subjective Logic

    Full text link
    The concept of reputation is widely used as a measure of trustworthiness based on ratings from members in a community. The adoption of reputation systems, however, relies on their ability to capture the actual trustworthiness of a target. Several reputation models for aggregating trust information have been proposed in the literature. The choice of model has an impact on the reliability of the aggregated trust information as well as on the procedure used to compute reputations. Two prominent models are flow-based reputation (e.g., EigenTrust, PageRank) and Subjective Logic based reputation. Flow-based models provide an automated method to aggregate trust information, but they are not able to express the level of uncertainty in the information. In contrast, Subjective Logic extends probabilistic models with an explicit notion of uncertainty, but the calculation of reputation depends on the structure of the trust network and often requires information to be discarded. These are severe drawbacks. In this work, we observe that the "opinion discounting" operation in Subjective Logic has a number of basic problems. We resolve these problems by providing a new discounting operator that describes the flow of evidence from one party to another. The adoption of our discounting rule results in a consistent Subjective Logic algebra that is entirely based on the handling of evidence. We show that the new algebra enables the construction of an automated reputation assessment procedure for arbitrary trust networks, where the calculation no longer depends on the structure of the network, and does not need to throw away any information. Thus, we obtain the best of both worlds: flow-based reputation and consistent handling of uncertainties.
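
    The paper's new discounting operator is not reproduced here, but the standard evidence-to-opinion mapping that evidence-based Subjective Logic rests on is well known: uncertainty shrinks as observations accumulate, and the projected probability falls back on the base rate when evidence is absent. A minimal sketch:

```python
from dataclasses import dataclass

W = 2.0  # non-informative prior weight conventionally used in Subjective Logic

@dataclass
class Opinion:
    belief: float
    disbelief: float
    uncertainty: float
    base_rate: float = 0.5

    def expectation(self) -> float:
        # Projected probability: uncertainty is apportioned by the base rate.
        return self.belief + self.base_rate * self.uncertainty

def from_evidence(r: float, s: float) -> Opinion:
    """Map r positive and s negative observations to an opinion."""
    total = r + s + W
    return Opinion(r / total, s / total, W / total)

print(from_evidence(8, 2))                 # ample evidence, low uncertainty
print(from_evidence(0, 0).expectation())   # no evidence: E = base rate = 0.5
```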