
    A new weighting factor in combining belief function

    Dempster-Shafer evidence theory has been widely used in various applications. However, the counter-intuitive outcomes produced by the classical Dempster-Shafer combination rule when fusing conflicting evidence remain an open issue. Many approaches based on discounted evidence and weighted average evidence have been investigated and have brought significant improvements. Nevertheless, all of these approaches have inherent flaws. In this paper, a new weighting factor is proposed to address this problem.
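    As context for the conflict problem this paper addresses, the following is a minimal Python sketch of the classical Dempster rule of combination on Zadeh's well-known counter-example; the frame and mass values are illustrative only, and the specific weighting factor proposed in the paper is not reproduced here.

    # Classical Dempster's rule over a frame {A, B, C}; highly conflicting
    # evidence pushes almost all mass onto the weakly supported hypothesis B.
    def dempster_combine(m1, m2):
        """Combine two mass functions given as dicts {frozenset: mass}."""
        combined, conflict = {}, 0.0
        for s1, v1 in m1.items():
            for s2, v2 in m2.items():
                inter = s1 & s2
                if inter:
                    combined[inter] = combined.get(inter, 0.0) + v1 * v2
                else:
                    conflict += v1 * v2
        if conflict >= 1.0:
            raise ValueError("total conflict: Dempster's rule is undefined")
        return {s: v / (1.0 - conflict) for s, v in combined.items()}

    m1 = {frozenset("A"): 0.99, frozenset("B"): 0.01}
    m2 = {frozenset("C"): 0.99, frozenset("B"): 0.01}
    print(dempster_combine(m1, m2))  # all mass goes to B despite weak support

    Weighted-average approaches typically average the bodies of evidence, using weights such as the factor proposed here, before combining them.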

    Improving landslide detection from airborne laser scanning data using optimized Dempster-Shafer

    © 2018 by the authors. A detailed, state-of-the-art landslide inventory map with precise landslide locations is essential for landslide susceptibility, hazard, and risk assessments. Traditional techniques for landslide detection in tropical regions include field surveys, synthetic aperture radar techniques, and optical remote sensing. However, these techniques are time consuming and costly. Furthermore, generating accurate landslide location maps in these regions is complicated by the dense vegetation of tropical forests. Given its ability to penetrate vegetation cover, high-resolution airborne light detection and ranging (LiDAR) is typically employed to generate accurate landslide maps. The object-based technique groups many homogeneous pixels together in a meaningful way through image segmentation. To address the limitations of this approach, this research proposes an efficient framework that combines three object-based classifiers using Dempster-Shafer theory (DST): the final decision is executed using the DST rule of combination applied to the probabilistic outputs of object-based support vector machine (SVM), random forest (RF), and K-nearest neighbor (KNN) classifiers. An existing supervised approach (i.e., a fuzzy-based segmentation parameter optimizer) was adopted to optimize multiresolution segmentation parameters such as scale, shape, and compactness, and a correlation-based feature selection (CFS) algorithm was employed to select the relevant features. Two study sites were selected to implement and evaluate the proposed method (subset "A" for implementation and subset "B" for assessing transferability). The DST method performed well in detecting landslide locations in tropical regions such as Malaysia, with potential applications in other similarly vegetated regions.
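    To illustrate the decision-level fusion step described above, the sketch below combines hypothetical per-object class probabilities from SVM, RF, and KNN classifiers with Dempster's rule over a two-class frame (landslide / non-landslide). The reliability discount and probability values are assumptions for illustration, not the paper's actual BPA construction.

    import numpy as np
    from functools import reduce

    def to_bpa(probs, reliability=0.9):
        """Map [p_landslide, p_other] to masses on ({L}, {O}, Theta)."""
        return np.array([reliability * probs[0],
                         reliability * probs[1],
                         1.0 - reliability])

    def combine(m1, m2):
        """Dempster's rule restricted to focal sets {L}, {O} and the frame Theta."""
        mL = m1[0] * m2[0] + m1[0] * m2[2] + m1[2] * m2[0]
        mO = m1[1] * m2[1] + m1[1] * m2[2] + m1[2] * m2[1]
        mT = m1[2] * m2[2]
        k = m1[0] * m2[1] + m1[1] * m2[0]  # conflicting mass
        return np.array([mL, mO, mT]) / (1.0 - k)

    # Hypothetical probabilistic outputs of the SVM, RF and KNN classifiers
    # for one image object; the fused masses drive the final class decision.
    fused = reduce(combine, [to_bpa([0.80, 0.20]),
                             to_bpa([0.65, 0.35]),
                             to_bpa([0.55, 0.45])])
    label = "landslide" if fused[0] > fused[1] else "non-landslide"
    print(fused.round(3), label)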

    Multiple Density Maps Information Fusion for Effectively Assessing Intensity Pattern of Lifelogging Physical Activity

    Physical activity (PA) measurement is a crucial task in healthcare technology aimed at monitoring the progression and treatment of many chronic diseases. Traditional lifelogging PA measures are relatively costly and can only be conducted in controlled or semi-controlled environments, though they deliver remarkably precise PA monitoring outcomes. The recent advancement of commercial wearable devices and smartphones for recording one’s lifelogging PA has popularized data capture in uncontrolled environments. However, due to diverse life patterns, the heterogeneity of connected devices, and limited PA recognition accuracy, lifelogging PA data measured by wearable devices and mobile phones contain considerable uncertainty, which limits their adoption for healthcare studies. To improve the feasibility of PA tracking datasets from commercial wearable/mobile devices, this paper proposes a lifelogging PA intensity pattern decision making approach for lifelong PA measures. The method first removes irregular uncertainties (IU) via an ellipse fitting model, then constructs a series of monthly hour-day density map images representing PA intensity patterns with regular uncertainties (RU) for each month. Finally, it uses Dempster-Shafer theory of evidence to fuse information from these density map images and generate a decision making model of the final personal lifelogging PA intensity pattern. The approach significantly reduces the uncertainty and incompleteness of datasets from third-party devices. Two case studies on a mobile personalized healthcare platform, MHA [1], connected to the mobile app Moves are carried out. The results indicate that the proposed approach can improve the effectiveness of PA tracking devices or apps for the various types of people who frequently use them as a healthcare indicator.
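    A minimal sketch of the hour-day density map construction mentioned above, assuming hypothetical timestamped intensity records and column names; the ellipse-fitting step and the evidential fusion of the monthly maps are not reproduced.

    import numpy as np
    import pandas as pd

    # Hypothetical lifelogging records for one month: one row per activity bout,
    # with a timestamp and an intensity value (column names are assumptions).
    records = pd.DataFrame({
        "timestamp": pd.to_datetime(["2023-05-01 08:10", "2023-05-01 18:40",
                                     "2023-05-02 07:55", "2023-05-03 19:20"]),
        "intensity": [120, 300, 90, 250],
    })

    # Hour-of-day x day-of-month density map; each cell accumulates intensity,
    # and the normalised matrix can be rendered as one monthly map image.
    density = np.zeros((24, 31))
    for _, row in records.iterrows():
        t = row["timestamp"]
        density[t.hour, t.day - 1] += row["intensity"]
    density /= max(density.max(), 1)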

    Comparison of layer-stacking and Dempster-Shafer theory-based methods using Sentinel-1 and Sentinel-2 data fusion in urban land cover mapping

    Data fusion has shown potential to improve the accuracy of land cover mapping, yet selection of the optimal fusion technique remains a challenge. This study investigated the performance of fusing Sentinel-1 (S-1) and Sentinel-2 (S-2) data, using a layer-stacking method at the pixel level and a Dempster-Shafer (D-S) theory-based approach at the decision level, for mapping six land cover classes in Thu Dau Mot City, Vietnam. At the pixel level, S-1 and S-2 bands and their extracted textures and indices were stacked into different single-sensor and multi-sensor (i.e. fused) datasets. The datasets were categorized into two groups: one group contained only the spectral and backscattering bands, and the other contained these bands plus their extracted features. The random forest (RF) classifier was then applied to the datasets within each group. At the decision level, the RF classification outputs of the single-sensor datasets within each group were fused based on D-S theory. Finally, the accuracy of the mapping results at both levels within each group was compared. The results showed that fusion at the decision level provided the highest mapping accuracy among the products within each group. The highest overall accuracy (OA) and Kappa coefficient of the map using D-S theory were 92.67% and 0.91, respectively. Decision-level fusion increased the OA of the map by 0.75% to 2.07% compared with that of the corresponding S-2 products in the groups, whereas pixel-level fusion yielded an OA 4.88% to 6.58% lower than that of the corresponding S-2 products.
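    For reference, the overall accuracy and Kappa coefficient reported above are computed from a classification confusion matrix as sketched below; the matrix here is a hypothetical three-class example, not the study's six-class results.

    import numpy as np

    # Hypothetical confusion matrix (rows: reference, columns: predicted).
    cm = np.array([[50,  3,  2],
                   [ 4, 45,  6],
                   [ 1,  5, 60]])

    n = cm.sum()
    oa = np.trace(cm) / n                                  # overall accuracy
    pe = (cm.sum(axis=0) * cm.sum(axis=1)).sum() / n ** 2  # chance agreement
    kappa = (oa - pe) / (1 - pe)
    print(f"OA = {oa:.2%}, Kappa = {kappa:.2f}")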

    Nonparametric regression analysis of uncertain and imprecise data using belief functions

    This paper introduces a new approach to regression analysis based on a fuzzy extension of belief function theory. For a given input vector x, the method provides a prediction of the value of the output variable y in the form of a fuzzy belief assignment (FBA), defined as a collection of fuzzy sets of values with associated masses of belief. The output FBA is computed using a nonparametric, instance-based approach: training samples in the neighborhood of x are considered as sources of partial information on the response variable; the pieces of evidence are discounted as a function of their distance to x and pooled using Dempster’s rule of combination. The method can cope with heterogeneous training data, including numbers, intervals, fuzzy numbers and, more generally, fuzzy belief assignments, a convenient formalism for modeling unreliable and imprecise information provided by experts or multi-sensor systems. The performance of the method is compared to that of standard regression techniques on several simulated data sets.
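    The sketch below illustrates the instance-based idea of distance-discounted evidence pooled with Dempster's rule, restricted to crisp interval outputs and an assumed exponential discounting profile; the fuzzy belief assignments and the exact discounting scheme of the paper are not reproduced.

    import numpy as np
    from functools import reduce

    def interval_intersection(a, b):
        lo, hi = max(a[0], b[0]), min(a[1], b[1])
        return (lo, hi) if lo <= hi else None

    def combine(m1, m2):
        """Dempster's rule for masses on intervals; key None denotes the whole frame."""
        out, conflict = {}, 0.0
        for a, va in m1.items():
            for b, vb in m2.items():
                if a is None:
                    inter = b
                elif b is None:
                    inter = a
                else:
                    inter = interval_intersection(a, b)
                    if inter is None:          # empty intersection -> conflict
                        conflict += va * vb
                        continue
                out[inter] = out.get(inter, 0.0) + va * vb
        return {s: v / (1.0 - conflict) for s, v in out.items()}

    # Each neighbour of the query point contributes an interval around its
    # response value, discounted according to its distance to the query.
    x_train, y_train = np.array([1.0, 1.2, 3.0]), np.array([2.0, 2.3, 5.0])
    x_query, gamma, eps = 1.1, 1.0, 0.5
    masses = []
    for xi, yi in zip(x_train, y_train):
        alpha = np.exp(-gamma * abs(x_query - xi))    # reliability of the neighbour
        masses.append({(yi - eps, yi + eps): alpha, None: 1.0 - alpha})
    print(reduce(combine, masses))                    # fused masses on intervals of y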

    An Evidential Fractal Analytic Hierarchy Process Target Recognition Method

    Target recognition in uncertain environments is a hot issue, especially in extremely uncertain situations where neither the target attributes nor the sensor reports are clearly represented. To address this issue, a model that combines fractal theory, Dempster-Shafer evidence theory, and the analytic hierarchy process (AHP) to classify objects with incomplete information is proposed. The basic probability assignment (BPA), or belief function, is modelled by a conductivity function, and the weight of each BPA is determined by AHP. Finally, the collected data are discounted with the weights. The feasibility and validity of the proposed model are verified on an evidential classifier case in which sensory data are incomplete and collected from multiple levels of granularity. The proposed fusion algorithm offers not only efficient modelling of uncertain information but also efficient combination of it.
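    The sketch below shows one common way AHP weights are derived from a pairwise comparison matrix (normalised principal eigenvector plus a consistency check); the matrix values are illustrative only, and the fractal/conductivity-based BPA construction of the paper is not reproduced.

    import numpy as np

    # Hypothetical 3x3 AHP pairwise comparison matrix for three evidence sources.
    A = np.array([[1.0, 3.0, 5.0],
                  [1/3, 1.0, 2.0],
                  [1/5, 1/2, 1.0]])

    eigvals, eigvecs = np.linalg.eig(A)
    k = np.argmax(np.real(eigvals))
    w = np.real(eigvecs[:, k])
    w = w / w.sum()                    # priority weights, used to discount each BPA

    lam_max = np.real(eigvals[k])
    ci = (lam_max - 3) / (3 - 1)       # consistency index for n = 3
    cr = ci / 0.58                     # random index RI = 0.58 for n = 3
    print("weights:", w.round(3), "consistency ratio:", round(cr, 3))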

    Can we verify and intrinsically validate risk assessment results? What progress is being made to increase QRA trustworthiness?

    The purpose of a risk assessment is to decide whether the risk of a given situation is acceptable and, if not, how it can be reduced to a tolerable level. For many cases this can be done in a semi-quantitative fashion; more complex or problematic cases require a quantitative approach. Anybody who has been involved in such a study is aware of the difficulties and pitfalls. Despite proven software, many parameter choices must be made and many uncertainties remain. The thoroughness of the study can make quite a difference in the result, and independent analysts can arrive at results that differ by orders of magnitude, especially if uncertainties are not included. Because important decisions on capital projects always have proponents and opponents, there is often a tense situation in which conflict is looming. The paper first briefly reviews a standard procedure introduced for safety cases on products that must more or less guarantee that the risk of use is below a certain value. Next, the various approaches to dealing with uncertainties in a quantitative risk assessment and the follow-on decision process are discussed. Over the last few years several new developments have been made to gain, to a certain extent, a hold on so-called deep uncertainty. Expert elicitation and its limitations are another aspect. The paper concludes with some practical recommendations.

    Belief Evolution Network-based Probability Transformation and Fusion

    Smets proposed the Pignistic Probability Transformation (PPT) as the decision layer of the Transferable Belief Model (TBM), arguing that when no further information is available, a decision has to be made using a probability mass function (PMF). In this paper, the Belief Evolution Network (BEN) and the full causality function are proposed by introducing causality into the Hierarchical Hypothesis Space (HHS). Based on the BEN, we interpret the PPT from an information fusion view and propose a new probability transformation (PT) method called the Full Causality Probability Transformation (FCPT), which performs better under bi-criteria evaluation. In addition, we heuristically propose a new probability fusion method based on the FCPT. Compared with the Dempster Rule of Combination (DRC), the proposed method yields more reasonable results when fusing identical evidence.
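    For reference, the classical pignistic transformation that the paper builds on distributes each focal mass evenly over the elements of its focal set; a minimal sketch is given below (the proposed FCPT and BEN themselves are not reproduced).

    def pignistic(bpa):
        """BetP(x) = sum over focal sets A containing x of m(A) / |A|, for a normalised BPA."""
        betp = {}
        for focal, mass in bpa.items():
            for x in focal:
                betp[x] = betp.get(x, 0.0) + mass / len(focal)
        return betp

    m = {frozenset("a"): 0.5, frozenset("ab"): 0.3, frozenset("abc"): 0.2}
    print(pignistic(m))  # {'a': 0.717, 'b': 0.217, 'c': 0.067} (approximately)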

    Informational Paradigm, management of uncertainty and theoretical formalisms in the clustering framework: A review

    Fifty years have gone by since the publication of the first paper on clustering based on fuzzy set theory. In 1965, L.A. Zadeh published “Fuzzy Sets” [335]. After only one year, the first effects of this seminal paper began to emerge with the pioneering paper on clustering by Bellman, Kalaba, and Zadeh [33], in which they proposed a prototypal clustering algorithm based on fuzzy set theory.