395 research outputs found

    Dynamic Fuzzy Rule Interpolation


    Improving the geospatial consistency of digital libraries metadata

    Consistency is an essential aspect of metadata quality. Inconsistent metadata records are harmful: given a themed query, the set of retrieved metadata records may contain descriptions of unrelated or irrelevant resources, and may even omit resources whose inclusion would seem obvious. The problem is worse when the description of the location is inconsistent: inconsistent spatial descriptions may render geographical resources invisible or hidden, so that they cannot be retrieved by means of spatially themed queries. Therefore, ensuring spatial consistency should be a primary goal when reusing, sharing and developing georeferenced digital collections. We present a methodology able to detect geospatial inconsistencies in metadata collections, based on the combination of spatial ranking, reverse geocoding, geographic knowledge organization systems and information-retrieval techniques. This methodology has been applied to a collection of metadata records describing maps and atlases belonging to the Library of Congress. The proposed approach was able to automatically identify inconsistent metadata records (870 out of 10,575) and propose fixes for most of them (91.5%). These results support the ability of the proposed methodology to assess the impact of spatial inconsistency on the retrievability and visibility of metadata records and to improve their spatial consistency.
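The reverse-geocoding consistency check described in the abstract can be sketched as follows; the mini-gazetteer, the record layout, and all names here are illustrative assumptions, not the paper's actual data or API:

```python
# Toy gazetteer: place name -> bounding box (min_lon, min_lat, max_lon, max_lat).
GAZETTEER = {
    "France": (-5.1, 41.3, 9.6, 51.1),
    "Spain": (-9.3, 36.0, 3.3, 43.8),
}

def reverse_geocode(lon, lat):
    """Return the first gazetteer place whose bounding box contains the point."""
    for name, (x0, y0, x1, y1) in GAZETTEER.items():
        if x0 <= lon <= x1 and y0 <= lat <= y1:
            return name
    return None

def check_record(record):
    """Flag a record as inconsistent when the place name in its metadata
    disagrees with the reverse-geocoded name of its centroid; the geocoded
    name doubles as the proposed fix."""
    geocoded = reverse_geocode(*record["centroid"])
    return geocoded == record["place_name"], geocoded

record = {"place_name": "Spain", "centroid": (2.3, 48.9)}  # centroid lies in France
ok, suggested_fix = check_record(record)
print(ok, suggested_fix)  # → False France
```

A production system would of course query a real gazetteer service and handle overlapping or nested regions; this sketch only shows the shape of the inconsistency test and the fix proposal.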

    A New Similarity Measure between Intuitionistic Fuzzy Sets and Its Application to Pattern Recognition

    As a generalization of the ordinary fuzzy set, the concept of the intuitionistic fuzzy set (IFS), characterized by both a membership degree and a nonmembership degree, is a more flexible way to cope with uncertainty. Similarity measures of intuitionistic fuzzy sets indicate the degree of similarity between intuitionistic fuzzy sets. Although many similarity measures for intuitionistic fuzzy sets have been proposed in previous studies, some of them fail to satisfy the axioms of similarity or yield counterintuitive results. In this paper, a new similarity measure and a weighted similarity measure between IFSs are proposed. It is proved that the proposed similarity measures satisfy the properties of the axiomatic definition for similarity measures. Comparison between the previous similarity measures and the proposed similarity measure indicates that the proposed measure yields no counterintuitive cases. Moreover, it is demonstrated that the proposed similarity measure is capable of discriminating differences between patterns.
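A widely used distance-based similarity measure for IFSs can be sketched as below; note this is a standard textbook measure, not necessarily the one proposed in the paper, whose exact formula the abstract does not give:

```python
def ifs_similarity(a, b):
    """Similarity between two intuitionistic fuzzy sets, each given as a list
    of (membership, nonmembership) pairs over the same finite universe.
    Returns a value in [0, 1]; 1 means the sets coincide."""
    assert len(a) == len(b)
    total = sum(abs(ma - mb) + abs(na - nb)
                for (ma, na), (mb, nb) in zip(a, b))
    return 1.0 - total / (2 * len(a))

A = [(0.7, 0.2), (0.4, 0.5)]
B = [(0.6, 0.3), (0.4, 0.5)]
print(ifs_similarity(A, B))  # ≈ 0.95
```

The counterintuitive cases the abstract mentions typically arise when a measure maps clearly different pairs to the same similarity value; measures like this one are evaluated against such pattern-recognition test cases.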

    Linguistic probability theory

    In recent years, probabilistic knowledge-based systems such as Bayesian networks and influence diagrams have come to the fore as a means of representing and reasoning about complex real-world situations. Although some of the probabilities used in these models may be obtained statistically, where this is impossible or simply inconvenient, modellers rely on expert knowledge. Experts, however, typically find it difficult to specify exact probabilities, and conventional representations cannot reflect any uncertainty they may have. In this way, the use of conventional point probabilities can damage the accuracy, robustness and interpretability of acquired models. With these concerns in mind, psychometric researchers have demonstrated that fuzzy numbers are good candidates for representing the inherent vagueness of probability estimates, and the fuzzy community has responded with two distinct theories of fuzzy probabilities. This thesis, however, identifies formal and presentational problems with these theories which render them unable to represent even very simple scenarios. This analysis leads to the development of a novel and intuitively appealing alternative: a theory of linguistic probabilities patterned after the standard Kolmogorov axioms of probability theory. Since fuzzy numbers lack algebraic inverses, the resulting theory is weaker than, but generalises, its classical counterpart. Nevertheless, it is demonstrated that analogues of classical probabilistic concepts such as conditional probability and random variables can be constructed. In the classical theory, representation theorems mean that most of the time the distinction between mass/density distributions and probability measures can be ignored. Similar results are proven for linguistic probabilities. From these results it is shown that directed acyclic graphs annotated with linguistic probabilities (under certain identified conditions) represent systems of linguistic random variables.
It is then demonstrated that these linguistic Bayesian networks can utilise adapted best-of-breed Bayesian network algorithms (junction-tree-based inference and Bayes' ball irrelevancy calculation). These algorithms are implemented in ARBOR, an interactive design, editing and querying tool for linguistic Bayesian networks. To explore the applications of these techniques, a realistic example drawn from the domain of forensic statistics is developed. In this domain the knowledge engineering problems cited above are especially pronounced, and expert estimates are commonplace. Moreover, robust conclusions are of unusually critical importance. An analysis of the resulting linguistic Bayesian network for assessing evidential support in glass-transfer scenarios highlights the potential utility of the approach.
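The claim that fuzzy numbers lack algebraic inverses, which makes the linguistic theory strictly weaker than classical probability, can be illustrated with interval-style triangular fuzzy arithmetic; the `TFN` class below is a standard construction assumed here purely for illustration, not the thesis's own representation:

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class TFN:
    """Triangular fuzzy number with support [lo, hi] and peak at mode."""
    lo: float
    mode: float
    hi: float

    def __add__(self, other):
        return TFN(self.lo + other.lo, self.mode + other.mode, self.hi + other.hi)

    def __sub__(self, other):
        # Interval-style subtraction: pessimistic on both endpoints.
        return TFN(self.lo - other.hi, self.mode - other.mode, self.hi - other.lo)

a = TFN(0.2, 0.3, 0.4)
b = TFN(0.1, 0.2, 0.3)
c = (a + b) - b
# c keeps a's mode but has a wider support, roughly (0.0, 0.3, 0.6):
# subtraction does not undo addition, so fuzzy numbers have no additive inverses.
print(c)
```

Because `(a + b) - b` only widens rather than recovering `a`, identities that classical probability theory relies on (e.g. solving for an unknown probability by subtraction) are unavailable, which is why the linguistic theory must be built as a weaker generalisation.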

    Analytic Case Study Using Unsupervised Event Detection in Multivariate Time Series Data

    Analysis of cyber-physical systems (CPS) has emerged as a critical domain for providing US Air Force and Space Force leadership decision advantage in air, space, and cyberspace. Legacy methods have been outpaced by evolving battlespaces and global peer-level challengers. Automation offers one way to reduce the time that analysis currently takes. This thesis presents an event detection automation system (EDAS) which utilizes deep learning models, distance metrics, and static thresholding to detect events. The EDAS automation is evaluated with a two-part case study of CPS domain experts. In Part 1, participants use current methods for CPS analysis: after a qualitative pre-survey, they are tasked with annotating events in their natural setting. In Part 2, participants perform annotation with the assistance of EDAS's pre-annotations. Results from both parts exhibit low inter-coder agreement for both human-derived and automation-assisted event annotations. Qualitative analysis of the survey results showed low trust and confidence in the event detection automation. One interpretation of this low confidence is that the low inter-coder agreement indicates the humans do not share the same idea of what an annotation product should be.
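The distance-plus-static-threshold component of such a detector might look like the sketch below; the abstract names the ingredients but not the exact formulation, so the sliding-window scheme is an illustrative assumption and the deep-learning feature extractor is omitted:

```python
import numpy as np

def detect_events(series, window=10, threshold=3.0):
    """Flag time steps in a multivariate series whose Euclidean distance from
    the mean of the preceding window exceeds a static threshold, expressed in
    units of the window's (per-dimension averaged) standard deviation."""
    events = []
    for t in range(window, len(series)):
        ref = series[t - window:t]
        mu = ref.mean(axis=0)
        sigma = ref.std(axis=0).mean() + 1e-9  # guard against flat windows
        if np.linalg.norm(series[t] - mu) > threshold * sigma:
            events.append(t)
    return events

series = np.zeros((100, 2))
series[50] = 10.0  # synthetic anomaly in an otherwise flat signal
print(detect_events(series))  # → [50]
```

In the thesis's pipeline a learned model would supply the features being compared; the point of the sketch is only that a fixed threshold on a distance metric yields candidate annotations for the experts to review.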