12 research outputs found

    Some applications of possibilistic mean value, variance, covariance and correlation

    In 2001 we introduced the notions of possibilistic mean value and variance of fuzzy numbers. In this paper we list some works that use these notions. We shall mention some application areas as well.
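
    The possibilistic mean and variance can be illustrated numerically. The sketch below integrates the standard gamma-level-cut definitions for a triangular fuzzy number and notes the resulting closed forms in comments; the concrete numbers (2, 1, 4) are invented for illustration, and the closed forms are the usual ones for the triangular case.

```python
# Possibilistic mean and variance of a triangular fuzzy number
# A = (a, alpha, beta), integrated numerically from the
# gamma-level-cut definitions (midpoint rule).

def level_cut(a, alpha, beta, g):
    """gamma-cut [a1(g), a2(g)] of the triangular fuzzy number."""
    return a - (1 - g) * alpha, a + (1 - g) * beta

def possibilistic_mean(a, alpha, beta, steps=100_000):
    # M(A) = integral_0^1 g * (a1(g) + a2(g)) dg
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        g = (i + 0.5) * h
        a1, a2 = level_cut(a, alpha, beta, g)
        total += g * (a1 + a2) * h
    return total  # closed form: a + (beta - alpha) / 6

def possibilistic_variance(a, alpha, beta, steps=100_000):
    # Var(A) = 1/2 * integral_0^1 g * (a2(g) - a1(g))**2 dg
    h = 1.0 / steps
    total = 0.0
    for i in range(steps):
        g = (i + 0.5) * h
        a1, a2 = level_cut(a, alpha, beta, g)
        total += 0.5 * g * (a2 - a1) ** 2 * h
    return total  # closed form: (alpha + beta)**2 / 24

print(possibilistic_mean(2.0, 1.0, 4.0))      # ~ 2 + 3/6 = 2.5
print(possibilistic_variance(2.0, 1.0, 4.0))  # ~ 25/24 ~ 1.0417
```

    For a triangular number (a, alpha, beta) the mean works out to a + (beta - alpha)/6 and the variance to (alpha + beta)**2/24, which the numeric integrals approximate closely.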

    General set approximation and its logical applications

    Abstract. To approximate sets, a number of theories have appeared over the last decades. Starting from some general theoretical preconditions, the authors give a set of minimum requirements for the lower and upper approximations and define general partial approximation spaces. These spaces are then applied in logical investigations. The main question is what happens in the semantics of first-order logic when approximations of sets are used as the semantic values of predicate parameters instead of the sets themselves as their total interpretations. On the basis of the defined partial interpretations, logical laws relying on this general set-theoretical framework of set approximation are investigated.
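
    As background for the lower and upper approximations discussed above, here is a minimal sketch of the classical (total) case, where the base sets partition the universe; the paper's partial approximation spaces relax exactly these assumptions. The universe, blocks, and target set are toy data.

```python
# Classical (Pawlak) lower and upper approximations of a set X
# with respect to a family of base sets (here, a partition of
# the toy universe {1..6}).

def approximations(blocks, X):
    """Return (lower, upper) approximation of X."""
    X = set(X)
    lower, upper = set(), set()
    for b in map(set, blocks):
        if b <= X:
            lower |= b   # block lies entirely inside X
        if b & X:
            upper |= b   # block meets X
    return lower, upper

blocks = [{1, 2}, {3, 4}, {5, 6}]
X = {1, 2, 3}
lo, up = approximations(blocks, X)
print(sorted(lo))  # [1, 2]
print(sorted(up))  # [1, 2, 3, 4]
```

    In a partial approximation space the base sets need not cover the universe, so even the inclusion of X in its upper approximation can fail; the minimum requirements the authors impose are aimed at keeping the construction well behaved in that setting.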

    Uncertainty Measures in Ordered Information System Based on Approximation Operators

    This paper focuses on constructing uncertainty measures by the pure rough set approach in ordered information systems. Four types of definitions of lower and upper approximations are investigated, together with the corresponding uncertainty measures: accuracy, roughness, approximation quality, approximation accuracy, dependency degree, and importance degree. Theoretical analysis indicates that all four types can be used to evaluate the uncertainty in ordered information systems; in particular, we find that the first and third types are essentially the same. To interpret the approach and aid understanding, experiments on real-life data sets were conducted to test the four types of uncertainty measures. The results show that these measures do indeed capture the uncertainty in ordered information systems.
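
    To make the measures concrete, the following hedged sketch computes one dominance-based pair of approximations (only one of the several definitions the paper compares) and the accuracy, roughness, and approximation-quality measures derived from it; the table of objects and criteria values is invented.

```python
# One dominance-based definition of lower/upper approximations in an
# ordered information system, plus uncertainty measures named in the
# abstract. Objects are described by criteria values where larger is
# better; the data and the operator choice are illustrative only.

table = {
    "o1": (3, 2),
    "o2": (2, 2),
    "o3": (1, 1),
    "o4": (3, 3),
}

def dominating(x):
    """D+(x): objects weakly dominating x on every criterion."""
    vx = table[x]
    return {y for y, vy in table.items()
            if all(a >= b for a, b in zip(vy, vx))}

def lower_upper(X):
    X = set(X)
    lower = {x for x in table if dominating(x) <= X}
    upper = {x for x in table if dominating(x) & X}  # dual of the lower operator
    return lower, upper

X = {"o1", "o4"}                  # e.g. an upward union of decision classes
lo, up = lower_upper(X)
accuracy = len(lo) / len(up)      # |lower| / |upper| = 0.5
roughness = 1 - accuracy          # 0.5
quality = len(lo) / len(table)    # approximation quality = 0.5
print(sorted(lo), sorted(up))     # ['o1', 'o4'] ['o1', 'o2', 'o3', 'o4']
```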

    Eliciting Domain Knowledge in Handwritten Digit Recognition

    Abstract. Pattern recognition methods for complex structured objects such as handwritten characters often have to deal with vast search spaces. Existing techniques, despite significant advancement in the last decade, still face some performance barriers. We believe that additional knowledge about the structure of patterns, elicited from human perception, will help improve recognition performance, especially when it comes to classifying irregular, outlier cases. We propose a framework for the transfer of such knowledge from human experts and show how to incorporate it into the learning process of a recognition system using methods based on rough mereology. We also demonstrate how this knowledge acquisition can be conducted in an interactive manner, with a large dataset of handwritten digits as an example.

    Variable precision rough set theory decision support system: With an application to bank rating prediction

    This dissertation considers the Variable Precision Rough Sets (VPRS) model and its development within a comprehensive software package (decision support system) incorporating methods of resampling and classifier aggregation. The concept of β-reduct aggregation is introduced as a novel approach to classifier aggregation within the VPRS framework. The software is applied to the credit rating prediction problem; in particular, a full exposition of the prediction and classification of Fitch's Individual Bank Strength Ratings (FIBRs) for a number of banks from around the world is presented. The ethos of the developed software was to rely heavily on a simple 'point and click' interface, designed to make a VPRS analysis accessible to an analyst who is not necessarily an expert in the field of VPRS or decision rule based systems. The development of the software also benefited from consultations with managers from one of Europe's leading hedge funds, who gave valuable insight, advice and recommendations on what they considered pertinent issues with regard to data mining, and on what they would like to see from a modern data mining system. The elements within the developed software reflect each stage of the knowledge discovery process, namely pre-processing, feature selection, data mining, interpretation and evaluation. The developed software encompasses three packages: a pre-processing package incorporating some of the latest pre-processing and feature selection methods; a VPRS data mining package based on a novel "vein graph" interface, which presents the analyst with selectable β-reducts over the domain of β; and a third, more advanced VPRS data mining package, which essentially automates the vein graph interface for incorporation into a resampling environment and also implements the introduced aggregated β-reduct, developed to optimise and stabilise the predictive accuracy of a set of decision rules induced from the aggregated β-reduct.
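
    The VPRS model underlying the dissertation relaxes the classical approximations by a majority-inclusion threshold. The sketch below shows β-lower and β-upper approximations in the style of Ziarko's formulation, with beta read as the admissible classification error; the equivalence classes and target set are toy data, and the dissertation's β-reduct aggregation itself is not reproduced.

```python
# VPRS-style beta-approximations (majority inclusion): a block counts
# toward the lower approximation when the conditional probability
# P(X | block) is at least 1 - beta, where beta is the admissible
# classification error. Blocks and target set are toy data.

def vprs_approximations(blocks, X, beta):
    X = set(X)
    lower, upper = set(), set()
    for b in map(set, blocks):
        p = len(b & X) / len(b)   # P(X | block)
        if p >= 1 - beta:
            lower |= b            # block is 'almost' inside X
        if p > beta:
            upper |= b            # overlap exceeds the admissible error
    return lower, upper

blocks = [{1, 2, 3, 4}, {5, 6}, {7, 8, 9, 10}]
X = {1, 2, 3, 5, 7, 8}
lo, up = vprs_approximations(blocks, X, beta=0.3)
print(sorted(lo))  # [1, 2, 3, 4]  (only P = 0.75 >= 0.7)
print(sorted(up))  # [1, 2, 3, 4, 5, 6, 7, 8, 9, 10]  (all blocks have P > 0.3)
```

    With beta = 0 this reduces to the classical lower and upper approximations; increasing beta admits blocks that are only mostly inside X, which is what makes the induced decision rules tolerant of noisy data.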

    Privacy-preserving document similarity detection

    Document similarity detection is an important technique used in many applications. A tool that guarantees the privacy of the documents during comparison would expand the range of areas in which the technique can be applied. The goal of this project is to develop a method for privacy-preserving document similarity detection capable of identifying either semantically or syntactically similar documents. As a result, two methods were designed, implemented, and evaluated. In the first method, a privacy-preserving data comparison protocol was applied for secure comparison; this original protocol was created as part of this thesis. In the second method, a modified private-matching scheme was used. In both methods, natural language processing techniques were utilized to capture the semantic relations between documents. During the testing phase, the first method was found to be too slow for practical application. The second method, by contrast, was fast and effective. It can be used to build a tool for detecting syntactic and semantic similarity in a privacy-preserving way.
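
    The thesis's protocols are not reproduced here; as an illustrative baseline only, this sketch shows a standard non-private building block for syntactic similarity, word shingles compared with the Jaccard coefficient. A privacy-preserving variant would run the set comparison inside a private-matching or private set-intersection protocol rather than in the clear. The sample sentences are invented.

```python
# Non-private baseline for syntactic document similarity: k-word
# shingles compared with the Jaccard coefficient.

def shingles(text, k=3):
    """Set of k-word shingles of a document."""
    words = text.lower().split()
    return {" ".join(words[i:i + k]) for i in range(len(words) - k + 1)}

def jaccard(a, b):
    """|a & b| / |a | b|, the Jaccard similarity of two sets."""
    return len(a & b) / len(a | b) if a | b else 0.0

d1 = "the quick brown fox jumps over the lazy dog"
d2 = "the quick brown fox leaps over the lazy dog"
# 7 shingles each, 4 shared, union of 10 -> similarity 4/10
print(round(jaccard(shingles(d1), shingles(d2)), 3))  # 0.4
```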