
    Bell inequalities in cardinality-based similarity measurement

    In this thesis, a parametric family of cardinality-based similarity measures for ordinary sets (on a finite universe), harbouring numerous well-known similarity measures, is introduced. The Lukasiewicz- and product-transitive members of this family are characterized; their importance derives from a one-to-one correspondence with pseudo-metrics. A parametric family of cardinality-based inclusion measures for ordinary sets (on a finite universe) is also introduced, and its Lukasiewicz- and product-transitivity properties are studied. Fuzzification schemes based on a commutative quasi-copula are then used to transform these similarity and inclusion measures for ordinary sets into similarity and inclusion measures for fuzzy sets on a finite universe, rendering them applicable to graded feature-set representations of objects. One of the main results of this thesis is that transitivity, and hence the corresponding dual metrical interpretation (for similarity measures only), is preserved along this fuzzification process. Remarkably, checking these transitivity properties leads to the very same inequalities in both settings, known as the Bell inequalities. All Bell-type inequalities involving at most four random events, of which no more than two are intersected at the same time, are presented in this work and reformulated in the context of fuzzy scalar cardinalities, leading to related inequalities on commutative conjunctors. It is proven that some of these inequalities are fulfilled for commutative (quasi-)copulas and for the most important families of Archimedean t-norms, and, for each of the inequalities, the parameter values for which the corresponding t-norms satisfy it are identified.
Meta-theorems are presented stating general conditions under which certain inequalities for cardinalities of ordinary sets are preserved under fuzzification, when a scalar approach to fuzzy set cardinality is adopted. The conditions pertain to the commutative conjunctor used to model fuzzy set intersection; in particular, this conjunctor should fulfil a number of Bell-type inequalities. The advantage of these meta-theorems is that repetitious calculations (for example, when checking the transitivity properties of fuzzy similarity measures) can be avoided.
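As a concrete illustration (not code from the thesis), the sketch below takes the Jaccard coefficient, one of the well-known members of the cardinality-based family, and checks the Lukasiewicz-transitivity condition S(A,C) >= T_L(S(A,B), S(B,C)) with T_L(x,y) = max(0, x + y - 1), whose validity corresponds to 1 - S being a pseudo-metric. The function names are illustrative.

```python
def jaccard(a: set, b: set) -> float:
    # Jaccard similarity |A ∩ B| / |A ∪ B|, a well-known member of the
    # cardinality-based family; by convention J(∅, ∅) = 1.
    if not a and not b:
        return 1.0
    return len(a & b) / len(a | b)

def lukasiewicz_transitive(s_ab: float, s_bc: float, s_ac: float,
                           eps: float = 1e-12) -> bool:
    # Lukasiewicz transitivity: S(A,C) >= max(0, S(A,B) + S(B,C) - 1).
    # A small tolerance guards against floating-point round-off.
    return s_ac + eps >= max(0.0, s_ab + s_bc - 1.0)

# Example: three overlapping sets on a finite universe.
A, B, C = {1, 2}, {2, 3}, {3, 4}
ok = lukasiewicz_transitive(jaccard(A, B), jaccard(B, C), jaccard(A, C))
```

Here jaccard(A, B) = jaccard(B, C) = 1/3 and jaccard(A, C) = 0, so the condition reduces to 0 >= max(0, 2/3 - 1) = 0 and holds, consistent with Lukasiewicz-transitivity of the Jaccard coefficient.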

    A kernel-based framework for learning graded relations from data

    Driven by a large number of potential applications in areas like bioinformatics, information retrieval and social network analysis, the problem of inferring relations between pairs of data objects has recently been investigated quite intensively in the machine learning community. Current approaches typically consider datasets containing crisp relations, so that standard classification methods can be adopted. However, in real-world applications, relations between objects such as similarities and preferences are often expressed in a graded manner. A general kernel-based framework for learning relations from data is introduced here. It extends existing approaches because both crisp and graded relations are considered, and it unifies existing approaches because different types of graded relations can be modeled, including symmetric and reciprocal relations. This framework establishes important links between recent developments in fuzzy set theory and machine learning. Its usefulness is demonstrated through various experiments on synthetic and real-world data. (Comment: This work has been submitted to the IEEE for possible publication. Copyright may be transferred without notice, after which this version may no longer be accessible.)
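A minimal sketch of the pairwise-kernel idea underlying such frameworks: a kernel on pairs of objects is built as the Kronecker product of kernels on the individual objects, and graded relation values are fitted with kernel ridge regression. The function names are ours, and this is a toy reduction, not the paper's actual framework.

```python
import numpy as np

def pairwise_kernel(K_u: np.ndarray, K_v: np.ndarray) -> np.ndarray:
    # Kronecker pairwise kernel: k((u, v), (u', v')) = k_U(u, u') * k_V(v, v').
    return np.kron(K_u, K_v)

def fit_krr(K: np.ndarray, y: np.ndarray, lam: float = 1.0) -> np.ndarray:
    # Kernel ridge regression: solve (K + lam * I) alpha = y.
    # y holds graded relation labels in [0, 1], one per training pair.
    n = K.shape[0]
    return np.linalg.solve(K + lam * np.eye(n), y)

def predict(K_test: np.ndarray, alpha: np.ndarray) -> np.ndarray:
    # Predicted graded relation values: rows of K_test evaluate test pairs
    # against the training pairs.
    return K_test @ alpha
```

For symmetric relations (e.g. similarities) one would additionally symmetrize the pairwise kernel; the abstract's point is that such structural constraints can be encoded in the kernel itself.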

    Expression of uncertainty in fuzzy scales based measurements

    Fuzzy scales were introduced as a transition between weak scales and strong scales. Previous studies on fuzzy scales considered only ideal, exact measurement, without any consideration of uncertainty. The goal of this paper is to present a general approach to the management of uncertainty within the context of fuzzy-scale-based measurements. After a short reminder on fuzzy scales, a method to define a probability density function or a possibility function on the indications given by a fuzzy-scale-based measurement is described. Finally, a method based on evidence theory is applied to build simultaneously a probability density function and an associated possibility function.
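The paper's own construction works from fuzzy-scale indications and evidence theory, but the basic bridge between the two uncertainty representations it pairs up can be illustrated with the standard discrete probability-to-possibility transformation of Dubois and Prade, sketched here under the assumption of a finite indication set.

```python
def prob_to_poss(p: list[float]) -> list[float]:
    # Dubois-Prade probability-to-possibility transformation (discrete case):
    # pi(x_i) = sum of all p_j with p_j <= p_i.
    # The most probable indication receives possibility 1, and the resulting
    # possibility distribution dominates the probability distribution.
    return [sum(q for q in p if q <= pi) for pi in p]

# Example: three indications of a fuzzy-scale measurement with
# probabilities 0.5, 0.3, 0.2.
poss = prob_to_poss([0.5, 0.3, 0.2])
```

This is only the textbook transformation, not the paper's simultaneous evidence-theoretic construction; it shows why a possibility function can consistently accompany a probability density on the same indications.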

    On the degree of transitivity of a fuzzy relation

    Considering a family generated from a t-norm T, the degree of T-transitivity of a fuzzy relation R is revisited and proved to coincide with the greatest c for which R is T_c-transitive, T_c being the member of the family with parameter c. This fact gives rise to the study of new families of t-norms to generate different degrees of transitivity with respect to them. The mappings transforming fuzzy relations into transitive fuzzy relations smaller than or equal to the given ones are also studied.
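The basic notions can be made concrete as follows. A fuzzy relation R on a finite set is T-transitive when R(a,c) >= T(R(a,b), R(b,c)) for all a, b, c. The sketch below checks this condition and computes the sup-T transitive closure (the smallest T-transitive relation *containing* R, a standard construction; note the abstract studies the dual problem of transitive relations *below* the given one). Function names are ours.

```python
import itertools
from typing import Callable

TNorm = Callable[[float, float], float]

def is_transitive(R: list[list[float]], tnorm: TNorm) -> bool:
    # T-transitivity: R[a][c] >= T(R[a][b], R[b][c]) for all a, b, c.
    n = len(R)
    return all(R[a][c] >= tnorm(R[a][b], R[b][c])
               for a, b, c in itertools.product(range(n), repeat=3))

def transitive_closure(R: list[list[float]], tnorm: TNorm) -> list[list[float]]:
    # Smallest T-transitive relation containing R, by iterating the
    # sup-T composition until a fixed point is reached.
    n = len(R)
    R = [row[:] for row in R]
    changed = True
    while changed:
        changed = False
        for a, b, c in itertools.product(range(n), repeat=3):
            v = tnorm(R[a][b], R[b][c])
            if v > R[a][c]:
                R[a][c] = v
                changed = True
    return R
```

With `tnorm=min` this recovers the classical min-transitivity; plugging in members T_c of a parametric family and searching for the greatest c that makes `is_transitive` succeed gives a brute-force version of the degree of transitivity discussed in the abstract.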

    On the Bayes-optimality of F-measure maximizers

    The F-measure, which was originally introduced in information retrieval, is nowadays routinely used as a performance metric for problems such as binary classification, multi-label classification, and structured output prediction. Optimizing this measure is a statistically and computationally challenging problem, since no closed-form solution exists. Adopting a decision-theoretic perspective, this article provides a formal and experimental analysis of different approaches for maximizing the F-measure. We start with a Bayes-risk analysis of related loss functions, such as Hamming loss and subset zero-one loss, showing that optimizing such losses as a surrogate of the F-measure leads to a high worst-case regret. Subsequently, we perform a similar type of analysis for F-measure maximizing algorithms, showing that such algorithms are approximate, while relying on additional assumptions regarding the statistical distribution of the binary response variables. Furthermore, we present a new algorithm which is not only computationally efficient but also Bayes-optimal, regardless of the underlying distribution. To this end, the algorithm requires only a quadratic (with respect to the number of binary responses) number of parameters of the joint distribution. We illustrate the practical performance of all analyzed methods by means of experiments with multi-label classification problems.
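To make the optimization target concrete, the following sketch computes the F-measure for binary label vectors and finds the prediction maximizing its *expectation* by exhaustive search, assuming independent Bernoulli labels for tractability. This brute force is exponential in the number of labels and is only an illustration of the decision-theoretic problem, not the paper's efficient Bayes-optimal algorithm.

```python
from itertools import product

def f_measure(y_true: list[int], y_pred: list[int]) -> float:
    # F1 = 2*TP / (|y_true| + |y_pred|); by convention F = 1 when both
    # vectors are all-zero (nothing to retrieve, nothing retrieved).
    tp = sum(1 for t, p in zip(y_true, y_pred) if t and p)
    denom = sum(y_true) + sum(y_pred)
    return 2 * tp / denom if denom else 1.0

def expected_f(marginals: list[float], y_pred: list[int]) -> float:
    # Exact expected F1 under independent Bernoulli labels, by enumerating
    # all 2^m label outcomes (feasible only for small m).
    total = 0.0
    for outcome in product([0, 1], repeat=len(marginals)):
        prob = 1.0
        for o, m in zip(outcome, marginals):
            prob *= m if o else 1.0 - m
        total += prob * f_measure(list(outcome), y_pred)
    return total

def brute_force_maximizer(marginals: list[float]) -> list[int]:
    # Search all 2^m candidate predictions for the expected-F1 maximizer.
    best = max(product([0, 1], repeat=len(marginals)),
               key=lambda y: expected_f(marginals, list(y)))
    return list(best)
```

The independence assumption is precisely the kind of extra distributional assumption the abstract says approximate maximizers rely on; the paper's contribution is an algorithm that avoids it while needing only quadratically many parameters of the joint distribution.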

    Characterization of complex networks: A survey of measurements

    Each complex network (or class of networks) presents specific topological features which characterize its connectivity and highly influence the dynamics of processes executed on the network. The analysis, discrimination, and synthesis of complex networks therefore rely on the use of measurements capable of expressing the most relevant topological features. This article presents a survey of such measurements. It includes general considerations about complex network characterization, a brief review of the principal models, and the presentation of the main existing measurements. Important related issues covered in this work comprise the representation of the evolution of complex networks in terms of trajectories in several measurement spaces, the analysis of the correlations between some of the most traditional measurements, perturbation analysis, as well as the use of multivariate statistics for feature selection and network classification. Depending on the network and the analysis task one has in mind, a specific set of features may be chosen. It is hoped that the present survey will help the proper application and interpretation of measurements. (Comment: A working manuscript with 78 pages and 32 figures; suggestions of measurements for inclusion are welcomed by the author.)
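Two of the most traditional measurements surveyed in such work, vertex degree and the local clustering coefficient, can be sketched on a plain adjacency-set representation of an undirected network. The representation and function names are ours, chosen for brevity.

```python
def degree(adj: dict, v) -> int:
    # Vertex degree: number of neighbours of v (adjacency-set representation).
    return len(adj[v])

def clustering_coefficient(adj: dict, v) -> float:
    # Local clustering coefficient: fraction of pairs of neighbours of v
    # that are themselves connected; 0 by convention if v has < 2 neighbours.
    nbrs = list(adj[v])
    k = len(nbrs)
    if k < 2:
        return 0.0
    links = sum(1 for i in range(k) for j in range(i + 1, k)
                if nbrs[j] in adj[nbrs[i]])
    return 2.0 * links / (k * (k - 1))

def average_degree(adj: dict) -> float:
    # Mean degree over all vertices.
    return sum(len(n) for n in adj.values()) / len(adj)

# Example: a triangle, the densest possible neighbourhood.
triangle = {0: {1, 2}, 1: {0, 2}, 2: {0, 1}}
```

On the triangle every vertex has clustering coefficient 1, while on a path the middle vertex has coefficient 0; such local measurements, aggregated over the network, form the feature vectors the survey uses for discrimination and classification.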