Fuzzy entropy from weak fuzzy subsethood measures
In this paper, we propose a new construction method for fuzzy and weak fuzzy subsethood measures based on the aggregation of implication operators. We study the properties that the implication operators must satisfy in order to construct these measures. We also show the relationship between fuzzy entropy and the weak fuzzy subsethood measures constructed by our method.
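The construction described in this abstract can be illustrated with a minimal sketch. The concrete choices below are assumptions, not the paper's definitions: discrete fuzzy sets given as membership lists, the Łukasiewicz implication aggregated by the arithmetic mean, and the classical Kosko-style link between entropy and subsethood, E(A) = S(A ∪ A^c, A ∩ A^c).

```python
# Illustrative sketch (assumed choices, not the paper's construction):
# a subsethood measure built by aggregating an implication operator
# pointwise, and a fuzzy entropy derived from it in Kosko's style.

def luk_implication(a, b):
    """Lukasiewicz implication I(a, b) = min(1, 1 - a + b)."""
    return min(1.0, 1.0 - a + b)

def subsethood(A, B, implication=luk_implication):
    """Degree to which fuzzy set A is a subset of B: the mean of
    I(A(x), B(x)) over a finite universe (A, B: membership lists)."""
    return sum(implication(a, b) for a, b in zip(A, B)) / len(A)

def fuzzy_entropy(A):
    """Entropy as the subsethood of A-union-complement in
    A-intersection-complement (Kosko's formulation)."""
    union = [max(a, 1.0 - a) for a in A]          # A or not-A
    intersection = [min(a, 1.0 - a) for a in A]   # A and not-A
    return subsethood(union, intersection)

print(fuzzy_entropy([0.0, 1.0, 1.0]))  # crisp set: entropy 0.0
print(fuzzy_entropy([0.5, 0.5]))       # maximally fuzzy: entropy 1.0
```

A crisp set yields entropy 0 and the maximally fuzzy set (all grades 0.5) yields entropy 1, the two boundary conditions any fuzzy entropy must satisfy.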
Construction of interval-valued fuzzy preference relations from ignorance functions and fuzzy preference relations. Application to decision making
The file attached to this record is the author's pre-print. The publisher's version of record can be found by following the DOI link.
Pre-aggregation functions: construction and an application
In this work we introduce the notion of pre-aggregation function. Such a function satisfies the same boundary conditions as an aggregation function but, instead of requiring monotonicity, only requires monotonicity along some fixed direction (directional monotonicity). We present some examples of such functions and propose three different methods to build pre-aggregation functions. We show experimentally that, in fuzzy rule-based classification systems, one of these methods, namely the one based on the Choquet integral with the product replaced by other aggregation functions, improves the results when the minimum or the Hamacher product t-norm is used in the construction, compared with the fuzzy reasoning methods obtained using two classical averaging operators, the maximum and the Choquet integral.

This work was supported in part by the Spanish Ministry of Science and Technology under projects TIN2008-06681-C06-01, TIN2010-15055, TIN2013-40765-P and TIN2011-29520.
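The Choquet-based construction mentioned in the abstract can be sketched as follows. The generalization replaces the product between each increment and the measure of the corresponding level set by a t-norm; the specific fuzzy measure used below, a symmetric power measure, is an illustrative assumption, not one taken from the paper.

```python
# Sketch (with assumed details): a Choquet-like pre-aggregation
# function where the product in the discrete Choquet integral is
# replaced by a t-norm (e.g. minimum or Hamacher product).

def choquet_like(x, t_norm, mu):
    """Sum over inputs sorted ascending of T(increment, measure of
    the level set), generalizing the discrete Choquet integral."""
    n = len(x)
    order = sorted(range(n), key=lambda i: x[i])   # ascending order
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        level_set = order[k:]                      # indices with x >= x[i]
        total += t_norm(x[i] - prev, mu(level_set, n))
        prev = x[i]
    return total

def t_min(a, b):
    """Minimum t-norm."""
    return min(a, b)

def hamacher_product(a, b):
    """Hamacher product t-norm: ab / (a + b - ab), with T(0, 0) = 0."""
    return 0.0 if a == b == 0.0 else (a * b) / (a + b - a * b)

def power_measure(subset, n, q=2):
    """Illustrative symmetric fuzzy measure (|A|/n)^q -- an assumption."""
    return (len(subset) / n) ** q

print(choquet_like([0.2, 0.5, 0.9], t_min, power_measure))
print(choquet_like([0.2, 0.5, 0.9], hamacher_product, power_measure))
```

With the minimum t-norm the result is not monotone in general, but it remains monotone along the direction (1, ..., 1) and keeps the aggregation boundary conditions, which is exactly the weakening that defines a pre-aggregation function.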
T-Norms Driven Loss Functions for Machine Learning

Neural-symbolic approaches have recently gained popularity to inject prior knowledge into a learner without requiring it to induce this knowledge from data. These approaches can potentially learn competitive solutions with a significant reduction of the amount of supervised data. A large class of neural-symbolic approaches is based on First-Order Logic to represent prior knowledge, relaxed to a differentiable form using fuzzy logic. This paper shows that the loss function expressing these neural-symbolic learning tasks can be unambiguously determined given the selection of a t-norm generator. When restricted to supervised learning, the presented theoretical apparatus provides a clean justification of the popular cross-entropy loss, which has been shown to provide faster convergence and to reduce the vanishing gradient problem in very deep structures. However, the proposed learning formulation extends the advantages of the cross-entropy loss to the general knowledge that can be represented by a neural-symbolic method. Therefore, the methodology allows the development of a novel class of loss functions, which are shown in the experimental results to lead to faster convergence rates than the approaches previously proposed in the literature.
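The core idea, that a t-norm's additive generator determines a loss, can be sketched in a few lines. The specifics below are assumptions for illustration: truth degrees are given directly as probabilities of the supervised constraints, and the loss for a conjunction of constraints is the sum of the generator applied to each truth degree. For the product t-norm the generator is g(p) = -log(p), which recovers the cross-entropy loss the abstract refers to.

```python
import math

# Sketch (assumed setup): a loss derived from a t-norm additive
# generator g, so that minimizing the loss maximizes the t-norm
# aggregation of the constraint truth degrees.

def generator_loss(truth_degrees, g):
    """Loss for a conjunction of constraints: sum of g(p) over the
    truth degrees p of the individual constraints."""
    return sum(g(p) for p in truth_degrees)

def g_product(p):
    """Additive generator of the product t-norm: recovers cross-entropy."""
    return -math.log(p)

def g_lukasiewicz(p):
    """Additive generator of the Lukasiewicz t-norm: a linear penalty."""
    return 1.0 - p

probs = [0.9, 0.8]                         # truth degrees of two constraints
print(generator_loss(probs, g_product))    # equals -log(0.9 * 0.8)
print(generator_loss(probs, g_lukasiewicz))
```

Swapping the generator swaps the loss family while leaving the learning formulation untouched, which is the sense in which the loss is "unambiguously determined" by the choice of generator.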
Fuzzy regions: adding subregions and the impact on surface and distance calculation
In the concept of fuzzy regions we introduced before, a region was considered to be a fuzzy set of points, each having its own membership grade. While this allows the modelling of regions in which points only partly belong to the region, it has the downside that all points are considered independently, which is too loose a restriction for some situations: the model is not able to express the fact that some points may be linked together. In this contribution, we propose an extension to the model so that points can be related to one another. It permits the user to specify, for instance, points or even (sub)regions within the fuzzy region that are linked together: they all belong to the region to the same extent at the same time. By letting the user specify such subregions, the accuracy of the model can be increased (the model can match the real situation better) while at the same time decreasing the fuzziness: if points are known to be related, there is no need to consider them independently. As an example, consider the use of such a fuzzy region to represent a lake with a variable water level: as the water level rises, a set of points becomes flooded; it is interesting to represent this set of points as a subset of the region, as these points are related (the same can be done for different water levels). The impact of this extension on both surface area calculation and distance measurement is considered, and new appropriate definitions are introduced.
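The surface-area side of this model can be sketched on a unit grid. The representation below is an assumption for illustration, not the paper's definitions: a fuzzy region is a list of loose points with individual grades plus subregions whose points share a single grade, and the area is the sum of membership grades times the area each cell represents.

```python
# Sketch (assumed representation): a fuzzy region as loose points
# plus linked subregions sharing one membership grade, with a
# surface-area estimate on a grid of equal-sized cells.

def fuzzy_surface_area(loose_points, subregions, cell_area=1.0):
    """loose_points: list of (point, membership);
    subregions: list of (list_of_points, shared_membership).
    Area = sum of membership * cell_area over all points; every point
    in a subregion contributes at the same shared grade."""
    area = sum(mu for _, mu in loose_points)
    for points, mu in subregions:
        area += mu * len(points)
    return cell_area * area

# Lake example from the abstract: a core at grade 1.0 plus a band of
# cells that flood together as the water rises, linked at grade 0.6.
core = [((x, y), 1.0) for x in range(2) for y in range(2)]  # 4 cells
flood_band = ([(2, y) for y in range(3)], 0.6)              # 3 linked cells

print(fuzzy_surface_area(core, [flood_band]))  # 4*1.0 + 3*0.6
```

Treating the flood band as one linked subregion captures the constraint that its cells are submerged together: a single grade describes the whole band, rather than three independent grades that could be interpreted as partially inconsistent situations.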