
    Extracting Takagi-Sugeno fuzzy rules with interpretable submodels via regularization of linguistic modifiers

    In this paper, a method for constructing Takagi-Sugeno (TS) fuzzy systems from data is proposed with the objective of preserving TS submodel comprehensibility, in which linguistic modifiers are suggested to characterize the fuzzy sets. A useful property of the proposed linguistic modifiers is that they can broaden the cores of fuzzy sets while contracting the overlaps of adjoining membership functions (MFs) during identification of fuzzy systems from data. As a result, the identified TS submodels tend to dominate the system behavior by automatically matching the global model (GM) in the corresponding subareas, which leads to good TS model interpretability while producing a distinguishable input space partitioning. However, GM accuracy and model interpretability are conflicting modeling objectives: improving the interpretability of fuzzy models generally degrades their GM performance, and vice versa. Hence, one challenging problem is how to construct a TS fuzzy model with not only good global performance but also good submodel interpretability. To achieve a good tradeoff between GM performance and submodel interpretability, a regularization learning algorithm is presented in which the GM objective function is combined with a local model objective function defined in terms of an extended index of fuzziness of the identified MFs. Moreover, a parsimonious rule base is obtained by adopting a QR decomposition method to select the important fuzzy rules and remove the redundant ones. Experimental studies have shown that the TS models identified by the suggested method possess good submodel interpretability and satisfactory GM performance with parsimonious rule bases. © 2006 IEEE
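
    As a concrete, hedged illustration of the regularized objective described above, the following Python sketch combines the global squared error of a first-order TS model with a generic (Kaufmann-style) linear index of fuzziness of the antecedent membership functions. It is only a sketch under assumed Gaussian MFs: the paper's linguistic modifiers, its extended index of fuzziness, and the QR-based rule selection are not reproduced here, and every function name is illustrative.

```python
import numpy as np

def gaussian_mf(x, c, sigma):
    """Type-1 Gaussian membership function."""
    return np.exp(-0.5 * ((x - c) / sigma) ** 2)

def ts_output(x, centers, sigmas, coeffs):
    """First-order TS model output: normalized rule firing strengths weight
    the local linear submodels y_i = a_i^T x + b_i.
    x: (n_samples, n_inputs); centers, sigmas: (n_rules, n_inputs);
    coeffs: (n_rules, n_inputs + 1), last column is the bias b_i."""
    w = np.ones((x.shape[0], centers.shape[0]))
    for i, (c, s) in enumerate(zip(centers, sigmas)):
        w[:, i] = np.prod(gaussian_mf(x, c, s), axis=1)   # rule firing strength
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    local = x @ coeffs[:, :-1].T + coeffs[:, -1]
    return np.sum(w * local, axis=1)

def fuzziness_index(mu):
    """Generic (Kaufmann-style) linear index of fuzziness of sampled grades:
    0 for a crisp set, 1 when every grade equals 0.5. Stands in for the
    paper's extended index, which is not reproduced here."""
    return 2.0 * np.mean(np.minimum(mu, 1.0 - mu))

def regularized_loss(centers, sigmas, coeffs, x, y, lam, grid):
    """Global-model MSE plus a fuzziness penalty on the per-input MFs,
    mimicking the accuracy/interpretability trade-off described above."""
    global_err = np.mean((y - ts_output(x, centers, sigmas, coeffs)) ** 2)
    fuzz = np.mean([fuzziness_index(gaussian_mf(grid, c, s))
                    for c, s in zip(centers.ravel(), sigmas.ravel())])
    return global_err + lam * fuzz
```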

    Automatic synthesis of fuzzy systems: An evolutionary overview with a genetic programming perspective

    Studies in Evolutionary Fuzzy Systems (EFSs) began in the 1990s and have developed rapidly since then, with applications to areas such as pattern recognition, curve-fitting and regression, forecasting, and control. An EFS results from the combination of a Fuzzy Inference System (FIS) with an Evolutionary Algorithm (EA). This relationship can be established for multiple purposes: fine-tuning of the FIS's parameters, selection of fuzzy rules, learning a rule base or membership functions from scratch, and so forth. Each facet of this relationship creates a strand in the literature, such as membership function fine-tuning or fuzzy rule-based learning, and the purpose here is to outline some of what has been done in each aspect. Special focus is given to Genetic Programming-based EFSs by providing a taxonomy of the main architectures available, as well as by pointing out the gaps that still prevail in the literature. The concluding remarks address further topics of current research and trends, such as interpretability analysis, multiobjective optimization, and synthesis of a FIS through Evolving methods.
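
    As a minimal illustration of the FIS-plus-EA combination this survey covers, the Python sketch below evolves the parameters of a toy zero-order TS fuzzy system with a crude elitist evolution strategy. It is not one of the Genetic Programming architectures from the taxonomy; the rule count, Gaussian antecedents, and mutation scheme are all assumptions made for brevity.

```python
import numpy as np

rng = np.random.default_rng(0)

def fis_predict(x, params):
    """Toy zero-order TS FIS: Gaussian antecedents, constant consequents.
    params = (centers, sigmas, consequents), one entry per rule; x is 1-D."""
    c, s, q = params
    w = np.exp(-0.5 * ((x[:, None] - c) / s) ** 2)     # (n_samples, n_rules)
    w /= w.sum(axis=1, keepdims=True) + 1e-12
    return w @ q

def evolve(x, y, n_rules=5, generations=200, pop=20, step=0.1):
    """Crude (1+lambda)-style evolutionary tuning of all FIS parameters toward low MSE."""
    theta = np.concatenate([np.linspace(x.min(), x.max(), n_rules),           # centers
                            np.full(n_rules, (x.max() - x.min()) / n_rules),  # sigmas
                            np.zeros(n_rules)])                               # consequents
    unpack = lambda t: (t[:n_rules], np.abs(t[n_rules:2 * n_rules]) + 1e-3, t[2 * n_rules:])
    mse = lambda t: np.mean((y - fis_predict(x, unpack(t))) ** 2)
    for _ in range(generations):
        offspring = theta + step * rng.standard_normal((pop, theta.size))
        best = offspring[int(np.argmin([mse(o) for o in offspring]))]
        if mse(best) < mse(theta):          # elitist replacement: keep the parent if better
            theta = best
    return unpack(theta)

# usage: approximate a noisy sine with the evolved FIS
x = np.linspace(0, 2 * np.pi, 200)
y = np.sin(x) + 0.05 * rng.standard_normal(x.size)
centers, sigmas, consequents = evolve(x, y)
```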

    A Linear General Type-2 Fuzzy Logic Based Computing With Words Approach for Realising an Ambient Intelligent Platform for Cooking Recipes Recommendation

    This paper addresses the need to enhance transparency in ambient intelligent environments by developing more natural ways of interaction, which allow users to communicate easily with the hidden networked devices rather than embedding obtrusive tablets and computing equipment throughout their surroundings. The ambient intelligence vision aims to realize digital environments that adapt to users in a responsive, transparent, and context-aware manner in order to enhance users' comfort. It is, therefore, appropriate to employ the paradigm of "computing with words" (CWWs), which aims to mimic the ability of humans to communicate transparently and manipulate perceptions via words. One of the daily activities that would increase the comfort levels of users (especially people with disabilities) is cooking and performing tasks in the kitchen. Existing approaches to food preparation, cooking, and recipe recommendation stress healthy eating and balanced meal choices while providing limited personalization features through the use of intrusive user interfaces. Herein, we present an application which transparently interacts with users, based on a novel CWWs approach, in order to predict a recipe's difficulty level and to recommend an appropriate recipe depending on the user's mood, appetite, and spare time. The proposed CWWs framework is based on linear general type-2 (LGT2) fuzzy sets, which linearly quantify the linguistic modifiers in the third dimension in order to better represent user perceptions while avoiding the drawbacks of type-1 and interval type-2 fuzzy sets. The LGT2-based CWWs framework can learn from user experiences and adapt to them in order to establish more natural human-machine interaction. We have carried out numerous real-world experiments with various users in the University of Essex intelligent flat. The comparison analysis between interval type-2 fuzzy sets and LGT2 fuzzy sets demonstrates up to 55.43% improvement when general type-2 fuzzy sets are used instead of interval type-2 fuzzy sets. The quantitative and qualitative analyses both show the success of the system in providing a natural interaction with users for recommending food recipes: the quantitative analysis shows the high statistical correlation between the system output and the users' feedback, while the qualitative analysis presents social science
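
    To make the "linear" third dimension tangible, the Python sketch below models a general type-2 fuzzy set whose secondary membership is a triangular (piecewise-linear) function over the footprint of uncertainty. The FOU for the word "warm" and every parameter are made up for illustration; this is not the paper's LGT2 formulation or its CWWs engine.

```python
import numpy as np

def tri(x, a, b, c):
    """Type-1 triangular membership with feet a, c and apex b."""
    return np.clip(np.minimum((x - a) / (b - a + 1e-12),
                              (c - x) / (c - b + 1e-12)), 0.0, 1.0)

def lgt2_secondary(x, u, lower, upper, apex_frac=0.5):
    """Secondary grade of a general type-2 set whose secondary MF is *linear*
    (triangular) over the footprint of uncertainty [lower(x), upper(x)],
    peaking apex_frac of the way across it."""
    lo, hi = lower(x), upper(x)
    if hi <= lo or not (lo <= u <= hi):
        return 0.0
    apex = lo + apex_frac * (hi - lo)
    return float(tri(np.array(u), lo, apex, hi))

# illustrative FOU for the word "warm" on a 0-40 C scale (made-up parameters)
lower_warm = lambda x: 0.8 * float(tri(np.array(x), 18, 25, 32))
upper_warm = lambda x: float(tri(np.array(x), 15, 25, 35))

print(lgt2_secondary(27.0, 0.6, lower_warm, upper_warm))   # secondary grade at (x=27, u=0.6)
```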

    Constructing accurate and parsimonious fuzzy models with distinguishable fuzzy sets based on an entropy measure

    Parsimony is very important in system modeling as it is closely related to model interpretability. In this paper, a scheme for constructing accurate and parsimonious fuzzy models by generating distinguishable fuzzy sets is proposed, in which the distinguishability of the input space partitioning is measured by a so-called "local" entropy. By maximizing this entropy measure, the optimal number of merged fuzzy sets with good distinguishability can be obtained, which leads to a parsimonious input space partitioning while preserving the information of the original fuzzy sets as much as possible. Unlike existing merging algorithms, the proposed scheme takes into account the information provided by input-output samples to optimize the input space partitioning. Furthermore, the scheme is able to seek a balance between global approximation ability and the distinguishability of the input space partitioning in constructing Takagi-Sugeno (TS) fuzzy models. Experimental results have shown that the scheme produces accurate and parsimonious fuzzy models with distinguishable fuzzy sets. © 2005 Elsevier B.V. All rights reserved.
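
    The paper's "local" entropy measure is not reproduced here, but the following Python sketch illustrates the general mechanism of merging similar fuzzy sets until the partition becomes distinguishable, using a plain Jaccard-style similarity as the merging criterion. The threshold, the Gaussian sets, and the merge rule are assumptions of this sketch.

```python
import numpy as np

def gauss(x, c, s):
    """Type-1 Gaussian membership function with center c and width s."""
    return np.exp(-0.5 * ((x - c) / s) ** 2)

def set_similarity(p1, p2, grid):
    """Jaccard-style similarity of two Gaussian fuzzy sets sampled on a grid."""
    a, b = gauss(grid, *p1), gauss(grid, *p2)
    return np.minimum(a, b).sum() / np.maximum(a, b).sum()

def merge_until_distinguishable(params, grid, threshold=0.5):
    """Greedily merge the most similar pair of (center, sigma) fuzzy sets until
    every pairwise similarity is below `threshold`, i.e. the partition is
    distinguishable, while keeping as much of the original coverage as possible."""
    params = [tuple(p) for p in params]
    while len(params) > 1:
        sims = [(set_similarity(params[i], params[j], grid), i, j)
                for i in range(len(params)) for j in range(i + 1, len(params))]
        sim, i, j = max(sims)
        if sim < threshold:
            break
        (c1, s1), (c2, s2) = params[i], params[j]
        merged = ((c1 + c2) / 2.0, max(s1, s2, abs(c1 - c2) / 2.0))
        params = [p for k, p in enumerate(params) if k not in (i, j)] + [merged]
    return params

grid = np.linspace(0.0, 10.0, 500)
sets = [(2.0, 0.8), (2.6, 0.9), (5.0, 1.0), (8.0, 1.2)]   # (center, sigma) per fuzzy set
print(merge_until_distinguishable(sets, grid))
```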

    Datil: Learning Fuzzy Ontology Datatypes

    Real-world applications using fuzzy ontologies have been increasing in recent years, but the problem of fuzzy ontology learning has not received much attention. While most previous approaches focus on the problem of learning fuzzy subclass axioms, we focus on learning fuzzy datatypes. In particular, we describe the Datil system, an implementation that uses unsupervised clustering algorithms to automatically obtain fuzzy datatypes from different input formats. We also illustrate its practical usefulness with an application: semantic lifestyle profiling.
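
    The Datil system itself is not reproduced here, but the following Python sketch shows the general idea of learning fuzzy datatypes by unsupervised clustering: cluster the values of a numeric attribute and derive trapezoidal datatypes from the cluster centers. The 1-D k-means, the trapezoid construction, and the synthetic "price" attribute are illustrative assumptions, not Datil's exact algorithms.

```python
import numpy as np

def kmeans_1d(values, k, iters=100, seed=0):
    """Plain 1-D k-means; returns the sorted cluster centers."""
    rng = np.random.default_rng(seed)
    centers = rng.choice(values, size=k, replace=False).astype(float)
    for _ in range(iters):
        labels = np.argmin(np.abs(values[:, None] - centers[None, :]), axis=1)
        centers = np.array([values[labels == i].mean() if np.any(labels == i) else centers[i]
                            for i in range(k)])
    return np.sort(centers)

def trapezoids_from_centers(centers, lo, hi):
    """Turn k sorted cluster centers into k trapezoidal fuzzy datatypes (a, b, c, d):
    each datatype is fully true around its own center and ramps linearly to zero
    at the neighbouring centers, forming a partition of [lo, hi]."""
    traps = []
    for i, c in enumerate(centers):
        a = lo if i == 0 else centers[i - 1]
        b = lo if i == 0 else (centers[i - 1] + c) / 2.0
        d = hi if i == len(centers) - 1 else centers[i + 1]
        c_right = hi if i == len(centers) - 1 else (c + centers[i + 1]) / 2.0
        traps.append((float(a), float(b), float(c_right), float(d)))
    return traps

# example: learn "low" / "medium" / "high" fuzzy datatypes for a synthetic price attribute
rng = np.random.default_rng(1)
prices = np.concatenate([rng.normal(m, s, 100) for m, s in [(10, 2), (40, 5), (90, 8)]])
centers = kmeans_1d(prices, k=3)
print(trapezoids_from_centers(centers, prices.min(), prices.max()))
```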

    Qualities, objects, sorts, and other treasures: gold digging in English and Arabic

    In the present monograph, we will deal with questions of lexical typology in the nominal domain. By the term "lexical typology in the nominal domain", we refer to cross-linguistic regularities in the interaction between (a) those areas of the lexicon whose elements are capable of being used in the construction of "referring phrases" or "terms" and (b) the grammatical patterns in which these elements are involved. In the traditional analyses of a language such as English, such phrases are called "nominal phrases". In the study of the lexical aspects of the relevant domain, however, we will not confine ourselves to the investigation of "nouns" and "pronouns" but intend to take into consideration all those parts of speech which systematically alternate with nouns, either as heads or as modifiers of nominal phrases. In particular, this holds true for adjectives both in English and in other Standard European Languages. It is well known that adjectives are often difficult to distinguish from nouns, or that elements with an overt adjectival marker are used interchangeably with nouns, especially in particular semantic fields such as those denoting MATERIALS or NATIONALITIES. That is, throughout this work the expression "lexical typology in the nominal domain" should not be interpreted as "a typology of nouns", but, rather, as the cross-linguistic investigation of lexical areas constitutive for "referring phrases" irrespective of how the parts-of-speech system in a specific language is defined.

    Constructing 3D faces from natural language interface

    This thesis presents a system by which 3D images of human faces can be constructed using a natural language interface. The driving force behind the project was the need to create a system whereby a machine could produce artistic images from verbal or composed descriptions. This research is the first to look at constructing and modifying facial image artwork using a natural language interface. Specialised modules have been developed to control the geometry of 3D polygonal head models in a commercial modeller from natural language descriptions. These modules were produced from research on human physiognomy, 3D modelling techniques and tools, facial modelling, and natural language processing. [Continues.]

    Character Recognition

    Character recognition is one of the pattern recognition technologies most widely used in practical applications. This book presents recent advances relevant to character recognition, from technical topics such as image processing, feature extraction, and classification, to new applications including human-computer interfaces. The goal of this book is to provide a reference source for academic research and for professionals working in the character recognition field.