    A New Framework With Similarity Reasoning and Monotone Fuzzy Rule Relabeling for Fuzzy Inference Systems

    A complete and monotonically-ordered fuzzy rule base is necessary to maintain the monotonicity property of a Fuzzy Inference System (FIS). In this paper, a new monotone fuzzy rule relabeling technique for relabeling a non-monotone fuzzy rule base provided by domain experts is proposed. Although a Genetic Algorithm (GA)-based monotone fuzzy rule relabeling technique was investigated in our previous work [7], the optimality of that approach could not be guaranteed. The new fuzzy rule relabeling technique adopts a simple brute-force search and can produce an optimal result. We also formulate a new two-stage framework that encompasses a GA-based rule selection scheme, an optimization-based Similarity Reasoning (SR) scheme, and the proposed monotone fuzzy rule relabeling technique for preserving the monotonicity property of the FIS model. The applicability of the two-stage framework to a real-world problem, i.e., failure mode and effect analysis, is further demonstrated. The results clearly demonstrate the usefulness of the proposed framework.
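In a simplified one-dimensional setting, the brute-force relabeling idea can be sketched as follows: rules are ordered by their antecedent, monotonicity requires non-decreasing output labels along that ordering, and the search returns the monotone labeling with the fewest changes to the expert labels. This is an illustrative sketch under those assumptions, not the paper's implementation; the function name and the 1-D simplification are invented here.

```python
from itertools import product

def relabel_monotone(labels, num_classes):
    """Brute-force search for the monotone relabeling closest to `labels`.

    `labels` holds the expert-assigned output class per rule, with rules
    ordered by antecedent; monotonicity here means non-decreasing labels.
    """
    best, best_cost = None, float("inf")
    for cand in product(range(num_classes), repeat=len(labels)):
        if all(a <= b for a, b in zip(cand, cand[1:])):       # monotone?
            cost = sum(c != l for c, l in zip(cand, labels))  # label changes
            if cost < best_cost:
                best, best_cost = list(cand), cost
    return best, best_cost

# A non-monotone expert rule base with three output classes:
relabeled, changes = relabel_monotone([0, 2, 1, 2], 3)
```

Exhaustive enumeration guarantees the minimal number of label changes, at the price of exponential cost in the number of rules.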

    A new online updating framework for constructing monotonicity-preserving Fuzzy Inference Systems

    In this paper, a new online updating framework for constructing monotonicity-preserving Fuzzy Inference Systems (FISs) is proposed. The framework encompasses an optimization-based Similarity Reasoning (SR) scheme and a new monotone fuzzy rule relabeling technique. A complete and monotonically-ordered fuzzy rule base is necessary to maintain the monotonicity property of an FIS model. The proposed framework allows a monotonicity-preserving FIS model to be constructed even when the fuzzy rules are incomplete and not monotonically ordered. An online feature is introduced to allow the FIS model to be updated over time. We further investigate three useful measures, i.e., the belief, plausibility, and evidential mass measures, inspired by the Dempster-Shafer theory of evidence, to analyze the proposed framework and to give an insight into the outcomes inferred from the FIS model.
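The belief and plausibility measures from Dempster-Shafer theory can be illustrated with a small sketch: belief sums the mass committed to subsets of a hypothesis, while plausibility sums the mass of every focal set that intersects it. The mass assignment below is invented for illustration and is not taken from the paper.

```python
def belief_and_plausibility(masses, hypothesis):
    """Belief sums mass over subsets of the hypothesis; plausibility
    sums mass over every focal set that intersects it."""
    bel = sum(m for s, m in masses.items() if s <= hypothesis)
    pl = sum(m for s, m in masses.items() if s & hypothesis)
    return bel, pl

# Invented mass assignment over a two-label frame of discernment:
masses = {
    frozenset({"low"}): 0.5,
    frozenset({"high"}): 0.2,
    frozenset({"low", "high"}): 0.3,   # unassigned (evidential) mass
}
bel, pl = belief_and_plausibility(masses, frozenset({"low"}))
# belief 0.5 <= "true probability" <= plausibility 0.8
```

The gap between belief and plausibility quantifies the evidential uncertainty left in the inferred outcome.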

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Use of aggregation functions in decision making

    A key component of many decision making processes is the aggregation step, whereby a set of numbers is summarised with a single representative value. This research showed that aggregation functions can provide a mathematical formalism to deal with issues like vagueness and uncertainty, which arise naturally in various decision contexts.
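As a concrete illustration of such an aggregation step (not drawn from the thesis itself), the Ordered Weighted Averaging (OWA) operator is a standard aggregation function whose weights attach to ranked positions rather than to particular criteria, so a single formula spans the range from minimum to maximum:

```python
def owa(values, weights):
    """Ordered Weighted Averaging: weights attach to ranked positions
    (largest value first), not to particular criteria; weights sum to 1."""
    return sum(w * v for w, v in zip(weights, sorted(values, reverse=True)))

scores = [0.9, 0.4, 0.7]                  # criteria scores of one alternative
assert abs(owa(scores, [1, 0, 0]) - 0.9) < 1e-12    # acts as max
assert abs(owa(scores, [0, 0, 1]) - 0.4) < 1e-12    # acts as min
assert abs(owa(scores, [1/3] * 3) - 2/3) < 1e-12    # arithmetic mean
```

Choosing the weight vector tunes how optimistic or pessimistic the summary value is.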

    Annual Report 2013: Faculty of Engineering


    Label Ranking with Probabilistic Models

    This thesis focuses on a particular prediction task, so-called label ranking. In a nutshell, label ranking can be viewed as an extension of the conventional classification problem. Given a query (e.g., from a customer) and a predefined set of candidate labels (e.g., AUDI, BMW, VW), classification asks for a single label (e.g., BMW) as a prediction, whereas label ranking requires a complete ranking of all labels (e.g., BMW > VW > AUDI). Since predictions of this kind are useful in many real-world problems, label ranking methods can be applied in several domains, including information retrieval, customer preference learning, and e-commerce. This thesis presents a selection of label ranking methods that combine machine learning with statistical ranking models. We focus on two statistical ranking models, the Mallows and the Plackett-Luce model, and two machine learning techniques, instance-based learning and generalized linear models.
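The Plackett-Luce model mentioned above assigns each label a positive skill parameter and builds a ranking top-down, choosing each next label with probability proportional to its skill among the labels still available. A minimal sketch, with skill values invented for illustration:

```python
from itertools import permutations

def plackett_luce_prob(ranking, skills):
    """P(ranking) under Plackett-Luce: labels are chosen top-down, each
    with probability proportional to its skill among labels still left."""
    prob, remaining = 1.0, list(ranking)
    for label in ranking:
        prob *= skills[label] / sum(skills[r] for r in remaining)
        remaining.remove(label)
    return prob

skills = {"BMW": 3.0, "VW": 2.0, "Audi": 1.0}          # invented parameters
p = plackett_luce_prob(["BMW", "VW", "Audi"], skills)  # 3/6 * 2/3 * 1 = 1/3
# the probabilities of all six rankings form a distribution
total = sum(plackett_luce_prob(list(r), skills) for r in permutations(skills))
```

Fitting the skill parameters to observed rankings is what the machine learning techniques in the thesis take care of.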

    Monotone Models for Prediction in Data Mining.

    This dissertation studies the incorporation of monotonicity constraints as a type of domain knowledge into a data mining process. Monotonicity constraints are enforced at two stages: data preparation and data modeling. The main contributions of the research are a novel procedure to test the degree of monotonicity of a real data set, a greedy algorithm to transform non-monotone into monotone data, and extended and novel approaches for building monotone decision models. The results from simulation and real case studies show that enforcing monotonicity can considerably improve knowledge discovery and facilitate the decision-making process for end-users by deriving more accurate, stable and plausible decision models.
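A simple pairwise notion of a data set's degree of monotonicity can be sketched as follows; this is an illustrative check under the stated assumptions, not the dissertation's actual test procedure, and the data are invented:

```python
def degree_of_monotonicity(X, y):
    """Fraction of comparable pairs (x_i <= x_j componentwise) whose
    labels satisfy y_i <= y_j; 1.0 means fully monotone data."""
    comparable = consistent = 0
    for i in range(len(X)):
        for j in range(len(X)):
            if i != j and all(a <= b for a, b in zip(X[i], X[j])):
                comparable += 1
                consistent += y[i] <= y[j]
    return consistent / comparable if comparable else 1.0

X = [(1, 1), (2, 1), (2, 3), (1, 2)]
y = [1, 2, 1, 2]              # the label of (2, 3) violates monotonicity
d = degree_of_monotonicity(X, y)
```

A greedy monotonizing step would relabel the offending examples until this score reaches 1.0.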