
    IIVFDT: Ignorance Functions based Interval-Valued Fuzzy Decision Tree with Genetic Tuning

    The choice of membership functions plays an essential role in the success of fuzzy systems. This is a complex problem due to the possible lack of knowledge when assigning punctual values as membership degrees. To address this difficulty, we propose a methodology called Ignorance functions based Interval-Valued Fuzzy Decision Tree with genetic tuning (IIVFDT for short), which improves the performance of fuzzy decision trees by taking the ignorance degree into account. This ignorance degree is the result of a weak ignorance function applied to the punctual value set as the membership degree. Our IIVFDT proposal is composed of four steps: (1) the base fuzzy decision tree is generated using the fuzzy ID3 algorithm; (2) the linguistic labels are modeled with Interval-Valued Fuzzy Sets, using a new parametrized construction method whose interval length represents the ignorance degree; (3) the fuzzy reasoning method is extended to work with this representation of the linguistic terms; (4) an evolutionary tuning step computes the optimal ignorance degree for each Interval-Valued Fuzzy Set. The experimental study shows that the IIVFDT method outperforms the initial fuzzy ID3, both with and without Interval-Valued Fuzzy Sets. The suitability of the proposed methodology is also shown with respect to several state-of-the-art fuzzy decision trees and to C4.5. Furthermore, we analyze the quality of our approach against two methods that learn the fuzzy decision tree using genetic algorithms. Finally, we show that a superior performance can be achieved through the positive synergy obtained when the well-known genetic tuning of the lateral position is applied after the IIVFDT method. Funding: Spanish Government TIN2011-28488, TIN2010-1505.
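
    To make step (2) concrete, the following is a minimal Python sketch of one plausible way to build an interval-valued membership degree from a punctual one. The function names, the particular weak ignorance function 4x(1-x), and the centred-interval construction are illustrative assumptions rather than the paper's exact definitions; the parameter w stands in for the quantity that the genetic tuning step would optimize.

```python
def weak_ignorance(mu):
    """Illustrative weak ignorance function: 0 at mu = 0 or 1 (no ignorance
    about a crisp degree), maximal at mu = 0.5. The paper's concrete choice
    may differ."""
    return 4.0 * mu * (1.0 - mu)

def interval_valued_membership(mu, w):
    """Hypothetical construction of an Interval-Valued Fuzzy Set degree:
    centre an interval on the punctual degree mu, with length given by the
    ignorance degree scaled by a tunable parameter w in [0, 1], clipped
    to the unit interval."""
    length = w * weak_ignorance(mu)
    lower = max(0.0, mu - length / 2.0)
    upper = min(1.0, mu + length / 2.0)
    return lower, upper

# A punctual degree of 0.6 with w = 0.5 yields the interval (0.36, 0.84).
print(interval_valued_membership(0.6, 0.5))
```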

    Fuzzy logic: an introduction


    Fuzzy Operator Trees for Modeling Utility Functions

    In this thesis, we propose a method for modeling utility (rating) functions based on a novel concept called the Fuzzy Operator Tree (FOT for short). As the name suggests, this method makes use of techniques from fuzzy set theory and implements a fuzzy rating function, that is, a utility function that maps to the unit interval, where 0 corresponds to the lowest and 1 to the highest evaluation. Even though the original motivation comes from quality control, FOTs are completely general and widely applicable. Our approach allows a human expert to specify a model in the form of an FOT in a quite convenient and intuitive way. To this end, the expert simply has to split evaluation criteria into sub-criteria in a recursive manner and to determine in which way these sub-criteria ought to be combined: conjunctively, disjunctively, or by means of an averaging operator. The result of this process is the qualitative structure of the model. The second step, then, is to parameterize the model. To support, or even free, the expert from this step, we develop a method for calibrating the model on the basis of exemplary ratings, that is, in a purely data-driven way. This method, which makes use of optimization techniques from the field of evolutionary algorithms, constitutes the second major contribution of the thesis. The third contribution of the thesis is a method for evaluating an FOT in a cost-efficient way. Roughly speaking, an FOT can be seen as an aggregation function that combines the evaluations of a number of basic criteria into an overall rating of an object. Essentially, the cost of computing this rating is hence given by the sum of the evaluation costs of the basic criteria. In practice, however, the precise utility degree is often not needed; instead, it is enough to know whether it lies above or below an important threshold value. In such cases, the evaluation process, understood as a sequential evaluation of basic criteria, can be stopped as soon as this question can be answered in a unique way. Of course, the (expected) number of basic criteria and, therefore, the (expected) evaluation cost will then strongly depend on the order of the evaluations, and this is what is optimized by the methods that we have developed.
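
    The recursive structure described above lends itself to a compact sketch. The Python below is a minimal, hypothetical rendering of an FOT: leaves map an object to a score in [0, 1], and inner nodes combine child scores conjunctively (here via the product t-norm), disjunctively (here via the maximum t-conorm), or by a weighted average. The thesis's actual operator families and calibration procedure are richer than this.

```python
from dataclasses import dataclass, field
from typing import Callable, List, Optional

@dataclass
class FOTNode:
    """One node of a Fuzzy Operator Tree (sketch). Leaves hold a basic
    criterion mapping an object to [0, 1]; inner nodes aggregate children."""
    op: str = "leaf"                         # "leaf" | "and" | "or" | "avg"
    criterion: Optional[Callable] = None     # used only by leaves
    children: List["FOTNode"] = field(default_factory=list)
    weights: Optional[List[float]] = None    # used by "avg" nodes

    def evaluate(self, obj) -> float:
        if self.op == "leaf":
            return self.criterion(obj)
        scores = [c.evaluate(obj) for c in self.children]
        if self.op == "and":                 # conjunction: product t-norm
            result = 1.0
            for s in scores:
                result *= s
            return result
        if self.op == "or":                  # disjunction: maximum t-conorm
            return max(scores)
        if self.op == "avg":                 # weighted averaging operator
            w = self.weights or [1.0 / len(scores)] * len(scores)
            return sum(wi * si for wi, si in zip(w, scores))
        raise ValueError(self.op)

# Example: overall quality = (finish AND dimensions), averaged with packaging.
tree = FOTNode("avg", children=[
    FOTNode("and", children=[
        FOTNode(criterion=lambda x: x["finish"]),
        FOTNode(criterion=lambda x: x["dimensions"]),
    ]),
    FOTNode(criterion=lambda x: x["packaging"]),
], weights=[0.7, 0.3])
print(tree.evaluate({"finish": 0.9, "dimensions": 0.8, "packaging": 0.6}))
```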

    The Hyperdimensional Transform for Distributional Modelling, Regression and Classification

    Hyperdimensional computing (HDC) is an increasingly popular computing paradigm with immense potential for future intelligent applications. Although the main ideas already took form in the 1990s, HDC has recently gained significant attention, especially in the field of machine learning and data science. Next to efficiency, interoperability and explainability, HDC offers attractive properties for generalization, as it can be seen as an attempt to combine connectionist ideas from neural networks with symbolic aspects. In recent work, we introduced the hyperdimensional transform, revealing deep theoretical foundations for representing functions and distributions as high-dimensional holographic vectors. Here, we present the power of the hyperdimensional transform to a broad data science audience. We use the hyperdimensional transform as a theoretical basis and provide insight into state-of-the-art HDC approaches for machine learning. We show how existing algorithms can be modified and how this transform can lead to a novel, well-founded toolbox. Beyond the standard regression and classification tasks of machine learning, our discussion includes various aspects of statistical modelling, such as representation, learning and deconvolving distributions, sampling, Bayesian inference, and uncertainty estimation.
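
    As a flavour of how such holographic representations can carry distributional information, here is a toy Python sketch of one standard HDC ingredient: a similarity-preserving level encoding of scalars into bipolar hypervectors, bundled (averaged) over samples so that inner products act as a kernel-smoothed score of how close a query lies to the sample mass. This is an illustrative stand-in, not the hyperdimensional transform as formally defined in the paper; the encoding scheme and all names are assumptions.

```python
import numpy as np

rng = np.random.default_rng(0)
D = 10_000  # hypervector dimensionality

# Two random bipolar "endpoint" hypervectors and a fixed flip order.
lo = rng.choice([-1.0, 1.0], size=D)
hi = rng.choice([-1.0, 1.0], size=D)
perm = rng.permutation(D)

def encode(x):
    """Level encoding of a scalar x in [0, 1]: flip the first round(x * D)
    coordinates of `lo` (in a fixed random order) to the values of `hi`,
    so nearby inputs receive highly similar hypervectors."""
    v = lo.copy()
    k = int(round(x * D))
    v[perm[:k]] = hi[perm[:k]]
    return v

# Represent an empirical distribution as the normalized superposition
# (bundling) of its samples' hypervectors.
samples = rng.normal(0.5, 0.1, size=200).clip(0.0, 1.0)
T = np.mean([encode(s) for s in samples], axis=0)

def closeness_score(x):
    """Inner-product similarity of a query to the bundled representation."""
    return float(np.dot(T, encode(x))) / D

print(closeness_score(0.5), closeness_score(0.9))  # higher near the mode than in the tail
```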

    Generative-Discriminative Low Rank Decomposition for Medical Imaging Applications

    In this thesis, we propose a method that can be used to extract biomarkers from medical images toward the early diagnosis of abnormalities. The surge of demand for biomarkers and the availability of medical images in recent years call for accurate, repeatable, and interpretable approaches to extracting meaningful imaging features. However, extracting such information from medical images is a challenging task because the number of pixels (voxels) in a typical image is on the order of millions, while even a large sample size in a medical imaging dataset does not usually exceed a few hundred. Nevertheless, depending on the nature of an abnormality, only a parsimonious subset of voxels is typically relevant to the disease; therefore, various notions of sparsity are exploited in this thesis to improve the generalization performance of the prediction task. We propose a novel discriminative dimensionality reduction method that yields good classification performance on various datasets without compromising the clinical interpretability of the results. This is achieved by combining the modelling strength of the generative learning framework with the classification performance of the discriminative learning paradigm. Clinical interpretability can be viewed as an additional measure of evaluation and is also helpful in designing methods that account for clinical priors, such as the association of certain brain areas with a particular cognitive task, or the connectivity of some brain regions via neural fibres. We formulate our method as a large-scale optimization problem that solves a constrained matrix factorization. Finding an optimal solution of the large-scale matrix factorization renders off-the-shelf solvers computationally prohibitive; therefore, we designed an efficient algorithm based on the proximal method to address the computational bottleneck of the optimization problem. Our formulation is readily extended to different scenarios, such as cases where a large cohort of subjects has uncertain or no class labels (semi-supervised learning), or where each subject has a battery of imaging channels (multi-channel), etc. We show that by using various notions of sparsity as feasible sets of the optimization problem, we can encode different forms of prior knowledge, ranging from brain parcellation to brain connectivity.
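
    To illustrate the kind of solver involved, the sketch below implements a toy alternating proximal-gradient scheme for an l1-sparse matrix factorization in Python. It is a simplified stand-in under assumed names: the thesis's actual objective, constraint sets (e.g. structured sparsity derived from parcellations or connectivity) and the generative-discriminative coupling are considerably more involved.

```python
import numpy as np

def soft_threshold(Z, t):
    """Proximal operator of the l1 norm: shrink entries toward zero by t."""
    return np.sign(Z) * np.maximum(np.abs(Z) - t, 0.0)

def sparse_factorize(X, rank, lam=0.1, iters=200, seed=0):
    """Toy alternating proximal-gradient (ISTA-style) solver for
        min_{D,C} 0.5 * ||X - D @ C||_F^2 + lam * ||C||_1
    with unit-norm dictionary columns. X: (n_voxels, n_subjects);
    D: (n_voxels, rank) dictionary; C: (rank, n_subjects) sparse codes."""
    rng = np.random.default_rng(seed)
    n, m = X.shape
    D = rng.standard_normal((n, rank))
    D /= np.linalg.norm(D, axis=0, keepdims=True)
    C = rng.standard_normal((rank, m))
    for _ in range(iters):
        # Proximal-gradient step on the sparse factor C.
        L = np.linalg.norm(D, 2) ** 2 + 1e-12   # Lipschitz constant of the gradient
        G = D.T @ (D @ C - X)                   # gradient of the smooth term
        C = soft_threshold(C - G / L, lam / L)
        # Least-squares update of D, then renormalize its columns.
        D = X @ np.linalg.pinv(C)
        D /= np.linalg.norm(D, axis=0, keepdims=True) + 1e-12
    return D, C
```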

    Fuzzy Sets, Fuzzy Logic and Their Applications 2020

    The present book contains the 24 articles accepted for and published in the Special Issue “Fuzzy Sets, Fuzzy Logic and Their Applications, 2020” of the MDPI journal Mathematics, which covers a wide range of topics connected to the theory and applications of fuzzy sets, systems of fuzzy logic, and their extensions/generalizations. These topics include, among others, fuzzy graphs, fuzzy numbers, fuzzy equations, fuzzy linear spaces, intuitionistic fuzzy sets, soft sets, type-2 fuzzy sets, bipolar fuzzy sets, plithogenic sets, fuzzy decision making, fuzzy governance, fuzzy models in the mathematics of finance, and a philosophical treatise on the connection of scientific reasoning with fuzzy logic. It is hoped that the book will be interesting and useful for those working in the area of fuzzy sets, fuzzy systems and fuzzy logic, as well as for those with the proper mathematical background who are willing to become familiar with recent advances in fuzzy mathematics, which has become prevalent in almost all sectors of human life and activity.

    Forecasting Climate and Land Use Change Impacts on Ecosystem Services in Hawaiʻi through Integration of Hydrological and Participatory Models

    Ph.D. Thesis, University of Hawaiʻi at Mānoa, 2018.

    Acta Polytechnica Hungarica 2019


    Robust Medical Image Registration and Motion Modelling Based on Deep Learning

    This thesis presents new computational tools for quantifying deformations and the motion of anatomical structures from medical images, as required by a large variety of clinical applications. Generic deformable registration tools are presented that enable deformation analysis useful for improving diagnosis, prognosis and therapy guidance. These tools were built by combining state-of-the-art medical image analysis methods with cutting-edge machine learning methods. First, we focus on difficult inter-subject registration problems. By learning from given deformation examples, we propose a novel agent-based optimization scheme inspired by deep reinforcement learning, in which a statistical deformation model is explored in a trial-and-error fashion, showing improved registration accuracy. Second, we develop a diffeomorphic deformation model that allows for accurate multiscale registration and deformation analysis by learning a low-dimensional representation of intra-subject deformations. This unsupervised method uses a latent variable model in the form of a conditional variational autoencoder (CVAE) to learn a probabilistic deformation encoding that is useful for the simulation, classification and comparison of deformations. Third, we propose a probabilistic motion model derived from image sequences of moving organs. This generative model embeds motion in a structured latent space, the motion matrix, which enables the consistent tracking of structures and various analysis tasks. For instance, it leads to the simulation and interpolation of realistic motion patterns, allowing for faster data acquisition and data augmentation. Finally, we demonstrate the importance of the developed tools in a clinical application where the motion model is used for disease prognosis and therapy planning. It is shown that the survival risk of heart failure patients can be predicted from the discriminative motion matrix with higher accuracy than with classical image-derived risk factors.