
    Data Science: Measuring Uncertainties

    With the increase in data processing and storage capacity, a large amount of data has become available, and data without analysis has little value. The demand for data analysis therefore grows daily, bringing with it a large number of jobs and published articles. Data science has emerged as a multidisciplinary field that supports data-driven activities by integrating and developing ideas, methods, and processes to extract information from data. It draws on methods from several knowledge areas: statistics, computer science, mathematics, physics, information science, and engineering. This mixture of areas has given rise to what we call Data Science. New solutions to new problems appear rapidly as large volumes of data are generated, and current and future challenges require greater care in creating solutions that fit the rationale of each type of problem. Labels such as Big Data, Data Science, Machine Learning, Statistical Learning, and Artificial Intelligence demand more sophistication both in their foundations and in how they are applied, which highlights the importance of building the foundations of Data Science. This book is dedicated to solutions and discussions of measuring uncertainties in data analysis problems.

    Approximation Theory and Related Applications

    In recent years, there has been growing interest in various aspects of approximation theory, driven both by the increasing complexity of mathematical models that require computer calculation and by the development of the theory's foundations. Approximation theory has broad and important applications in many areas of mathematics, including functional analysis, differential equations, dynamical systems theory, mathematical physics, control theory, probability theory and mathematical statistics, and others. It is also of great practical importance, since approximate methods and estimates of approximation error are used in physics, economics, chemistry, signal theory, neural networks, and many other areas. This book presents the works published in the Special Issue "Approximation Theory and Related Applications". The research of the world's leading scientists presented in this book reflects new trends in approximation theory and related topics.

    Intuitionistic Fuzzy Broad Learning System: Enhancing Robustness Against Noise and Outliers

    In the realm of data classification, the broad learning system (BLS) has proven to be a potent tool that utilizes a layer-by-layer feed-forward neural network. It consists of feature learning and enhancement segments, working together to extract intricate features from input data. The traditional BLS treats all samples as equally significant, which makes it less robust and less effective on real-world datasets with noise and outliers. To address this issue, we propose the fuzzy BLS (F-BLS) model, which assigns a fuzzy membership value to each training point to reduce the influence of noise and outliers. In assigning the membership value, the F-BLS model considers only the distance from samples to the class center in the original feature space, without incorporating the extent of non-belongingness to a class. We further propose a novel BLS based on intuitionistic fuzzy theory (IF-BLS). The proposed IF-BLS utilizes intuitionistic fuzzy numbers based on fuzzy membership and non-membership values to assign scores to training points in the high-dimensional feature space by using a kernel function. We evaluate the performance of the proposed F-BLS and IF-BLS models on 44 UCI benchmark datasets across diverse domains. Furthermore, Gaussian noise is added to some UCI datasets to assess the robustness of the proposed F-BLS and IF-BLS models. Experimental results demonstrate superior generalization performance of the proposed F-BLS and IF-BLS models compared to baseline models, both with and without Gaussian noise. Additionally, we apply the proposed F-BLS and IF-BLS models to the Alzheimer's Disease Neuroimaging Initiative (ADNI) dataset, and promising results showcase the models' effectiveness in real-world applications. The proposed methods offer a promising solution to enhance the BLS framework's ability to handle noise and outliers.
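    The distance-to-class-centre weighting that F-BLS relies on can be illustrated with a minimal sketch (the linear decay and the small constant `delta` are assumptions for illustration, not the paper's exact formulation):

```python
import numpy as np

def fuzzy_memberships(X, y, delta=1e-4):
    """Assign each sample a membership in (0, 1] that shrinks with its
    distance from its own class centre, so far-away points (likely
    outliers) contribute less to training."""
    s = np.empty(len(y), dtype=float)
    for c in np.unique(y):
        idx = np.where(y == c)[0]
        centre = X[idx].mean(axis=0)
        d = np.linalg.norm(X[idx] - centre, axis=1)
        r = d.max() + delta          # class radius (delta avoids zero)
        s[idx] = 1.0 - d / r         # centre -> ~1, farthest point -> ~0
    return s

X = np.array([[0.0, 0.0], [0.1, 0.0], [5.0, 5.0],   # class 0 (last = outlier)
              [3.0, 3.0], [3.1, 3.0]])              # class 1
y = np.array([0, 0, 0, 1, 1])
s = fuzzy_memberships(X, y)
```

    The outlier at (5, 5) receives a membership close to zero, so a weighted learner would largely ignore it.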

    IIVFDT: Ignorance Functions based Interval-Valued Fuzzy Decision Tree with Genetic Tuning

    The choice of membership functions plays an essential role in the success of fuzzy systems. This is a complex problem due to the possible lack of knowledge when assigning punctual values as membership degrees. To face this handicap, we propose a methodology called Ignorance functions based Interval-Valued Fuzzy Decision Tree with genetic tuning, IIVFDT for short, which improves the performance of fuzzy decision trees by taking the ignorance degree into account. This ignorance degree is the result of a weak ignorance function applied to the punctual value set as the membership degree. Our IIVFDT proposal is composed of four steps: (1) the base fuzzy decision tree is generated using the fuzzy ID3 algorithm; (2) the linguistic labels are modeled with Interval-Valued Fuzzy Sets, for which a new parametrized construction method is defined whose interval length represents the ignorance degree; (3) the fuzzy reasoning method is extended to work with this representation of the linguistic terms; (4) an evolutionary tuning step is applied to compute the optimal ignorance degree for each Interval-Valued Fuzzy Set. The experimental study shows that the IIVFDT method outperforms the results provided by the initial fuzzy ID3, both with and without Interval-Valued Fuzzy Sets. The suitability of the proposed methodology is shown with respect to both several state-of-the-art fuzzy decision trees and C4.5. Furthermore, we analyze the quality of our approach versus two methods that learn the fuzzy decision tree using genetic algorithms. Finally, we show that a superior performance can be achieved by means of the positive synergy obtained when applying the well-known genetic tuning of the lateral position after the application of the IIVFDT method. Funding: Spanish Government TIN2011-28488 and TIN2010-1505.
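    The core idea of step (2) can be sketched as widening a punctual membership degree into an interval whose length reflects an ignorance degree (the particular weak ignorance function, the tuning parameter `w`, and the symmetric widening below are illustrative assumptions, not the paper's exact construction):

```python
def weak_ignorance(mu):
    """Illustrative weak ignorance function: maximal (1.0) when mu = 0.5,
    i.e. total uncertainty, and zero when mu is 0 or 1, i.e. full knowledge."""
    return 2.0 * min(mu, 1.0 - mu)

def interval_valued_membership(mu, w=0.5):
    """Widen the punctual degree mu into an interval [lo, hi] whose length
    is the fraction w of the ignorance degree; w stands in for the
    genetically tuned parameter of step (4)."""
    half = 0.5 * w * weak_ignorance(mu)
    return max(0.0, mu - half), min(1.0, mu + half)
```

    For example, a fully certain degree of 1.0 stays the degenerate interval [1.0, 1.0], while the maximally uncertain degree 0.5 (with w = 0.5) becomes [0.25, 0.75].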

    An analysis of consensus approaches based on different concepts of coincidence

    Soft consensus is a relevant topic in group decision making problems. Soft consensus measures are used to reflect the different degrees of agreement between the experts leading the consensus reaching process, which may determine the final decision and the time needed to reach it. The concept of coincidence has led to two main approaches to calculating soft consensus measures: concordance among expert preferences and concordance among individual solutions. In the first approach the coincidence is obtained by evaluating the similarity among the expert preferences, while in the second the concordance is derived from measuring the similarity among the solutions proposed by these experts. This paper performs a comparative study of consensus approaches based on both concepts of coincidence. We find significant differences between the two approaches by comparing several distance functions for measuring expert preferences and a consensus measure over the set of alternatives for measuring the solutions provided by experts. To do so, we use the nonparametric Wilcoxon signed-ranks test. Finally, these outcomes are analyzed using Friedman mean ranks in order to obtain a quantitative classification of the considered measurements according to the convergence criterion considered in the consensus reaching process.
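    The first concept of coincidence, concordance among expert preferences, can be sketched as an average of pairwise similarities between the experts' preference vectors (the [0, 1] preference representation and the mean-absolute-difference distance are illustrative assumptions; the paper compares several such distance functions):

```python
import numpy as np
from itertools import combinations

def consensus_on_preferences(prefs, dist=lambda a, b: np.mean(np.abs(a - b))):
    """Soft consensus via concordance among expert preferences: the average
    pairwise similarity (1 - distance) over all pairs of experts.
    prefs: (n_experts, n_alternatives) array of preference values in [0, 1]."""
    pairs = combinations(range(len(prefs)), 2)
    sims = [1.0 - dist(prefs[i], prefs[j]) for i, j in pairs]
    return float(np.mean(sims))

prefs = np.array([[0.9, 0.1, 0.5],    # three experts rating
                  [0.8, 0.2, 0.5],    # three alternatives
                  [0.7, 0.3, 0.6]])
c = consensus_on_preferences(prefs)   # close to 1 => near-unanimous
```

    A value near 1 would let the consensus reaching process stop early; a low value signals that more discussion rounds are needed.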

    Extending generalised Leland option pricing models: simulation using Monte Carlo

    To explain option pricing movements, most studies modify the Black-Scholes model by adding other factors. The parametric generalisation, however, frequently leads to an over-parametrisation problem in the constructed model, and its tight constraints frequently result in considerable underpricing of the option. The nonparametric generalisation of the Black-Scholes-Merton (BSM) model, on the other hand, is prone to both discretisation and truncation issues in pricing options. This study therefore extends the existing option pricing models by developing Extended Generalised Leland (EGL) models based on the implied adjusted volatility introduced in the Leland models. The integrated framework ensures model-free modelling while conforming to conventional parametric option pricing. The proposed semiparametric models incorporate a transaction-cost-rate factor in the intermediate model-free framework to assure realistic pricing of options. The main focus of this study is to document by simulation that the EGL models outperform the benchmark model in option pricing. The simulation of the EGL models is conducted to investigate whether the proposed models are practical for application in a real financial system. Superior option pricing accuracy was observed in the EGL models based on the simulation results. This finding is grounded on the RMSE values as well as on pairwise percentage difference values.
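    Leland's classical adjustment, on which the EGL models build, replaces the Black-Scholes volatility with an adjusted volatility that grows with the transaction cost rate k and shrinks with the rebalancing interval dt; a minimal sketch of pricing a call with and without that adjustment (the EGL extensions themselves are not reproduced here, and the parameter values are illustrative):

```python
from math import erf, exp, log, pi, sqrt

def norm_cdf(x):
    """Standard normal CDF via the error function."""
    return 0.5 * (1.0 + erf(x / sqrt(2.0)))

def leland_vol(sigma, k, dt):
    """Leland-adjusted volatility: sigma * sqrt(1 + sqrt(2/pi) * k / (sigma*sqrt(dt)))."""
    return sigma * sqrt(1.0 + sqrt(2.0 / pi) * k / (sigma * sqrt(dt)))

def bs_call(S, K, T, r, sigma):
    """Black-Scholes European call price."""
    d1 = (log(S / K) + (r + 0.5 * sigma**2) * T) / (sigma * sqrt(T))
    d2 = d1 - sigma * sqrt(T)
    return S * norm_cdf(d1) - K * exp(-r * T) * norm_cdf(d2)

# At-the-money call, 1% round-trip costs, weekly rebalancing (illustrative)
plain = bs_call(100, 100, 1.0, 0.05, 0.2)                           # ~10.45
adjusted = bs_call(100, 100, 1.0, 0.05, leland_vol(0.2, 0.01, 1 / 52))
```

    The adjusted price exceeds the frictionless one, reflecting the hedging costs a writer must recover.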

    An Interval Valued K-Nearest Neighbors Classifier

    The K-Nearest Neighbors (k-NN) classifier has become a well-known, successful method for pattern classification tasks. In recent years, many enhancements to the original algorithm have been proposed. Fuzzy set theory has been the basis of several models proposed to enhance the nearest neighbors rule, with the Fuzzy K-Nearest Neighbors (FuzzyKNN) classifier being the most notable procedure in the field. In this work we present a new approach to the nearest neighbor classifier based on the use of interval-valued fuzzy sets. The use of interval values allows the membership of the instances and the computation of the votes to be handled in a more flexible way than in the original FuzzyKNN method, thus improving its adaptability to different supervised learning problems. An experimental study, contrasted by the application of nonparametric statistical procedures, is carried out to ascertain whether the Interval Valued K-Nearest Neighbor (IV-KNN) classifier proposed here is significantly more accurate than k-NN, FuzzyKNN and other fuzzy nearest neighbor classifiers. We conclude that the IV-KNN is indeed significantly more accurate than the rest of the classifiers analyzed.
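    The FuzzyKNN rule that IV-KNN generalises can be sketched as inverse-distance-weighted class memberships in the style of Keller's classifier (the toy data, crisp training labels, and fuzzifier m = 2 are illustrative assumptions):

```python
import numpy as np

def fuzzy_knn_predict(X_train, y_train, x, k=3, m=2.0):
    """Fuzzy k-NN vote: each of the k nearest neighbours contributes to its
    class weighted by 1/d^(2/(m-1)), yielding a membership degree per class
    rather than a single hard vote."""
    d = np.linalg.norm(X_train - x, axis=1)
    nn = np.argsort(d)[:k]
    w = 1.0 / np.maximum(d[nn], 1e-12) ** (2.0 / (m - 1.0))
    classes = np.unique(y_train)
    member = np.array([w[y_train[nn] == c].sum() for c in classes]) / w.sum()
    return classes[member.argmax()], member

X_train = np.array([[0.0, 0.0], [0.0, 1.0], [5.0, 5.0], [5.0, 6.0]])
y_train = np.array([0, 0, 1, 1])
label, memberships = fuzzy_knn_predict(X_train, y_train, np.array([0.2, 0.5]))
```

    IV-KNN replaces the scalar weights and memberships with intervals; the skeleton of the vote stays the same.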

    Vagueness, Logic and Use: Four Experimental Studies on Vagueness

    Although arguments for and against competing theories of vagueness often appeal to claims about the use of vague predicates by ordinary speakers, such claims are rarely tested. An exception is Bonini et al. (1999), who report empirical results on the use of vague predicates by Italian speakers, and take the results to count in favor of epistemicism. Yet several methodological difficulties mar their experiments; we outline these problems and devise revised experiments that do not show the same results. We then describe three additional empirical studies that investigate further claims in the literature on vagueness: the hypothesis that speakers confuse 'P' with 'definitely P', the relative persuasiveness of different formulations of the inductive premise of the Sorites, and the interaction of vague predicates with three different forms of negation.