
    On a regularized solution of the Cauchy problem for matrix factorizations of the Helmholtz equation

    In this paper, we consider the problem of recovering solutions for matrix factorizations of the Helmholtz equation in a multidimensional bounded domain from their values on a part of the boundary of this domain, i.e., the Cauchy problem. An approximate solution to this problem is constructed based on the Carleman matrix method.
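
    For orientation, the scalar prototype of this ill-posed problem can be stated as follows; the matrix-factorization setting of the paper generalizes the operator, and the symbols below are illustrative rather than quoted from it. Given a bounded domain $D \subset \mathbb{R}^m$ with boundary $\partial D$ and a part $S \subset \partial D$, one seeks $u$ such that
    \[ \Delta u + \lambda^2 u = 0 \ \text{in } D, \qquad u\big|_S = f, \qquad \frac{\partial u}{\partial n}\Big|_S = g, \]
    where only the Cauchy data $f$ and $g$ on $S$ are prescribed. Since the problem is ill-posed, the Carleman matrix method constructs a regularized approximation that remains stable under small perturbations of $f$ and $g$.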

    Feature and Decision Level Fusion Using Multiple Kernel Learning and Fuzzy Integrals

    The work collected in this dissertation addresses the problem of data fusion. In other words, this is the problem of making decisions (also known as the problem of classification in the machine learning and statistics communities) when data from multiple sources are available, or when decisions/confidence levels from a panel of decision-makers are accessible. This problem has become increasingly important in recent years, especially with the ever-increasing popularity of autonomous systems outfitted with suites of sensors and the dawn of the "age of big data." While data fusion is a very broad topic, the work in this dissertation considers two specific techniques: feature-level fusion and decision-level fusion. In general, the fusion methods proposed throughout this dissertation rely on kernel methods and fuzzy integrals. Both are very powerful tools; however, they also come with challenges, some of which are summarized below. I address these challenges in this dissertation. Kernel methods for classification are a well-studied area in which data are implicitly mapped from a lower-dimensional space to a higher-dimensional space to improve classification accuracy. However, for most kernel methods, one must still choose a kernel to use for the problem. Since there is, in general, no way of knowing which kernel is best, multiple kernel learning (MKL) is a technique used to learn the aggregation of a set of valid kernels into a single (ideally) superior kernel. The aggregation can be done using weighted sums of the pre-computed kernels, but determining the summation weights is not a trivial task. Furthermore, MKL does not work well with large datasets because of limited storage space and prediction speed. These challenges are tackled by the introduction of many new algorithms in the following chapters. I also address MKL's storage and speed drawbacks, allowing MKL-based techniques to be applied to big data efficiently. Some algorithms in this work are based on the Choquet fuzzy integral, a powerful nonlinear aggregation operator parameterized by the fuzzy measure (FM). These decision-level fusion algorithms learn a fuzzy measure by minimizing a sum of squared error (SSE) criterion on a set of training data. The flexibility of the Choquet integral comes with a cost, however: given a set of N decision makers, the size of the FM the algorithm must learn is 2^N. This means that the training data must be diverse enough to include 2^N independent observations, which is rarely the case in practice. I address this in the following chapters via several regularization functions, a popular technique in machine learning and statistics used to prevent overfitting and increase model generalization. Finally, it is worth noting that the aggregation behavior of the Choquet integral is not intuitive. I tackle this by proposing a quantitative visualization strategy that allows the FM and the Choquet integral behavior to be shown simultaneously.
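
    As a concrete illustration of the decision-level fusion operator discussed above, the following is a minimal Python sketch of a discrete Choquet integral with respect to a fuzzy measure stored as a lookup table over all 2^N subsets; the function name and the example measure values are hypothetical and are not taken from the dissertation.

    import numpy as np

    def choquet_integral(h, fm):
        # Discrete Choquet integral of the confidence vector h (one entry per
        # decision maker) with respect to the fuzzy measure fm, given as a dict
        # mapping frozensets of source indices to measure values (2^N entries,
        # with fm[empty set] = 0 and fm[all sources] = 1).
        order = np.argsort(h)[::-1]                  # sources by decreasing confidence
        result, prev = 0.0, 0.0
        for i in range(len(h)):
            subset = frozenset(int(j) for j in order[: i + 1])  # A_i: top-(i+1) sources
            g = fm[subset]
            result += h[order[i]] * (g - prev)       # h_(i) * (g(A_i) - g(A_{i-1}))
            prev = g
        return result

    # Toy example with N = 3 decision makers (purely illustrative measure values).
    fm = {frozenset(): 0.0,
          frozenset({0}): 0.4, frozenset({1}): 0.3, frozenset({2}): 0.3,
          frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6, frozenset({1, 2}): 0.5,
          frozenset({0, 1, 2}): 1.0}
    print(choquet_integral(np.array([0.9, 0.2, 0.6]), fm))   # lands between the min and max input

    Learning such a measure from data, as the dissertation describes, amounts to choosing the 2^N values (subject to the monotonicity constraints of a fuzzy measure) that minimize an SSE criterion, which is why regularization becomes essential when training data are scarce.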

    Fuzzy Esscher changes of measure and copula invariance in Lévy markets

    In the context of a multidimensional exponential Lévy market, we focus on the Esscher change of measure and suggest a more flexible tool that allows for a fuzzy version of the standard Esscher transform. Motivated both by the empirical incompatibility between market data and the analytical form of the standard Esscher transform (see [8]) and by the desire to introduce a pricing technique under incompleteness conditions, we examine the impact of fuzziness on the measure change function and on contingent claims' pricing. In a multidimensional setting, the fuzzy Esscher transform is a copula whose invariance, under transformations of the margins induced by a change of measure, is investigated and connected to the absence of arbitrage opportunities. We highlight how the Esscher transform, primarily used in pricing techniques, preserves the invariance of the aggregation operator and can be generalized to the fuzzy version, assuming that the measurable functions defining the Choquet marginal integrals are increasing. Furthermore, the empirical evidence suggests that a weaker concept of invariance may be more suitable, namely ε-measure invariance, which is coherent with the fuzzy Esscher copula tool. An empirical experiment with our model makes clear how this blurring technique fits the market data.
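
    For reference, the standard (crisp) Esscher change of measure that the fuzzy version generalizes can be written, in the univariate case, as
    \[ \frac{dQ_\theta}{dP}\bigg|_{\mathcal{F}_t} = \frac{e^{\theta X_t}}{E_P\!\left[ e^{\theta X_t} \right]}, \]
    where $X$ is the driving Lévy process and the parameter $\theta$ is chosen so that discounted asset prices become $Q_\theta$-martingales; this is the textbook form and is not a formula quoted from the paper.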

    Fuzzy Rough Signatures


    Approximation Theory and Related Applications

    In recent years, we have seen a growing interest in various aspects of approximation theory. This is due to the increasing complexity of mathematical models that require computer calculations and to the development of the theoretical foundations of approximation theory. Approximation theory has broad and important applications in many areas of mathematics, including functional analysis, differential equations, dynamical systems theory, mathematical physics, control theory, probability theory and mathematical statistics, and others. Approximation theory is also of great practical importance, as approximate methods and estimates of approximation errors are used in physics, economics, chemistry, signal theory, neural networks and many other areas. This book presents the works published in the Special Issue "Approximation Theory and Related Applications". The research of the world’s leading scientists presented in this book reflects new trends in approximation theory and related topics.

    The Target-Based Utility Model. The role of Copulas and of Non-Additive Measures

    My studies and my Ph.D. thesis deal with topics that have recently emerged in the field of decisions under risk and uncertainty. In particular, I deal with the "target-based approach" to utility theory. A rich literature has been devoted in the last decade to this approach to economic decisions: originally, interest focused on the "single-attribute" case and, more recently, extensions to the "multi-attribute" case have been studied. This literature is still growing, with a main focus on applied aspects. I will, on the contrary, focus on some theoretical aspects related to the multi-attribute case. Various mathematical concepts, such as non-additive measures, aggregation functions, multivariate probability distributions, and notions of stochastic dependence, emerge in the formulation and analysis of target-based models. Notions from the field of non-additive measures and aggregation functions are quite common in the modern economic literature. They have been used to go beyond the classical principle of maximization of expected utility in decision theory. These notions are also used in game theory and multi-criteria decision aid. In my work, on the contrary, I show how non-additive measures and aggregation functions emerge in a natural way in the frame of the target-based approach to classical utility theory when considering the multi-attribute case. Furthermore, they combine with the analysis of multivariate probability distributions and with concepts of stochastic dependence. The concept of copula also constitutes a very important tool for this work, mainly for two purposes. The first is linked to the analysis of target-based utilities; the other concerns the comparison between the classical stochastic order and the concept of "stochastic precedence". This topic finds applications in statistics as well as in the study of Markov models linked to waiting times until occurrences of words in random sampling of letters from an alphabet. In this work I give a generalization of the concept of stochastic precedence and discuss its properties on the basis of the connecting copulas of the variables. Throughout this work I also trace connections to reliability theory, whose aim is to study the lifetime of a system through the analysis of the lifetimes of its components. The target-based model finds an application in representing the behavior of the whole system by means of the interaction of its components.
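
    Two of the notions mentioned above admit compact single-attribute formulations, recalled here only for orientation. In the target-based approach, the utility of an outcome $x$ is the probability of meeting a random target $T$, assumed independent of the prospect $X$:
    \[ u(x) = P(T \le x), \qquad E[u(X)] = P(T \le X), \]
    while $X$ is said to stochastically precede $Y$ when $P(X \le Y) \ge 1/2$, a comparison that, unlike the classical stochastic order $P(X > t) \le P(Y > t)$ for all $t$, depends on the joint distribution, and hence on the connecting copula, of $X$ and $Y$.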