168 research outputs found

    Developing a green city assessment system using cognitive maps and the Choquet integral

    Equitable human well-being and environmental concerns in urban areas have, over the years, become increasingly challenging issues. This trend is related both to the complexity inherent in the multiple factors to be considered when evaluating eco-friendly cities (i.e., green cities) and to the way this type of city’s sustainability depends on many evaluation criteria, which hampers all decision-making processes. Using a multiple criteria decision analysis (MCDA) approach, this study sought to develop a multiple-criteria model that facilitates the evaluation of green cities’ sustainability, based on cognitive mapping techniques and the Choquet integral (CI). Taking a constructivist and process-oriented stance, the research included identifying evaluation criteria and their respective interactions using a panel of experts with specialized knowledge of the subject under analysis. The resulting framework and its application were validated both by the panel members and by a parliamentary representative of the Portuguese ecology party “Os Verdes” (The Greens), who confirmed that the evaluation system created distinguishes between cities according to how strongly they adhere to “green” principles. The advantages and limitations of the proposed framework are also discussed.
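To make the aggregation step concrete, the following is a minimal sketch of a Choquet integral with respect to a fuzzy measure. The two criteria and the measure values are hypothetical illustrations, not the panel-elicited ones from the study:

```python
# Discrete Choquet integral of criterion scores x with respect to a
# fuzzy measure mu defined on subsets of criteria (keyed by frozenset).

def choquet_integral(x, mu):
    """Aggregate scores x = {criterion: value in [0, 1]} using measure mu."""
    # Sort criteria by ascending score: x_(1) <= x_(2) <= ...
    order = sorted(x, key=x.get)
    total, prev = 0.0, 0.0
    for i, c in enumerate(order):
        # A_i = set of criteria whose score is at least x_(i)
        coalition = frozenset(order[i:])
        total += (x[c] - prev) * mu[coalition]
        prev = x[c]
    return total

# Hypothetical green-city criteria and a non-additive measure.
mu = {
    frozenset(): 0.0,
    frozenset({"air"}): 0.4,
    frozenset({"transport"}): 0.4,
    frozenset({"air", "transport"}): 1.0,  # positive synergy
}
scores = {"air": 0.6, "transport": 0.9}
print(choquet_integral(scores, mu))  # 0.72
```

Because mu({air, transport}) exceeds mu({air}) + mu({transport}), the measure rewards cities that score well on both criteria at once; with an additive measure the CI would reduce to a plain weighted mean.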

    Explainable Feature- and Decision-Level Fusion

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any one individual source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data---e.g., measurements---arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long been applying fusion at different levels (signal, feature, decision, etc.). Furthermore, daily advancements in engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, are raising both the demand for and the complexity of fusion. There is a great need to discover new theories to combine and analyze heterogeneous data arising from one or more sources. The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, this work focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches for data fusion have focused on combining inputs under the assumption of independence between them. However, there are often rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool that is capable of modeling these interactions. Consider the fusion of m sources, where there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, a large amount of training data is required to avoid the problem of over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies. These regularization strategies alleviate the issue of over-fitting when training with limited data and also enable the user to consciously push the learned models toward a predefined, or perhaps known, structure. In addition, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based approach for learning ChI-based data fusion solutions has a high space complexity, which has limited the practical application of ChI-based data fusion methods to six or fewer input sources. To address the space complexity issue, this work introduces an online training algorithm for learning the ChI. The online method is an iterative gradient descent approach that processes one observation at a time, enabling the applicability of ChI-based data fusion to higher-dimensional data sets. In many real-world data fusion applications, it is imperative to have an explanation or interpretation. This may include providing information on what was learned, what the worth of individual sources is, why a decision was reached, what evidence process(es) were used, and what confidence the system has in its decision. However, most existing machine learning solutions for data fusion are black boxes, e.g., deep learning. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior for different instances of data.
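The two computational points above, the 2^m parameter growth and one-observation-at-a-time gradient training, can be sketched as follows. This is a toy illustration with plain SGD, a synthetic "max" fusion target, and no regularizers or monotonicity constraints, so it is not the dissertation's exact algorithm:

```python
import itertools
import random

def choquet(x, mu):
    """Discrete Choquet integral of inputs x w.r.t. fuzzy measure mu."""
    order = sorted(range(len(x)), key=lambda i: x[i])  # ascending inputs
    total, prev = 0.0, 0.0
    for k, i in enumerate(order):
        total += (x[i] - prev) * mu[frozenset(order[k:])]
        prev = x[i]
    return total

m = 3
subsets = [frozenset(s) for r in range(m + 1)
           for s in itertools.combinations(range(m), r)]
print(len(subsets))  # 2^m = 8 trainable measure values for m = 3

mu = {s: len(s) / m for s in subsets}  # additive start: ChI equals the mean

random.seed(0)
for _ in range(3000):
    x = [random.random() for _ in range(m)]
    err = choquet(x, mu) - max(x)          # toy target: fuse by taking the max
    order = sorted(range(m), key=lambda i: x[i])
    prev = 0.0
    for k, i in enumerate(order):
        # dChI/dmu[A_k] = x_(k) - x_(k-1); one plain SGD step per observation
        mu[frozenset(order[k:])] -= 0.1 * err * (x[i] - prev)
        prev = x[i]
```

Each observation touches only the m nested subsets induced by its sort order, which is what makes per-observation updates cheap compared with assembling a QP over all 2^m variables.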

    Densification of spatially-sparse legacy soil data at a national scale: a digital mapping approach

    Digital soil mapping (DSM) is a viable approach to providing spatial soil information, but its adoption at the national scale, especially in sub-Saharan Africa, is limited by sparse data coverage. Therefore, the focus of this thesis is on optimizing DSM techniques for the densification of sparse legacy soil data, using Nigeria as a case study. First, the robustness of the Random Forest model (RFM) was tested in predicting soil particle-size fractions as compositional data using the additive log-ratio technique. Results indicated good prediction accuracy with the RFM, while soils are largely coarse-textured, especially in the northern region. Second, soil organic carbon (SOC) and bulk density (BD) were predicted, from which SOC density and stock were calculated. These were overlaid with land use/land cover (LULC), agro-ecological zone (AEZ) and soil maps to quantify the carbon sequestration of soils and its variation across different AEZs. Results showed that the top 1 m of soil holds 6.5 Pg C, with an average of 71.60 Mg C ha⁻¹. Furthermore, to improve the performance of BD and effective cation exchange capacity (ECEC) pedotransfer functions (PTFs), the inclusion of environmental data was explored using multiple linear regression (MLR) and the RFM. Results showed an increase in the performance of PTFs with the use of both soil and environmental data. Finally, the application of the Choquet fuzzy integral (CI) technique to irrigation suitability assessment was evaluated through multi-criteria analysis of soil, climatic, landscape and socio-economic indices. Results showed that the CI is a better aggregation operator than the weighted mean technique. A total of 3.34 × 10⁶ ha is suitable for surface irrigation in Nigeria, while the major limitations are due to topographic and soil attributes. Research findings will provide a quantitative basis for framing appropriate policies on sustainable food production and environmental management, especially in resource-poor countries of the world.
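The additive log-ratio treatment of particle-size fractions can be sketched as follows. The fractions are illustrative; the typical workflow is to model in alr space and back-transform, so predicted sand/silt/clay stay positive and sum to one:

```python
import math

# Additive log-ratio (alr) transform for a soil texture composition
# (sand + silt + clay = 1), with clay as the (arbitrary) reference part.

def alr(sand, silt, clay):
    """Map a 3-part composition to 2 unconstrained real coordinates."""
    return math.log(sand / clay), math.log(silt / clay)

def alr_inverse(z1, z2):
    """Map alr coordinates back to a composition summing to 1."""
    e1, e2 = math.exp(z1), math.exp(z2)
    denom = e1 + e2 + 1.0
    return e1 / denom, e2 / denom, 1.0 / denom

z = alr(0.6, 0.3, 0.1)
sand, silt, clay = alr_inverse(*z)
print(round(sand, 3), round(silt, 3), round(clay, 3))  # recovers 0.6 0.3 0.1
```

A regression model (such as the RFM above) fitted to the two alr coordinates can never produce negative fractions or fractions that fail to sum to one after back-transformation.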

    Scores for Multivariate Distributions and Level Sets

    Forecasts of multivariate probability distributions are required for a variety of applications. Scoring rules enable the evaluation of forecast accuracy and comparison between forecasting methods. We propose a theoretical framework for scoring rules for multivariate distributions that encompasses the existing quadratic score and multivariate continuous ranked probability score. We demonstrate how this framework can be used to generate new scoring rules. In some multivariate contexts, it is a forecast of a level set that is needed, such as a density level set for anomaly detection or the level set of the cumulative distribution as a measure of risk. This motivates consideration of scoring functions for such level sets. For univariate distributions, it is well established that the continuous ranked probability score can be expressed as the integral over a quantile score. We show that, in a similar way, scoring rules for multivariate distributions can be decomposed to obtain scoring functions for level sets. Using this, we present scoring functions for different types of level sets, including density level sets and level sets for cumulative distributions. To compute the scores, we propose a simple numerical algorithm. We perform a simulation study to support our proposals, and we use real data to illustrate usefulness for forecast combining and conditional value at risk estimation.
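The univariate decomposition mentioned above can be checked numerically: the CRPS equals twice the integral of the pinball quantile score over all quantile levels. A minimal sketch with a uniform forecast (function names are illustrative):

```python
# Pinball (quantile) score; the univariate CRPS is (twice) the integral
# of this score over quantile levels alpha in (0, 1) -- the decomposition
# the paper generalizes to multivariate level sets.

def quantile_score(alpha, q, y):
    """Score of predictive alpha-quantile q against outcome y (lower is better)."""
    return (alpha - (y < q)) * (y - q)

def crps_from_quantiles(quantile_fn, y, n=999):
    """Approximate CRPS by a midpoint-rule average of quantile scores."""
    levels = [(k + 0.5) / n for k in range(n)]
    return 2 * sum(quantile_score(a, quantile_fn(a), y) for a in levels) / n

# Toy forecast: Uniform(0, 1), whose alpha-quantile is alpha itself.
# The exact CRPS at y = 0.5 is 1/12.
print(round(crps_from_quantiles(lambda a: a, 0.5, 9999), 3))  # 0.083
```

The exact value 1/12 follows from the direct definition CRPS = ∫ (F(x) − 1{x ≥ y})² dx for F uniform on (0, 1) and y = 0.5, which matches the quantile-score integral above.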

    Axiomatizations of the Choquet integral on general decision spaces

    We propose an axiomatization of the Choquet integral model for the general case of a heterogeneous product set X = X1 × ... × Xn. Previous characterizations of the Choquet integral have been given for the particular cases X = Y^n and X = R^n. However, this makes the results inapplicable to problems in many fields of decision theory, such as multicriteria decision analysis (MCDA), state-dependent utility, and social choice. For example, in multicriteria decision analysis the elements of X are interpreted as alternatives, characterized by criteria taking values from the sets Xi. Obviously, the identicalness, or even commensurateness, of criteria cannot be assumed a priori. Despite this theoretical gap, the Choquet integral model is quite popular in the MCDA community and is widely used in applied and theoretical work. In fact, the absence of a sufficiently general axiomatic treatment of the Choquet integral has been noted several times in the decision-theoretic literature. In this work we aim to provide the missing results: we construct an axiomatization based on a novel axiomatic system and study its uniqueness properties. We also extend our construction to various particular cases of the Choquet integral and analyse the constraints of the earlier characterizations. Finally, we discuss in detail the implications of our results for applications of the Choquet integral as a model of decision making.
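For reference, the discrete Choquet integral that these axiomatizations target can be stated in its standard finite form (the thesis's general product-set formulation is not reproduced here):

```latex
% Discrete Choquet integral of f : N \to \mathbb{R}, N = \{1, \dots, n\},
% with respect to a capacity \nu (monotone, \nu(\emptyset) = 0):
\[
  C_{\nu}(f) \;=\; \sum_{i=1}^{n} \bigl(f_{(i)} - f_{(i-1)}\bigr)\,
                   \nu\bigl(A_{(i)}\bigr),
  \qquad f_{(0)} := 0,
\]
% where (\cdot) denotes a permutation sorting the values ascending,
% f_{(1)} \le \dots \le f_{(n)}, and A_{(i)} = \{(i), \dots, (n)\}
% collects the indices whose values are at least f_{(i)}.
```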

    Ambiguity in asset pricing and portfolio choice: a review of the literature

    A growing body of empirical evidence suggests that investors’ behavior is not well described by the traditional paradigm of (subjective) expected utility maximization under rational expectations. A literature has arisen that models agents whose choices are consistent with models that are less restrictive than the standard subjective expected utility framework. In this paper we conduct a survey of the existing literature that has explored the implications of decision-making under ambiguity for financial market outcomes, such as portfolio choice and equilibrium asset prices. We conclude that the ambiguity literature has led to a number of significant advances in our ability to rationalize empirical features of asset returns and portfolio decisions, such as the empirical failure of the two-fund separation theorem in portfolio decisions, the modest exposure to risky securities observed for a majority of investors, the home equity preference in international portfolio diversification, the excess volatility of asset returns, the equity premium and the risk-free rate puzzles, and the occurrence of trading break-downs.

    Fuzzy Sets, Fuzzy Logic and Their Applications 2020

    The present book contains the 24 articles accepted and published in the Special Issue “Fuzzy Sets, Fuzzy Logic and Their Applications, 2020” of the MDPI journal Mathematics, which cover a wide range of topics connected to the theory and applications of fuzzy sets, fuzzy logic and their extensions/generalizations. These topics include, among others, fuzzy graphs; fuzzy numbers; fuzzy equations; fuzzy linear spaces; intuitionistic fuzzy sets; soft sets; type-2 fuzzy sets; bipolar fuzzy sets; plithogenic sets; fuzzy decision making; fuzzy governance; fuzzy models in the mathematics of finance; and a philosophical treatise on the connection of scientific reasoning with fuzzy logic. It is hoped that the book will be interesting and useful both to those working in the area of fuzzy sets, fuzzy systems and fuzzy logic, and to those with the proper mathematical background who wish to become familiar with recent advances in fuzzy mathematics, which has become prevalent in almost all sectors of human life and activity.