
    Multi criteria decision making and its applications: a literature review

    This paper presents current techniques used in Multi Criteria Decision Making (MCDM) and their applications. Two basic approaches to MCDM, namely Artificial Intelligence MCDM (AIMCDM) and Classical MCDM (CMCDM), are discussed and investigated. Recent articles related to MCDM, drawn from international journals published in 2008, are collected and analyzed to determine which approach is more common and in which areas these techniques are applied. The paper provides evidence that AIMCDM and CMCDM are currently equally common in MCDM.

    The posterity of Zadeh's 50-year-old paper: A retrospective in 101 Easy Pieces – and a Few More

    This article was commissioned by the 22nd IEEE International Conference on Fuzzy Systems (FUZZ-IEEE) to celebrate the 50th anniversary of Lotfi Zadeh's seminal 1965 paper on fuzzy sets. In addition to Lotfi's original paper, this note itemizes 100 citations of books and papers deemed “important (significant, seminal, etc.)” by 20 of the 21 living IEEE CIS Fuzzy Systems pioneers. Each of the 20 contributors supplied 5 citations, and Lotfi's paper makes the overall list a tidy 101, as in “Fuzzy Sets 101”. This note is not a survey in any real sense of the word, but the contributors did offer short remarks to indicate the reason for inclusion (e.g., historical, topical, seminal) of each citation. Citation statistics are easy to find and notoriously erroneous, so we refrain from reporting them, almost: the exception is that, according to Google Scholar on April 9, 2015, Lotfi's 1965 paper had been cited 55,479 times.

    Failure Prognosis of Wind Turbine Components

    Wind energy is playing an increasingly significant role in the world's energy supply mix. In North America, many utility-scale wind turbines are approaching, or are beyond, the half-way point of their originally anticipated lifespan. Accurate estimation of the times to failure of major turbine components can give wind farm owners insight into how to optimize the life and value of their farm assets. This dissertation deals with fault detection and failure prognosis of critical wind turbine sub-assemblies, including generators, blades, and bearings, based on data-driven approaches. The main aim of the data-driven methods is to use measurement data from the system to forecast the Remaining Useful Life (RUL) of faulty components accurately and efficiently. The main contributions of this dissertation are the application of ALTA lifetime analysis to illustrate a possible relationship between varying loads and generator reliability; a wavelet-based Probability Density Function (PDF) for effectively detecting incipient wind turbine blade failure; an adaptive Bayesian algorithm for modeling the uncertainty inherent in the bearing RUL prediction horizon; and a Hidden Markov Model (HMM) for characterizing bearing damage progression across varying operating states, mimicking the real conditions in which wind turbines operate and recognizing that damage progression is a function of the stress applied to each component, using data from historical failures across three different Canadian wind farms.
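
    As a rough, hedged illustration of the lifetime-analysis side of such work (not the dissertation's ALTA procedure, and with invented failure data), one can fit a parametric life distribution such as a two-parameter Weibull to observed component failure ages and derive a conditional estimate of remaining life for a unit that has already survived some time:

    # Minimal sketch: fit a 2-parameter Weibull to hypothetical generator
    # failure ages (hours) and estimate the median remaining life of a unit
    # that has already survived t0 hours. Data and choices are illustrative only.
    import numpy as np
    from scipy.stats import weibull_min

    failure_hours = np.array([31000.0, 42000.0, 45500.0, 52000.0, 61000.0])  # assumed data

    # Fix the location at 0 so only shape (beta) and scale (eta) are estimated.
    beta, loc, eta = weibull_min.fit(failure_hours, floc=0)

    t0 = 30000.0  # hours already survived
    # Conditional survival S(t | t0) = S(t) / S(t0); solve S(t) = 0.5 * S(t0) for t.
    s0 = weibull_min.sf(t0, beta, scale=eta)
    median_total_life = weibull_min.isf(0.5 * s0, beta, scale=eta)
    print(f"shape={beta:.2f}, scale={eta:.0f} h, median RUL ~ {median_total_life - t0:.0f} h")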

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics of aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1) - 1) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints, as is required by traditional GA algorithms. In addition, the algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
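
    For readers unfamiliar with the ChI, the following minimal sketch computes the discrete Choquet integral of N = 3 inputs with respect to a hand-specified fuzzy measure; the measure values are invented for illustration and are unrelated to the learning procedure described above:

    # Minimal sketch: discrete Choquet integral for N = 3 sources with respect
    # to an illustrative, hand-specified fuzzy measure g defined on all subsets.
    def choquet(h, g):
        """h: dict source -> input value; g: dict frozenset(sources) -> measure value."""
        order = sorted(h, key=h.get, reverse=True)    # h(x_(1)) >= h(x_(2)) >= ...
        total, prev_g = 0.0, 0.0
        for i in range(len(order)):
            A_i = frozenset(order[: i + 1])           # top-i sources
            total += h[order[i]] * (g[A_i] - prev_g)  # h(x_(i)) * [g(A_i) - g(A_(i-1))]
            prev_g = g[A_i]
        return total

    # Monotone fuzzy measure on sources {a, b, c}: g(empty) = 0, g(full set) = 1.
    g = {frozenset(): 0.0,
         frozenset("a"): 0.3, frozenset("b"): 0.4, frozenset("c"): 0.2,
         frozenset("ab"): 0.8, frozenset("ac"): 0.5, frozenset("bc"): 0.6,
         frozenset("abc"): 1.0}
    h = {"a": 0.9, "b": 0.5, "c": 0.7}
    print(choquet(h, g))  # 0.66

    Learning then amounts to treating the g(.) values as the 2^N variables referenced above, subject to the monotonicity constraints.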

    Appropriate choice of aggregation operators in fuzzy decision support systems

    Fuzzy logic provides a mathematical formalism for a unified treatment of vagueness and imprecision that are ever present in decision support and expert systems in many areas. The choice of aggregation operators is crucial to the behavior of the system that is intended to mimic human decision making. This paper discusses how aggregation operators can be selected and adjusted to fit empirical data: a series of test cases. Both parametric and nonparametric regression are considered and compared. A practical application of the proposed methods to electronic implementation of clinical guidelines is presented.
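
    As a toy, assumed illustration of the parametric-fitting idea (not the regression procedure of the paper itself), the sketch below fits the exponent of a power-mean aggregation operator to a handful of made-up test cases by least squares:

    # Minimal sketch: fit the exponent p of a power mean
    # M_p(x) = (mean(x_i^p))^(1/p) to empirical test cases by least squares.
    # The test cases are invented for illustration.
    import numpy as np
    from scipy.optimize import minimize_scalar

    cases = [  # (input scores, expert's aggregated score) -- assumed data
        (np.array([0.2, 0.8]), 0.40),
        (np.array([0.5, 0.5]), 0.50),
        (np.array([0.1, 0.9, 0.6]), 0.40),
        (np.array([0.7, 0.9]), 0.78),
    ]

    def power_mean(x, p):
        return np.mean(x ** p) ** (1.0 / p)

    def sse(p):
        return sum((power_mean(x, p) - y) ** 2 for x, y in cases)

    res = minimize_scalar(sse, bounds=(0.1, 10.0), method="bounded")
    print(f"fitted exponent p = {res.x:.2f}, squared error = {res.fun:.4f}")

    A nonparametric alternative would smooth the test cases directly, at the cost of a less interpretable operator.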

    Explainable contextual data driven fusion

    Numerous applications require the intelligent combining of disparate sensor data streams to create a more complete and enhanced observation in support of underlying tasks like classification, regression, or decision making. This paper focuses on two underappreciated and often overlooked parts of information fusion: explainability and context. Due to the rapidly increasing deployment and complexity of machine learning solutions, it is critical that the humans who deploy these algorithms can understand why and how a given algorithm works, and can determine when an algorithm is suitable for a particular instance of the problem. The first half of this paper outlines a new similarity measure for capacities and integrals, which is used to compare machine-learned fusion solutions and to explain what a single fusion solution has learned. The second half of the paper focuses on contextual fusion with respect to incomplete (limited-knowledge) models and metadata for unmanned aerial vehicles (UAVs). Example UAV metadata includes platform (e.g., GPS, IMU) and environmental (e.g., weather, solar position) data. Incomplete models herein result from limitations of machine learning related to under-sampling of training data. To address these challenges, a new contextually adaptive online Choquet integral is outlined.
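
    The abstract does not spell out the similarity measure itself, so the sketch below is only an assumed stand-in: it compares two capacities (fuzzy measures) over the same sources with a normalized L1 distance across all subsets, which is not the measure proposed in the paper:

    # Minimal sketch: compare two fuzzy measures (capacities) on the same sources
    # via a normalized L1 distance over all subsets. Illustrative only; this is
    # NOT the similarity measure proposed in the paper.
    from itertools import combinations

    def all_subsets(sources):
        for k in range(len(sources) + 1):
            yield from (frozenset(c) for c in combinations(sources, k))

    def capacity_similarity(g1, g2, sources):
        subsets = list(all_subsets(sources))
        dist = sum(abs(g1[s] - g2[s]) for s in subsets) / len(subsets)
        return 1.0 - dist  # 1 = identical capacities, 0 = maximally different

    sources = "abc"
    g_min = {s: (1.0 if len(s) == len(sources) else 0.0) for s in all_subsets(sources)}  # minimum-like
    g_max = {s: (0.0 if len(s) == 0 else 1.0) for s in all_subsets(sources)}             # maximum-like
    print(capacity_similarity(g_min, g_max, sources))  # 0.25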

    Incorporating fuzzy-based methods to deep learning models for semantic segmentation

    This thesis focuses on improving the workflow of semantic segmentation through a combination of reducing model complexity, improving segmentation accuracy, and making semantic segmentation results more reliable and robust. Semantic segmentation refers to pixel-level classification, the objective of which is to classify each pixel of the input image into different categories. The process typically consists of three steps: model construction, training, and application. Thus, in this thesis, fuzzy-based techniques are utilized in these three steps to improve the semantic segmentation workflow. The widely used semantic segmentation models normally extract and aggregate spatial information and channel-wise features simultaneously. In order to achieve promising segmentation performance, numerous learnable parameters are required, which increase the model's complexity. Thus, decoupling the information fusion tasks is an important approach in the exploration of semantic segmentation models. Fuzzy integrals are effective for fusing information, and some special fuzzy integral operators, such as ordered weighted averaging (OWA) operators, are free of parameters and easy to implement in deep-learning models. Therefore, a novel fuzzy integral module is designed that includes an additional convolutional layer for feature map dimensionality reduction and an OWA layer for information fusion across feature channels. The proposed fuzzy integral module can be flexibly integrated into existing semantic segmentation models, helping to reduce parameters and save memory. Following the exploration of semantic segmentation models, the collected data is used to train the model. Note that the precise delineation of object boundaries is a key aspect of semantic segmentation. In order to make the segmentation model pay more attention to the boundary, a special boundary-wise loss function is desirable in the segmentation model training phase. Fuzzy rough sets are normally utilized to measure the relationship between two sets. Thus, in this thesis, to improve boundary accuracy, fuzzy rough sets are leveraged to calculate a boundary-wise loss, which is the difference between the boundary sets of the predicted image and the ground truth image. After completing the training process with the proposed novel loss, the next step for semantic segmentation is to apply the pre-trained segmentation model to segment new images. One challenge is that there are no ground truth images with which to quantify segmentation quality in the real-world application of semantic segmentation models. Therefore, it is crucial to design a quality quantification algorithm to infer image-level segmentation performance and improve the credibility of semantic segmentation models. In this thesis, a novel quality quantification algorithm based on fuzzy uncertainty is proposed as part of the model inference process without accessing ground truth images. Moreover, to further explore the practical application of the proposed quality quantification algorithm in clinical settings, this thesis goes beyond public datasets and delves into a real-world case study involving cardiac MRI segmentation. Additionally, as clinicians also provide a level of uncertainty to measure their confidence when annotating to generate ground truth images (human-based uncertainty), the correlation between human-based uncertainty and AI-based uncertainty (calculated by the proposed quality quantification algorithm) is deeply investigated.
    Comprehensive experiments are conducted in this thesis to demonstrate that the integration of fuzzy-based technologies can enhance the efficiency, accuracy, and reliability of semantic segmentation models compared to those without such methods.
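
    To make the OWA-layer idea concrete, here is a minimal PyTorch-style sketch of parameter-free ordered-weighted-averaging fusion across feature channels; the 1x1 reduction convolution and the uniform OWA weights are assumptions for illustration, not the exact module proposed in the thesis:

    # Minimal sketch: OWA-style fusion across feature channels (PyTorch assumed).
    # Channel activations at each spatial location are sorted and combined with
    # fixed OWA weights, so the fusion step adds no learnable parameters.
    import torch
    import torch.nn as nn

    class OWAFusion(nn.Module):
        def __init__(self, in_channels, reduced_channels):
            super().__init__()
            # 1x1 conv reduces channel dimensionality before fusion (illustrative choice).
            self.reduce = nn.Conv2d(in_channels, reduced_channels, kernel_size=1)
            # Fixed OWA weights; uniform weights make this an ordered mean.
            self.register_buffer("owa_w",
                                 torch.full((reduced_channels,), 1.0 / reduced_channels))

        def forward(self, x):                          # x: (B, C, H, W)
            z = self.reduce(x)                         # (B, C', H, W)
            z_sorted, _ = torch.sort(z, dim=1, descending=True)
            # Weighted sum over the sorted channel axis -> one fused feature map.
            return (z_sorted * self.owa_w.view(1, -1, 1, 1)).sum(dim=1, keepdim=True)

    feat = torch.randn(2, 64, 32, 32)
    print(OWAFusion(64, 16)(feat).shape)               # torch.Size([2, 1, 32, 32])

    Other OWA weightings (e.g., weights concentrated on the largest activations) shift the fusion from an average toward a maximum while still adding no trainable parameters.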

    Fuzzy Logic and Its Uses in Finance: A Systematic Review Exploring Its Potential to Deal with Banking Crises

    The major success of fuzzy logic in the field of remote control opened the door to its application in many other fields, including finance. However, there has not been an updated and comprehensive literature review on the uses of fuzzy logic in the financial field. For that reason, this study attempts to critically examine fuzzy logic as an effective, useful method to be applied to financial research and, particularly, to the management of banking crises. The data sources were Web of Science and Scopus, followed by an assessment of the records according to pre-established criteria and an arrangement of the information along two main axes: financial markets and corporate finance. A major finding of this analysis is that fuzzy logic has not yet been used to address banking crises or as an alternative to ensure the resolvability of banks while minimizing the impact on the real economy. Therefore, we consider this article relevant for supervisory and regulatory bodies, as well as for banks and academic researchers, since it opens the door to several new research axes on banking crisis analyses using artificial intelligence techniques.