
    A Fuzzy-based Framework to Support Multicriteria Design of Mechatronic Systems

    Designing a mechatronic system is a complex task, since it involves a large number of system components of multi-disciplinary nature in the presence of interacting design objectives. Currently, sequential design is widely used in industry; it treats the different domains and their corresponding design objectives separately, leading to a functional but not necessarily optimal result. Consequently, the need arises for a systematic and multi-objective design methodology. The authors have previously presented a new conceptual design approach based on a multi-criteria profile for mechatronic systems, which uses a series of nonlinear fuzzy-based aggregation functions to facilitate decision-making for design evaluation in the presence of interacting criteria. Choquet fuzzy integrals are one of the most expressive and reliable preference models used in decision theory for multicriteria decision making. They perform a weighted aggregation by means of fuzzy measures, assigning a weight to any coalition of criteria. This enables designers to model both the importance of and the interactions among criteria, thus covering an important range of possible decision outcomes. However, specifying the fuzzy measures involves many parameters and is very difficult when relying only on the designer's intuition. In this paper, we discuss three different methods of fuzzy measure identification tailored to a mechatronic design process and exemplified by a case study of designing a vision-guided quadrotor drone. The results obtained from each method are discussed at the end of the paper.
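
    To make the aggregation step concrete, the Python sketch below computes a discrete Choquet integral over three hypothetical design criteria; the criterion names, scores, and fuzzy-measure values are illustrative assumptions, not taken from the paper's quadrotor case study.

        # Minimal sketch of the discrete Choquet integral used for multicriteria
        # aggregation; all criteria, scores, and measure values are assumed.

        def choquet_integral(scores, mu):
            """Aggregate per-criterion scores in [0, 1] with a fuzzy measure mu.

            mu maps each frozenset (coalition) of criteria to a value in [0, 1],
            with mu of the empty set equal to 0 and mu of all criteria equal to 1.
            """
            ordered = sorted(scores, key=scores.get)   # increasing scores
            total, prev = 0.0, 0.0
            for i, c in enumerate(ordered):
                coalition = frozenset(ordered[i:])     # criteria scoring >= current
                total += (scores[c] - prev) * mu[coalition]
                prev = scores[c]
            return total

        # Hypothetical criteria for a drone design concept
        scores = {"cost": 0.6, "payload": 0.8, "autonomy": 0.5}
        mu = {
            frozenset(): 0.0,
            frozenset({"cost"}): 0.3,
            frozenset({"payload"}): 0.4,
            frozenset({"autonomy"}): 0.3,
            frozenset({"cost", "payload"}): 0.8,       # positive interaction
            frozenset({"cost", "autonomy"}): 0.5,
            frozenset({"payload", "autonomy"}): 0.6,
            frozenset({"cost", "payload", "autonomy"}): 1.0,
        }
        print(round(choquet_integral(scores, mu), 3))  # 0.66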

    A multi-attribute decision making procedure using fuzzy numbers and hybrid aggregators

    The classical Analytical Hierarchy Process (AHP) has two limitations. First, it disregards the uncertainty that is usually embedded in the data or information expressed by humans. Second, it ignores the interdependencies among attributes during aggregation. The application of fuzzy numbers helps confront the former issue, whereas the Choquet integral operator helps deal with the latter. However, applying fuzzy numbers in multi-attribute decision making (MADM) demands additional steps and inputs from the decision maker(s). Similarly, identifying the monotone measure weights required by the Choquet integral involves a huge number of computational steps and a large amount of input from decision makers, especially as the number of attributes increases. Therefore, this research proposed a MADM procedure that is able to reduce the number of computational steps and the amount of information required from decision makers when dealing with these two aspects simultaneously. To attain the primary goal of this research, five phases were executed. First, the concept of fuzzy set theory and its application in AHP were investigated. Second, an analysis of aggregation operators was conducted. Third, the investigation was narrowed to the Choquet integral and its associated monotone measure. Subsequently, the proposed procedure was developed by converging five major components, namely Factor Analysis, the Fuzzy-Linguistic Estimator, the Choquet Integral, Mikhailov's Fuzzy AHP, and the Simple Weighted Average. Finally, the feasibility of the proposed procedure was verified by solving a real MADM problem in which the image of three stores located in Sabak Bernam, Selangor, Malaysia was analysed from the homemakers' perspective. This research has the potential to motivate more decision makers to simultaneously account for uncertainty in human data and interdependencies among attributes when solving MADM problems.
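
    As a small illustration of the fuzzy-number side of such a procedure, the Python sketch below combines triangular fuzzy ratings with a simple weighted average and defuzzifies the result by its centroid; the linguistic scale, ratings, and weights are assumed values and do not reproduce the thesis's full procedure.

        # Minimal sketch: weighted average of triangular fuzzy numbers (l, m, u)
        # followed by centroid defuzzification; all values are assumed.

        def weighted_average_tfn(ratings, weights):
            """Weighted average of triangular fuzzy numbers (l, m, u)."""
            return tuple(sum(w * r[k] for r, w in zip(ratings, weights))
                         for k in range(3))

        def centroid(tfn):
            """Defuzzify a triangular fuzzy number by its centroid."""
            return sum(tfn) / 3.0

        # Hypothetical linguistic scale and one store's ratings on three attributes
        GOOD, FAIR, POOR = (7, 9, 10), (4, 6, 8), (1, 3, 5)
        ratings = [GOOD, FAIR, GOOD]       # e.g. price, service, location
        weights = [0.5, 0.3, 0.2]          # attribute weights summing to 1

        overall = weighted_average_tfn(ratings, weights)
        print(tuple(round(v, 2) for v in overall), round(centroid(overall), 2))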

    A short survey on the usage of Choquet integral and its associated fuzzy measure in multiple attribute analysis

    The Choquet integral operator is currently making inroads into many real multiple attribute analyses due to its ability to model the interactions that attributes typically exhibit during the aggregation process. Unfortunately, identifying the 2^n fuzzy measure values required before employing the Choquet integral normally becomes very complex as the number of attributes, n, increases. On that note, this paper reviews some of the methods that have been proposed to reduce the complexity of identifying fuzzy measure values, together with their pros and cons. The paper begins with a discussion of the aggregation process in multiple attribute analysis, then focuses on the usage of the Choquet integral and its associated fuzzy measure before examining some of the fuzzy measure identification methods. A simple numerical example demonstrating the merit of using the Choquet integral and indications for future research are also provided. To some extent, the paper should help stimulate new ideas for developing simpler or enhanced versions of fuzzy measure identification methods.
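
    As one concrete illustration of reducing that complexity (mentioned here for context only, not as any particular surveyed method), a Sugeno lambda-measure generates all coalition values from just n densities; the Python sketch below solves for lambda by bisection using assumed density values.

        # Minimal sketch of a Sugeno lambda-measure built from n densities g_i,
        # avoiding the direct elicitation of all 2^n fuzzy measure values.
        # Densities are assumed; the solver covers only the sum(g_i) < 1 case.

        def sugeno_lambda(densities, lo=1e-9, hi=1e6, iters=200):
            """Bisection root of prod(1 + lam*g_i) = 1 + lam for lam > 0."""
            def f(lam):
                prod = 1.0
                for g in densities:
                    prod *= 1 + lam * g
                return prod - (1 + lam)
            for _ in range(iters):
                mid = (lo + hi) / 2.0
                if f(mid) > 0:
                    hi = mid
                else:
                    lo = mid
            return (lo + hi) / 2.0

        def measure(subset, densities, lam):
            """Lambda-measure of a subset of attribute indices."""
            prod = 1.0
            for i in subset:
                prod *= 1 + lam * densities[i]
            return (prod - 1.0) / lam

        densities = [0.3, 0.4, 0.2]        # one density per attribute (assumed)
        lam = sugeno_lambda(densities)
        print(round(lam, 4))                                 # interaction parameter
        print(round(measure([0, 1, 2], densities, lam), 4))  # full set, ~1.0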

    Fuzzy-TLX: using fuzzy integrals for evaluating human mental workload with NASA-Task Load indeX in laboratory and field studies

    The aim of this study was to assess mental workload in settings where various load sources must be integrated to derive reliable workload estimates. We report a new algorithm for computing weights from qualitative fuzzy integrals and apply it to the National Aeronautics and Space Administration Task Load indeX (NASA-TLX) subscales in order to replace the standard pair-wise weighting technique (PWT). Two empirical studies are reported: (1) a laboratory experiment in which age- and task-related variables were investigated in 53 male volunteers, and (2) a field study in which task- and job-related variables were studied on aircrews during 48 commercial flights. The results were as follows: (i) in the experimental setting, fuzzy estimates were highly correlated with classical (PWT-based) estimates; (ii) in real work conditions, replacing PWT with automated fuzzy treatments simplified NASA-TLX completion; (iii) the algorithm for computing fuzzy estimates provides a new classification procedure sensitive to various variables of work environments; and (iv) subjective and objective measures can be used for the fuzzy aggregation of NASA-TLX subscales.
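
    For reference, the Python sketch below computes the conventional weighted NASA-TLX score from the standard pair-wise weighting technique (PWT) that the study seeks to replace; the subscale ratings and pairwise choices are illustrative assumptions, and the fuzzy-integral algorithm itself is not reproduced here.

        # Minimal sketch of the standard PWT-weighted NASA-TLX score (the baseline
        # the paper replaces); ratings and pairwise choices below are assumed.

        SUBSCALES = ["mental", "physical", "temporal",
                     "performance", "effort", "frustration"]

        def tlx_pwt_score(ratings, pair_choices):
            """Weighted workload from 0-100 subscale ratings and 15 pairwise wins.

            pair_choices lists the winning subscale of each of the 15 pairwise
            comparisons; each subscale's tally becomes its weight (0-5).
            """
            assert len(pair_choices) == 15
            weights = {s: pair_choices.count(s) for s in SUBSCALES}
            return sum(ratings[s] * weights[s] for s in SUBSCALES) / 15.0

        ratings = {"mental": 70, "physical": 20, "temporal": 55,
                   "performance": 40, "effort": 65, "frustration": 30}
        pair_choices = (["mental"] * 5 + ["effort"] * 4 + ["temporal"] * 3 +
                        ["performance"] * 2 + ["frustration"] * 1)
        print(round(tlx_pwt_score(ratings, pair_choices), 1))  # 59.0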

    EXPLAINABLE FEATURE- AND DECISION-LEVEL FUSION

    Information fusion is the process of aggregating knowledge from multiple data sources to produce more consistent, accurate, and useful information than any individual source can provide. In general, there are three primary sources of data/information: humans, algorithms, and sensors. Typically, objective data (e.g., measurements) arise from sensors. Using these data sources, applications such as computer vision and remote sensing have long been applying fusion at different levels (signal, feature, decision, etc.). Furthermore, daily advancements in engineering technologies like smart cars, which operate in complex and dynamic environments using multiple sensors, are raising both the demand for and the complexity of fusion. There is a great need to discover new theories for combining and analyzing heterogeneous data arising from one or more sources.

    The work collected in this dissertation addresses the problem of feature- and decision-level fusion. Specifically, it focuses on fuzzy Choquet integral (ChI)-based data fusion methods. Most mathematical approaches to data fusion combine inputs under the assumption of independence between them. However, there are often rich interactions (e.g., correlations) between inputs that should be exploited. The ChI is a powerful aggregation tool capable of modeling these interactions. Consider the fusion of m sources, where there are 2^m unique subsets (interactions); the ChI is capable of learning the worth of each of these possible source subsets. However, the complexity of fuzzy integral-based methods grows quickly, as the number of trainable parameters for the fusion of m sources scales as 2^m. Hence, a large amount of training data is required to avoid over-fitting. This work addresses the over-fitting problem of ChI-based data fusion with novel regularization strategies. These strategies alleviate over-fitting when training with limited data and also enable the user to consciously push the learned models toward a predefined, or perhaps known, structure.

    In addition, the existing methods for training the ChI for decision- and feature-level data fusion involve quadratic programming (QP). The QP-based learning approach has a high space complexity, which has limited the practical application of ChI-based data fusion to six or fewer input sources. To address this, the work introduces an online training algorithm for learning the ChI: an iterative gradient descent approach that processes one observation at a time, enabling ChI-based data fusion on higher-dimensional data sets.

    In many real-world data fusion applications, it is also imperative to have an explanation or interpretation. This may include providing information on what was learned, what the worth of individual sources is, why a decision was reached, what evidence and processes were used, and what confidence the system has in its decision. However, most existing machine learning solutions for data fusion, e.g., deep learning, are black boxes. In this work, we designed methods and metrics that help answer these questions of interpretation, and we also developed visualization methods that help users better understand the machine learning solution and its behavior for different instances of data.
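
    The Python sketch below illustrates, in a heavily simplified and hypothetical form, the idea of fitting ChI fuzzy-measure values by online gradient descent on squared error, one observation at a time; it is not the dissertation's algorithm (monotonicity is only crudely enforced by clipping, and the data are synthetic).

        # Hypothetical, simplified sketch of online (one-observation-at-a-time)
        # gradient descent on the fuzzy measure of a Choquet integral (ChI).

        import random

        def choquet(x, mu):
            """Discrete Choquet integral of input vector x under fuzzy measure mu."""
            order = sorted(range(len(x)), key=lambda i: x[i])
            total, prev = 0.0, 0.0
            for k, i in enumerate(order):
                total += (x[i] - prev) * mu.get(frozenset(order[k:]), 0.0)
                prev = x[i]
            return total

        def sgd_step(x, y, mu, lr=0.05):
            """Update only the measure values touched by this observation."""
            order = sorted(range(len(x)), key=lambda i: x[i])
            err = choquet(x, mu) - y
            prev = 0.0
            for k, i in enumerate(order):
                coalition = frozenset(order[k:])
                grad = 2.0 * err * (x[i] - prev)   # d(err^2)/d mu(coalition)
                mu[coalition] = min(1.0, max(0.0, mu.get(coalition, 0.0) - lr * grad))
                prev = x[i]
            mu[frozenset(range(len(x)))] = 1.0     # keep the boundary condition
            return mu

        # Synthetic target: the mean of three sources; the learned measure should
        # approach the additive measure mu(A) = |A| / 3.
        random.seed(0)
        mu = {}
        for _ in range(2000):
            x = [random.random() for _ in range(3)]
            mu = sgd_step(x, sum(x) / 3.0, mu)
        print({tuple(sorted(A)): round(v, 2) for A, v in mu.items()})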