
    Fuzzy fusion techniques for linear features detection in multitemporal SAR images


    Fuzzy Operator Trees for Modeling Utility Functions

    In this thesis, we propose a method for modeling utility (rating) functions based on a novel concept called Fuzzy Operator Tree (FOT for short). As the notion suggests, this method makes use of techniques from fuzzy set theory and implements a fuzzy rating function, that is, a utility function that maps to the unit interval, where 0 corresponds to the lowest and 1 to the highest evaluation. Even though the original motivation comes from quality control, FOTs are completely general and widely applicable. Our approach allows a human expert to specify a model in the form of an FOT in a quite convenient and intuitive way. To this end, he simply has to split evaluation criteria into sub-criteria in a recursive manner, and to determine in which way these sub-criteria ought to be combined: conjunctively, disjunctively, or by means of an averaging operator. The result of this process is the qualitative structure of the model. A second step, then, is to parameterize the model. To support or even free the expert from this step, we develop a method for calibrating the model on the basis of exemplary ratings, that is, in a purely data-driven way. This method, which makes use of optimization techniques from the field of evolutionary algorithms, constitutes the second major contribution of the thesis. The third contribution of the thesis is a method for evaluating an FOT in a cost-efficient way. Roughly speaking, an FOT can be seen as an aggregation function that combines the evaluations of a number of basic criteria into an overall rating of an object. Essentially, the cost of computing this rating is hence given by the sum of the evaluation costs of the basic criteria. In practice, however, the precise utility degree is often not needed. Instead, it is enough to know whether it lies above or below an important threshold value. In such cases, the evaluation process, understood as a sequential evaluation of basic criteria, can be stopped as soon as this question can be answered in a unique way. Of course, the (expected) number of basic criteria and, therefore, the (expected) evaluation cost will then strongly depend on the order of the evaluations, and this is what is optimized by the methods that we have developed.
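
    To make the construction concrete, here is a minimal sketch of an FOT, assuming textbook instantiations of the three aggregation modes (min as t-norm, max as t-conorm, a weighted mean as the averaging operator); the thesis works with parameterized operator families, and the criteria, weights, and object below are invented for illustration.

```python
# A minimal Fuzzy Operator Tree sketch: leaves score basic criteria in
# [0, 1]; inner nodes combine children conjunctively, disjunctively, or by
# an averaging operator. Operator choices and criteria are assumptions.
from dataclasses import dataclass, field
from typing import Callable, List, Union

@dataclass
class Leaf:
    evaluate: Callable[[dict], float]  # basic criterion -> score in [0, 1]

@dataclass
class Node:
    op: str                                   # "and" | "or" | "avg"
    children: List[Union["Node", Leaf]]
    weights: List[float] = field(default_factory=list)  # "avg" only

    def evaluate(self, obj: dict) -> float:
        scores = [c.evaluate(obj) for c in self.children]
        if self.op == "and":
            return min(scores)                # t-norm: conjunctive
        if self.op == "or":
            return max(scores)                # t-conorm: disjunctive
        w = self.weights or [1.0 / len(scores)] * len(scores)
        return sum(wi * s for wi, s in zip(w, scores))  # averaging

# Hypothetical rating: (finish AND durability), averaged with a price score.
fot = Node("avg",
           [Node("and", [Leaf(lambda o: o["finish"]),
                         Leaf(lambda o: o["durability"])]),
            Leaf(lambda o: o["price_score"])],
           weights=[0.7, 0.3])
print(fot.evaluate({"finish": 0.9, "durability": 0.6, "price_score": 0.8}))
# 0.7 * min(0.9, 0.6) + 0.3 * 0.8 ≈ 0.66
```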

    Evaluation of Metaverse integration of freight fluidity measurement alternatives using fuzzy Dombi EDAS model

    Developments in transportation systems, changes in consumerism trends, and conditions such as COVID-19 have increased both the demand for and the load on freight transportation. Since various companies are transporting goods all over the world, data and freight fluidity measurement systems are needed to evaluate the sustainability, speed, and resiliency of freight transportation systems. In this study, an integrated decision-making model is proposed to prioritize the freight fluidity measurement alternatives by their advantages. The proposed model is composed of two main stages. In the first stage, the Dombi-norm-based Logarithmic Methodology of Additive Weights (LMAW) is used to find the weights of the criteria. In the second stage, an extended Evaluation based on the Distance from Average Solution (EDAS) method with a Dombi function for aggregation is presented to determine the final ranking of the alternatives. Three freight fluidity measurement alternatives are proposed, namely doing nothing, integrating freight activities into the Metaverse for measuring fluidity, and forming global governance of freight activities for measuring fluidity through available data. The multi-criteria decision-making process uses thirteen criteria, grouped under four main aspects, namely technology, governance, efficiency, and environmental sustainability, together with a case study in which a common framework is formed for the experts to evaluate the alternatives against the criteria. The results of the study indicate that integrating freight activities into the Metaverse for measuring fluidity is the most advantageous alternative, whereas doing nothing is the least advantageous one.
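
    For orientation, here is a minimal sketch of the classical (crisp) EDAS procedure that the study extends; the paper's variant adds fuzzy numbers and Dombi aggregation operators, which are omitted here, and the score matrix and weights below are invented stand-ins rather than the case-study data.

```python
# Classical EDAS sketch. Rows of X are alternatives, columns are benefit
# criteria; w plays the role of the LMAW-derived criterion weights.
import numpy as np

def edas(X: np.ndarray, w: np.ndarray) -> np.ndarray:
    """Return appraisal scores in [0, 1]; higher ranks better."""
    av = X.mean(axis=0)                   # average solution per criterion
    pda = np.maximum(0, X - av) / av      # positive distance from average
    nda = np.maximum(0, av - X) / av      # negative distance from average
    sp, sn = pda @ w, nda @ w             # weighted sums of PDA and NDA
    nsp = sp / sp.max()                   # normalized SP
    nsn = 1 - sn / sn.max()               # normalized SN (inverted)
    return (nsp + nsn) / 2                # appraisal score

# Invented scores for the three alternatives in the abstract: do nothing,
# Metaverse integration, global governance (four stand-in criteria).
X = np.array([[0.2, 0.3, 0.4, 0.2],
              [0.9, 0.7, 0.8, 0.6],
              [0.6, 0.6, 0.7, 0.5]])
w = np.array([0.3, 0.25, 0.25, 0.2])
print(edas(X, w))  # highest score = most advantageous alternative
```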

    Uncertainty-aware video visual analytics of tracked moving objects

    Vast amounts of video data render manual video analysis impractical, while recent automatic video analytics techniques suffer from insufficient performance. To alleviate these issues, we present a scalable and reliable approach that exploits the visual analytics methodology. This involves the user in the iterative process of exploration, hypothesis generation, and verification. Scalability is achieved by interactive filter definitions on trajectory features extracted by the automatic computer vision stage. We establish the interface between user and machine by adopting the VideoPerpetuoGram (VPG) for visualization, and we enable users to provide filter-based relevance feedback. Additionally, users are supported in deriving hypotheses by context-sensitive statistical graphics. To allow for reliable decision making, we gather the uncertainties introduced by the computer vision step, communicate this information to users through uncertainty visualization, and support fuzzy hypothesis formulation for interacting with the machine. Finally, we demonstrate the effectiveness of our approach on the video analysis mini challenge of the IEEE Symposium on Visual Analytics Science and Technology 2009.
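
    As an illustration of filter-based, fuzzy hypothesis formulation over trajectory features, the sketch below keeps tracks whose fuzzy relevance exceeds a threshold; the feature names, membership ramps, thresholds, and min-combination are illustrative assumptions, not the paper's exact design.

```python
# Fuzzy trajectory filter sketch: a hypothesis over extracted features,
# discounted by detector confidence so computer-vision uncertainty
# propagates into the decision. All values are invented.

def ramp(x: float, lo: float, hi: float) -> float:
    """Piecewise-linear membership: 0 below lo, 1 above hi."""
    if x <= lo:
        return 0.0
    if x >= hi:
        return 1.0
    return (x - lo) / (hi - lo)

def relevance(track: dict) -> float:
    # Fuzzy hypothesis "fast AND long-lived object", combined with min
    # (a t-norm) and weighted by the tracker's confidence.
    fast = ramp(track["speed"], 1.0, 4.0)            # m/s
    long_lived = ramp(track["duration"], 5.0, 20.0)  # seconds
    return min(fast, long_lived) * track["confidence"]

tracks = [{"id": 1, "speed": 3.5, "duration": 18.0, "confidence": 0.9},
          {"id": 2, "speed": 0.8, "duration": 30.0, "confidence": 0.95}]
print([t["id"] for t in tracks if relevance(t) > 0.5])  # -> [1]
```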

    Blind restoration of images with penalty-based decision making: a consensus approach

    In this thesis, we show a relationship between fuzzy decision making and image processing. Various applications of the consensus methodology to image noise reduction are introduced, and a new approach is presented to deal with non-stationary Gaussian noise and spatially non-stationary noise in MRI.

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications; e.g., remote sensing, smart cars, Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than what any one individual input can provide, often the challenge is to determine the underlying mathematics for aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) monotonicity constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (which is linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints; therefore, there is no need to explicitly enforce the constraints as is required by traditional GA algorithms. In addition, this algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning based feature-level fusion in hyperspectral image processing to enhance pixel-level classification.
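
    The following is a minimal sketch of the discrete Choquet integral with respect to a fuzzy measure, which also makes the 2^N-variable count tangible; the FM values below are invented (but monotone with respect to set inclusion), and none of the dissertation's learning, imputation, or compression machinery is shown.

```python
# Discrete Choquet integral: sort inputs descending and accumulate each
# input times the FM increment of the growing coalition of top inputs.

def choquet(h: list, g: dict) -> float:
    """h: N input values; g: FM mapping frozenset of indices -> [0, 1]."""
    order = sorted(range(len(h)), key=lambda i: h[i], reverse=True)
    total, prev = 0.0, 0.0
    for k in range(1, len(h) + 1):
        a_k = frozenset(order[:k])     # coalition of the k largest inputs
        total += h[order[k - 1]] * (g[a_k] - prev)
        prev = g[a_k]
    return total

# FM for N = 3 inputs: 2^3 = 8 values, one per subset of sources.
g = {frozenset(): 0.0, frozenset({0}): 0.4, frozenset({1}): 0.3,
     frozenset({2}): 0.2, frozenset({0, 1}): 0.8, frozenset({0, 2}): 0.6,
     frozenset({1, 2}): 0.5, frozenset({0, 1, 2}): 1.0}
print(choquet([0.7, 0.9, 0.4], g))  # 0.9*0.3 + 0.7*0.5 + 0.4*0.2 = 0.70
```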

    Evaluating the Impact of Defeasible Argumentation as a Modelling Technique for Reasoning under Uncertainty

    Limited work exists on the comparison of distinct knowledge-based approaches in Artificial Intelligence (AI) for non-monotonic reasoning, and in particular on the examination of their inferential and explanatory capacity. Non-monotonicity, or defeasibility, allows the retraction of a conclusion in the light of new information. This pattern is similar to human reasoning, which draws conclusions in the absence of complete information but allows them to be corrected once new pieces of evidence arise. Thus, this thesis focuses on a comparison of three approaches in AI for implementing non-monotonic reasoning models of inference, namely expert systems, fuzzy reasoning, and defeasible argumentation. Three applications from the fields of decision-making in healthcare and knowledge representation and reasoning were selected from real-world contexts for evaluation: human mental workload modelling, computational trust modelling, and mortality occurrence modelling with biomarkers. The link between these applications comes from their presumptively non-monotonic nature: they present incomplete, ambiguous, and retractable pieces of evidence, so reasoning applied to them is likely suitable for being modelled by non-monotonic reasoning systems. An experiment was performed by exploiting six deductive knowledge bases produced with the aid of domain experts. These were coded into models built upon the selected reasoning approaches and were subsequently elicited with real-world data. The numerical inferences produced by these models were analysed according to common metrics of evaluation for each field of application. For the examination of explanatory capacity, properties such as understandability, extensibility, and post-hoc interpretability were meticulously described and qualitatively compared. Findings suggest that the variance of the inferences produced by expert systems and fuzzy reasoning models was higher, indicating poor stability. In contrast, the variance of argument-based models was lower, showing a superior stability of their inferences across different system configurations. In addition, when compared in a context with large amounts of conflicting information, defeasible argumentation exhibited a stronger potential for conflict resolution while producing robust inferences. An in-depth discussion of the explanatory capacity showed how defeasible argumentation can lead to the construction of non-monotonic models with appealing properties of explainability, compared to those built with expert systems and fuzzy reasoning. The originality of this research lies in the quantification of the impact of defeasible argumentation. It illustrates the construction of an extensive number of non-monotonic reasoning models through a modular design. In addition, it exemplifies how these models can be exploited for performing non-monotonic reasoning and producing quantitative inferences in real-world applications. It contributes to the field of non-monotonic reasoning by situating defeasible argumentation among similar approaches through a novel empirical comparison.
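
    As a minimal illustration of the non-monotonic behaviour at stake, the sketch below computes the grounded extension of a Dung-style abstract argumentation framework: adding an attacking argument retracts a previously accepted conclusion. The arguments and attack relation are invented toy examples, far simpler than the thesis's models.

```python
# Grounded-extension sketch: iteratively accept arguments all of whose
# attackers have already been defeated by accepted arguments.

def grounded_extension(args: set, attacks: set) -> set:
    """attacks is a set of (attacker, target) pairs."""
    accepted, defeated = set(), set()
    changed = True
    while changed:
        changed = False
        for a in args - accepted - defeated:
            attackers = {x for (x, y) in attacks if y == a}
            if attackers <= defeated:     # every attacker is already out
                accepted.add(a)
                defeated |= {y for (x, y) in attacks if x == a}
                changed = True
    return accepted

args = {"flies", "is_penguin"}
print(grounded_extension(args, set()))
# -> {'flies', 'is_penguin'}: with no attacks, both conclusions stand.
print(grounded_extension(args, {("is_penguin", "flies")}))
# -> {'is_penguin'}: new evidence attacks "flies", so it is retracted.
```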

    Advances and Applications of Dezert-Smarandache Theory (DSmT) for Information Fusion (Collected works), Vol. 2

    This second volume dedicated to Dezert-Smarandache Theory (DSmT) in Information Fusion brings in new quantitative fusion rules (such as PCR1-6, where PCR5 for two sources performs the most mathematically exact redistribution of conflicting masses to the non-empty sets in the fusion literature), qualitative fusion rules, and the Belief Conditioning Rule (BCR), which is different from the classical conditioning rule used by the fusion community working with the Mathematical Theory of Evidence. Other fusion rules are constructed based on T-norms and T-conorms (hence using fuzzy logic and fuzzy sets in information fusion), or more generally on N-norms and N-conorms (hence using neutrosophic logic and neutrosophic sets in information fusion), together with an attempt to unify the fusion rules and fusion theories. The known fusion rules are extended from the power set to the hyper-power set, and comparisons between rules are made on many examples. The degree of intersection of two sets, the degree of union of two sets, and the degree of inclusion of two sets are defined; these all help in improving the existing fusion rules as well as the credibility, plausibility, and commonality functions. The book chapters are written by Frederic Dambreville, Milan Daniel, Jean Dezert, Pascal Djiknavorian, Dominic Grenier, Xinhan Huang, Pavlina Dimitrova Konstantinova, Xinde Li, Arnaud Martin, Christophe Osswald, Andrew Schumann, Tzvetan Atanasov Semerdjiev, Florentin Smarandache, Albena Tchamova, and Min Wang.
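
    As a concrete instance of the proportional conflict redistribution idea, here is a minimal sketch of PCR5 for two sources restricted to mutually exclusive singleton hypotheses: the conjunctive consensus is kept, and each partial conflict m1(X)*m2(Y) is redistributed back to X and Y in proportion to the masses that created it. General DSmT focal elements on the hyper-power set are omitted, and the input masses are invented.

```python
# PCR5 sketch for two basic belief assignments over singleton hypotheses.

def pcr5(m1: dict, m2: dict) -> dict:
    hyps = set(m1) | set(m2)
    out = {h: m1.get(h, 0.0) * m2.get(h, 0.0) for h in hyps}  # consensus
    for x in hyps:
        for y in hyps:
            if x == y:
                continue  # distinct singletons are always conflicting
            a, b = m1.get(x, 0.0), m2.get(y, 0.0)
            if a + b > 0:
                out[x] += a * a * b / (a + b)  # X's share of conflict a*b
                out[y] += a * b * b / (a + b)  # Y's share of conflict a*b
    return out

m1 = {"A": 0.6, "B": 0.4}
m2 = {"A": 0.1, "B": 0.9}
fused = pcr5(m1, m2)
print(fused, sum(fused.values()))  # {'A': 0.284, 'B': 0.716}, sum = 1.0
```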