7 research outputs found

    Fuzzy fusion techniques for linear features detection in multitemporal SAR images


    Development of an Aggregation Methodology for Risk Analysis in Aerospace Conceptual Vehicle Design

    The growing complexity of technical systems has emphasized a need to gather as much information as possible regarding specific systems of interest in order to make robust, sound decisions about their design and deployment. Acquiring as much data as possible requires the use of empirical statistics, historical information, and expert opinion. In much of the aerospace conceptual design environment, the lack of historical information and the infeasibility of gathering empirical data relegate data collection to expert opinion. The conceptual design of a space vehicle requires input from several disciplines (weights and sizing, operations, trajectory, etc.). In this multidisciplinary environment, the design variables are often not easily quantified and have a high degree of uncertainty associated with their values. Decision-makers must rely on expert assessments of the uncertainty associated with the design variables to evaluate the risk level of a conceptual design. Since multiple experts are often queried for their evaluation of uncertainty, a means to combine and aggregate multiple expert assessments must be developed. Providing decision-makers with a single assessment that captures the consensus of the multiple experts would greatly enhance the ability to evaluate the risk associated with a conceptual design. The objective of this research has been to develop an aggregation methodology that efficiently combines the uncertainty assessments of multiple experts in the multiple disciplines involved in aerospace conceptual design. Bayesian probability, augmented by uncertainty modeling and expert calibration, was employed in constructing the methodology. Appropriate questionnaire techniques were used to acquire expert opinion; the responses served as input distributions to the aggregation algorithm. The derived techniques were applied as part of a larger expert-assessment elicitation and calibration study. Results of this research demonstrate that aggregation of uncertainty assessments is possible in environments where likelihood functions and empirically assessed expert credibility factors are deficient. Validation of the methodology provides evidence that decision-makers find the aggregated responses useful in formulating decision strategies.
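    Since the abstract stops short of the mechanics, the sketch below shows one standard way to pool calibrated expert assessments: a calibration-weighted linear opinion pool over expert-supplied triangular distributions. The assessments, calibration weights, and grid are illustrative assumptions, not the dissertation's actual algorithm.

```python
import numpy as np
from scipy import stats

# Each expert gives a (min, mode, max) assessment of an uncertain design
# variable, e.g. dry-mass growth in percent (hypothetical values).
assessments = [(5.0, 10.0, 20.0), (8.0, 12.0, 18.0), (4.0, 9.0, 25.0)]
calibration = np.array([0.5, 0.3, 0.2])   # assumed calibration weights, sum to 1

x = np.linspace(0.0, 30.0, 601)           # evaluation grid over the variable
dx = x[1] - x[0]
pooled = np.zeros_like(x)
for (lo, mode, hi), w in zip(assessments, calibration):
    # SciPy parameterizes the triangular pdf by c = (mode - lo) / (hi - lo).
    pdf = stats.triang.pdf(x, c=(mode - lo) / (hi - lo), loc=lo, scale=hi - lo)
    pooled += w * pdf                     # linear opinion pool: weighted mixture

pooled /= pooled.sum() * dx               # renormalize against grid error
mean = (x * pooled).sum() * dx
print(f"aggregated mean estimate: {mean:.2f}%")
```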

    Geo-uninorm Consistency Control Module for Preference Similarity Network Hierarchical Clustering Based Consensus Model

    In order to avoid misleading decision solutions in group decision making (GDM) processes, in addition to consensus, which is obviously desirable to guarantee that the group of experts accepts the final decision solution, consistency of information should also be sought. For experts' preferences represented by reciprocal fuzzy preference relations, consistency is linked to the transitivity property. In this study, we put forward a new consensus approach to solve GDM with reciprocal preference relations that implements rationality criteria of consistency based on the transitivity property, with the following twofold aim prior to finding the final decision solution: (A) to develop a consistency control module that provides personalized consistency feedback to inconsistent experts in the GDM problem to guarantee the consistency of preferences; and (B) to design a consistent-preference network-clustering-based consensus measure built on an undirected weighted consistent-preference similarity network structure with undirected complete links, which, using the concept of structural equivalence, allows one to (i) cluster the experts and (ii) measure their consensus status. Based on the uninorm characterization of consistency of reciprocal preference relations and the geometric average, we propose the implementation of the geo-uninorm operator to derive a consistency-based preference relation from a given reciprocal preference relation. This is subsequently used to measure the consistency level of a given preference relation as the cosine similarity between the respective relations' essential vectors of preference intensity. The proposed geo-uninorm consistency measure allows the building of a consistency control module, based on a personalized feedback mechanism, to be implemented when the consistency level is insufficient. This consistency control module has two advantages: (1) it guarantees consistency by advising inconsistent expert(s) to modify their preferences with minimum changes; and (2) it provides fair recommendations individually, depending on each expert's personal level of inconsistency. Once consistency of preferences is guaranteed, a structural-equivalence preference similarity network is constructed. For the purpose of representing structurally equivalent experts and measuring consensus within the group, we develop an agglomerative hierarchical clustering based consensus algorithm, which can be used as a visualization tool for monitoring the current state of the experts' group agreement and for controlling the decision-making process. The proposed model is validated through a comparative analysis against an existing study from the literature, from which conclusions are drawn and explained.
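    As a rough illustration of the consistency step, the sketch below derives a consistent counterpart of a reciprocal preference relation using the cross-ratio uninorm U(a, b) = ab / (ab + (1-a)(1-b)) combined through a geometric average over intermediate alternatives, and scores consistency as the cosine similarity between the two relations' essential (upper-triangular) vectors. The exact geo-uninorm construction in the paper may differ, and the sample relation is made up.

```python
import numpy as np

def cross_ratio_uninorm(a, b):
    # Defined for preference values strictly inside (0, 1).
    return (a * b) / (a * b + (1 - a) * (1 - b))

def consistent_counterpart(P):
    n = len(P)
    C = np.full_like(P, 0.5)
    for i in range(n):
        for k in range(n):
            if i == k:
                continue
            # Geometric mean of uninorm estimates through every j != i, k.
            est = [cross_ratio_uninorm(P[i, j], P[j, k])
                   for j in range(n) if j not in (i, k)]
            C[i, k] = float(np.prod(est) ** (1.0 / len(est)))
    return C

def consistency_level(P):
    iu = np.triu_indices(len(P), k=1)      # essential vector of preferences
    u, v = P[iu], consistent_counterpart(P)[iu]
    return float(u @ v / (np.linalg.norm(u) * np.linalg.norm(v)))

P = np.array([[0.5, 0.7, 0.9],
              [0.3, 0.5, 0.8],
              [0.1, 0.2, 0.5]])            # a reciprocal preference relation
print(f"consistency level: {consistency_level(P):.3f}")
```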

    Trust Estimation in Multi-Valued Settings under Uncertainty

    Social networking sites have developed considerably over the past few years. However, few websites exploit the potential of combining social networking sites with online markets, which would help users to identify, and engage in interactions with, unknown yet trustworthy users in the market. In this thesis, we develop a model to estimate the trust of unknown agents in a multi-agent system where agents engage in business-oriented interactions with each other. The proposed trust model estimates the degree of trustworthiness of an unknown target agent through information acquired from a group of advisor agents who have had direct interactions with the target agent. This problem is addressed when: (1) the trust of both advisor and target agents is subject to some uncertainty; (2) the advisor agents are self-interested and provide misleading accounts of their past experiences with the target agents; and (3) the outcome of each interaction between the agents is multi-valued. We use possibility distributions to model trust with respect to its uncertainties, owing to their capability to model uncertainty arising from both variability and ignorance. Moreover, we propose trust estimation models to approximate the degree of trustworthiness of an unknown target agent in the following two problems: (1) in the first problem, the advisor agents are assumed to be unknown and to have an unknown level of trustworthiness; and (2) in the second problem, some interactions have been carried out with the advisor agents and their trust distributions are modeled. In addition, a certainty metric is proposed in the possibilistic domain that measures the confidence of an agent in its advisors' reports, taking into account both the consistency of the advisors' reported information and the advisors' degrees of trustworthiness. Finally, we validate the proposed approaches through extensive experiments in various settings.
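    A minimal, illustrative sketch of the possibilistic fusion idea follows; it is not the thesis's exact model. Each advisor reports a possibility distribution over multi-valued outcomes, each report is discounted by the advisor's trust degree using the standard possibilistic discounting pi'(x) = max(pi(x), 1 - t) (an untrusted advisor's report collapses toward total ignorance), and the discounted reports are fused conjunctively. The outcome labels, reports, and trust values are hypothetical, and the certainty formula is a simple proxy rather than the thesis's metric.

```python
import numpy as np

outcomes = ["poor", "fair", "good", "excellent"]    # multi-valued outcomes

# Hypothetical advisor reports: possibility of each outcome for the target.
reports = np.array([[0.1, 0.3, 1.0, 0.6],
                    [0.2, 0.4, 1.0, 0.5],
                    [1.0, 0.8, 0.3, 0.1]])          # a conflicting advisor
advisor_trust = np.array([0.9, 0.8, 0.3])           # trust in each advisor

# Possibilistic discounting: pi'(x) = max(pi(x), 1 - trust).
discounted = np.maximum(reports, (1.0 - advisor_trust)[:, None])

# Conjunctive fusion (minimum), renormalized so the max possibility is 1.
fused = discounted.min(axis=0)
fused = fused / fused.max()

# Simple certainty proxy: distance of the fused distribution from ignorance
# (1.0 for a crisp outcome, 0.0 when every outcome is fully possible).
certainty = 1.0 - (fused.sum() - 1.0) / (len(outcomes) - 1)

for o, p in zip(outcomes, fused):
    print(f"{o:>9}: {p:.2f}")
print(f"certainty: {certainty:.2f}")
```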

    Computational methods for physiological data

    Ph.D. thesis, Harvard-MIT Division of Health Sciences and Technology, 2009, by Zeeshan Hassan Syed. Large volumes of continuous waveform data are now collected in hospitals. These datasets provide an opportunity to advance medical care by capturing rare or subtle phenomena associated with specific medical conditions, and by providing fresh insights into disease dynamics over long time scales. We describe how progress in medicine can be accelerated through the use of sophisticated computational methods for the structured analysis of large multi-patient, multi-signal datasets. We propose two new approaches, morphologic variability (MV) and physiological symbolic analysis, for the analysis of continuous long-term signals. MV studies subtle micro-level variations in the shape of physiological signals over long periods. These variations, which are widely considered to be noise, can contain important information about the state of the underlying system. Symbolic analysis studies the macro-level information in signals by abstracting them into symbolic sequences. Converting continuous waveforms into symbolic sequences facilitates the development of efficient algorithms to discover high-risk patterns and patients who are outliers in a population. We apply our methods to the clinical challenge of identifying patients at high risk of cardiovascular mortality (almost 30% of all deaths worldwide each year). When evaluated on ECG data from over 4,500 patients, high MV was strongly associated with both cardiovascular death and sudden cardiac death. MV was a better predictor of these events than other ECG-based metrics. Furthermore, these results were independent of information in echocardiography, clinical characteristics, and biomarkers. Our symbolic analysis techniques also identified groups of patients exhibiting a varying risk of adverse outcomes. One group, with a particular set of symbolic characteristics, showed a 23-fold increased risk of death in the months following a mild heart attack, while another exhibited a 5-fold increased risk of future heart attacks.
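    The sketch below illustrates the general symbolic-abstraction idea in a SAX-style form: a signal is z-normalized, piecewise-averaged, and quantized into a small alphabet so that sequence-mining algorithms can search for high-risk motifs. The thesis's actual pipeline segments ECG beats and is considerably more elaborate; the synthetic signal and parameters here are illustrative only.

```python
import numpy as np

def symbolize(signal, n_segments=16, alphabet="abcd"):
    x = (signal - signal.mean()) / signal.std()          # z-normalize
    segments = np.array_split(x, n_segments)             # piecewise windows
    means = np.array([s.mean() for s in segments])       # per-window averages
    # Quantile breakpoints give each symbol roughly equal probability.
    breakpoints = np.quantile(x, np.linspace(0, 1, len(alphabet) + 1)[1:-1])
    codes = np.searchsorted(breakpoints, means)
    return "".join(alphabet[c] for c in codes)

t = np.linspace(0, 4 * np.pi, 1024)
noisy_wave = np.sin(t) + 0.1 * np.random.default_rng(0).standard_normal(t.size)
print(symbolize(noisy_wave))   # a 16-character symbol string over 'abcd'
```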

    Methodology for designing the fuzzy resolver for a radial distribution system fault locator

    The Power System Automation Lab at Texas A&M University developed a fault location scheme for radial distribution systems. When a fault occurs, the scheme executes three stages. In the first stage, all measurement data and system information are gathered and processed into suitable formats. In the second stage, three fault location methods assign possibility values to each line section of a feeder. In the last stage, a fuzzy resolver aggregates the outputs of the three fault location methods and assigns a final possibility value to each line section. By aggregating the outputs of the three methods, the fuzzy resolver aims to obtain a smaller subset of candidate faulted line sections than any individual fault location method. Fuzzy aggregation operators are used to implement fuzzy resolvers. This dissertation reports on a methodology for designing the fuzzy resolver around such operators. Three fuzzy aggregation operators, the min, OWA, and uninorm, and two objective functions were used in the design. Methodologies for designing fuzzy resolvers with respect to a single objective function and with respect to two objective functions are presented, along with a detailed illustration of the design process and performance studies of the resulting fuzzy resolvers. In order to design and validate the fuzzy resolver methodology, data were needed; due to the lack of real field data, simulating a distribution feeder was a feasible alternative. The IEEE 34 node test feeder was modeled, time-current characteristic (TCC) based protective devices were added, and faults were simulated on this feeder to generate data. Based on the performance studies, the fuzzy resolver designed using the uninorm operator without weights is the first choice, since no optimal weights are needed. The min and OWA operators can also be used; for these two operators, designing with respect to two objective functions was the appropriate choice.
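    A minimal sketch of the resolver's aggregation step follows: each of the three fault-location methods assigns a possibility value to every line section, and the resolver fuses them with min, OWA, or a uninorm. The section possibilities and OWA weights are illustrative assumptions, and the uninorm shown is one representable uninorm (neutral element 0.5), not necessarily the operator chosen in the dissertation.

```python
import numpy as np

def owa(values, weights):
    # Ordered weighted averaging: weights apply to values sorted descending.
    return float(np.sort(values)[::-1] @ weights)

def uninorm(values, e=0.5):
    # Representable uninorm with neutral element e, from the additive
    # generator g(x) = ln(x / (1 - x)) - ln(e / (1 - e)); inputs in (0, 1).
    g = np.log(values / (1.0 - values)) - np.log(e / (1.0 - e))
    s = g.sum() + np.log(e / (1.0 - e))
    return float(1.0 / (1.0 + np.exp(-s)))

# Rows: line sections of the feeder; columns: the three methods' outputs.
possibilities = np.array([[0.9, 0.8, 0.7],
                          [0.4, 0.9, 0.3],
                          [0.2, 0.1, 0.3]])
owa_w = np.array([0.2, 0.3, 0.5])   # hypothetical OWA weight vector

for i, row in enumerate(possibilities):
    print(f"section {i}: min={row.min():.2f} "
          f"owa={owa(row, owa_w):.2f} uninorm={uninorm(row):.2f}")
```

    Note the characteristic uninorm behavior: values all above the neutral element reinforce each other toward 1, while values all below it reinforce toward 0, which is why the unweighted uninorm resolver can sharpen the possibility ranking without tuned weights.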

    Soft Learning Vector Quantization and Clustering Algorithms Based on Mean-type Aggregation Operators

    This paper presents a framework for developing soft learning vector quantization (LVQ) and clustering algorithms by minimizing reformulation functions based on aggregation operators. An axiomatic approach provides conditions for selecting the subset of all aggregation operators that lead to admissible reformulation functions. For mean-type aggregation operators, the construction of admissible reformulation functions reduces to the selection of admissible generator functions. Nonlinear generator functions result in a broad family of soft LVQ and clustering algorithms, which include fuzzy LVQ and clustering algorithms as special cases. The formulation also provides a basis for exploring the structure of the feature set by identifying outliers in the data. The outlier-identification procedure is tested on a set of vowel data.
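    For concreteness, the sketch below implements the special case the abstract mentions: fuzzy c-means, which arises from this reformulation framework for a particular generator function. Prototypes are updated as membership-weighted means of the data; the synthetic data and parameters are illustrative.

```python
import numpy as np

def fuzzy_c_means(X, c=2, m=2.0, iters=50, seed=0):
    rng = np.random.default_rng(seed)
    V = X[rng.choice(len(X), size=c, replace=False)]        # initial prototypes
    for _ in range(iters):
        # Squared distances from every point to every prototype.
        d2 = ((X[:, None, :] - V[None, :, :]) ** 2).sum(-1) + 1e-12
        # Memberships: u_ik = 1 / sum_j (d2_ik / d2_jk) ** (1 / (m - 1)).
        ratio = d2[:, :, None] / d2[:, None, :]
        U = 1.0 / (ratio ** (1.0 / (m - 1.0))).sum(axis=2)
        Um = U ** m
        V = (Um.T @ X) / Um.sum(axis=0)[:, None]            # weighted means
    return V, U

# Two synthetic 2-D clusters around (0, 0) and (2, 2).
X = np.vstack([np.random.default_rng(1).normal(0, 0.3, (30, 2)),
               np.random.default_rng(2).normal(2, 0.3, (30, 2))])
V, U = fuzzy_c_means(X)
print("prototypes:\n", V.round(2))
```

    Points far from every prototype receive low memberships across all clusters, which hints at how the framework's outlier-identification procedure can exploit the same quantities.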