
    A Pairwise Comparison Matrix Framework for Large-Scale Decision Making

    A Pairwise Comparison Matrix (PCM) is used to compute the relative priorities of criteria or alternatives and is an integral component of widely applied decision-making tools: the Analytic Hierarchy Process (AHP) and its generalized form, the Analytic Network Process (ANP). However, a PCM suffers from several issues that limit its application to large-scale decision problems: (1) the curse of dimensionality, i.e., a large number of pairwise comparisons must be elicited from a decision maker (DM); (2) inconsistent preferences; and (3) imprecise preferences, the latter two arising from the limited cognitive power of DMs. This dissertation proposes a PCM Framework for Large-Scale Decisions that addresses these limitations in three phases. The first phase proposes a binary integer program (BIP) to intelligently decompose a PCM into several mutually exclusive subsets using interdependence scores. As a result, the number of pairwise comparisons is reduced and the consistency of the PCM is improved. Since the subsets are disjoint, the most independent pivot element is identified to connect all subsets; this makes it possible to derive the global weights of the elements of the original PCM. The proposed BIP is applied to both the AHP and ANP methodologies. However, the optimal number of subsets is provided subjectively by the DM and is therefore subject to biases and judgement errors. The second phase proposes a trade-off PCM decomposition methodology that splits a PCM into an optimally identified number of subsets. A BIP is formulated to balance (1) the time savings from reducing pairwise comparisons, (2) the level of PCM inconsistency, and (3) the accuracy of the weights. The methodology is applied to the AHP to demonstrate its advantages and is compared against established methodologies. In the third phase, a beta distribution is proposed to generalize a wide variety of imprecise pairwise comparison distributions via a method-of-moments methodology. A nonlinear programming model is then developed that calculates PCM element weights which simultaneously maximize the preferences of the DM and minimize inconsistency. Comparison experiments are conducted on datasets collected from the literature to validate the proposed methodology. (Dissertation/Thesis; Ph.D. Industrial Engineering 201)
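    As context for the scale problem described above: a full n-element PCM requires n(n-1)/2 elicited comparisons, and priorities are conventionally derived from the principal eigenvector, with Saaty's consistency ratio (CR) as the inconsistency check. The sketch below shows only that standard machinery; it is illustrative and is not the dissertation's BIP decomposition.

        import numpy as np

        def pcm_weights_and_cr(A):
            """Priority weights and Saaty consistency ratio for a reciprocal PCM A."""
            n = A.shape[0]
            eigvals, eigvecs = np.linalg.eig(A)
            k = np.argmax(eigvals.real)                    # principal eigenvalue
            w = np.abs(eigvecs[:, k].real)
            w /= w.sum()                                   # normalized priority weights
            ci = (eigvals[k].real - n) / (n - 1)           # consistency index
            # Saaty random indices (approximate fallback for large n)
            ri = {3: 0.58, 4: 0.90, 5: 1.12, 6: 1.24, 7: 1.32,
                  8: 1.41, 9: 1.45}.get(n, 1.49)
            return w, ci / ri                              # CR <= 0.10 is conventionally acceptable

        # Example 3x3 PCM: criterion 1 is 3x as important as 2, 5x as important as 3.
        A = np.array([[1.0, 3.0, 5.0],
                      [1/3, 1.0, 2.0],
                      [1/5, 1/2, 1.0]])
        w, cr = pcm_weights_and_cr(A)
        print(w, cr)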

    PERFORMANCE EVALUATION AND REVIEW FRAMEWORK OF ROBOTIC MISSIONS (PERFORM): AUTONOMOUS PATH PLANNING AND AUTONOMY PERFORMANCE EVALUATION

    The scope of this work spans two main areas of autonomy research: (1) autonomous path planning and (2) test and evaluation of autonomous systems. Path planning is an integral part of autonomous decision-making, and a deep understanding of this area provides valuable perspective on how to effectively evaluate vehicle behavior. Autonomous decision-making capabilities must be reliable, robust, and trustworthy in a real-world environment. A major component of robot decision-making lies in intelligent path planning: serving as the brains of an autonomous system, an efficient and reliable path planner is crucial to mission success and overall safety. A hybrid global and local planner is implemented using a combination of the Potential Field Method (PFM) and the A-star (A*) algorithm. It is built on a layered vector-field strategy, which provides flexibility along with the ability to add and remove layers to account for other parameters such as currents, wind, vehicle dynamics, and the International Regulations for Preventing Collisions at Sea (COLREGs). Different weights can be attributed to each layer based on its determined level of importance in a hierarchical manner. Different obstacle scenarios are shown in simulation, and proof-of-concept validation of the path-planning algorithms on an actual ASV is accomplished in an indoor environment. Results show that PFM and A* complement each other to generate a successfully planned path to the goal that alleviates local-minima and entrapment issues. Additionally, the planner demonstrates the ability to update for new obstacles in real time using an obstacle detection sensor. Regarding test and evaluation of autonomous vehicles, trust and confidence in autonomous behavior are required before autonomous vehicles can be sent on operational missions. The author introduces the Performance Evaluation and Review Framework Of Robotic Missions (PERFORM), a framework that enables a rigorous and replicable autonomy test environment, thereby filling the void between merely simulating autonomy and completing true field missions. A generic architecture for defining the missions under test is proposed, and a unique Interval Type-2 Fuzzy Logic approach is used as the foundation for the mathematically rigorous autonomy evaluation framework. The test environment is designed to aid in (1) new technology development (i.e., providing direct comparisons and quantitative evaluations of varying autonomy algorithms), (2) the validation of the performance of specific autonomous platforms, and (3) the selection of the appropriate robotic platform(s) for a given mission type (e.g., surveying, surveillance, search and rescue). Several case studies apply the metric to various test scenarios. Results demonstrate the flexibility of the technique, with the ability to tailor tests to the user's design requirements, accounting for different priorities related to the acceptable risks and goals of a given mission
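    To make the hybrid planning idea concrete, here is a minimal sketch of one common way to combine the two methods: A* searches a grid whose per-cell cost includes a repulsive-potential penalty around obstacles, so the global path keeps clearance while avoiding the local-minima traps of a pure potential field. The grid encoding, weights, and potential form are illustrative assumptions, not the PERFORM implementation.

        import heapq
        import itertools
        import numpy as np

        def repulsive_potential(grid):
            """Inverse-distance penalty near obstacle cells (brute force; fine for small grids)."""
            obs = np.argwhere(grid == 1)
            pot = np.zeros(grid.shape)
            if obs.size == 0:
                return pot
            for idx in np.ndindex(grid.shape):
                d = np.sqrt(((obs - np.array(idx)) ** 2).sum(axis=1)).min()
                pot[idx] = 1.0 / max(d, 0.5)
            return pot

        def hybrid_astar(grid, start, goal, w_pot=2.0):
            """A* over a grid (0 = free, 1 = obstacle) with a potential-field cost layer."""
            pot = repulsive_potential(grid)
            h = lambda p: abs(p[0] - goal[0]) + abs(p[1] - goal[1])   # Manhattan heuristic
            tie = itertools.count()                 # tiebreaker so heap never compares nodes
            frontier = [(h(start), next(tie), 0.0, start, None)]
            parents, g = {}, {start: 0.0}
            while frontier:
                _, _, gc, cur, par = heapq.heappop(frontier)
                if cur in parents:
                    continue
                parents[cur] = par
                if cur == goal:                     # reconstruct path back to start
                    path = []
                    while cur is not None:
                        path.append(cur)
                        cur = parents[cur]
                    return path[::-1]
                for dx, dy in ((1, 0), (-1, 0), (0, 1), (0, -1)):
                    nxt = (cur[0] + dx, cur[1] + dy)
                    if not (0 <= nxt[0] < grid.shape[0] and 0 <= nxt[1] < grid.shape[1]):
                        continue
                    if grid[nxt] == 1:              # blocked cell
                        continue
                    ng = gc + 1.0 + w_pot * pot[nxt]   # step cost + potential penalty
                    if ng < g.get(nxt, float("inf")):
                        g[nxt] = ng
                        heapq.heappush(frontier, (ng + h(nxt), next(tie), ng, nxt, cur))
            return None                             # no path found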

    Optimization and inference under fuzzy numerical constraints

    Extensive research has been done in the areas of constraint satisfaction with discrete/integer and real domains. Multiple semantic descriptions, platforms, and systems that handle these kinds of domains have been developed and appropriately optimized. Nevertheless, due to the incomplete and possibly vague nature of real-life problems, modeling a crisp and adequately strict satisfaction problem may not always be easy, or even the best approach. The problem of modeling incomplete knowledge, or of solving an incomplete/relaxed representation of a problem, is a much harder issue to tackle. Additionally, practical modeling requirements and search optimizations typically require domain-specific knowledge to implement, making the creation of a more generic optimization framework an even harder problem. In this thesis, we study the problem of modeling and utilizing crisp, incomplete, and fuzzy constraints, as well as possible optimization strategies. As constraint satisfaction problems usually contain hard-coded constraints based on specific problem and domain knowledge, we investigate whether strategies and generic heuristics exist for inferring new or more efficient constraint rules. Such additional rules could optimize the search process by enforcing stricter constraints and thus pruning the search space, or provide useful insight to the researcher concerning the nature of the investigated problem
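    A minimal sketch of the fuzzy-constraint semantics discussed above: each fuzzy constraint maps an assignment to a satisfaction degree in [0, 1], and a solution maximizes the minimum degree across constraints (the usual min-conjunction semantics). The variables, domains, and triangular memberships below are illustrative assumptions, not the thesis's framework.

        def triangular(a, b, c):
            """Triangular membership function peaking at b."""
            def mu(x):
                if x <= a or x >= c:
                    return 0.0
                return (x - a) / (b - a) if x <= b else (c - x) / (c - b)
            return mu

        # Two fuzzy constraints over integer variables x, y in 0..10:
        # "x + y is about 10" and "x is roughly 4".
        c1 = lambda x, y: triangular(7, 10, 13)(x + y)
        c2 = lambda x, y: triangular(2, 4, 6)(x)

        # Exhaustive search: pick the assignment with the best joint satisfaction.
        degree, x, y = max((min(c1(x, y), c2(x, y)), x, y)
                           for x in range(11) for y in range(11))
        print(degree, x, y)   # x = 4, y = 6 satisfies both constraints fully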

    Working Notes from the 1992 AAAI Spring Symposium on Practical Approaches to Scheduling and Planning

    The symposium presented issues involved in the development of scheduling systems that can deal with resource and time limitations. To qualify, a system must be implemented and tested to some degree on non-trivial problems (ideally, real-world problems); however, a system need not be fully deployed to qualify. Systems that schedule actions in terms of metric time constraints typically represent and reason about an external numeric clock or calendar, and can be contrasted with systems that represent time purely symbolically. The following topics are discussed: integrating planning and scheduling; integrating symbolic goals and numerical utilities; managing uncertainty; incremental rescheduling; managing limited computation time; anytime scheduling and planning algorithms and systems; dependency analysis and schedule reuse; management of schedule and plan execution; and incorporation of discrete-event techniques

    Meta-learning computational intelligence architectures

    In computational intelligence, the term 'memetic algorithm' has come to be associated with the algorithmic pairing of a global search method with a local search method. In a sociological context, a 'meme' has been loosely defined as a unit of cultural information, the social analog of genes for individuals. Both of these definitions are inadequate: 'memetic algorithm' is too specific, and ultimately a misnomer, while 'meme' is defined too generally to be of scientific use. In this dissertation the notion of memes and meta-learning is extended from a computational viewpoint, and the purpose, definitions, design guidelines, and architecture for effective meta-learning are explored. The background and structure of meta-learning architectures are discussed, incorporating viewpoints from psychology, sociology, computational intelligence, and engineering. The benefits and limitations of meme-based learning are demonstrated through two experimental case studies: Meta-Learning Genetic Programming and Meta-Learning Traveling Salesman Problem Optimization. Additionally, the development and properties of several new algorithms, inspired by the preceding case studies, are detailed. With applications ranging from cognitive science to machine learning, meta-learning has the potential to provide much-needed stimulation to the field of computational intelligence by providing a framework for higher-order learning --Abstract, page iii
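    For readers unfamiliar with the narrow definition being critiqued, a minimal sketch of a memetic algorithm in that sense follows: a global evolutionary loop with a local hill-climb (the 'meme') refining each offspring. The toy objective and all parameters are illustrative assumptions only.

        import random

        def f(x):
            """Toy objective to minimize."""
            return (x - 3.14) ** 2 + 0.5 * abs(x)

        def local_search(x, step=0.05, iters=25):
            """The 'meme': simple stochastic hill climbing around x."""
            for _ in range(iters):
                cand = x + random.uniform(-step, step)
                if f(cand) < f(x):
                    x = cand
            return x

        pop = [random.uniform(-10, 10) for _ in range(20)]
        for _ in range(50):                      # global search: mutate, refine, select
            children = [local_search(p + random.gauss(0, 1.0)) for p in pop]
            pop = sorted(pop + children, key=f)[:20]
        print(min(pop, key=f))                   # best individual found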

    Soft Computing

    Soft computing is used where a complex problem is not adequately specified for conventional mathematical and computational techniques. It has numerous real-world applications in domestic, commercial, and industrial settings. This book elaborates on the most recent applications in various fields of engineering

    Efficient Data Driven Multi Source Fusion

    Data/information fusion is an integral component of many existing and emerging applications, e.g., remote sensing, smart cars, the Internet of Things (IoT), and Big Data, to name a few. While fusion aims to achieve better results than any one individual input can provide, the challenge is often to determine the underlying mathematics of aggregation suitable for an application. In this dissertation, I focus on the following three aspects of aggregation: (i) efficient data-driven learning and optimization, (ii) extensions and new aggregation methods, and (iii) feature- and decision-level fusion for machine learning, with applications to signal and image processing. The Choquet integral (ChI), a powerful nonlinear aggregation operator, is a parametric way (with respect to the fuzzy measure (FM)) to generate a wealth of aggregation operators. The FM has 2^N variables and N(2^(N-1)) constraints for N inputs. As a result, learning the ChI parameters from data quickly becomes impractical for most applications. Herein, I propose a scalable learning procedure (linear with respect to training sample size) for the ChI that identifies and optimizes only data-supported variables. As such, the computational complexity of the learning algorithm is proportional to the complexity of the solver used. This method also includes an imputation framework to obtain scalar values for data-unsupported (aka missing) variables and a compression algorithm (lossy or lossless) for the learned variables. I also propose a genetic algorithm (GA) to optimize the ChI for non-convex, multi-modal, and/or analytical objective functions. This algorithm introduces two operators that automatically preserve the constraints, so there is no need to enforce them explicitly as traditional GAs require. In addition, the algorithm provides an efficient representation of the search space with the minimal set of vertices. Furthermore, I study different strategies for extending the fuzzy integral to missing data, and I propose a goal programming framework to aggregate inputs from heterogeneous sources for ChI learning. Last, my work in remote sensing involves visual-clustering-based band group selection and Lp-norm multiple kernel learning for feature-level fusion in hyperspectral image processing, to enhance pixel-level classification
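    For concreteness, a minimal sketch of the discrete ChI itself (the object whose FM parameters are being learned): sort the inputs in descending order and weight successive differences of the FM along the resulting chain of subsets. The FM values below are illustrative; a valid FM must be monotone with g(empty set) = 0 and g(X) = 1.

        def choquet(h, g):
            """Discrete Choquet integral of inputs h (dict source -> value)
            w.r.t. fuzzy measure g (dict frozenset -> measure value)."""
            order = sorted(h, key=h.get, reverse=True)    # sources by value, descending
            total = 0.0
            for i, s in enumerate(order):
                a_i = frozenset(order[:i + 1])            # growing chain A_1, A_2, ...
                a_prev = frozenset(order[:i])
                total += h[s] * (g[a_i] - g[a_prev])
            return total

        # Illustrative monotone FM over three sources a, b, c (2^3 = 8 values).
        g = {frozenset(): 0.0,
             frozenset('a'): 0.3, frozenset('b'): 0.4, frozenset('c'): 0.2,
             frozenset('ab'): 0.8, frozenset('ac'): 0.5, frozenset('bc'): 0.6,
             frozenset('abc'): 1.0}
        print(choquet({'a': 0.9, 'b': 0.5, 'c': 0.1}, g))   # 0.54

    Setting g to the cardinality-based measure g(A) = |A|/N recovers the arithmetic mean, which illustrates how the single FM parameterizes a whole family of aggregation operators.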

    An Integrated Fuzzy MCDM Hybrid Methodology to Analyze Agricultural Production

    A hybrid model was developed by combining multiple-criteria decision-making (MCDM) methods, the analytic hierarchy process (AHP), and fuzzy sets to give decision support for choosing sustainable solutions to agricultural problems. Six steps were taken to build the suggested hybrid model, including identifying the criteria, normalizing the data using fuzzy membership functions, calculating the weights of the criteria using AHP, and selecting the best alternative for the agricultural problem. The objective of this case study is to demonstrate how agricultural production techniques (APTs) are becoming more complex as agricultural production itself grows more complex. Organic agriculture aims to protect both the environment and consumer satisfaction by utilizing organic management practices that avoid the negative effects associated with conventional and genetic-engineering production; meanwhile, products obtained through conventional and genetic-engineering techniques are more cost-effective. To present the superiority of the proposed fuzzy MCDM hybrid model, this problem is used as the dataset for the case study. Because the challenge involves a large number of competing quantitative and qualitative criteria, the assessment approach should improve the ratio of input data to output data; as a result, agricultural productivity should be managed holistically. However, because the problem may contain both qualitative and quantitative facts as well as uncertainties, it is necessary to represent the uncertainty inherent in human thinking. To achieve superior outcomes, fuzzy set theory (FST), which enables the expression of uncertainty in human judgments, can be integrated with MCDM. The purpose of this study is to present a novel MCDM approach based on fuzzy numbers for analyzing decision-making scenarios. The proposed methodology uses Buckley's fuzzy analytic hierarchy process (B-FAHP) to determine the criteria weights and the Fuzzy Technique for Order of Preference by Similarity to Ideal Solution (F-TOPSIS) to rank the alternatives. As a result, we attempted to include both the uncertainty and the hesitancy of experts in the decision-making process through the use of fuzzy numbers. Three main criteria are used in this study: Satisfaction (C1), Economy (C2), and Environment (C3). An important objective of the current research is to build a complete framework for evaluating and grading the suitability of technologies. A real-world case study is used to demonstrate the validity of the suggested paradigm
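    A minimal sketch of the B-FAHP weighting step named above, under the usual triangular-fuzzy-number (TFN) representation (l, m, u): take the row-wise fuzzy geometric mean of the judgment matrix, normalize, and defuzzify by centroid. The 3x3 judgments are illustrative assumptions; the paper's criteria and data are not reproduced here.

        import math

        def tfn_geomean(row):
            """Component-wise geometric mean of a row of TFNs (l, m, u)."""
            n = len(row)
            return tuple(math.prod(t[k] for t in row) ** (1.0 / n) for k in range(3))

        def buckley_weights(fuzzy_pcm):
            """Crisp criterion weights via Buckley's geometric-mean method."""
            r = [tfn_geomean(row) for row in fuzzy_pcm]
            sums = [sum(t[k] for t in r) for k in range(3)]
            # Fuzzy division flips the bounds: (l, m, u) / sum -> (l/sum_u, m/sum_m, u/sum_l).
            fuzzy_w = [(t[0] / sums[2], t[1] / sums[1], t[2] / sums[0]) for t in r]
            return [sum(t) / 3.0 for t in fuzzy_w]        # centroid defuzzification

        one = (1.0, 1.0, 1.0)
        fuzzy_pcm = [[one,              (2, 3, 4),      (4, 5, 6)],
                     [(1/4, 1/3, 1/2),  one,            (1, 2, 3)],
                     [(1/6, 1/5, 1/4),  (1/3, 1/2, 1),  one]]
        print(buckley_weights(fuzzy_pcm))   # crisp weights, roughly summing to 1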