
    CP-nets: From Theory to Practice

    Conditional preference networks (CP-nets) exploit the power of ceteris paribus rules to represent preferences over combinatorial decision domains compactly. CP-nets have much appeal. However, their study has not yet advanced sufficiently for their widespread use in real-world applications. Known algorithms for deciding dominance (whether one outcome is better than another with respect to a CP-net) require exponential time. Data for CP-nets are difficult to obtain: human subjects data over combinatorial domains are not readily available, and earlier work on random generation is also problematic. Also, much of the research on CP-nets makes strong, often unrealistic assumptions, such as that decision variables must be binary or that only strict preferences are permitted. In this thesis, I address such limitations to make CP-nets more useful. I show how to generate CP-nets uniformly at random, how to limit search depth in dominance testing given expectations about sets of CP-nets, and how to use local search to learn restricted classes of CP-nets from choice data.
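    The flip-search machinery behind these contributions can be sketched compactly. The following is a minimal illustration, not the thesis's implementation; it assumes binary variables and a hypothetical CPNet class in which each variable has a set of parents and a conditional preference table (CPT), and a dominance query searches for a sequence of improving flips from the allegedly worse outcome to the better one, optionally cut off at a depth limit as discussed above.

```python
from collections import deque

class CPNet:
    """Toy binary CP-net: parent sets and conditional preference tables."""
    def __init__(self, parents, cpt):
        # parents: {var: tuple of parent variables}
        # cpt: {var: {parent_value_tuple: (better_value, worse_value)}}
        self.parents = parents
        self.cpt = cpt

    def improving_flips(self, outcome):
        """Yield every outcome reachable from `outcome` by one improving flip."""
        for var, pars in self.parents.items():
            better, worse = self.cpt[var][tuple(outcome[p] for p in pars)]
            if outcome[var] == worse:          # flipping to `better` improves
                flipped = dict(outcome)
                flipped[var] = better
                yield flipped

def dominates(net, o1, o2, max_depth=None):
    """True if o1 dominates o2, i.e. some improving-flip sequence leads from o2 to o1."""
    if o1 == o2:
        return False                           # dominance is strict
    goal = tuple(sorted(o1.items()))
    start = tuple(sorted(o2.items()))
    frontier, seen = deque([(start, 0)]), {start}
    while frontier:
        state, depth = frontier.popleft()
        if state == goal:
            return True
        if max_depth is not None and depth >= max_depth:
            continue                           # give up on this branch at the depth limit
        for nxt in net.improving_flips(dict(state)):
            key = tuple(sorted(nxt.items()))
            if key not in seen:
                seen.add(key)
                frontier.append((key, depth + 1))
    return False

# Example: variable b's preference depends on a.
# net = CPNet({"a": (), "b": ("a",)},
#             {"a": {(): (1, 0)}, "b": {(1,): (1, 0), (0,): (0, 1)}})
# dominates(net, {"a": 1, "b": 1}, {"a": 0, "b": 0})   # -> True
```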

    Learning Conditional Preference Networks from Optimal Choices

    Conditional preference networks (CP-nets) model user preferences over objects described in terms of values assigned to discrete features, where the preference for one feature may depend on the values of other features. Most existing algorithms for learning CP-nets from the user's choices assume that the user chooses between pairs of objects. However, many real-world applications involve the user choosing from all combinatorial possibilities or a very large subset. We introduce a CP-net learning algorithm for the latter type of choice, and study its properties formally and empirically.
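    As a toy illustration of this kind of data (a set of available alternatives plus the single outcome the user chose), the sketch below is not the paper's algorithm and fits only the simplest special case: a separable CP-net over binary features, with each feature's value ordering estimated by how strongly the chosen value is over-represented relative to its availability.

```python
def learn_separable(observations, features):
    """Estimate a separable (parent-free) CP-net from choose-one-of-many data.

    observations: list of (available, chosen) pairs, where `available` is a
    list of outcomes (dicts {feature: 0 or 1}) and `chosen` is the outcome
    the user picked from that list.
    Returns {feature: (better_value, worse_value)}.
    """
    score = {f: 0.0 for f in features}
    for available, chosen in observations:
        for f in features:
            # fraction of the available alternatives that set f to 1
            base = sum(o[f] for o in available) / len(available)
            # positive when the chosen value of f is over-represented
            score[f] += chosen[f] - base
    return {f: (1, 0) if score[f] >= 0 else (0, 1) for f in features}
```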

    Representing and Learning Preferences over Combinatorial Domains

    Agents make decisions based on their preferences. Thus, to predict their decisions one has to learn the agent's preferences. A key step in the learning process is selecting a model to represent those preferences. We studied this problem by borrowing techniques from the algorithm selection problem to analyze preference example sets and select the most appropriate preference representation for learning. We approached this problem in multiple steps. First, we determined which representations to consider. For this problem we developed the notion of preference representation language subsumption, which compares representations based on their expressive power. Subsumption creates a hierarchy of preference representations based solely on which preference orders they can express. By applying this analysis to preference representation languages over combinatorial domains, we found that some languages are better for learning preference orders than others. Subsumption, however, does not tell the whole story. For languages that only approximate each other (another piece of information useful for learning), the subsumption relation cannot tell us which languages serve as good approximations of others; determining how well one language approximates another often requires customized techniques. We developed such techniques for two important preference representation languages, conditional lexicographic preference models (CLPMs) and conditional preference networks (CP-nets). Second, we developed learning algorithms for highly expressive preference representations. To this end, we investigated using simulated annealing to learn preference programs expressed as ranking preference formulas (RPFs) and as preference theories (PTs). We demonstrated that simulated annealing is an effective approach to learning preferences under many different conditions. This suggested that more general learning strategies might lead to equally good or even better results. We studied this possibility by considering artificial neural networks (ANNs). Our research showed that ANNs can outperform classical models at deciding dominance, but have several significant drawbacks as preference reasoning models. Third, we developed a method for determining which representations match which example sets. For this classification task we considered two methods. The first selects a set of features of the example set and uses them as input to a linear feed-forward ANN. The second converts the example set into a graph and uses a graph convolutional neural network (GCNN). Of these two methods, we found that the feature-based approach works better. By completing these steps we have built the foundations of a portfolio-based approach for learning preferences. We assembled a simple version of such a system as a proof of concept and tested its usefulness.
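    The simulated-annealing component lends itself to a short, model-agnostic sketch. The names below (initial, neighbour, errors) are placeholders rather than anything from the thesis: a candidate preference program, a local perturbation of it, and the number of training examples it orders incorrectly; the schedule parameters are likewise arbitrary.

```python
import math
import random

def anneal(initial, neighbour, errors, steps=10_000, t0=2.0, cooling=0.999):
    """Generic simulated annealing: minimise `errors` over candidate preference programs."""
    current, current_cost = initial, errors(initial)
    best, best_cost = current, current_cost
    temperature = t0
    for _ in range(steps):
        candidate = neighbour(current)
        cost = errors(candidate)
        # always accept improvements; accept worse candidates with a
        # probability that shrinks as the temperature cools
        if cost <= current_cost or random.random() < math.exp((current_cost - cost) / temperature):
            current, current_cost = candidate, cost
        if current_cost < best_cost:
            best, best_cost = current, current_cost
        temperature *= cooling
    return best, best_cost
```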

    Representing and reasoning with qualitative preferences for compositional systems

    Many applications call for techniques for representing and reasoning about preferences, i.e., relative desirability over a set of alternatives. Preferences over the alternatives are typically derived from preferences with respect to the various attributes of the alternatives (e.g., a student's preference for one course over another may be influenced by his preference for the topic, the time of day when the course is offered, etc.). Such preferences are often qualitative and conditional. When the alternatives are expressed as tuples of valuations of the relevant attributes, preferences between alternatives can often be expressed in the form of (a) preferences over the values of each attribute, and (b) relative importance of certain attributes over others. An important problem in reasoning with multi-attribute qualitative preferences is dominance testing, i.e., finding whether one alternative (an assignment to all attributes) is preferred over another. This problem is hard (PSPACE-complete) in general for well-known qualitative conditional preference languages such as TCP-nets. We provide two practical approaches to dominance testing. First, we study a restricted unconditional preference language, and provide a dominance relation that can be computed in polynomial time by evaluating the satisfiability of an appropriately constructed logic formula. Second, we show how to reduce dominance testing for TCP-nets to reachability analysis in an induced preference graph. We provide an encoding of TCP-nets in the form of a Kripke structure for CTL, and show how to compute dominance using NuSMV, a model checker for CTL. We also address the problem of identifying a preferred outcome in a setting where the outcomes or alternatives to be compared are composite in nature (i.e., collections of components that satisfy certain functional requirements). We define a dominance relation that allows us to compare collections of objects in terms of preferences over attributes of the objects that make up the collection, and show that the dominance relation is a strict partial order under certain conditions. We provide algorithms that use this dominance relation to identify only (sound), all (complete), or at least one (weakly complete) of the most preferred collections. We establish some key properties of the dominance relation and analyze the quality of solutions produced by the algorithms. We present results of simulation experiments aimed at comparing the algorithms, and report interesting conjectures and results that were derived from our analysis. Finally, we show how the above formalism and algorithms can be used in preference-based service composition, substitution, and adaptation.
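    The essence of the reachability reduction can be shown without the TCP-net details: once the induced preference graph is materialised (an edge from an outcome to every outcome one improving flip away), a dominance query becomes plain graph reachability, which is what a CTL model checker such as NuSMV verifies as an EF property. The sketch below is a hedged illustration that takes the graph as an explicit adjacency dict; constructing that graph from a TCP-net is the substantive part of the encoding and is left out here.

```python
from collections import deque

def reachable(graph, start, goal):
    """Breadth-first reachability in an induced preference graph.

    graph: {outcome: iterable of outcomes one improving flip away}.
    Returns True iff `goal` is reachable from `start`, i.e. (for distinct
    outcomes) `goal` dominates `start`.
    """
    frontier, seen = deque([start]), {start}
    while frontier:
        node = frontier.popleft()
        if node == goal:
            return True
        for nxt in graph.get(node, ()):
            if nxt not in seen:
                seen.add(nxt)
                frontier.append(nxt)
    return False
```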

    Conditional preference networks: efficient dominance testing and learning

    Modelling and reasoning about preference is necessary for applications such as recommendation and decision support systems. Such systems are becoming increasingly prevalent in all aspects of our daily lives as technology advances. Thus, preference representation is a wide area of interest within the Artificial Intelligence community. Conditional preference networks, or CP-nets, are one of the most popular models for representing a person's preference structure. In this thesis, we address two issues with this model that make it difficult to utilise in practice. First, answering dominance queries efficiently. Dominance queries ask for the relative preference between a given pair of outcomes. Such queries are natural and essential for effectively reasoning about a person's preferences. However, they are expensive to answer given a CP-net representation of preference. Second, learning a person's CP-net from observational data. In order to utilise a CP-net representation of a person's preferences, we must first determine the correct model. As direct elicitation is not always possible or practical, we must be able to learn CP-nets passively from the data we can observe. We provide two distinct methods of improving dominance testing efficiency for CP-nets. The first utilises a quantitative representation of preference in order to prune the associated search tree. The second reduces the size of a dominance testing problem by preprocessing the CP-net. Both methods are shown experimentally to significantly improve dominance testing efficiency, and both are shown to outperform existing methods. These techniques can be combined with one another, and with the existing methods, to further improve efficiency. We also introduce a new, score-based learning technique for CP-nets. Most existing work on CP-net learning uses pairwise outcome preferences as data. However, such preferences are often impossible to observe passively from user actions, particularly in online settings, where users typically choose from a variety of options. In contrast, our method takes a history of user choices as data, which is observable in a wide variety of contexts. Experimental evaluation of this method finds that the learned CP-nets show high levels of agreement with the true preference structures and with previously unseen (future) data.
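    One well-known quantitative device of this kind, offered as an illustration rather than the thesis's exact pruning rule: give every variable a weight larger than the combined weight of its children, so that the weighted count of violated CPT rows strictly decreases along every improving flip; any search node other than the target whose penalty has already fallen to or below the target's penalty can then be discarded. The sketch assumes binary variables, with `parents` mapping each variable to its parent tuple and `cpt` mapping each parent assignment to a (better, worse) pair of values.

```python
def variable_weights(parents):
    """w(v) = 1 + total weight of v's children, so w(v) exceeds the sum of child weights."""
    children = {v: [] for v in parents}
    for v, ps in parents.items():
        for p in ps:
            children[p].append(v)
    weights = {}
    def weight(v):
        if v not in weights:
            weights[v] = 1 + sum(weight(c) for c in children[v])
        return weights[v]
    for v in parents:
        weight(v)
    return weights

def penalty(parents, cpt, weights, outcome):
    """Weighted number of variables not set to their preferred value given their parents."""
    return sum(weights[v]
               for v, ps in parents.items()
               if outcome[v] != cpt[v][tuple(outcome[p] for p in ps)][0])

def dominates_pruned(parents, cpt, o1, o2):
    """Search for an improving-flip path from o2 to o1, pruning on the penalty bound."""
    if o1 == o2:
        return False
    weights = variable_weights(parents)
    target = tuple(sorted(o1.items()))
    bound = penalty(parents, cpt, weights, o1)
    seen = set()

    def dfs(outcome):
        key = tuple(sorted(outcome.items()))
        if key == target:
            return True
        # improving flips strictly lower the penalty, so no non-target node at
        # or below the target's penalty can still reach the target
        if key in seen or penalty(parents, cpt, weights, outcome) <= bound:
            return False
        seen.add(key)
        for v, ps in parents.items():
            better, worse = cpt[v][tuple(outcome[p] for p in ps)]
            if outcome[v] == worse:
                flipped = dict(outcome)
                flipped[v] = better
                if dfs(flipped):
                    return True
        return False

    return dfs(o2)
```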

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious, and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practised for almost four decades. Model-based testing (MBT) is a relatively new approach to software testing in which software models, as opposed to other artifacts such as source code, are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system. The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of an AD-based test suite. The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral type of UML model, and since the major revision in UML version 2.x it has Petri-net-like semantics. It has a wide application scope, including embedded, workflow, and web-service systems; for this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of ADs in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. The thesis proposes a three-step transformation methodology for resolving ambiguities in an AD model and then transforming it into a CPN representation; CPNs are a well-known formal language with extensive tool support. Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of ADs suits modeling both sequential and concurrent systems, and the thesis presents a novel technique to generate test cases from ADs using a stochastic algorithm. To determine whether the generated test suite is adequate, two test suite adequacy analysis techniques, based on structural coverage and mutation, are proposed. For structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique for determining whether the test suite is adequate for detecting particular types of faults; four categories of mutation operators are defined to seed specific faults into the mutant model. Another focus of the thesis is improving test suite efficiency without compromising effectiveness. One way of achieving this is to identify and remove redundant test cases. It has been shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary-computation-based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms. Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is then extended to multi-objective minimization.
As the redundancy is contextual, different criteria and their combinations can significantly change the resulting test suite. Therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results show that the techniques developed within the framework are effective for model-based test suite generation and optimization.
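    The single-objective core of this evolutionary formulation can be sketched briefly. The code below is an illustration rather than the thesis's tool: individuals are keep/drop bit strings over the test cases, and the (assumed) `coverage` map from each test case to the requirements it exercises stands in for whichever adequacy criterion is being preserved. Fitness rewards full coverage first and smaller suites second.

```python
import random

def minimise_suite(coverage, generations=200, pop_size=40, mutation_rate=0.02):
    """Evolve a small test suite that still covers every requirement.

    coverage: {test_case: set of requirements it exercises}.
    """
    tests = list(coverage)

    def fitness(bits):
        chosen = [t for t, b in zip(tests, bits) if b]
        covered = set().union(*(coverage[t] for t in chosen)) if chosen else set()
        # lexicographic: more requirements covered first, then fewer test cases
        return (len(covered), -len(chosen))

    def mutate(bits):
        return [b ^ (random.random() < mutation_rate) for b in bits]

    def crossover(a, b):
        point = random.randrange(1, len(a))
        return a[:point] + b[point:]

    population = [[random.randint(0, 1) for _ in tests] for _ in range(pop_size)]
    for _ in range(generations):
        population.sort(key=fitness, reverse=True)
        parents = population[: pop_size // 2]
        children = [mutate(crossover(random.choice(parents), random.choice(parents)))
                    for _ in range(pop_size - len(parents))]
        population = parents + children
    best = max(population, key=fitness)
    return [t for t, b in zip(tests, best) if b]
```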

    Coding Strategies for Genetic Algorithms and Neural Nets

    The interaction between coding and learning rules in neural nets (NNs), and between coding and genetic operators in genetic algorithms (GAs), is discussed. The underlying principle advocated is that similar things in "the world" should have similar codes. Similarity metrics are suggested for the coding of images and numerical quantities in neural nets, and for the coding of neural network structures in genetic algorithms. A principal component analysis of natural images yields receptive fields resembling horizontal and vertical edge and bar detectors. The orientation sensitivity of the "bar detector" components is found to match a psychophysical model, suggesting that the brain may make some use of principal components in its visual processing. Experiments are reported on the effects of different input and output codings on the accuracy of neural nets handling numeric data. It is found that simple analogue and interpolation codes are the most successful. Experiments on the coding of image data demonstrate the sensitivity of final performance to the internal structure of the net. The interaction between the coding of the target problem and the reproduction operators of mutation and recombination in GAs is discussed and illustrated. The possibilities for using GAs to adapt aspects of NNs are considered. The permutation problem, which affects attempts to use GAs both to train net weights and to adapt net structures, is illustrated and methods to reduce it are suggested. Empirical tests using a simulated net design problem to reduce evaluation times indicate that the permutation problem may not be as severe as has been thought, but suggest the utility of a sorting recombination operator that matches hidden units according to the number of connections they have in common. A number of experiments using GAs to design network structures are reported, both to specify a net to be trained from random weights and to prune a pre-trained net. Three different coding methods are tried, and various sorting recombination operators evaluated. The results indicate that appropriate sorting can be beneficial, but the effects are problem-dependent. It is shown that the GA tends to overfit the net to the particular set of test criteria, to the possible detriment of wider generalisation ability. A method of testing the ability of a GA to make progress in the presence of noise, by adding a penalty flag, is described.
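    The sorting recombination operator admits a small illustration, under the simplifying assumption (mine, not the thesis's representation) that a hidden unit is just the set of input indices it connects to: before one-point crossover, the second parent's hidden units are greedily re-ordered so that each lines up with the first parent's unit sharing the most connections.

```python
import random

def sort_by_overlap(parent_a, parent_b):
    """Permute parent_b's hidden units to align with parent_a's by shared connections."""
    remaining = list(range(len(parent_b)))
    ordered = []
    for unit_a in parent_a:
        best = max(remaining, key=lambda j: len(unit_a & parent_b[j]))
        remaining.remove(best)
        ordered.append(parent_b[best])
    return ordered

def crossover(parent_a, parent_b):
    """One-point crossover on hidden-unit lists, applied after the sorting alignment."""
    aligned_b = sort_by_overlap(parent_a, parent_b)
    point = random.randrange(1, len(parent_a))
    return parent_a[:point] + aligned_b[point:]

# Example: each hidden unit is the set of input indices it connects to.
# a = [{0, 1}, {2, 3}, {1, 4}]
# b = [{2, 3}, {1, 4}, {0, 1}]          # the same units in a different order
# crossover(a, b) now recombines matching units instead of arbitrary pairs.
```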