
    Comparisons of Heterogeneous Distributions and Dominance Criteria

    We are interested in comparisons of standards of living across societies when observations of both income and household structure are available. We generalise the approach of Atkinson and Bourguignon (1987) to the case where the marginal distributions of needs can vary across the household populations under comparison. We assume that a sympathetic observer uses a utilitarian social welfare function in order to rank heterogeneous income distributions. Insofar as any individual can play the role of the observer, we take the unanimity point of view, according to which the planner's judgements have to comply with a certain number of basic normative principles. We impose increasingly restrictive conditions on the household's utility function and investigate their effects on the resulting rankings of the distributions. This leads us to propose four dominance criteria that can be used for providing an unambiguous ranking of income distributions for heterogeneous populations.
    Keywords: Normative Analysis, Utilitarianism, Welfarism, Multidimensional Inequality and Welfare, Bidimensional Stochastic Dominance, Inequality Reducing Transformations.
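    As a rough illustration of the kind of criterion at stake (a minimal sketch with made-up incomes, not the paper's bidimensional criteria, which additionally condition on needs), the snippet below checks first-order stochastic dominance between two empirical income distributions by comparing their empirical CDFs on a common grid:

```python
import numpy as np

def first_order_dominates(x, y, grid_size=1000):
    """True if the incomes in x first-order stochastically dominate those
    in y, i.e. the empirical CDF of x lies at or below that of y everywhere."""
    x, y = np.sort(np.asarray(x, float)), np.sort(np.asarray(y, float))
    grid = np.linspace(min(x[0], y[0]), max(x[-1], y[-1]), grid_size)
    cdf_x = np.searchsorted(x, grid, side="right") / len(x)
    cdf_y = np.searchsorted(y, grid, side="right") / len(y)
    return bool(np.all(cdf_x <= cdf_y))

# Society A's incomes are a uniform upward shift of society B's,
# so A should dominate B but not conversely.
a = [12, 18, 25, 40, 60]
b = [7, 13, 20, 35, 55]
print(first_order_dominates(a, b), first_order_dominates(b, a))  # True False
```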

    Rationality and dynamic consistency under risk and uncertainty

    For choice with deterministic consequences, the standard rationality hypothesis is ordinality, i.e., maximization of a weak preference ordering. For choice under risk (resp. uncertainty), preferences are assumed to be represented by the objectively (resp. subjectively) expected value of a von Neumann-Morgenstern utility function. For choice under risk, this implies a key independence axiom; under uncertainty, it implies some version of Savage's sure-thing principle. This chapter investigates the extent to which ordinality, independence, and the sure-thing principle can be derived from more fundamental axioms concerning behaviour in decision trees. Following Cubitt (1996), these principles include dynamic consistency, separability, and reduction of sequential choice, which can be derived in turn from one consequentialist hypothesis applied to continuation subtrees as well as entire decision trees. Examples of behaviour violating these principles are also reviewed, as are possible explanations of why such violations are often observed in experiments.
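    As a concrete illustration of the independence axiom (a minimal sketch with assumed outcomes and an assumed utility function, not taken from the chapter): if lottery p is weakly preferred to lottery q, then for any third lottery r and any weight alpha, the compound lottery alpha*p + (1-alpha)*r must remain weakly preferred to alpha*q + (1-alpha)*r. Under expected utility this holds by linearity:

```python
# Lotteries are dicts mapping outcomes to probabilities; u is an assumed
# (risk-averse) von Neumann-Morgenstern utility function.

def expected_utility(lottery, u):
    return sum(prob * u(outcome) for outcome, prob in lottery.items())

def mix(p, q, alpha):
    """The compound lottery alpha * p + (1 - alpha) * q."""
    outcomes = set(p) | set(q)
    return {x: alpha * p.get(x, 0.0) + (1 - alpha) * q.get(x, 0.0)
            for x in outcomes}

u = lambda x: x ** 0.5            # assumed utility; any vNM utility works
p = {100: 0.5, 16: 0.5}           # risky lottery, EU = 7
q = {36: 1.0}                     # sure thing,    EU = 6
r = {64: 1.0}                     # arbitrary third lottery

# p is preferred to q; independence demands that the preference survives
# mixing both with r in the same proportion alpha.
for alpha in (1.0, 0.7, 0.3):
    print(alpha, expected_utility(mix(p, r, alpha), u)
                 >= expected_utility(mix(q, r, alpha), u))  # always True
```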

    Rough set and rule-based multicriteria decision aiding

    The aim of multicriteria decision aiding is to give the decision maker a recommendation concerning a set of objects evaluated from multiple points of view, called criteria. Since a rational decision maker acts with respect to his or her value system, in order to recommend the most-preferred decision one must identify the decision maker's preferences. In this paper, we focus on preference discovery from data concerning some past decisions of the decision maker. We consider a preference model in the form of a set of "if ..., then ..." decision rules discovered from the data by inductive learning. To structure the data prior to the induction of rules, we use the Dominance-based Rough Set Approach (DRSA). DRSA is a methodology for reasoning about data which handles ordinal evaluations of objects on the considered criteria and monotonic relationships between these evaluations and the decision. We review applications of DRSA to a large variety of multicriteria decision problems.
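    A minimal sketch of the core DRSA computation (hypothetical data and a simplified formulation, not from the paper): the lower approximation of the upward union of classes at least t contains exactly those objects whose dominance cone lies entirely within that union, so membership in "class >= t" is certain despite inconsistencies in the data:

```python
# Objects are (criteria_vector, decision_class) pairs; higher values of
# criteria and classes are assumed better.

def dominates(x, y):
    """x dominates y if x is at least as good as y on every criterion."""
    return all(a >= b for a, b in zip(x, y))

def lower_approx_upward(objects, t):
    """Objects certainly belonging to class >= t: those whose dominance
    cone D+(x) is contained in the upward union Cl_t^>=."""
    result = []
    for x, _ in objects:
        cone = [(y, d) for y, d in objects if dominates(y, x)]
        if all(d >= t for _, d in cone):
            result.append(x)
    return result

# Three criteria scored 1..3, decision classes 1 (bad) to 3 (good).
data = [((3, 3, 2), 3), ((2, 3, 3), 3), ((2, 2, 2), 2),
        ((3, 2, 1), 2), ((1, 1, 2), 1), ((2, 2, 3), 1)]

print(lower_approx_upward(data, 2))
# (2, 2, 2) is excluded: the dominating object (2, 2, 3) has class 1,
# an inconsistency of exactly the kind DRSA is designed to detect.
```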

    CP-nets: A Tool for Representing and Reasoning with Conditional Ceteris Paribus Preference Statements

    Information about user preferences plays a key role in automated decision making. In many domains it is desirable to assess such preferences in a qualitative rather than quantitative way. In this paper, we propose a qualitative graphical representation of preferences that reflects conditional dependence and independence of preference statements under a ceteris paribus (all else being equal) interpretation. Such a representation is often compact and arguably quite natural in many circumstances. We provide a formal semantics for this model, and describe how the structure of the network can be exploited in several inference tasks, such as determining whether one outcome dominates (is preferred to) another, ordering a set of outcomes according to the preference relation, and constructing the best outcome subject to available evidence.
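    As a small illustration (hypothetical variables and preference tables, not the paper's examples), a CP-net can be stored as a conditional preference table per variable, and the best outcome given evidence found by a parents-first sweep that sets each variable to its most preferred value given its parents:

```python
# Each variable has parents and a table giving, for every assignment of
# the parents, its values ordered from most to least preferred.
cpnet = {
    "dinner": {"parents": [],         "cpt": {(): ["fish", "meat"]}},
    "wine":   {"parents": ["dinner"], "cpt": {("fish",): ["white", "red"],
                                              ("meat",): ["red", "white"]}},
}

def best_outcome(net, evidence=None):
    """Sweep variables parents-first, picking each variable's most
    preferred value given its parents, while respecting any evidence."""
    evidence = dict(evidence or {})
    outcome, remaining = {}, dict(net)
    while remaining:
        for var, spec in list(remaining.items()):
            if all(p in outcome for p in spec["parents"]):
                key = tuple(outcome[p] for p in spec["parents"])
                outcome[var] = evidence.get(var, spec["cpt"][key][0])
                del remaining[var]
    return outcome

print(best_outcome(cpnet))                      # {'dinner': 'fish', 'wine': 'white'}
print(best_outcome(cpnet, {"dinner": "meat"}))  # {'dinner': 'meat', 'wine': 'red'}
```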

    Discrete Mathematics and Symmetry

    Some of the most beautiful studies in Mathematics are related to Symmetry and Geometry. For this reason, we select here some contributions about such aspects and Discrete Geometry. As we know, symmetry in a system means invariance of its elements under transformations. When we consider network structures, symmetry means invariance of the adjacency of nodes under permutations of the node set. Graph isomorphism is an equivalence relation on the set of graphs, and therefore partitions the class of all graphs into equivalence classes. The underlying idea of isomorphism is that some objects have the same structure if we omit the individual character of their components. A set of graphs isomorphic to each other is called an isomorphism class of graphs. An automorphism of a graph G is an isomorphism from G onto itself, and the family of all automorphisms of G forms a permutation group.
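    A brute-force sketch of these definitions (illustrative only, and exponential in the number of nodes): enumerate all permutations of the node set and keep those that preserve adjacency:

```python
from itertools import permutations

def automorphisms(nodes, edges):
    """All adjacency-preserving permutations of an undirected graph."""
    edge_set = {frozenset(e) for e in edges}
    autos = []
    for perm in permutations(nodes):
        mapping = dict(zip(nodes, perm))
        image = {frozenset((mapping[u], mapping[v])) for u, v in edges}
        if image == edge_set:
            autos.append(mapping)
    return autos

# The 4-cycle 0-1-2-3-0: its automorphism group is the dihedral group
# of order 8 (4 rotations and 4 reflections).
cycle4 = [(0, 1), (1, 2), (2, 3), (3, 0)]
print(len(automorphisms([0, 1, 2, 3], cycle4)))  # 8
```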

    Formal design of data warehouse and OLAP systems : a dissertation presented in partial fulfilment of the requirements for the degree of Doctor of Philosophy in Information Systems at Massey University, Palmerston North, New Zealand

    A data warehouse is a single data store, where data from multiple data sources is integrated for online analytical processing (OLAP) across an entire organisation. The rationale for being single and integrated is to ensure a consistent view of organisational business performance, independent of the different angles of business perspectives. Due to its wide coverage of subjects, data warehouse design is a highly complex, lengthy and error-prone process. Furthermore, the business analytical tasks change over time, which results in changes in the requirements for the OLAP systems. Thus, data warehouse and OLAP systems are rather dynamic and the design process is continuous. In this thesis, we propose a method that is integrated, formal and application-tailored, in order to overcome the complexity problem, deal with the system dynamics, and improve the quality of the system and its chance of success.
    Our method comprises three important parts: the general ASM method with types, the application-tailored design framework for data warehouse and OLAP, and the schema integration method with a set of provably correct refinement rules. By using the ASM method, we are able to model both data and operations in a uniform conceptual framework, which enables us to design an integrated approach to data warehouse and OLAP design. The freedom given by the ASM method allows us to model the system at an abstract level that is easy to understand for both users and designers. More specifically, the language allows us to use terms from the user domain, not biased by the terms used in computer systems. The pseudo-code-like transition rules, which give the simplest form of operational semantics in ASMs, are close enough to programming languages for designers to understand, and they are rooted in mathematics, which assists in improving the quality of the system design. By extending the ASMs with types, the modelling language is tailored for data warehousing with terms that are well developed for data-intensive applications, which makes it easy to model schema evolution as refinements in dynamic data warehouse design.
    By providing the application-tailored design framework, we break down the design complexity by business processes (also called subjects in data warehousing) and design concerns. In designing the data warehouse by subjects, our method resembles Kimball's "bottom-up" approach; with the schema integration method, however, it resolves the stovepipe issue of that approach. By building up a data warehouse iteratively in an integrated framework, our method not only results in an integrated data warehouse, but also resolves the issues of complexity and delayed ROI (Return On Investment) in Inmon's "top-down" approach. By dealing with user change requests in the same way as new subjects, and by modelling data and operations explicitly in a three-tier architecture, namely the data sources, the data warehouse and the OLAP systems, our method facilitates dynamic design with system integrity. By introducing a notion of refinement specific to schema evolution, namely schema refinement, which captures the notion of schema dominance in schema integration, we are able to build a set of correctness-proven refinement rules, which simplifies the designers' work of verifying the correctness of a design. Nevertheless, we do not aim for a complete set of rules, since there are many different ways to integrate schemata; nor do we prescribe a single way of integration, so as to allow designer-favoured designs. Furthermore, given its flexibility, the method can easily be extended to newly emerging design issues.
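    As a loose illustration of what an ASM-style transition rule looks like (hypothetical rule and state, not the thesis's actual specifications): rules are guarded updates evaluated against the current state, and all updates produced in one step are applied simultaneously:

```python
# A toy ASM step. Each rule inspects the state and yields (location, value)
# updates; the step applies the whole update set at once.

def refresh_rule(state):
    """if there are new source rows then load them into the fact table."""
    if state["source_new_rows"]:
        yield ("fact_table", state["fact_table"] + state["source_new_rows"])
        yield ("source_new_rows", [])

def step(state, rules):
    updates = {}
    for rule in rules:
        for location, value in rule(state):
            updates[location] = value   # assumes a consistent update set
    return {**state, **updates}

state = {"fact_table": [("2024-01", 100)],
         "source_new_rows": [("2024-02", 120)]}
print(step(state, [refresh_rule]))
```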

    Preference reversals in judgment and choice

    According to normative decision theory there exists a principle of procedure invariance, which states that a decision maker's preference order should remain the same independently of which response mode is used. For example, the decision maker should express the same preference independently of whether he or she has to judge or decide. Nevertheless, previous research in behavioral decision making has suggested that judgments and choices yield different preference orders in both the risky and the riskless domain. In the latter, the prominence effect has been demonstrated. The main purpose of the present series of experiments was to test cognitive explanations that account for the prominence effect. One of the explanations provided a psychological account based primarily on decision-strategy compatibility. Two other explanations built on information-structuring approaches: in the first, the general idea was that decision makers differentiate between alternatives by value and belief restructuring; in the second, violations of invariance were attributed to the information structure of the task, which in many cases demands problem simplification. A prominence effect was found in most experiments for both choices and preference ratings. This finding spoke against the strategy compatibility explanation. Instead, the different forms of cognitive restructuring provided a better account, although neither provided a complete explanation on its own. The structure compatibility explanation appeared to be the more viable one, in particular for the relation between experimental manipulations and response mode outcomes. The predictions of the value-belief restructuring explanation, on the other hand, seemed to be more valid for the prominence effect found in choice than for preference ratings.

    Representing and Learning Preferences over Combinatorial Domains

    Agents make decisions based on their preferences. Thus, to predict their decisions one has to learn the agent's preferences. A key step in the learning process is selecting a model to represent those preferences. We studied this problem by borrowing techniques from the algorithm selection problem to analyze preference example sets and select the most appropriate preference representation for learning. We approached this problem in multiple steps.
    First, we determined which representations to consider. For this problem we developed the notion of preference representation language subsumption, which compares representations based on their expressive power. Subsumption creates a hierarchy of preference representations based solely on which preference orders they can express. By applying this analysis to preference representation languages over combinatorial domains, we found that some languages are better for learning preference orders than others. Subsumption, however, does not tell the whole story. When languages approximate each other (another piece of information useful for learning), the subsumption relation cannot tell us which languages might serve as good approximations of others; establishing this often requires customized techniques. We developed such techniques for two important preference representation languages: conditional lexicographic preference models (CLPMs) and conditional preference networks (CP-nets).
    Second, we developed learning algorithms for highly expressive preference representations. To this end, we investigated simulated annealing techniques for learning preference programs in two languages: ranking preference formulas (RPFs) and preference theories (PTs). We demonstrated that simulated annealing is an effective approach to learning preferences under many different conditions. This suggested that more general learning strategies might lead to equally good or even better results. We studied this possibility by considering artificial neural networks (ANNs). Our research showed that ANNs can outperform classical models at deciding dominance, but have several significant drawbacks as preference reasoning models.
    Third, we developed a method for determining which representations match which example sets. For this classification task we considered two methods. The first selects a series of features and uses them as input to a linear feed-forward ANN. The second converts the example set into a graph and uses a graph convolutional neural network (GCNN). Between the two, we found that the feature-set approach works better.
    By completing these steps we have built the foundations of a portfolio-based approach to learning preferences. We assembled a simple version of such a system as a proof of concept and tested its usefulness.
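    As a toy illustration of the simulated-annealing idea (made-up data, and a plain lexicographic model rather than the RPF/PT languages studied in the dissertation): search over attribute importance orders, accepting occasional worsening moves with a temperature-dependent probability, so as to maximise agreement with pairwise preference examples:

```python
import math, random

def lex_prefers(order, a, b):
    """True if outcome a is preferred to b under a lexicographic model
    with this attribute importance order (higher values assumed better)."""
    for i in order:
        if a[i] != b[i]:
            return a[i] > b[i]
    return False

def score(order, examples):
    return sum(lex_prefers(order, a, b) for a, b in examples)

def anneal(examples, n_attrs, steps=2000, t0=1.0):
    order = list(range(n_attrs))
    best = order[:]
    for k in range(steps):
        t = t0 * (1 - k / steps) + 1e-9             # linear cooling schedule
        cand = order[:]
        i, j = random.sample(range(n_attrs), 2)     # neighbour: swap two attrs
        cand[i], cand[j] = cand[j], cand[i]
        delta = score(cand, examples) - score(order, examples)
        if delta >= 0 or random.random() < math.exp(delta / t):
            order = cand
        if score(order, examples) > score(best, examples):
            best = order[:]
    return best

# Pairwise examples generated from a hidden importance order [2, 0, 1];
# (a, b) in examples means the agent preferred outcome a to outcome b.
random.seed(0)
hidden = [2, 0, 1]
pool = [tuple(random.randint(0, 3) for _ in range(3)) for _ in range(30)]
examples = [(a, b) for a in pool for b in pool if lex_prefers(hidden, a, b)]
print(anneal(examples, 3))  # typically recovers [2, 0, 1]
```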