    Sorted-pareto dominance and qualitative notions of optimality

    Pareto dominance is often used in decision making to compare decisions that have multiple preference values; however, it can produce an unmanageably large number of Pareto-optimal decisions. When preference value scales can be made commensurate, the Sorted-Pareto relation produces a smaller, more manageable set of decisions that are still Pareto optimal. Sorted-Pareto relies only on qualitative (ordinal) preference information, which can be easier to obtain than quantitative information. This leads to a partial order on the decisions, and in such partially ordered settings there can be many different natural notions of optimality. In this paper, we examine these natural notions of optimality as applied to the Sorted-Pareto and min-sum-of-weights cases; the Sorted-Pareto ordering has a semantics in decision making under uncertainty, being consistent with any possible order-preserving function that maps an ordinal scale to a numerical one. We show that these optimality classes, and the relationships between them, provide a meaningful way to categorise optimal decisions for presentation to a decision maker.
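    The Sorted-Pareto relation described above can be sketched as follows: sort each decision's vector of commensurate values, then apply ordinary componentwise Pareto dominance to the sorted vectors. This is an illustrative sketch (assuming a lower-is-better scale), not the paper's exact formalisation.

    ```python
    def sorted_pareto_dominates(a, b):
        """Return True if decision `a` Sorted-Pareto dominates decision `b`.

        Values are assumed commensurate and lower-is-better: each value
        vector is sorted and then compared componentwise, exactly as in
        ordinary Pareto dominance (weakly better everywhere, strictly
        better somewhere).
        """
        sa, sb = sorted(a), sorted(b)
        better_or_equal = all(x <= y for x, y in zip(sa, sb))
        strictly_better = any(x < y for x, y in zip(sa, sb))
        return better_or_equal and strictly_better

    # Two decisions scored on three criteria sharing a 1..5 scale (1 = best).
    d1 = [1, 3, 2]   # sorts to [1, 2, 3]
    d2 = [3, 2, 3]   # sorts to [2, 3, 3]
    print(sorted_pareto_dominates(d1, d2))  # True
    print(sorted_pareto_dominates(d2, d1))  # False
    ```

    Note that neither of the unsorted vectors Pareto-dominates the other componentwise; sorting first is what makes the comparison possible here, which is how the relation shrinks the optimal set.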

    Automated negotiation with Gaussian process-based utility models

    Designing agents that can efficiently learn and integrate a user's preferences into decision-making processes is a key challenge in automated negotiation. While accurate knowledge of user preferences is highly desirable, eliciting the necessary information can be rather costly, since frequent user interactions may cause inconvenience. Efficient elicitation strategies, which minimise elicitation costs while inferring the relevant information, are therefore critical. We introduce a stochastic, inverse-ranking utility model compatible with the Gaussian Process preference-learning framework and integrate it into a (belief) Markov Decision Process paradigm that formalises automated negotiation under incomplete information. Our utility model, which naturally maps ordinal preferences (inferred from the user) into (random) utility values (with the randomness reflecting the underlying uncertainty), provides the basic quantitative modelling ingredient for automated (agent-based) negotiation.
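    The core idea of mapping an ordinal ranking into random utility values can be sketched with a simple sampler: draw candidate utilities, then assign them to outcomes in rank order, so every sample respects the elicited ordinal information while the cardinal values remain uncertain. This is only an illustrative sketch; the paper's model is built on Gaussian Process preference learning, not plain i.i.d. draws.

    ```python
    import random

    def sample_utilities(ranking, n_samples=1000, seed=0):
        """Sample random utility assignments consistent with an ordinal ranking.

        `ranking` lists outcomes from most to least preferred. Each sample
        draws one standard-normal value per outcome, sorts the draws in
        descending order, and assigns them to outcomes in rank order, so
        the ordinal constraint always holds while the cardinal utilities
        vary from sample to sample (reflecting the underlying uncertainty).
        """
        rng = random.Random(seed)
        samples = []
        for _ in range(n_samples):
            draws = sorted((rng.gauss(0.0, 1.0) for _ in ranking), reverse=True)
            samples.append(dict(zip(ranking, draws)))
        return samples

    # Hypothetical negotiation offers ranked by the user: a > b > c.
    samples = sample_utilities(["offer_a", "offer_b", "offer_c"])
    assert all(s["offer_a"] > s["offer_b"] > s["offer_c"] for s in samples)
    ```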

    Vive la Différence? Structural Diversity as a Challenge for Metanormative Theories

    Decision-making under normative uncertainty requires an agent to aggregate the assessments of options given by rival normative theories into a single assessment that tells her what to do in light of her uncertainty. But what if the assessments of rival theories differ not just in their content but in their structure -- e.g., some are merely ordinal while others are cardinal? This paper describes and evaluates three general approaches to this "problem of structural diversity": structural enrichment, structural depletion, and multi-stage aggregation. All three approaches have notable drawbacks, but I tentatively defend multi-stage aggregation as the least bad of the three.

    A new method for determining physician decision thresholds using empiric, uncertain recommendations

    Background: The concept of risk thresholds has been studied in medical decision making for over 30 years. During that time, physicians have been shown to be poor at estimating the probabilities required to use this method. To better assess physician risk thresholds and to more closely model medical decision making, we set out to design and test a method that derives thresholds from actual physician treatment recommendations. Such an approach would avoid the need to ask physicians for estimates of patient risk when trying to determine individual thresholds for treatment. Assessments of physician decision making are increasingly relevant as new data are generated from clinical research. For example, recommendations made in the setting of ocular hypertension are of interest, as a large clinical trial has identified new risk factors that should be considered by physicians. Precisely how physicians use this new information when making treatment recommendations has not yet been determined.
    Results: We derived a new method for estimating treatment thresholds using ordinal logistic regression and tested it by asking ophthalmologists to review cases of ocular hypertension before expressing how likely they would be to recommend treatment. Fifty-eight physicians were recruited from the American Glaucoma Society. Demographic information was collected from the participating physicians and the treatment threshold for each physician was estimated. The method was validated by showing that while treatment thresholds varied over a wide range, the most common values were consistent with the 10-15% 5-year risk of glaucoma suggested by expert opinion and decision analysis.
    Conclusions: This method has advantages over prior means of assessing treatment thresholds. It does not require physicians to explicitly estimate patient risk and it allows for uncertainty in the recommendations. These advantages will make it possible to use this method when assessing interventions intended to alter clinical decision making.
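    The arithmetic behind deriving a threshold from a fitted ordinal-logit model can be shown in miniature: under a proportional-odds model with a single risk covariate, the treatment threshold is the risk level at which the modeled probability of recommending treatment crosses 50%. The coefficient values below are purely hypothetical; the study fits such coefficients per physician from their case reviews.

    ```python
    import math

    def treatment_threshold(cutpoint, beta_risk):
        """Risk level at which a fitted ordinal-logit model puts 50%
        probability on recommending treatment.

        Under a proportional-odds model with one risk covariate,
        P(treat | risk) = 1 / (1 + exp(cutpoint - beta_risk * risk)),
        which equals 0.5 exactly where beta_risk * risk = cutpoint.
        """
        return cutpoint / beta_risk

    def p_treat(risk, cutpoint, beta_risk):
        """Modeled probability of recommending treatment at a given risk."""
        return 1.0 / (1.0 + math.exp(cutpoint - beta_risk * risk))

    cutpoint, beta = 3.0, 25.0          # illustrative values only
    t = treatment_threshold(cutpoint, beta)
    print(round(t, 3))                   # 0.12, i.e. a 12% five-year risk
    assert abs(p_treat(t, cutpoint, beta) - 0.5) < 1e-9
    ```

    With these made-up coefficients the derived threshold lands at 12%, inside the 10-15% range the abstract reports as the most common value; the point of the method is that the physician never states this number directly.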

    Development of accident prediction model by using artificial neural network (ANN)

    Statistical crash prediction models have frequently been used in highway safety studies. They can be used to identify major contributing factors or to establish relationships between crashes and explanatory accident variables. Measures to prevent accidents include speed reduction, road widening, speed enforcement, and the construction of road dividers. The purpose of this study is to develop an accident prediction model for federal road FT 050 from Batu Pahat to Kluang. The study process involves the identification of accident blackspot locations, establishment of general accident patterns, analysis of the factors involved, site studies, and development of an accident prediction model using an Artificial Neural Network (ANN) in the software NeuroShell2. The significance of the variables selected from these accident factors is checked to ensure the developed model gives good prediction results. The performance of the neural network is evaluated using the Mean Absolute Percentage Error (MAPE). The results showed that the best neural network architecture for the accident prediction model on federal road FT 050 is 4-10-1, with a learning rate of 0.1 and a momentum rate of 0.2. This network model has the lowest MAPE and the highest linear correlation (r = 0.8986). The study established an accident point weightage to rank the blackspot sections by kilometre along the FT 050 road (km 1 - km 103). Several main accident factors were also identified along this road, and the collected data were successfully analysed using the artificial neural network.
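    The evaluation pipeline described above can be sketched in a few lines: a forward pass of a 4-10-1 feedforward network (four inputs, ten sigmoid hidden units, one linear output) and the MAPE metric used to compare candidate networks. The weights here are random placeholders, not the fitted NeuroShell2 model.

    ```python
    import math
    import random

    def mape(actual, predicted):
        """Mean Absolute Percentage Error, the metric used to rank networks."""
        return 100.0 / len(actual) * sum(
            abs((a - p) / a) for a, p in zip(actual, predicted))

    def forward_4_10_1(x, w_hidden, b_hidden, w_out, b_out):
        """One forward pass of a 4-10-1 net with sigmoid hidden units."""
        hidden = [
            1.0 / (1.0 + math.exp(-(sum(w * xi for w, xi in zip(row, x)) + b)))
            for row, b in zip(w_hidden, b_hidden)
        ]
        return sum(wo * h for wo, h in zip(w_out, hidden)) + b_out

    rng = random.Random(0)
    w_hidden = [[rng.uniform(-1, 1) for _ in range(4)] for _ in range(10)]
    b_hidden = [rng.uniform(-1, 1) for _ in range(10)]
    w_out = [rng.uniform(-1, 1) for _ in range(10)]

    y = forward_4_10_1([0.2, 0.5, 0.1, 0.9], w_hidden, b_hidden, w_out, 0.0)
    print(mape([10.0, 20.0], [9.0, 22.0]))  # (10% + 10%) / 2 = 10.0
    ```

    In the study, training (with the reported learning rate 0.1 and momentum 0.2) adjusts these weights until the MAPE over the accident data is minimised; the architecture with the lowest MAPE was selected.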

    Development and evaluation of a computer-aided module for the Marketing subject: Politeknik Port Dickson

    This study aims to develop a Computer-Aided Module (MBK) for the Marketing subject. The MBK was developed using the AutoPlay Media and Flash MX software. The study sample consisted of 30 Diploma in Marketing students at Politeknik Port Dickson. Data were collected through a questionnaire and analysed by frequency, percentage and mean score using the Statistical Package for the Social Sciences (SPSS) version 11.0. The findings show that the evaluation of the developed MBK in the teaching and learning process was high, meaning the MBK is suitable for use in teaching and learning at Politeknik Port Dickson.

    Heuristic Voting as Ordinal Dominance Strategies

    Decision making under uncertainty is a key component of many AI settings, and in particular of voting scenarios where strategic agents are trying to reach a joint decision. The common approach to handling uncertainty is to maximize expected utility, which requires a cardinal utility function as well as detailed probabilistic information. However, such probabilities are often not easy to estimate or apply. To this end, we present a framework that allows "shades of gray" of likelihood without probabilities. Specifically, we create a hierarchy of sets of world states based on a prospective poll, with inner sets containing more likely outcomes. This hierarchy of likelihoods allows us to define what we term ordinally-dominated strategies. We use this approach to justify various known voting heuristics as bounded-rational strategies. Comment: This is the full version of paper #6080 accepted to AAAI'1
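    One plausible reading of dominance over such a nested hierarchy can be sketched as a worst-case comparison at every likelihood level: a strategy ordinally dominates another if it is weakly better in the worst case on every level of the hierarchy, and strictly better on at least one. This is an illustrative formalisation under that assumption, not necessarily the paper's exact definition.

    ```python
    def ordinally_dominates(u_a, u_b, hierarchy):
        """Check whether strategy A ordinally dominates strategy B.

        `u_a` and `u_b` map world states to the utility each strategy
        yields in that state; `hierarchy` is a list of nested sets of
        world states, inner (more likely) sets first. A dominates B if
        A's worst-case utility is at least B's on every likelihood
        level, and strictly greater on some level.
        """
        def worst(u, states):
            return min(u[w] for w in states)

        at_least = all(worst(u_a, s) >= worst(u_b, s) for s in hierarchy)
        strictly = any(worst(u_a, s) > worst(u_b, s) for s in hierarchy)
        return at_least and strictly

    # Hypothetical utilities of two voting strategies over four poll outcomes.
    u_a = {"w1": 3, "w2": 2, "w3": 1, "w4": 0}
    u_b = {"w1": 3, "w2": 1, "w3": 1, "w4": 0}
    levels = [{"w1"}, {"w1", "w2"}, {"w1", "w2", "w3", "w4"}]
    print(ordinally_dominates(u_a, u_b, levels))  # True
    ```

    On the innermost and outermost levels the two strategies tie in the worst case, but on the middle level A guarantees 2 where B only guarantees 1, so A dominates without any probabilities being specified.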

    Dominance Measuring Approach using Stochastic Weights

    In this paper we propose an approach to obtaining a ranking of alternatives in multicriteria decision-making problems when there is imprecision concerning the alternative performances, the component utility functions and the weights. We assume the decision maker's preferences are represented by an additive multi-attribute utility function, in which the weights are modeled by independent normal variables, the performance of each alternative in each attribute is an interval value, and classes of utility functions are available for each attribute. The approach we propose is based on dominance measures, which are computed in a similar way to when the imprecision concerning weights is modeled by uniform distributions or by an ordinal relation. In this paper we show how the approach can be applied when the imprecision concerning weights is represented by normal distributions. Extensions to other distributions, such as the truncated normal or beta, are feasible using Monte Carlo simulation techniques.
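    The Monte Carlo computation alluded to above can be sketched as follows: sample normal weights (crudely truncated at zero and normalised), sample each alternative's interval-valued performances uniformly, and estimate the probability that one alternative's additive utility exceeds the other's. For simplicity this sketch takes the component utilities to be linear in the sampled performances, which is an assumption, not the paper's general setting with classes of utility functions.

    ```python
    import random

    def dominance_measure(perf_a, perf_b, weight_params, n=20000, seed=0):
        """Monte Carlo estimate of P(utility(A) > utility(B)).

        `perf_a`/`perf_b` give (lo, hi) performance intervals per attribute;
        `weight_params` gives (mean, std) of each independent normal weight.
        Weights are re-drawn if non-positive, then normalised to sum to one.
        Swapping the weight sampler for truncated-normal or beta draws
        covers the extensions mentioned in the abstract.
        """
        rng = random.Random(seed)

        def draw_weights():
            w = []
            for mu, sigma in weight_params:
                x = rng.gauss(mu, sigma)
                while x <= 0:            # crude truncation at zero
                    x = rng.gauss(mu, sigma)
                w.append(x)
            total = sum(w)
            return [x / total for x in w]

        wins = 0
        for _ in range(n):
            w = draw_weights()
            ua = sum(wi * rng.uniform(lo, hi) for wi, (lo, hi) in zip(w, perf_a))
            ub = sum(wi * rng.uniform(lo, hi) for wi, (lo, hi) in zip(w, perf_b))
            wins += ua > ub
        return wins / n

    # Two alternatives, two attributes, interval performances on [0, 1].
    p = dominance_measure(
        perf_a=[(0.6, 0.8), (0.5, 0.7)],
        perf_b=[(0.3, 0.5), (0.4, 0.6)],
        weight_params=[(0.5, 0.1), (0.5, 0.1)])
    print(round(p, 2))  # high: A outperforms B across almost all samples
    ```

    Ranking all alternatives then reduces to computing such pairwise measures and ordering the alternatives by how strongly they dominate the rest.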