
    An efficient upper approximation for conditional preference

    The fundamental operation of dominance testing, i.e., determining if one alternative is preferred to another, is in general very hard for methods of reasoning with qualitative conditional preferences such as CP-nets and conditional preference theories (CP-theories). It is therefore natural to consider approximations of preference, and upper approximations are of particular interest, since they can be used within a constraint optimisation algorithm to find some of the optimal solutions. Upper approximations for preference in CP-theories have previously been suggested, but they require consistency, as well as strong acyclicity conditions on the variables. We define an upper approximation of conditional preference for which dominance checking is efficient, and which can be applied very generally for CP-theories.
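The dominance testing the abstract refers to can be sketched on a toy example. Below is a minimal, illustrative CP-net over two binary variables; the variable names and preference tables are invented, and the exhaustive flip search is exactly the operation that is hard in general and that an upper approximation avoids:

```python
from collections import deque

# Toy acyclic CP-net over binary variables A and B:
#   A: value 1 is unconditionally preferred to 0.
#   B: if A = 1 prefer B = 1; if A = 0 prefer B = 0.
def preferred_value(var, outcome):
    """Preferred value of `var` given the values of its parents."""
    return 1 if var == 'A' else outcome['A']

def improving_flips(outcome):
    """Outcomes reachable by flipping one variable to its preferred value."""
    for var in ('A', 'B'):
        if outcome[var] != preferred_value(var, outcome):
            yield {**outcome, var: preferred_value(var, outcome)}

def dominates(better, worse):
    """Exact dominance: is `better` reachable from `worse` by a sequence
    of improving flips?  This search is what blows up in general."""
    key = lambda o: tuple(sorted(o.items()))
    frontier, seen = deque([worse]), set()
    while frontier:
        cur = frontier.popleft()
        if key(cur) == key(better):
            return True
        if key(cur) in seen:
            continue
        seen.add(key(cur))
        frontier.extend(improving_flips(cur))
    return False
```

Here `dominates({'A': 1, 'B': 1}, {'A': 0, 'B': 0})` holds via the flip sequence A then B; an efficient upper approximation answers such queries without the exhaustive search, at the cost of occasionally reporting dominance where none exists.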

    Data-driven satisficing measure and ranking

    We propose a computational framework for real-time risk assessment and prioritization of random outcomes without prior information on probability distributions. The basic model is built on the satisficing measure (SM), which yields a single index for risk comparison. Since the SM is a dual representation of a family of risk measures, we consider problems constrained by general convex risk measures and specifically by Conditional Value-at-Risk. Starting from offline optimization, we apply the sample average approximation technique and establish the convergence rate and validation of the optimal solutions. In the online stochastic optimization case, we develop primal-dual stochastic approximation algorithms for general risk-constrained problems and derive their regret bounds. For both the offline and online cases, we illustrate the relationship between risk-ranking accuracy and sample size (or number of iterations). Comment: 26 pages, 6 figures
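The sample average approximation step for the CVaR case can be sketched with a toy Rockafellar–Uryasev estimator (the function name and setup are illustrative, not the paper's code):

```python
import numpy as np

def cvar_saa(losses, alpha=0.95):
    """Sample average approximation of CVaR_alpha via the
    Rockafellar-Uryasev representation
        CVaR_alpha(L) = min_t  t + E[(L - t)+] / (1 - alpha),
    whose sample minimiser t* is the empirical alpha-quantile (VaR)."""
    losses = np.asarray(losses, dtype=float)
    var = np.quantile(losses, alpha)                  # empirical VaR_alpha
    return var + np.maximum(losses - var, 0.0).mean() / (1.0 - alpha)
```

For the losses 0..99 at alpha = 0.9 this returns 94.5, the average of the ten worst losses; as the sample grows, the empirical estimate converges to the true CVaR, which is the kind of convergence statement the offline analysis quantifies.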

    A Global Analysis of Dark Matter Signals from 27 Dwarf Spheroidal Galaxies using 11 Years of Fermi-LAT Observations

    We search for a dark matter signal in 11 years of Fermi-LAT gamma-ray data from 27 Milky Way dwarf spheroidal galaxies with spectroscopically measured J-factors. Our analysis includes uncertainties in J-factors and background normalisations and compares results from a Bayesian and a frequentist perspective. We revisit the dwarf spheroidal galaxy Reticulum II, confirming that the purported gamma-ray excess seen in Pass 7 data is much weaker in Pass 8, independently of the statistical approach adopted. We introduce for the first time posterior predictive distributions to quantify the probability of a dark matter detection from another dwarf galaxy given a tentative excess. A global analysis including all 27 dwarfs shows no indication for a signal in nine annihilation channels. We present stringent new Bayesian and frequentist upper limits on the dark matter cross section as a function of dark matter mass. The best-fit dark matter parameters associated with the Galactic Centre excess are excluded by at least 95% confidence level/posterior probability in the frequentist/Bayesian framework in all cases. However, from a Bayesian model comparison perspective, dark matter annihilation within the dwarfs is not strongly disfavoured compared to a background-only model. These results constitute the highest exposure analysis on the most complete sample of dwarfs to date. Posterior samples and likelihood maps from this study are publicly available. Comment: 27+5 pages, 10 figures. Version 2 corresponds to the Accepted Manuscript version of the JCAP article; the analysis has been updated to Pass 8 R3 data plus 4FGL catalogue, with one more year of data and more annihilation channels. Supplementary Material (tabulated limits, likelihoods, and posteriors) is available on Zenodo at https://doi.org/10.5281/zenodo.261226
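The upper-limit machinery can be illustrated with a deliberately simplified single-bin Poisson counting model: a flat prior on the signal and a known background, both of which are toy assumptions (the real analysis marginalises J-factor and background-normalisation uncertainties across all 27 dwarfs):

```python
import numpy as np

def bayes_upper_limit(n_obs, background, cl=0.95, s_max=50.0, grid=20001):
    """Toy Bayesian upper limit on a Poisson signal rate s >= 0 with a
    known background b > 0 and a flat prior on s:
        posterior(s)  propto  exp(-(s + b)) * (s + b)**n_obs."""
    s = np.linspace(0.0, s_max, grid)
    log_post = -(s + background) + n_obs * np.log(s + background)
    post = np.exp(log_post - log_post.max())   # rescale for stability
    cdf = np.cumsum(post)
    cdf /= cdf[-1]
    return s[np.searchsorted(cdf, cl)]         # smallest s with CDF >= cl
```

With zero observed counts and negligible background this reproduces the textbook 95% limit of about 3 signal events, and the limit grows with the observed count, mirroring how the cross-section limits in the abstract scale with any excess.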

    Portfolio selection models: A review and new directions

    Modern Portfolio Theory (MPT) is based upon the classical Markowitz model which uses variance as a risk measure. A generalization of this approach leads to mean-risk models, in which a return distribution is characterized by the expected value of return (desired to be large) and a risk value (desired to be kept small). Portfolio choice is made by solving an optimization problem, in which the portfolio risk is minimized and a desired level of expected return is specified as a constraint. The need to penalize different undesirable aspects of the return distribution led to the proposal of alternative risk measures, notably those penalizing only the downside part (adverse) and not the upside (potential). The downside risk considerations constitute the basis of the Post Modern Portfolio Theory (PMPT). Examples of such risk measures are lower partial moments, Value-at-Risk (VaR) and Conditional Value-at-Risk (CVaR). We revisit these risk measures and the resulting mean-risk models. We discuss alternative models for portfolio selection, their choice criteria and the evolution of MPT to PMPT, which incorporates utility maximization and stochastic dominance.
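The difference between variance and a downside measure such as CVaR can be seen on two return samples with identical variance but mirrored skew (an illustrative sketch; the sample construction and names are invented):

```python
import numpy as np

def empirical_cvar(returns, alpha=0.95):
    """Average loss in the worst (1 - alpha) tail, with loss = -return.
    A downside (PMPT-style) risk measure, unlike variance."""
    losses = np.sort(-np.asarray(returns, dtype=float))
    k = max(1, int(np.ceil((1 - alpha) * losses.size)))
    return losses[-k:].mean()

rng = np.random.default_rng(42)
base = rng.standard_normal(100_000)
upside = base**2 - 1       # long right tail: rare large gains
downside = -upside         # long left tail: rare large losses
# upside.var() == downside.var(), yet their CVaRs differ sharply.
```

Variance cannot tell the two samples apart, but the empirical CVaR of the left-skewed sample is far larger: exactly the asymmetry between adverse and potential deviations that the PMPT risk measures in the abstract are designed to capture.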

    The Complexity of Fairness through Equilibrium

    Competitive equilibrium with equal incomes (CEEI) is a well-known fair allocation mechanism; however, for indivisible resources a CEEI may not exist. It was shown in [Budish '11] that in the case of indivisible resources there is always an allocation, called A-CEEI, that is approximately fair, approximately truthful, and approximately efficient, for some favorable approximation parameters. This approximation is used in practice to assign students to classes. In this paper we show that finding the A-CEEI allocation guaranteed to exist by Budish's theorem is PPAD-complete. We further show that finding an approximate equilibrium with better approximation guarantees is even harder: NP-complete. Comment: Appeared in EC 201