
    Optimizing Multiple Simultaneous Objectives for Voting and Facility Location

    We study the classic facility location setting, where we are given $n$ clients and $m$ possible facility locations in some arbitrary metric space, and want to choose a location to build a facility. The exact same setting also arises in spatial social choice, where voters are the clients and the goal is to choose a candidate or outcome, with the distance from a voter to an outcome representing the cost of this outcome for the voter (e.g., based on their ideological differences). Unlike most previous work, we do not focus on a single objective to optimize (e.g., the total distance from clients to the facility, or the maximum distance, etc.), but instead attempt to optimize several different objectives simultaneously. More specifically, we consider the $\ell$-centrum family of objectives, which includes the total distance, max distance, and many others. We present tight bounds on how well any pair of such objectives (e.g., max and sum) can be simultaneously approximated compared to their optimum outcomes. In particular, we show that for any such pair of objectives, it is always possible to choose an outcome which simultaneously approximates both objectives within a factor of $1+\sqrt{2}$, and give a precise characterization of how this factor improves as the two objectives being optimized become more similar. For $q > 2$ different centrum objectives, we show that it is always possible to approximate all $q$ of these objectives within a small constant, and that this constant approaches 3 as $q \rightarrow \infty$. Our results show that when optimizing only a few simultaneous objectives, it is always possible to form an outcome which is a significantly-better-than-3 approximation for all of these objectives. Comment: to be published in the Proceedings of the 37th AAAI Conference on Artificial Intelligence (AAAI 2023).
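
    A minimal sketch of the two notions at play, assuming a toy line-metric instance rather than the paper's algorithm (the distance matrix, function names, and brute-force search over candidates are illustrative only): an $\ell$-centrum objective sums the $\ell$ largest client distances, and a simultaneous approximation picks the candidate whose worst ratio to each objective's optimum is smallest.

    def l_centrum(distances, l):
        # Sum of the l largest client-to-outcome distances:
        # l = 1 is the max objective, l = len(distances) is the sum.
        return sum(sorted(distances, reverse=True)[:l])

    def best_simultaneous_outcome(dist_matrix, ls):
        # dist_matrix[j][i] is the distance from client i to candidate j.
        # Returns the candidate minimizing the worst approximation ratio
        # over the given l-centrum objectives.
        opts = {l: min(l_centrum(row, l) for row in dist_matrix) for l in ls}
        def worst(row):
            return max(l_centrum(row, l) / opts[l] for l in ls)
        j = min(range(len(dist_matrix)), key=lambda j: worst(dist_matrix[j]))
        return j, worst(dist_matrix[j])

    # Toy line metric: clients at 0, 0, 0, 10; candidates at 0, 5, 10.
    D = [[0, 0, 0, 10],
         [5, 5, 5, 5],
         [10, 10, 10, 0]]
    j, ratio = best_simultaneous_outcome(D, ls=[1, 4])  # max and sum
    print(f"candidate {j} is a {ratio:.3f}-approximation for both")
    # ratio = 2.0 here, within the paper's 1 + sqrt(2) bound.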

    The Price of Fairness

    In this paper we study resource allocation problems that involve multiple self-interested parties or players and a central decision maker. We introduce and study the price of fairness, which is the relative system efficiency loss under a "fair" allocation, assuming that a fully efficient allocation is one that maximizes the sum of player utilities. We focus on two well-accepted, axiomatically justified notions of fairness, viz., proportional fairness and max-min fairness. For these notions we provide a tight characterization of the price of fairness for a broad family of problems. Funding: National Science Foundation (U.S.) (grants DMI-0556106 and EFRI-0735905).
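
    A back-of-the-envelope illustration of the definition (my own toy numbers, not an example from the paper): split one divisible resource between two players with linear utilities, and compare the sum of utilities under each fair allocation to the utilitarian optimum.

    # Two players share one unit of a divisible resource; player 1 gets x,
    # player 2 gets 1 - x, with linear utilities u1 = a*x, u2 = b*(1 - x).
    a, b = 3.0, 1.0  # hypothetical utility rates

    # Utilitarian optimum: give everything to the more efficient player.
    util_opt = max(a, b)

    # Proportional fairness maximizes log(u1) + log(u2); for linear
    # utilities this yields the equal split x = 1/2.
    pf_sum = a * 0.5 + b * 0.5

    # Max-min fairness equalizes utilities: a*x = b*(1 - x).
    x_mm = b / (a + b)
    mm_sum = a * x_mm + b * (1 - x_mm)  # = 2ab/(a + b)

    # Price of fairness: relative efficiency loss versus the optimum.
    print(f"price of proportional fairness: {1 - pf_sum / util_opt:.3f}")
    print(f"price of max-min fairness:      {1 - mm_sum / util_opt:.3f}")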

    Approximate Tradeoffs on Matroids

    We consider problems where a solution is evaluated by a pair of values, each coordinate representing an agent's utility. Due to possible conflicts between the agents, it is unlikely that one feasible solution is optimal for both, so a natural aim is to find tradeoffs. We investigate tradeoff solutions with guarantees for the agents. The focus is on discrete problems having a matroid structure. We provide polynomial-time deterministic algorithms that achieve several such guarantees, and we prove that some guarantees cannot be reached.
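
    A brute-force sketch on a toy graphic matroid (spanning trees of a small graph), not the paper's polynomial-time algorithms; the graph and the two agents' per-edge utilities are made up. It enumerates the bases and keeps the Pareto-optimal (non-dominated) utility pairs, i.e., the tradeoff solutions.

    from itertools import combinations

    def is_spanning_tree(n, tree_edges):
        # Union-find check that the edges form a spanning tree on n vertices.
        parent = list(range(n))
        def find(v):
            while parent[v] != v:
                parent[v] = parent[parent[v]]
                v = parent[v]
            return v
        for u, v in tree_edges:
            ru, rv = find(u), find(v)
            if ru == rv:
                return False  # cycle
            parent[ru] = rv
        return len(tree_edges) == n - 1

    # Edge -> (agent 1 utility, agent 2 utility), chosen to conflict.
    edges = {(0, 1): (5, 1), (1, 2): (4, 2), (2, 3): (1, 5),
             (0, 2): (2, 1), (1, 3): (3, 3)}

    bases = []
    for tree in combinations(edges, 3):  # a base has n - 1 = 3 edges
        if is_spanning_tree(4, tree):
            u1 = sum(edges[e][0] for e in tree)
            u2 = sum(edges[e][1] for e in tree)
            bases.append((u1, u2, tree))

    # A base is a tradeoff solution if no other base is at least as good
    # for both agents (with a different utility pair).
    pareto = [b for b in bases
              if not any(c[0] >= b[0] and c[1] >= b[1] and c[:2] != b[:2]
                         for c in bases)]
    for u1, u2, tree in sorted(pareto):
        print(u1, u2, tree)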

    Three essays in financial econometrics

    Sparse Weighted Norm Minimum Variance Portfolio. In this paper, I propose a weighted L1 and squared L2 norm penalty in portfolio optimization to improve portfolio performance as the number of available assets N grows large. I show that under certain conditions, the realized risk of the portfolio obtained from this strategy will asymptotically be less than that of some benchmark portfolios with high probability. An intuitive interpretation of why including fewer assets may be beneficial in the high-dimensional setting is built on a constraint between the sparsity of the optimal weight vector and the realized risk. The theoretical results also imply that the penalty parameters for the weighted norm penalty can be specified as a function of N and the sample size n. An efficient coordinate-wise descent algorithm is then introduced to solve the penalized weighted norm portfolio optimization problem. I find that the performance of the weighted norm strategy dominates other benchmarks for the Fama-French 100 size and book-to-market ratio portfolios, but results are mixed for individual stocks. Several novel alternative penalties are also proposed, and their performance is shown to be comparable to the weighted norm strategy.

    Bond Variance Risk Premia (joint work with Philippe Mueller and Andrea Vedolin). Using data from 1983 to 2010, we propose a new fear measure for Treasury markets, akin to the VIX for equities, labeled TIV. We show that TIV explains one third of the time variation in funding liquidity and that the spread between the VIX and TIV captures flight to quality. We then construct Treasury bond variance risk premia as the difference between the implied variance and an expected variance estimate using autoregressive models. Bond variance risk premia display pronounced spikes during crisis periods. We show that variance risk premia encompass a broad spectrum of macroeconomic uncertainty. Uncertainty about the nominal and the real side of the economy increases variance risk premia, but uncertainty about monetary policy has a strongly negative effect. We document that bond variance risk premia predict excess returns on Treasuries, stocks, corporate bonds, and mortgage-backed securities, both in-sample and out-of-sample. Furthermore, this predictability is not subsumed by other standard predictors.

    Testing Jumps via False Discovery Rate Control. Many recently developed nonparametric jump tests can be viewed as multiple hypothesis testing problems. For such tests, it is well known that controlling the type I error often unavoidably produces a large proportion of erroneous rejections, and this situation becomes even worse when jump occurrence is a rare event. To obtain more reliable results, we aim to control the false discovery rate (FDR), an efficient compound error measure for erroneous rejections in multiple testing problems. We perform the test via a nonparametric statistic proposed by Barndorff-Nielsen and Shephard (2006), and control the FDR with the procedure proposed by Benjamini and Hochberg (1995). We provide asymptotic results for the FDR control. Through simulations, we examine the relevant theoretical results and demonstrate the advantages of controlling the FDR. The hybrid approach is then applied to an empirical analysis of two benchmark stock indices using high-frequency data.
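
    As a sketch of the third essay's mechanics, assuming the per-day p-values from the Barndorff-Nielsen and Shephard jump statistic have already been computed (the p-values below are made up), the Benjamini-Hochberg step-up rule flags jump days while controlling the FDR at level q:

    def benjamini_hochberg(pvalues, q=0.05):
        # Step-up rule: find the largest rank k with p_(k) <= (k / m) * q
        # and reject the k hypotheses with the smallest p-values.
        m = len(pvalues)
        order = sorted(range(m), key=lambda i: pvalues[i])
        k_max = 0
        for rank, i in enumerate(order, start=1):
            if pvalues[i] <= rank / m * q:
                k_max = rank
        return sorted(order[:k_max])

    # Hypothetical p-values, one per trading day.
    pvals = [0.001, 0.20, 0.03, 0.65, 0.004, 0.049, 0.91]
    print("jump days:", benjamini_hochberg(pvals, q=0.05))  # -> [0, 4]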

    "Rotterdam econometrics": publications of the econometric institute 1956-2005

    This paper contains a list of all publications over the period 1956-2005, as reported in the Rotterdam Econometric Institute Reprint series during 1957-2005.

    Journal of Telecommunications and Information Technology, 2003, no. 3

    Quarterly.

    27th Annual European Symposium on Algorithms: ESA 2019, September 9-11, 2019, Munich/Garching, Germany
