
    Rate-Distortion for Ranking with Incomplete Information

    We study the rate-distortion relationship in the set of permutations endowed with the Kendall τ-metric and the Chebyshev metric. Our study is motivated by the application of permutation rate-distortion to the average-case and worst-case analysis of algorithms for ranking with incomplete information and approximate sorting algorithms. For the Kendall τ-metric we provide bounds for small, medium, and large distortion regimes, while for the Chebyshev metric we present bounds that are valid for all distortions and are especially accurate for small distortions. In addition, for the Chebyshev metric, we provide a construction for covering codes.
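
    A minimal Python sketch (illustrative, not from the paper) of the two permutation metrics the abstract studies: the Kendall τ distance counts pairs that the two permutations order differently, and the Chebyshev (ℓ∞) distance takes the largest per-element displacement.

```python
from itertools import combinations

def kendall_tau_distance(p, q):
    """Count pairs of elements that p and q order differently (inversions)."""
    pos_q = {v: i for i, v in enumerate(q)}
    r = [pos_q[v] for v in p]  # positions in q, taken in p's order
    return sum(1 for i, j in combinations(range(len(r)), 2) if r[i] > r[j])

def chebyshev_distance(p, q):
    """Largest absolute displacement of any single element between p and q."""
    pos_q = {v: i for i, v in enumerate(q)}
    return max(abs(i - pos_q[v]) for i, v in enumerate(p))

p, q = [0, 1, 2, 3], [1, 0, 3, 2]
print(kendall_tau_distance(p, q))  # 2: the pairs (0,1) and (2,3) are swapped
print(chebyshev_distance(p, q))    # 1: no element moves more than one slot
```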

    A practical guide and software for analysing pairwise comparison experiments

    Most popular strategies to capture subjective judgments from humans involve the construction of a unidimensional relative measurement scale, representing order preferences or judgments about a set of objects or conditions. This information is generally captured by means of direct scoring, either in the form of a Likert or cardinal scale, or by comparative judgments in pairs or sets. In this sense, the use of pairwise comparisons is becoming increasingly popular because of the simplicity of this experimental procedure. However, this strategy requires non-trivial data analysis to aggregate the comparison ranks into a quality scale and analyse the results, in order to take full advantage of the collected data. This paper explains the process of translating pairwise comparison data into a measurement scale, discusses the benefits and limitations of such scaling methods, and introduces publicly available software written in Matlab. We improve on existing scaling methods by introducing outlier analysis, providing methods for computing confidence intervals and statistical testing, and introducing a prior, which reduces estimation error when the number of observers is low. Most of our examples focus on image quality assessment.
    Comment: Code available at https://github.com/mantiuk/pwcm
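
    As an illustration of the scaling step described above, here is a minimal Python sketch of one standard scaling model, Bradley-Terry fitted with the classic minorization-maximization update. It is a simplified stand-in, not the paper's Matlab software (which additionally handles outliers, confidence intervals, and a prior), and the comparison counts below are made up.

```python
import numpy as np

def bradley_terry_scale(wins, iters=200, eps=1e-12):
    """Fit Bradley-Terry scores; wins[i, j] = times item i beat item j."""
    n = wins.shape[0]
    s = np.ones(n)                       # strength parameters
    total_wins = wins.sum(axis=1)
    n_ij = wins + wins.T                 # comparisons per pair
    for _ in range(iters):
        denom = np.array([
            sum(n_ij[i, j] / (s[i] + s[j]) for j in range(n) if j != i)
            for i in range(n)
        ])
        s = total_wins / (denom + eps)
        s /= s.sum()                     # fix the arbitrary overall scale
    return np.log(s)                     # quality scale in log space

# Hypothetical counts: entry (i, j) = how often condition i was preferred.
counts = np.array([[0, 8, 9],
                   [2, 0, 6],
                   [1, 4, 0]], dtype=float)
print(bradley_terry_scale(counts))
```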

    Asset Market Structures and Monetary Policy in a Small Open Economy

    This paper sets up a canonical New Keynesian small open economy model with nominal price rigidities to explore the impact of habit persistence and exchange rate pass-through on the welfare ranking of alternative monetary policy rules. It identifies three factors that can affect the welfare ranking: the degree of habit persistence, the degree of exchange rate pass-through, and the labor supply elasticity. In contrast to the findings of De Paoli (2009a, 2009b), the analysis reveals a reversal in the welfare ranking of alternative monetary policy rules for unitary intertemporal and intratemporal elasticities of substitution, depending on the asset market structures of small open economies with external habit. The paper also finds that exchange rate pegging outperforms domestic producer price index inflation targeting at high degrees of intratemporal elasticity of substitution and external habit, regardless of asset market structures. Finally, the paper finds that exchange rate pegging outperforms domestic or consumer price index inflation targeting if the exchange rate is misaligned.
    Keywords: asset market structures; exchange rate peg; monetary policy rules

    Monetary Policy Under Alternative Asset Market Structures: The Case of a Small Open Economy

    Can the structure of asset markets change the way monetary policy should be conducted? Following a linear-quadratic approach, the present paper addresses this question in a New Keynesian small open economy framework. Our results reveal that the configuration of asset markets significantly affects optimal monetary policy and the performance of standard policy rules. In particular, when comparing complete and incomplete markets, the ranking of policy rules is entirely reversed, and so are the policy prescriptions regarding the optimal level of exchange rate volatility.
    Keywords: welfare, optimal monetary policy, asset markets, small open economy

    Trial and settlement negotiations between asymmetrically skilled parties

    Parties engaged in litigation generally enter the discovery process with different information regarding their case and/or an unequal endowment in terms of skill and ability to produce evidence and predict the outcome of a trial. Hence, they have to bear different legal costs to assess the (equilibrium) plaintiff's win rate. The paper analyses pretrial negotiations and revisits the selection hypothesis in the case where these legal expenditures are private information. This assumption is consistent with empirical evidence (Osborne, 1999). Two alternative situations are investigated, depending on whether there exists a unilateral or a bilateral informational asymmetry. Our general result is that efficient pretrial negotiations select the cases with the smallest legal expenditures as those going to trial, while cases with the largest costs prefer to settle. Under the one-sided asymmetric information assumption, we find that the American rule yields more trials and higher aggregate legal expenditures than the French and British rules. The two-sided case leads to a higher rate of trials but, in contrast, provides less clear-cut predictions regarding the influence of fee-shifting.
    Keywords: litigation, unilateral and bilateral asymmetric information, legal expenditures

    Analysis of Crowdsourced Sampling Strategies for HodgeRank with Sparse Random Graphs

    Crowdsourcing platforms are now extensively used for conducting subjective pairwise comparison studies. In this setting, a pairwise comparison dataset is typically gathered via random sampling, either with or without replacement. In this paper, we use tools from random graph theory to analyze these two random sampling methods for the HodgeRank estimator. Using the Fiedler value of the graph as a measurement of estimator stability (informativeness), we provide a new estimate of the Fiedler value for these two random graph models. In the asymptotic limit as the number of vertices tends to infinity, we prove the validity of the estimate. Based on our findings, for a small number of items to be compared, we recommend a two-stage sampling strategy in which a greedy sampling method is used initially and random sampling without replacement is used in the second stage. When a large number of items is to be compared, we recommend random sampling with replacement, as this is computationally inexpensive and trivially parallelizable. Experiments on synthetic and real-world datasets support our analysis.
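
    To make the stability measurement concrete, here is a minimal Python sketch (an illustration with arbitrary sizes, not the paper's code) that samples comparison pairs without replacement and computes the resulting graph's Fiedler value, i.e. the second-smallest eigenvalue of the graph Laplacian. Sampling with replacement would simply allow repeated pairs to accumulate as edge weights.

```python
import numpy as np

def fiedler_value(adj):
    """Second-smallest eigenvalue of the (unnormalized) graph Laplacian."""
    lap = np.diag(adj.sum(axis=1)) - adj
    return np.sort(np.linalg.eigvalsh(lap))[1]

def sample_without_replacement(n_items, n_samples, rng):
    """Adjacency matrix from n_samples distinct unordered comparison pairs."""
    pairs = [(i, j) for i in range(n_items) for j in range(i + 1, n_items)]
    chosen = rng.choice(len(pairs), size=n_samples, replace=False)
    adj = np.zeros((n_items, n_items))
    for k in chosen:
        i, j = pairs[k]
        adj[i, j] = adj[j, i] = 1.0
    return adj

rng = np.random.default_rng(0)
adj = sample_without_replacement(n_items=20, n_samples=60, rng=rng)
print(fiedler_value(adj))  # larger value -> more stable HodgeRank estimate
```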

    Scoliosis: density-equalizing mapping and scientometric analysis

    Background: Publications related to scoliosis have increased enormously, and differentiating between publications of major and minor importance has become difficult even for experts. Scientometric data on developments and tendencies in scoliosis research have not been available to date. The aim of the current study was to evaluate the scientific efforts in scoliosis research both quantitatively and qualitatively. Methods: Large-scale data analysis, density-equalizing algorithms, and scientometric methods were used to evaluate both the quantity and quality of research achievements of scientists studying scoliosis. Density-equalizing algorithms were applied to data retrieved from ISI Web. Results: From 1904 to 2007, 8,186 items pertaining to scoliosis were published and included in the database. The studies were published in 76 countries, with the USA, the U.K., and Canada being the most productive. Washington University (St. Louis, Missouri) was identified as the most prolific institution during that period, and orthopedics represented by far the most productive medical discipline. "BRADFORD, DS" is the most productive author (146 items), and "DANSEREAU, J" is the author with the highest scientific impact (h-index of 27). Conclusion: Our results suggest that currently established measures of research output (i.e., impact factor, h-index) should be evaluated critically, because phenomena such as self-citation and co-authorship distort the results and limit the value of the conclusions that may be drawn from these measures. Qualitative statements are only tractable by comparing these parameters across multiple linkages. To obtain more objective evaluation tools, new measurements need to be developed.
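
    Since the conclusion hinges on measures such as the h-index, the following minimal Python sketch (with made-up citation counts) shows how an h-index is computed from per-paper citation counts:

```python
def h_index(citations):
    """Largest h such that at least h papers have h or more citations each."""
    h = 0
    for rank, c in enumerate(sorted(citations, reverse=True), start=1):
        if c >= rank:
            h = rank          # this paper still clears the threshold
        else:
            break
    return h

print(h_index([10, 8, 5, 4, 3]))  # 4: four papers with >= 4 citations each
```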