
    A note on selecting maximals in finite spaces

    Given a choice problem, the maximization rule may select many alternatives. In such cases, it is common practice to assume that the final choice will be made by some random procedure that assigns every maximal alternative the same probability of being chosen. However, there may be reasons, based on the same original preferences, for selecting certain maximal alternatives over others. This paper introduces two choice criteria induced by the original preferences such that maximizing with respect to each of them may give a finer selection of alternatives than maximizing with respect to the original preferences. Those criteria are built by means of several preference relations induced by the original preferences, namely, two (weak) dominance relations, two indirect preference relations, and the dominance relations defined with the help of those indirect preferences. Remarkably, as the original preferences approach being complete and transitive, those criteria become both simpler and closer to those preferences. In particular, they coincide with the original preferences when these are complete and transitive, in which case they provide the same solution as those preferences.
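    The selection mechanism described above can be sketched in a few lines. This is an illustration under assumptions of ours, not the paper's construction: preferences are encoded as a set of ordered pairs, and the refining criterion shown is one simple dominance relation induced by them.

```python
# Sketch (hypothetical encoding, not the paper's criteria): maximal
# alternatives of a binary relation, refined via an induced dominance relation.

def maximals(alts, pref):
    """Alternatives to which no other alternative is strictly preferred.
    `pref` is a set of pairs (a, b) meaning 'a is strictly preferred to b'."""
    return [x for x in alts if not any((y, x) in pref for y in alts if y != x)]

def dominance(alts, pref):
    """Toy dominance relation induced by `pref`: a dominates b if a beats
    everything b beats, and strictly more."""
    beats = {x: {y for y in alts if (x, y) in pref} for x in alts}
    return {(a, b) for a in alts for b in alts
            if a != b and beats[b] <= beats[a] and beats[a] != beats[b]}

alts = ["x", "y", "z"]
pref = {("x", "z")}                           # x beats z; y is unranked
m = maximals(alts, pref)                      # both x and y are maximal
refined = maximals(m, dominance(alts, pref))  # dominance keeps only x
```

    With incomplete preferences the maximization rule ties x and y, while maximizing with respect to the induced dominance relation singles out x, mirroring the finer selection the paper aims at.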

    A non-proposition-wise variant of majority voting for aggregating judgments

    Majority voting is commonly used in aggregating judgments. The literature to date on judgment aggregation (JA) has focused primarily on proposition-wise majority voting (PMV). Given a set of issues on which a group is trying to make collective judgments, PMV aggregates individual judgments issue by issue, and it satisfies a salient property of JA rules: independence. This paper introduces a variant of majority voting called holistic majority voting (HMV). This new variant also meets the condition of independence. However, instead of aggregating judgments issue by issue, it aggregates individual judgments en bloc. A salient and straightforward feature of HMV is that it guarantees the logical consistency of the propositions expressing collective judgments, provided that the individual points of view are consistent. This feature contrasts with the known inability of PMV to guarantee the consistency of the collective outcome. Analogously, while PMV may present as collectively accepted a set of judgments that has been rejected by everyone in the group, the collective judgments returned by HMV have been accepted by a majority of individuals in the group and, therefore, rejected by at most a minority of them. In addition, HMV satisfies a large set of appealing properties, as PMV also does. However, HMV may not return any complete proposition expressing the judgments of the group on all the issues at stake, even in cases where PMV does. Moreover, demanding completeness from HMV leads to impossibility results similar to the known impossibilities for PMV and for proposition-wise JA rules in general.
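    The contrast between the two rules can be illustrated with the classic discursive dilemma. The following sketch is a hypothetical reading of the setup, with `hmv` returning a complete judgment set only when a strict majority holds exactly that set:

```python
# Sketch: proposition-wise majority (PMV) versus en-bloc/holistic majority
# (HMV) on the discursive dilemma with issues p, q, and p AND q.
from collections import Counter

def consistent(j):
    p, q, pq = j
    return pq == (p and q)   # the only logical constraint in this toy agenda

def pmv(profile):
    """Aggregate issue by issue: accept an issue iff a majority accepts it."""
    n = len(profile)
    return tuple(int(sum(j[i] for j in profile) > n / 2) for i in range(3))

def hmv(profile):
    """En bloc: return the complete judgment set held by a majority, if any."""
    counts = Counter(profile)
    n = len(profile)
    winners = [j for j, c in counts.items() if c > n / 2]
    return winners[0] if winners else None

profile = [(1, 1, 1), (1, 0, 0), (0, 1, 0)]      # each voter is consistent
assert all(consistent(j) for j in profile)
print(pmv(profile))              # (1, 1, 0): accepts p and q, rejects p AND q
print(consistent(pmv(profile)))  # False: PMV yields an inconsistent outcome
print(hmv(profile))              # None: HMV stays silent rather than inconsistent
```

    The example shows both sides of the trade-off reported in the abstract: PMV always delivers a complete outcome but can be inconsistent, while this reading of HMV is consistent by construction but may return nothing.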

    Building and Using Models as Examples

    Sometimes, theoreticians explicitly state that they consider their models to be examples. When this is not the case, it is fairly common for theoreticians to attribute to their models the characteristics and objectives of illustrative examples. However, this way of understanding models has not received enough attention in the methodological literature focused on economics. Given that didactic examples and their properties are extremely familiar in practice, considering theoretical models as examples can offer a useful perspective on models and their properties. On the basis of the explanatory and exemplifying roles played by the deductive arguments by which results are proved, the paper also emphasizes the importance of understanding in theoretical work, the analogical and tentative character of the application of models, the central role played by the above-mentioned arguments in such application, the didactic function of theory, and the transmission of plausibility from those arguments to the results obtained.

    Keywords: models; examples; explanatory arguments; theoretical understanding; analogical application

    A pooling approach to judgment aggregation

    The literature has focused on a particular way of aggregating judgments: given a set of yes-or-no questions or issues, the individuals' judgments are aggregated separately, issue by issue. Applied in this way, the majority method does not guarantee the logical consistency of the set of judgments obtained. This fact has been the focus of critiques of the majority method and similar procedures. This paper focuses on another way of aggregating judgments. The main difference is that aggregation is made en bloc on all the issues at stake. The main consequence is that the majority method applied in this way always guarantees the logical consistency of the collective judgments. Since it satisfies a large set of attractive properties, the majority method should receive a more positive assessment when applied using the proposed pooling approach than when applied issue by issue. The paper extends the analysis to the pooling supermajority and plurality rules, with similar results.

    Classical simulations of Abelian-group normalizer circuits with intermediate measurements

    Quantum normalizer circuits were recently introduced as generalizations of Clifford circuits [arXiv:1201.4867]: a normalizer circuit over a finite Abelian group G is composed of the quantum Fourier transform (QFT) over G, together with gates which compute quadratic functions and automorphisms. In [arXiv:1201.4867] it was shown that every normalizer circuit can be simulated efficiently classically. This result provides a nontrivial example of a family of quantum circuits that cannot yield exponential speed-ups despite their use of the QFT, the latter being a central quantum algorithmic primitive. Here we extend the aforementioned result in several ways. Most importantly, we show that normalizer circuits supplemented with intermediate measurements can also be simulated efficiently classically, even when the computation proceeds adaptively. This yields a generalization of the Gottesman-Knill theorem (valid for n-qubit Clifford operations [quant-ph/9705052, quant-ph/9807006]) to quantum circuits described by arbitrary finite Abelian groups. Moreover, our simulations are twofold: we present efficient classical algorithms to sample the measurement probability distribution of any adaptive-normalizer computation, as well as to compute the amplitudes of the state vector at every step of it. Finally, we develop a generalization of the stabilizer formalism [quant-ph/9705052, quant-ph/9807006] relative to arbitrary finite Abelian groups: for example, we characterize how to update stabilizers under generalized Pauli measurements and provide a normal form for the amplitudes of generalized stabilizer states using quadratic functions and subgroup cosets.

    Comment: 26 pages + appendices. The title has changed in this second version. To appear in Quantum Information and Computation, Vol. 14, No. 3&4, 2014.
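    As a minimal illustration of the gate families involved (not the paper's simulation algorithm), the QFT, automorphism gates, and quadratic phase gates over the cyclic group Z_N can be written as explicit matrices and checked for unitarity:

```python
# Sketch: the three normalizer-gate families over Z_N as explicit matrices.
import numpy as np

N = 6
omega = np.exp(2j * np.pi / N)

# Quantum Fourier transform over Z_N
F = np.array([[omega ** (x * y) for y in range(N)] for x in range(N)]) / np.sqrt(N)

# Automorphism gate: x -> a*x mod N for a unit a of Z_N (here a = 5, gcd(5, 6) = 1)
a = 5
A = np.zeros((N, N))
for x in range(N):
    A[(a * x) % N, x] = 1

# Quadratic phase gate: diagonal phases omega^(x^2), a quadratic function on Z_N
Q = np.diag([omega ** (x * x) for x in range(N)])

for U in (F, A, Q):
    assert np.allclose(U.conj().T @ U, np.eye(N))   # all three are unitary
```

    For N = 2 these reduce to the Hadamard gate, the trivial automorphism, and a Clifford phase gate, which is the sense in which normalizer circuits generalize the Clifford group.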

    A Semi-Lagrangian Particle Level Set Finite Element Method for Interface Problems

    We present a quasi-monotone semi-Lagrangian particle level set (QMSL-PLS) method for moving interfaces. The QMSL method is a blend of first-order monotone and second-order semi-Lagrangian methods. The QMSL-PLS method is easy to implement, efficient, and well adapted to unstructured meshes, either simplicial or hexahedral. We prove that it is unconditionally stable in the maximum discrete norm ‖·‖_{h,∞}, and the error analysis shows that when the level set solution u(t) is in the Sobolev space W^{r+1,∞}(D), r ≥ 0, the error in the maximum norm is of the form (KT/Δt) min(1, Δt‖v‖_{h,∞}/h)((1 − α)h^p + h^q), with p = min(2, r + 1) and q = min(3, r + 1), where v is the velocity. This means that at high CFL numbers, that is, when Δt > h, the error is O(((1 − α)h^p + h^q)/Δt), whereas at CFL numbers less than 1, the error is O((1 − α)h^{p−1} + h^{q−1}). We have tested our method with satisfactory results on benchmark problems such as Zalesak's slotted disk, the single vortex flow, and the rising bubble.
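    The core of any semi-Lagrangian step, and the reason the scheme remains stable at CFL numbers above 1, can be sketched in one dimension (a toy illustration under our own simplifications: periodic domain, constant velocity, first-order linear interpolation):

```python
# Sketch: one first-order (monotone) semi-Lagrangian step for u_t + v u_x = 0.
# Each grid node is traced back along the velocity and the solution is
# interpolated at the departure point; no time-step restriction is needed.
import numpy as np

def semi_lagrangian_step(u, v, dt, h):
    n = len(u)
    x = np.arange(n) * h
    xd = (x - v * dt) % (n * h)          # departure points, wrapped periodically
    i = np.floor(xd / h).astype(int)     # left neighbour index
    theta = xd / h - i                   # linear interpolation weight in [0, 1)
    return (1 - theta) * u[i] + theta * u[(i + 1) % n]

n, h = 100, 1.0 / 100
u0 = np.sin(2 * np.pi * np.arange(n) * h)
v, dt = 1.0, 5 * h                       # CFL number = 5: beyond any explicit
u1 = semi_lagrangian_step(u0, v, dt, h)  # Eulerian limit, yet stable here
```

    Because v*dt is an exact multiple of h in this example, the step reproduces a pure shift of the grid values; with general departure points the linear interpolation introduces the (1 − α)h^p term of the error estimate above.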

    Normalizer Circuits and Quantum Computation

    (Abridged abstract.) In this thesis we introduce new models of quantum computation to study the emergence of quantum speed-up in quantum computer algorithms. Our first contribution is a formalism of restricted quantum operations, named the normalizer circuit formalism, based on algebraic extensions of the qubit Clifford gates (CNOT, Hadamard and π/4-phase gates): a normalizer circuit consists of quantum Fourier transforms (QFTs), automorphism gates and quadratic phase gates associated with a set G, which is either an abelian group or an abelian hypergroup. Though Clifford circuits are efficiently classically simulable, we show that normalizer circuit models encompass Shor's celebrated factoring algorithm and the quantum algorithms for abelian Hidden Subgroup Problems. We develop classical-simulation techniques to characterize under which scenarios normalizer circuits provide quantum speed-ups. Finally, we devise new quantum algorithms for finding hidden hyperstructures. The results offer new insights into the source of quantum speed-ups for several algebraic problems. Our second contribution is an algebraic (group- and hypergroup-theoretic) framework for describing quantum many-body states and classically simulating quantum circuits. Our framework extends Gottesman's Pauli Stabilizer Formalism (PSF), wherein quantum states are written as joint eigenspaces of stabilizer groups of commuting Pauli operators: while the PSF is valid for qubit/qudit systems, our formalism can be applied to discrete- and continuous-variable systems, hybrid settings, and anyonic systems. These results enlarge the known families of quantum processes that can be efficiently classically simulated. This thesis also establishes a precise connection between Shor's quantum algorithm and the stabilizer formalism, revealing a common mathematical structure in several quantum speed-ups and error-correcting codes.

    Comment: PhD thesis, Technical University of Munich (2016). Please cite the original papers if possible. Appendix E contains unpublished work on Gaussian unitaries. If you spot typos/omissions, please email me at JLastNames at posteo dot net. Source: http://bit.ly/2gMdHn3. Related video talk: https://www.perimeterinstitute.ca/videos/toy-theory-quantum-speed-ups-based-stabilizer-formalism. Posted on my birthday.

    Anticoncentration theorems for schemes showing a quantum speedup

    One of the main milestones in quantum information science is to realise quantum devices that exhibit an exponential computational advantage over classical ones without being universal quantum computers, a state of affairs dubbed quantum speedup, or sometimes "quantum computational supremacy". The known schemes heavily rely on mathematical assumptions that are plausible but unproven, prominently results on anticoncentration of random prescriptions. In this work, we aim at closing the gap by proving two anticoncentration theorems and accompanying hardness results, one for circuit-based schemes, the other for quantum quench-type schemes for quantum simulations. Compared to the few other known such results, these results give rise to a number of comparably simple, physically meaningful and resource-economical schemes showing a quantum speedup in one and two spatial dimensions. At the heart of the analysis are tools of unitary designs and random circuits that allow us to conclude that universal random circuits anticoncentrate, as well as an embedding of known circuit-based schemes in a 2D translation-invariant architecture.

    Comment: 12+2 pages, added applications section
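    Anticoncentration can be illustrated numerically with Haar-random states as a stand-in for deep random circuits (an assumption of this sketch, not the paper's proof technique): for the Porter-Thomas distribution, a constant fraction, about exp(-1/2), of output probabilities exceeds half the uniform weight 1/d.

```python
# Sketch: numerical check that Haar-random output distributions anticoncentrate,
# i.e. Pr[d * p >= 1/2] stays bounded away from zero as predicted by
# Porter-Thomas statistics (≈ exp(-1/2) ≈ 0.607).
import numpy as np

rng = np.random.default_rng(0)
d, trials = 1024, 200
frac = 0.0
for _ in range(trials):
    psi = rng.normal(size=d) + 1j * rng.normal(size=d)   # Haar-random state
    p = np.abs(psi) ** 2 / np.linalg.norm(psi) ** 2      # output distribution
    frac += np.mean(d * p >= 0.5)                        # anticoncentrated mass
frac /= trials
```

    A concentrated distribution (say, nearly all weight on one outcome) would drive this fraction to zero; the bounded-below fraction is exactly the property the hardness arguments require.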

    Protection of traditional knowledge from an economics perspective

    This article focuses on the proposal for the protection of traditional knowledge set out in the documents prepared by the secretariat of the Intergovernmental Committee on Intellectual Property and Genetic Resources, Traditional Knowledge and Folklore of the World Intellectual Property Organization. Firstly, the features of this protection proposal that may prove most decisive from an economic point of view are highlighted. Secondly, an analysis is made of how the different types of exchanges may develop under this protection. As regards exchanges in which the transfer of knowledge plays an important role, the article underlines the problems of informational asymmetry that can work against the interests of the communities. It follows that it may generally be very advisable for the communities to have some external, preferably public, support. As regards exchanges of products derived from traditional knowledge, it is argued that development measures more direct than protection may be more effective, and the help that distinctive signs such as collective marks, protected designations of origin, and protected geographical indications can provide is underlined. In a later section, the orientation and justification of the protection proposed in the aforementioned documents is contrasted with the view that Welfare Economics offers of intellectual and industrial property. The article ends by suggesting that, notwithstanding the above, the internationalisation process of the intellectual and industrial property system may justify, for reasons of reciprocity, measures or systems of protection for traditional knowledge.

    This work was carried out within the framework of project FONCICYT 95255, 'Conservación, desarrollo, aprovechamiento social y protección de los conocimientos y recursos tradicionales en México'.