
    A Vector Monotonicity Assumption for Multiple Instruments

    When a researcher wishes to use multiple instrumental variables for a single binary treatment, the familiar LATE monotonicity assumption can become restrictive: it requires that all units share a common direction of response even when different instruments are shifted in opposing directions. What I call vector monotonicity, by contrast, simply restricts treatment status to be monotonic in each instrument separately. This is a natural assumption in many contexts, capturing the intuitive notion of "no defiers" for each instrument. I show that in a setting with a binary treatment and multiple discrete instruments, a class of causal parameters is point identified under vector monotonicity, including the average treatment effect among units that are responsive to any particular subset of the instruments. I propose a simple "2SLS-like" estimator for the family of identified treatment effect parameters. An empirical application revisits the labor market returns to college education. Comment: 56 pages, 6 figures.
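
    A minimal formal sketch of the contrast, in notation of my own choosing (the paper's notation may differ): let z = (z_1, ..., z_J) collect the discrete instruments and let D_i(z) in {0,1} be unit i's potential treatment status.

```latex
% Standard LATE (Imbens--Angrist) monotonicity: any shift of the whole instrument
% vector moves every unit's treatment status in the same direction.
\[
  \forall\, z, z':\qquad
  \bigl(\forall i:\ D_i(z) \ge D_i(z')\bigr)
  \quad\text{or}\quad
  \bigl(\forall i:\ D_i(z) \le D_i(z')\bigr).
\]
% Vector monotonicity: treatment is monotone in each instrument separately, holding
% the others fixed (instruments oriented so that larger values encourage treatment).
\[
  \forall\, i,\ \forall\, j,\ \forall\, z_{-j}:\qquad
  z_j \,\mapsto\, D_i(z_j, z_{-j}) \ \text{ is weakly increasing}.
\]
```

    Under the first condition, shifting two instruments in opposing directions still forces all units to respond in a common direction; under the second, different units may respond to different instruments, which is the "no defiers for each instrument" reading in the abstract.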

    Graphs for Pattern Recognition

    This monograph deals with mathematical constructions that are foundational to pattern recognition, an important area of data mining. Using combinatorial and graph-theoretic techniques, it takes a closer look at infeasible systems of linear inequalities, whose generalized solutions act as building blocks of geometric decision rules for pattern recognition. Infeasible systems of linear inequalities prove to be a key object in pattern recognition problems described in geometric terms, thanks to the committee method. Such infeasible systems of inequalities represent an important special subclass of infeasible systems of constraints with a monotonicity property: systems whose multi-indices of feasible subsystems form abstract simplicial complexes (independence systems), which are fundamental objects of combinatorial topology. The methods of data mining and machine learning discussed in this monograph form the foundation of technologies such as big data and deep learning, which play a growing role in many areas of human-technology interaction and help to find solutions, better solutions, and excellent solutions.
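
    As a small illustration of the committee idea mentioned above, the sketch below uses one common formalization (not necessarily the monograph's exact definitions): for a possibly infeasible system of strict inequalities <a_i, x> > 0, a committee is a finite set of vectors such that each inequality is satisfied by more than half of them, and classification is by majority vote of the corresponding linear rules. The example system and committee are hypothetical, chosen only to exercise the check.

```python
import numpy as np

def is_committee(A, members, tol=1e-12):
    """Check the committee property for the (possibly infeasible) system
    <a_i, x> > 0, i = 1..m: each inequality must hold for more than half
    of the candidate vectors in `members`."""
    A = np.asarray(A, dtype=float)               # (m, n) inequality normals
    members = np.asarray(members, dtype=float)   # (q, n) candidate committee
    satisfied = A @ members.T > tol              # (m, q) boolean matrix
    return bool(np.all(2 * satisfied.sum(axis=1) > len(members)))

def committee_classify(members, x):
    """Majority vote of the linear decision rules sign(<x_j, x>)."""
    votes = np.sign(np.asarray(members, dtype=float) @ np.asarray(x, dtype=float))
    return 1 if votes.sum() > 0 else -1

# Toy infeasible system in R^2: the three normals sum to zero, so no single x
# satisfies all three strict inequalities, yet a committee of three vectors exists
# (each inequality is satisfied by exactly two of the three members).
s = np.sqrt(3) / 2
A = np.array([[1.0, 0.0], [-0.5, s], [-0.5, -s]])
members = [A[0] + A[1], A[1] + A[2], A[2] + A[0]]
print(is_committee(A, members))                  # True
print(committee_classify(members, [2.0, 0.3]))   # majority-vote label for a test point
```

    The combinatorial-topology viewpoint of the monograph enters through which subsystems are simultaneously satisfiable, i.e. the simplicial complex formed by the multi-indices of feasible subsystems.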

    Complexity Theory, Game Theory, and Economics: The Barbados Lectures

    This document collects the lecture notes from my mini-course "Complexity Theory, Game Theory, and Economics," taught at the Bellairs Research Institute of McGill University, Holetown, Barbados, February 19--23, 2017, as the 29th McGill Invitational Workshop on Computational Complexity. The goal of this mini-course is twofold: (i) to explain how complexity theory has helped illuminate several barriers in economics and game theory; and (ii) to illustrate how game-theoretic questions have led to new and interesting complexity theory, including several recent breakthroughs. It consists of two five-lecture sequences: the Solar Lectures, focusing on the communication and computational complexity of computing equilibria; and the Lunar Lectures, focusing on applications of complexity theory in game theory and economics. No background in game theory is assumed. Comment: Revised v2 (December 2019) corrects some errors in v1 and adds some recent citations; revised v3 corrects a few typos.

    The convexification effect of Minkowski summation

    Let us define, for a compact set $A \subset \mathbb{R}^n$, the sequence $A(k) = \left\{\frac{a_1+\cdots+a_k}{k} : a_1, \ldots, a_k \in A\right\} = \frac{1}{k}\Big(\underbrace{A + \cdots + A}_{k\ \mathrm{times}}\Big)$. It was independently proved by Shapley, Folkman and Starr (1969) and by Emerson and Greenleaf (1969) that $A(k)$ approaches the convex hull of $A$ in the Hausdorff distance induced by the Euclidean norm as $k$ goes to $\infty$. We explore in this survey how exactly $A(k)$ approaches the convex hull of $A$ and, more generally, how a Minkowski sum of possibly different compact sets approaches convexity, as measured by various indices of non-convexity. The non-convexity indices considered include the Hausdorff distance induced by any norm on $\mathbb{R}^n$, the volume deficit (the difference of volumes), a non-convexity index introduced by Schneider (1975), and the effective standard deviation or inner radius. After first clarifying the interrelationships between these various indices of non-convexity, which were previously either unknown or scattered in the literature, we show that the volume deficit of $A(k)$ does not monotonically decrease to 0 in dimension 12 or above, thus falsifying a conjecture of Bobkov et al. (2011), even though their conjecture is proved to be true in dimension 1 and for certain sets $A$ with special structure. On the other hand, Schneider's index possesses a strong monotonicity property along the sequence $A(k)$, and both the Hausdorff distance and the effective standard deviation are eventually monotone (once $k$ exceeds $n$). Along the way, we obtain new inequalities for the volume of the Minkowski sum of compact sets, falsify a conjecture of Dyn and Farkhi (2004), demonstrate applications of our results to combinatorial discrepancy theory, and suggest some questions worthy of further investigation. Comment: 60 pages, 7 figures. v2: Title changed. v3: Added Section 7.2 resolving the Dyn-Farkhi conjecture.
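
    A simple one-dimensional illustration (mine, not from the survey) of how $A(k)$ fills in the convex hull:

```latex
% Take A = {0, 1} in R. Averaging k elements of A gives an equally spaced grid
% inside conv(A) = [0, 1]:
\[
  A(k) \;=\; \Bigl\{\tfrac{a_1+\cdots+a_k}{k} : a_j \in \{0,1\}\Bigr\}
       \;=\; \Bigl\{0, \tfrac{1}{k}, \tfrac{2}{k}, \ldots, 1\Bigr\},
\]
% so the Hausdorff distance to the convex hull (worst case: a midpoint between
% two adjacent grid points) is
\[
  d_H\bigl(A(k), [0,1]\bigr) \;=\; \tfrac{1}{2k} \;\longrightarrow\; 0
  \qquad (k \to \infty).
\]
```

    The survey's finer questions concern how this kind of convergence behaves under the other non-convexity indices and in higher dimensions.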

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    LIPIcs, Volume 251, ITCS 2023, Complete Volume

    Advances and Applications of DSmT for Information Fusion

    This book is devoted to an emerging branch of information fusion based on a new approach to modelling the fusion problem when the information provided by the sources is both uncertain and (highly) conflicting. This approach, known in the literature as DSmT (standing for Dezert-Smarandache Theory), proposes new, useful rules of combination.

    Advances and Applications of Dezert-Smarandache Theory (DSmT), Vol. 1

    The Dezert-Smarandache Theory (DSmT) of plausible and paradoxical reasoning is a natural extension of the classical Dempster-Shafer Theory (DST), but it includes fundamental differences from the DST. DSmT makes it possible to formally combine any type of independent sources of information represented in terms of belief functions, but it is mainly focused on the fusion of uncertain, highly conflicting and imprecise quantitative or qualitative sources of evidence. DSmT is able to solve complex, static or dynamic fusion problems beyond the limits of the DST framework, especially when conflicts between sources become large and when the refinement of the frame of the problem under consideration becomes inaccessible because of the vague, relative and imprecise nature of its elements. DSmT is used in cybernetics, robotics, medicine, military applications, and other engineering domains where the fusion of sensors' information is required.
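
    To make the high-conflict motivation concrete, here is a small illustration (mine, not from the book) of how the classical DST combination behaves when sources conflict strongly. The code implements Dempster's rule only; DSmT's own combination rules, which work on the hyper-power set and treat conflicting mass differently rather than normalizing it away, are not implemented here.

```python
from itertools import product

def dempster_combine(m1, m2):
    """Dempster's rule for mass functions whose focal elements are frozensets."""
    combined, conflict = {}, 0.0
    for (a, wa), (b, wb) in product(m1.items(), m2.items()):
        inter = a & b
        if inter:
            combined[inter] = combined.get(inter, 0.0) + wa * wb
        else:
            conflict += wa * wb          # mass assigned to the empty intersection
    if conflict >= 1.0:
        raise ValueError("total conflict: Dempster's rule is undefined")
    return {k: v / (1.0 - conflict) for k, v in combined.items()}, conflict

# Zadeh's classic high-conflict example on the frame {A, B, C}.
m1 = {frozenset({"A"}): 0.99, frozenset({"B"}): 0.01}
m2 = {frozenset({"C"}): 0.99, frozenset({"B"}): 0.01}
fused, K = dempster_combine(m1, m2)
print(K)      # 0.9999 -- almost all of the joint mass is conflicting
print(fused)  # {frozenset({'B'}): 1.0} -- the barely supported hypothesis gets all belief
```

    In this example nearly all of the joint mass is conflicting, and after normalization the hypothesis that both sources barely supported receives total belief; this is the kind of behaviour under high conflict that DSmT is designed to avoid.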

    Practical and Foundational Aspects of Secure Computation

    There are problems that seem impossible to solve without a trusted third party. How can two millionaires learn who is the richest when neither is willing to tell the other how rich he is? How can satellite collisions be prevented when the trajectories are secret? How can researchers establish correlations between medications and diseases while respecting patient confidentiality? How can an organization prevent the government from abusing information that it holds, given that the organization must have no access to that information? Secure (multi-party) computation, a branch of cryptography, studies how to construct protocols for such tasks without the use of a trusted third party. Such protocols should be private, correct, efficient and robust: a protocol is private if an adversary learns no more than what a trusted third party would give it; it is correct if an honest player receives exactly what a trusted third party would give him; it should of course be efficient; and it is robust if it keeps working even when a small set of players cheats. We show that, under the assumption of a simultaneous broadcast channel, robustness can be traded for correctness and for privacy against certain classes of adversaries.
    Secure computation has four basic building blocks: oblivious transfer, commitment schemes, secret sharing and garbled circuits. Protocols can be built from these building blocks alone or, alternatively, from specific computational assumptions. Protocols constructed solely from these primitives are flexible and are not as vulnerable to technological or algorithmic improvements; many protocols are nevertheless based on computational assumptions, so it is natural to ask whether efficiency requires them. We show that it does not, by building efficient protocols from these basic primitives. The conclusion of this thesis is that building protocols from black-box primitives can also lead to efficient protocols.
    This thesis is a collection of four articles written in collaboration with other researchers; they constitute the mature part of my investigation and my main contributions to the field during that period. In the first work we study the commitment capacity of noisy channels. We first show a tight lower bound that implies that, in contrast to oblivious transfer, there exists no constant-rate protocol for bit commitment. We then demonstrate that, by restricting the way the commitments can be opened, we can do better and even achieve a constant rate in certain cases; this is done by exploiting the notion of cover-free families. In the second article, we show that for certain problems there exists a trade-off between robustness, correctness and privacy; it is obtained using verifiable secret sharing, zero-knowledge proofs, the concept of ghosts and a technique we call "balls and bins". In our third contribution, we show that many protocols in the literature based on specific computational assumptions can be instantiated from a primitive known as Verifiable Oblivious Transfer, via the concept of Generalized Oblivious Transfer; the resulting protocol uses secret sharing as its basic tool. In the last included publication, we construct a constant-round protocol for secure two-party computation that is very efficient and uses only black-box primitives. Its efficiency comes from replacing the core of a standard protocol by a faulty but very cheap primitive; the resulting faults are then handled by a non-trivial use of privacy amplification.
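
    Secret sharing is one of the four building blocks named above. As a self-contained illustration, here is a textbook Shamir scheme (function names and the choice of prime are mine, and this is not necessarily the exact primitive used in the thesis's protocols): a secret is split into n shares so that any `threshold` of them reconstruct it, while fewer reveal nothing.

```python
import random

PRIME = 2**61 - 1  # a Mersenne prime; all arithmetic is over GF(PRIME)

def share(secret, threshold, n_players):
    """Shamir (threshold, n) secret sharing: evaluate a random polynomial of
    degree threshold-1 with constant term `secret` at the points 1..n_players."""
    coeffs = [secret] + [random.randrange(PRIME) for _ in range(threshold - 1)]
    def f(x):
        return sum(c * pow(x, i, PRIME) for i, c in enumerate(coeffs)) % PRIME
    return [(x, f(x)) for x in range(1, n_players + 1)]

def reconstruct(shares):
    """Lagrange interpolation at 0 over GF(PRIME) recovers the constant term."""
    secret = 0
    for i, (xi, yi) in enumerate(shares):
        num, den = 1, 1
        for j, (xj, _) in enumerate(shares):
            if i != j:
                num = num * (-xj) % PRIME
                den = den * (xi - xj) % PRIME
        secret = (secret + yi * num * pow(den, PRIME - 2, PRIME)) % PRIME
    return secret

shares = share(secret=123456789, threshold=3, n_players=5)
print(reconstruct(shares[:3]))  # any 3 of the 5 shares recover 123456789
```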