17 research outputs found

    Rational proofs

    We study a new type of proof system, where an unbounded prover and a polynomial-time verifier interact, on inputs a string x and a function f, so that the Verifier may learn f(x). The novelty of our setting is that there are no longer "good" or "malicious" provers, but only rational ones. In essence, the Verifier has a budget c and gives the Prover a reward r ∈ [0,c] determined by the transcript of their interaction; the Prover wishes to maximize his expected reward; and his reward is maximized only if the Verifier correctly learns f(x). Rational proof systems are as powerful as their classical counterparts for polynomially many rounds of interaction, but are much more powerful when we only allow a constant number of rounds. Indeed, we prove that if f ∈ #P, then f is computable by a one-round rational Merlin-Arthur game, where, on input x, Merlin's single message actually consists of sending just the value f(x). Further, we prove that CH, the counting hierarchy, coincides with the class of languages computable by a constant-round rational Merlin-Arthur game. Our results rely on a basic and crucial connection between rational proof systems and proper scoring rules, a tool developed to elicit truthful information from experts. United States. Office of Naval Research (Award number N00014-09-1-0597)
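
    The scoring-rule connection can be illustrated with Brier's quadratic scoring rule. The sketch below is a hypothetical toy, not the paper's exact protocol (the function names, the toy formula, and the single-sample check are assumptions for illustration): Merlin reports a count, Arthur converts it into a probability, samples one random assignment, and pays a proper-scoring-rule reward whose expectation is maximized only at the true count.

        import random

        def brier_reward(q, outcome):
            # Quadratic (Brier-style) proper scoring rule for a reported
            # probability q of the event {outcome == 1}; its expectation is
            # uniquely maximized when q equals the true probability.
            p = q if outcome == 1 else 1 - q
            return 2 * p - (q ** 2 + (1 - q) ** 2) + 1   # lies in [0, 2]

        def rational_counting_round(phi, n_vars, claimed_count, rng):
            # One-round sketch: Merlin claims a number of satisfying
            # assignments of phi; Arthur turns the claim into a probability,
            # samples a single random assignment, and pays the Brier reward.
            q = claimed_count / 2 ** n_vars
            assignment = tuple(rng.randint(0, 1) for _ in range(n_vars))
            return brier_reward(q, 1 if phi(assignment) else 0)

        # Toy example: phi = x0 AND x1, whose true count is 1 out of 4.
        phi = lambda a: a[0] == 1 and a[1] == 1
        print(rational_counting_round(phi, 2, claimed_count=1, rng=random.Random(0)))

    Because the scoring rule is proper, reporting any count other than the true one lowers Merlin's expected reward, which is the sense in which sending the value f(x) itself can serve as the proof.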

    The Wisdom of Twitter Crowds: Predicting Stock Market Reactions to FOMC Meetings via Twitter Feeds

    With the rise of social media, investors have a new tool for measuring sentiment in real time. However, the nature of these data sources raises serious questions about their quality. Because anyone on social media can participate in a conversation about markets, whether informed or not, these data may have very little information about future asset prices. In this article, the authors show that this is not the case. They analyze a recurring event that has a high impact on asset prices, Federal Open Market Committee (FOMC) meetings, and exploit a new dataset of tweets referencing the Federal Reserve. The authors show that the content of tweets can be used to predict future returns, even after controlling for common asset pricing factors. To gauge the economic magnitude of these predictions, the authors construct a simple hypothetical trading strategy based on these data. They find that a tweet-based asset allocation strategy outperforms several benchmarks, including a strategy that buys and holds a market index, as well as a comparable dynamic asset allocation strategy that does not use Twitter information.
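
    As a rough illustration of the kind of rule such a strategy could use (the sentiment scores, returns, and threshold below are invented for the example and are not the authors' data or methodology), a tweet-conditioned allocation can be compared against buy-and-hold as follows.

        # Hypothetical example: hold the market index over an FOMC meeting only
        # when average Fed-related tweet sentiment is positive, otherwise hold
        # cash; compare with always holding the index.
        meetings = [
            {"sentiment": 0.4, "market_return": 0.012},   # invented numbers
            {"sentiment": -0.2, "market_return": -0.008},
            {"sentiment": 0.1, "market_return": 0.005},
        ]

        def cumulative_return(returns):
            total = 1.0
            for r in returns:
                total *= 1 + r
            return total - 1

        tweet_strategy = [m["market_return"] if m["sentiment"] > 0 else 0.0
                          for m in meetings]
        buy_and_hold = [m["market_return"] for m in meetings]

        print("tweet-based:", cumulative_return(tweet_strategy))
        print("buy-and-hold:", cumulative_return(buy_and_hold))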

    Crowdsourced Bayesian auctions

    We investigate the problem of optimal mechanism design, where an auctioneer wants to sell a set of goods to buyers, in order to maximize revenue. In a Bayesian setting the buyers' valuations for the goods are drawn from a prior distribution D, which is often assumed to be known by the seller. In this work, we focus on cases where the seller has no knowledge at all, and "the buyers know each other better than the seller knows them". In our model, D is not necessarily common knowledge. Instead, each buyer individually knows a posterior distribution associated with D. Since the seller relies on the buyers' knowledge to help him set a price, we call these types of auctions crowdsourced Bayesian auctions. For this crowdsourced Bayesian model and many environments of interest, we show that, for arbitrary valuation distributions D (in particular, correlated ones), it is possible to design mechanisms matching to a significant extent the performance of the optimal dominant-strategy-truthful mechanisms where the seller knows D. To obtain our results, we use two techniques: (1) proper scoring rules to elicit information from the players; and (2) a reverse version of the classical Bulow-Klemperer inequality. The first lets us build mechanisms with a unique equilibrium and good revenue guarantees, even when the players' second and higher-order beliefs about each other are wrong. The second allows us to upper bound the revenue of an optimal mechanism with n players by an n/(n − 1) fraction of the revenue of the optimal mechanism with n − 1 players. We believe that both techniques are new to Bayesian optimal auctions and of independent interest for future work. United States. Office of Naval Research (Grant number N00014-09-1-0597)
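
    In symbols, writing OPT_k (our notation, not the abstract's) for the revenue of the optimal D-knowing mechanism with k players, the reverse Bulow-Klemperer bound mentioned above reads:

        \mathrm{OPT}_n \;\le\; \frac{n}{n-1}\,\mathrm{OPT}_{n-1}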

    The Changing Landscape for Stroke Prevention in AF: Findings From the GLORIA-AF Registry Phase 2

    Background GLORIA-AF (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients with Atrial Fibrillation) is a prospective, global registry program describing antithrombotic treatment patterns in patients with newly diagnosed nonvalvular atrial fibrillation at risk of stroke. Phase 2 began when dabigatran, the first non-vitamin K antagonist oral anticoagulant (NOAC), became available. Objectives This study sought to describe phase 2 baseline data and compare these with the pre-NOAC era collected during phase 1. Methods During phase 2, 15,641 consenting patients were enrolled (November 2011 to December 2014); 15,092 were eligible. This pre-specified cross-sectional analysis describes eligible patients' baseline characteristics. Atrial fibrillation disease characteristics, medical outcomes, and concomitant diseases and medications were collected. Data were analyzed using descriptive statistics. Results Of the total patients, 45.5% were female; median age was 71 (interquartile range: 64, 78) years. Patients were from Europe (47.1%), North America (22.5%), Asia (20.3%), Latin America (6.0%), and the Middle East/Africa (4.0%). Most had high stroke risk (CHA2DS2-VASc [Congestive heart failure, Hypertension, Age ≥75 years, Diabetes mellitus, previous Stroke, Vascular disease, Age 65 to 74 years, Sex category] score ≥2; 86.1%); 13.9% had moderate risk (CHA2DS2-VASc = 1). Overall, 79.9% received oral anticoagulants, of whom 47.6% received NOAC and 32.3% vitamin K antagonists (VKA); 12.1% received antiplatelet agents; 7.8% received no antithrombotic treatment. For comparison, the proportion of phase 1 patients (of N = 1,063 all eligible) prescribed VKA was 32.8%, acetylsalicylic acid 41.7%, and no therapy 20.2%. In Europe in phase 2, treatment with NOAC was more common than VKA (52.3% and 37.8%, respectively); 6.0% of patients received antiplatelet treatment; and 3.8% received no antithrombotic treatment. In North America, 52.1%, 26.2%, and 14.0% of patients received NOAC, VKA, and antiplatelet drugs, respectively; 7.5% received no antithrombotic treatment. NOAC use was less common in Asia (27.7%), where 27.5% of patients received VKA, 25.0% antiplatelet drugs, and 19.8% no antithrombotic treatment. Conclusions The baseline data from GLORIA-AF phase 2 demonstrate that in newly diagnosed nonvalvular atrial fibrillation patients, NOAC have been highly adopted into practice, becoming more frequently prescribed than VKA in Europe and North America. Worldwide, however, a large proportion of patients remain undertreated, particularly in Asia and North America. (Global Registry on Long-Term Oral Antithrombotic Treatment in Patients With Atrial Fibrillation [GLORIA-AF]; NCT01468701)

    Essays in network economics

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Economics, 2019. Cataloged from PDF version of thesis. Includes bibliographical references. This thesis is a collection of three chapters, each representing an individual paper. The first chapter studies how the formation of supply chains affects economic growth. It provides a new tractable model for supply chain formation. The main innovation in this model is that firms can choose suppliers to maximize profits. Individual firms' actions determine the equilibrium input-output network and affect macroeconomic variables such as GDP. We then apply this model to understand the effect of changing supply chains on American productivity during the 1987-2007 period. The second chapter studies how a monopolist may sell multiple goods to strategic bidders. The monopolist may face a series of combinatorial constraints. For example, it may be forced to allocate at most one good to each bidder, and it may have additional constraints on which bidders can be allocated which goods. Furthermore, the monopolist does not know bidders' demand distributions. Rather, it only knows one sample from the demand distribution corresponding to each bidder. Nevertheless, by developing new online optimization algorithms, we show how simple mechanisms can approximate the monopolist's optimal revenue. Finally, the third chapter develops a new model of firm optimization to understand how shrinking electronics have contributed to increased productivity and welfare in the United States during the 2002-2017 period. In this model, firms face constraints on the size of the products they can build. As intermediate inputs, such as electronics, shrink, the firms' production possibilities frontier expands and GDP increases. by Pablo Daniel Azar. Ph.D., Massachusetts Institute of Technology, Department of Economics.

    Super-efficient rational proofs

    Thesis: Ph. D., Massachusetts Institute of Technology, Department of Electrical Engineering and Computer Science, 2014. Cataloged from PDF version of thesis. Includes bibliographical references (pages 47-49). Information asymmetry is a central problem in both computer science and economics. In many fundamental problems, an uninformed principal wants to obtain some knowledge from an untrusted expert. This models several real-world situations, such as a manager's relation with her employees, or the delegation of computational tasks to workers over the internet. Because the expert is untrusted, the principal needs some guarantee that the provided knowledge is correct. In computer science, this guarantee is usually provided via a proof, which the principal can verify. Thus, a dishonest expert will always get caught and penalized. In many economic settings, the guarantee that the knowledge is correct is usually provided via incentives. That is, a game is played between expert and principal such that the expert maximizes her utility by being honest. A rational proof is an interactive proof where the prover, Merlin, is neither honest nor malicious, but rational. That is, Merlin acts in order to maximize his own utility. I previously introduced and studied rational proofs when the verifier, Arthur, is a probabilistic polynomial-time machine [3]. In this thesis, I characterize super-efficient rational proofs, that is, rational proofs where Arthur runs in logarithmic time. These new rational proofs are very practical. Not only are they much faster than their classical analogues, but they also provide very tangible incentives for the expert to be honest. Arthur only needs a polynomial-size budget, yet he can penalize Merlin by a large quantity if he deviates from the truth. by Pablo Daniel Azar. Ph.D.

    Computational principal-agent problems

    Collecting and processing large amounts of data is becoming increasingly crucial in our society. We model this task as evaluating a function f over a large vector x = (x1, ..., xn), which is unknown, but drawn from a publicly known distribution X. In our model, learning each component of the input x is costly, but computing the output f(x) has zero cost once x is known. We consider the problem of a principal who wishes to delegate the evaluation of f to an agent whose cost of learning any number of components of x is always lower than the corresponding cost of the principal. We prove that, for every continuous function f and every ε > 0, the principal can, by learning a single component xi of x, incentivize the agent to report the correct value f(x) with accuracy ε. Copyright © 2018 The Authors. Robert Solow Fellowship. Stanley and Rhoda Fischer Fellowship.
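
    In symbols, one way to read this guarantee (the notation here, including the agent's report written as \tilde{f}, is ours and not taken from the abstract) is:

        \forall\, \text{continuous } f,\ \forall\, \varepsilon > 0:\quad
        \exists\, i \text{ and a payment rule, using only the learned } x_i,
        \text{ under which the agent's optimal report } \tilde{f} \text{ satisfies } \bigl|\tilde{f} - f(x)\bigr| \le \varepsilon .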

    Endogenous Production Networks

    We develop a tractable model of endogenous production networks. Each one of a number of products can be produced by combining labor and an endogenous subset of the other products as inputs. Different combinations of inputs generate (prespecified) levels of productivity and various distortions may affect costs and prices. We establish the existence and uniqueness of an equilibrium and provide comparative static results on how prices and endogenous technology/input choices (and thus the production network) respond to changes in parameters. These results show that improvements in technology (or reductions in distortions) spread throughout the economy via input–output linkages and reduce all prices, and under reasonable restrictions on the menu of production technologies, also lead to a denser production network. Using a dynamic version of the model, we establish that the endogenous evolution of the production network could be a powerful force towards sustained economic growth. At the root of this result is the fact that the arrival of new products expands the set of technological possibilities of existing products: with n products, the arrival of one more new product increases the combinations of inputs that each existing product can use from 2^(n−1) to 2^n, thus enabling significantly more pronounced cost reductions from the choice of input combinations. These cost reductions then spread to other industries via lower input prices and incentivize them to also adopt additional inputs.
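
    The combinatorial step behind the last sentence is just a count of possible input sets: with n products, each product chooses its inputs from among the other n − 1 products, so

        \underbrace{2^{\,n-1}}_{\text{before the new product}} \;\longrightarrow\; \underbrace{2^{\,n}}_{\text{after}}

    i.e. each new product doubles the menu of input combinations available to every existing product.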

    How to Incentivize Data-Driven Collaboration Among Competing Parties

    The availability of vast amounts of data is changing how we can make medical discoveries, predict global market trends, save energy, and develop new educational strategies. In certain settings, such as Genome Wide Association Studies or deep learning, the sheer size of data (patient files or labeled examples) seems critical to making discoveries. When data is held distributedly by many parties, as is often the case, they must share it to reap its full benefits. One obstacle is the reluctance of different entities to share their data, due to privacy concerns or loss of competitive edge. Work on cryptographic multi-party computation over the last 30 years addresses the privacy aspects, but sheds no light on individual parties' losses and gains when access to data carries tangible rewards. Even if it is clear that better overall conclusions can be drawn from collaboration, are individual collaborators better off by collaborating? Addressing this question is the topic of this paper. The order in which collaborators receive the outputs of a collaboration will be a crucial aspect of our modeling and solutions. We believe that timing is an important and unaddressed issue in data-based collaborations. Our contributions are as follows. We formalize a model of n-party collaboration for computing functions over private inputs in which the participants receive their outputs in sequence, and the order depends on their private inputs. Each output "improves" on all previous outputs according to a reward function. We say that a mechanism for collaboration achieves a collaborative equilibrium if it guarantees a higher reward for all participants when joining a collaboration compared to not joining it. We show that while in general computing a collaborative equilibrium is NP-complete, we can design polynomial-time algorithms for computing it for a range of natural model settings. When possible, we design mechanisms to compute a distribution of outputs and an ordering of output delivery, based on the n participants' private inputs, which achieves a collaborative equilibrium. The collaboration mechanisms we develop are in the standard model and thus require a central trusted party; however, we show that this assumption is not necessary under standard cryptographic assumptions. We show how the mechanisms can be implemented in a decentralized way by n distrustful parties using new extensions of classical secure multiparty computation that impose order and timing constraints on the delivery of outputs to different players, in addition to guaranteeing privacy and correctness.
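
    A toy version of the equilibrium condition can be written as a check over delivery orders. The reward functions, the example numbers, and the brute-force search below are assumptions made for illustration; they are not the paper's mechanisms, which achieve this in polynomial time for natural settings.

        from itertools import permutations

        def is_collaborative_equilibrium(order, solo_reward, collab_reward):
            # `order` is the sequence in which outputs are delivered; the toy
            # condition checked here is that every participant earns at least
            # as much inside the collaboration as by staying out.
            return all(collab_reward(p, pos) >= solo_reward(p)
                       for pos, p in enumerate(order))

        def find_equilibrium_order(participants, solo_reward, collab_reward):
            # Brute force over delivery orders (exponential in general, in line
            # with the NP-completeness of the general problem).
            for order in permutations(participants):
                if is_collaborative_equilibrium(order, solo_reward, collab_reward):
                    return order
            return None

        # Invented example: later outputs "improve" on earlier ones, so later
        # delivery positions pay more; everyone prefers joining to going alone.
        participants = ["A", "B", "C"]
        solo = {"A": 1.0, "B": 2.0, "C": 3.0}
        print(find_equilibrium_order(participants,
                                     solo_reward=lambda p: solo[p],
                                     collab_reward=lambda p, pos: 2.0 + pos))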

    Law Is Code: A Software Engineering Approach to Analyzing the United States Code

    The agglomeration of rules and regulations over time has produced a body of legal code that no single individual can fully comprehend. This complexity produces inefficiencies, makes the processes of understanding and changing the law difficult, and frustrates the fundamental principle that the law should provide fair notice to the governed. In this Article, we take a quantitative, unbiased, and software-engineering approach to analyze the evolution of the United States Code from 1926 to today. Software engineers frequently face the challenge of understanding and managing large, structured collections of instructions, directives, and conditional statements, and we adapt and apply their techniques to the U.S. Code over time. Our work produces insights into the structure of the U.S. Code as a whole, its strengths and vulnerabilities, and new ways of thinking about individual laws. For example, we identify the first appearance and spread of important terms in the U.S. Code like “whistleblower” and “privacy.” We also analyze and visualize the network structure of certain substantial reforms, including the Patient Protection and Affordable Care Act and the Dodd-Frank Wall Street Reform and Consumer Protection Act, and show how the interconnections of references can increase complexity and create the potential for unintended consequences. Our work is a timely illustration of computational approaches to law as the legal profession embraces technology for scholarship in order to increase efficiency and to improve access to justice. MIT Laboratory for Financial Engineering
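
    For instance, the cross-reference structure can be treated as a directed graph over sections. The snippet below uses invented section identifiers, not data or code from the article, and simply shows the kind of network analysis involved using networkx.

        import networkx as nx

        # Hypothetical cross-references between U.S. Code sections
        # (an edge u -> v means section u cites section v).
        references = [
            ("26 USC 1", "26 USC 63"),
            ("26 USC 63", "26 USC 151"),
            ("42 USC 18091", "26 USC 5000A"),
            ("26 USC 5000A", "26 USC 63"),
        ]

        G = nx.DiGraph(references)

        # Heavily cited sections are candidates for "load-bearing" provisions;
        # sections with high out-degree depend on many others.
        most_cited = max(G.nodes, key=lambda s: G.in_degree(s))
        print("most cited section:", most_cited)
        print("PageRank:", nx.pagerank(G))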