
    Economic Efficiency Requires Interaction

    We study the necessity of interaction between individuals for obtaining approximately efficient allocations. The role of interaction in markets has received significant attention in economic thought, e.g., in Hayek's classic 1945 paper. We consider this problem in the framework of simultaneous communication complexity, analyzing the amount of simultaneous communication required to achieve an approximately efficient allocation. In particular, we consider two settings: combinatorial auctions with unit-demand bidders (bipartite matching) and combinatorial auctions with subadditive bidders. For both settings, we first show that non-interactive systems have enormous communication costs relative to interactive ones. On the other hand, we show that limited interaction enables us to find approximately efficient allocations.
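    The gap between non-interactive and lightly interactive allocation can be illustrated with a toy bipartite-matching sketch. This is an invented illustration, not the paper's protocol: the preference lists and round structure below are assumptions, and the "rounds" are processed serially for simplicity rather than truly simultaneously.

```python
def one_shot(prefs):
    """Non-interactive: each bidder simultaneously names only its
    favorite item; when two bidders collide, one goes unmatched."""
    alloc = {}
    for bidder, ranked in prefs.items():
        alloc.setdefault(ranked[0], bidder)  # first claimant keeps the item
    return alloc

def with_rounds(prefs, rounds):
    """Limited interaction: in each round, still-unmatched bidders
    propose their best item not yet taken."""
    alloc, pos = {}, {b: 0 for b in prefs}
    for _ in range(rounds):
        matched = set(alloc.values())
        for b, ranked in prefs.items():
            if b in matched:
                continue
            while pos[b] < len(ranked) and ranked[pos[b]] in alloc:
                pos[b] += 1
            if pos[b] < len(ranked):
                alloc[ranked[pos[b]]] = b
                matched.add(b)
    return alloc

# two unit-demand bidders who both like item "x" best
prefs = {"b1": ["x", "y"], "b2": ["x", "y"]}
print(len(one_shot(prefs)))        # 1 match: both bidders name "x"
print(len(with_rounds(prefs, 2)))  # 2 matches: the loser falls back to "y"
```

    With a single simultaneous report the losing bidder stays unmatched, while a little extra interaction recovers the full matching, echoing the contrast the abstract describes.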

    Inapproximability of Truthful Mechanisms via Generalizations of the VC Dimension

    Algorithmic mechanism design (AMD) studies the delicate interplay between computational efficiency, truthfulness, and optimality. We focus on AMD's paradigmatic problem: combinatorial auctions. We present a new generalization of the VC dimension to multivalued collections of functions, which encompasses the classical VC dimension, Natarajan dimension, and Steele dimension. We present a corresponding generalization of the Sauer-Shelah Lemma and harness this VC machinery to establish inapproximability results for deterministic truthful mechanisms. Our results essentially unify all inapproximability results to date for deterministic truthful mechanisms for combinatorial auctions, and establish new separation gaps between truthful and non-truthful algorithms.
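    In the classical binary case, the Sauer-Shelah Lemma bounds the number of distinct labelings a class of VC dimension d can induce on n points by the sum of binomial coefficients C(n, i) for i up to d. A brute-force check on a small class makes this concrete; the threshold class below is an invented example, not taken from the paper.

```python
from itertools import combinations
from math import comb

def shatters(labelings, points):
    # a class shatters `points` if its restrictions realize all 2^k patterns
    patterns = {tuple(lab[p] for p in points) for lab in labelings}
    return len(patterns) == 2 ** len(points)

def vc_dimension(labelings, domain):
    d = 0
    for k in range(1, len(domain) + 1):
        if any(shatters(labelings, s) for s in combinations(domain, k)):
            d = k
    return d

def sauer_shelah_bound(n, d):
    return sum(comb(n, i) for i in range(d + 1))

# threshold functions on {0,...,4}: label(x) = 1 iff x >= t
domain = list(range(5))
labelings = [{x: int(x >= t) for x in domain} for t in range(6)]

d = vc_dimension(labelings, domain)  # thresholds have VC dimension 1
distinct = len({tuple(lab[x] for x in domain) for lab in labelings})
print(d, distinct, sauer_shelah_bound(len(domain), d))  # 1 6 6
```

    Here the bound is tight: six threshold labelings on five points, exactly matching C(5,0) + C(5,1) = 6 for VC dimension 1.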

    Systematic identification of edited microRNAs in the human brain

    Adenosine-to-inosine (A-to-I) editing modifies RNA transcripts from their genomic blueprint. A prerequisite for this process is a double-stranded RNA (dsRNA) structure. Such dsRNAs are formed as part of the microRNA (miRNA) maturation process, and it is therefore expected that miRNAs are affected by A-to-I editing. Editing of miRNAs has the potential to add another layer of complexity to gene regulation pathways, especially if editing occurs within the miRNA–mRNA recognition site. Thus, it is of interest to study the extent of this phenomenon. Current reports in the literature disagree on its extent: while some claim that it may be widespread, others deem the reported events rare. Utilizing a next-generation sequencing (NGS) approach supplemented by an extensive bioinformatic analysis, we systematically identified A-to-I editing events in mature miRNAs derived from human brain tissues. Our algorithm successfully identified many of the known editing sites in mature miRNAs and revealed 17 novel human sites, 12 of which are in the recognition sites of the miRNAs. We confirmed most of the editing events using in vitro ADAR overexpression assays. The editing efficiency of most sites identified is very low. Similar results are obtained for publicly available data sets of mouse brain-region tissues. Thus, we find that A-to-I editing does alter several miRNAs, but it is not widespread.
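    Because inosine base-pairs like guanosine, edited adenosines appear as G in sequencing reads. A first-pass screen therefore tallies A-to-G mismatches between aligned reads and the reference. The sketch below is a deliberately minimal illustration under that assumption; the paper's actual pipeline additionally handles alignment, SNP filtering, and statistical controls that this omits.

```python
def candidate_sites(reference, reads, min_coverage=2):
    """Return {position: editing level} for reference adenosines,
    where level = fraction of covering reads showing G (inosine is
    reverse-transcribed and sequenced as G)."""
    levels = {}
    for i, ref_base in enumerate(reference):
        if ref_base != "A":
            continue
        covering = [r[i] for r in reads if len(r) > i]
        if len(covering) < min_coverage:
            continue  # too few reads to call a level at this position
        levels[i] = covering.count("G") / len(covering)
    return levels

# toy alignment: four reads over a 5-nt reference (DNA space)
ref = "AACGA"
reads = ["AACGA", "GACGA", "AACGG", "GACGA"]
print(candidate_sites(ref, reads))  # {0: 0.5, 1: 0.0, 4: 0.25}
```

    Position 0 shows a 50% editing level and position 4 a 25% level, while position 1 is uncovered by any mismatch; real pipelines would then filter such candidates against known genomic variants.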

    Model evaluation for extreme risks

    Current approaches to building general-purpose AI systems tend to produce systems with both beneficial and harmful capabilities. Further progress in AI development could lead to capabilities that pose extreme risks, such as offensive cyber capabilities or strong manipulation skills. We explain why model evaluation is critical for addressing extreme risks. Developers must be able to identify dangerous capabilities (through "dangerous capability evaluations") and the propensity of models to apply their capabilities for harm (through "alignment evaluations"). These evaluations will become critical for keeping policymakers and other stakeholders informed, and for making responsible decisions about model training, deployment, and security.

    Frontier AI Regulation: Managing Emerging Risks to Public Safety

    Advanced AI models hold the promise of tremendous benefits for humanity, but society needs to proactively manage the accompanying risks. In this paper, we focus on what we term "frontier AI" models: highly capable foundation models that could possess dangerous capabilities sufficient to pose severe risks to public safety. Frontier AI models pose a distinct regulatory challenge: dangerous capabilities can arise unexpectedly; it is difficult to robustly prevent a deployed model from being misused; and, it is difficult to stop a model's capabilities from proliferating broadly. To address these challenges, at least three building blocks for the regulation of frontier models are needed: (1) standard-setting processes to identify appropriate requirements for frontier AI developers, (2) registration and reporting requirements to provide regulators with visibility into frontier AI development processes, and (3) mechanisms to ensure compliance with safety standards for the development and deployment of frontier AI models. Industry self-regulation is an important first step. However, wider societal discussions and government intervention will be needed to create standards and to ensure compliance with them. We consider several options to this end, including granting enforcement powers to supervisory authorities and licensure regimes for frontier AI models. Finally, we propose an initial set of safety standards. These include conducting pre-deployment risk assessments; external scrutiny of model behavior; using risk assessments to inform deployment decisions; and monitoring and responding to new information about model capabilities and uses post-deployment. We hope this discussion contributes to the broader conversation on how to balance public safety risks and innovation benefits from advances at the frontier of AI development.