2,871 research outputs found

    Graphical models for marked point processes based on local independence

    A new class of graphical models capturing the dependence structure of events that occur in time is proposed. The graphs represent so-called local independences, meaning that the intensities of certain types of events are independent of some (but not necessarily all) events in the past. This dynamic concept of independence is asymmetric, similar to Granger non-causality, so the corresponding local independence graphs differ considerably from classical graphical models. Hence a new notion of graph separation, called delta-separation, is introduced, and its implications for the underlying model as well as for likelihood inference are explored. Benefits for reasoning about and understanding dynamic dependencies, as well as computational simplifications, are discussed. Comment: To appear in the Journal of the Royal Statistical Society Series
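    The abstract does not reproduce the formal definition of local independence; as a rough sketch of the kind of condition involved (the notation below is assumed for illustration, not quoted from the paper), one may write:

```latex
% Illustrative sketch only; notation is an assumption, not quoted from the paper.
% For counting processes N_a, N_b, N_c with intensity processes \lambda_a(t), ...,
% say that type-a events are locally independent of type-b events given type-c
% events if the intensity of N_a depends on the past only through the histories
% of N_a and N_c:
\[
  \lambda_a(t) \;=\; \mathbb{E}\!\left[\,\lambda_a(t) \mid \mathcal{F}^{\{a,c\}}_{t^-}\right]
  \qquad \text{for all } t,
\]
% where \mathcal{F}^{S}_{t^-} denotes the history generated by the processes
% N_s, s in S, strictly before time t.  The relation is asymmetric: a may be
% locally independent of b without b being locally independent of a, which is
% why the resulting graphs are directed rather than undirected.
```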

    Chief Justice Robots

    Say an AI program someday passes a Turing test, because it can converse in a way indistinguishable from a human. And say that its developers can then teach it to converse—and even present an extended persuasive argument—in a way indistinguishable from the sort of human we call a “lawyer.” The program could thus become an AI brief-writer, capable of regularly winning brief-writing competitions against human lawyers. Once that happens (if it ever happens), this Essay argues, the same technology can be used to create AI judges, judges that we should accept as no less reliable (and more cost-effective) than human judges. If the software can create persuasive opinions, capable of regularly winning opinion-writing competitions against human judges—and if it can be adequately protected against hacking and similar attacks—we should in principle accept it as a judge, even if the opinions do not stem from human judgment.

    Performance of the ATLAS Liquid Argon Calorimeter after three years of LHC operation and plans for a future upgrade

    The ATLAS experiment is designed to study the proton-proton collisions produced at the Large Hadron Collider (LHC) at CERN. Liquid argon sampling calorimeters are used for all electromagnetic calorimetry as well as hadronic calorimetry in the endcaps. After installation in 2004-2006, the calorimeters were extensively commissioned over the three-year period prior to first collisions in 2009, using cosmic rays and single LHC beams. Since then, approximately 27 fb$^{-1}$ of data have been collected at an unprecedented center-of-mass energy. During all these stages, the calorimeter and its electronics have been operating almost optimally, with a performance very close to specifications. This paper covers all aspects of these first years of operation. The excellent performance achieved is presented in particular in the context of the discovery of the elusive Higgs boson. The future plans to preserve this performance until the end of the LHC program are also presented. Comment: 12 pages, 25 figures, Proceedings of talk presented in "Advancements in Nuclear Instrumentation Measurement Methods and their Applications", Marseille, 201

    Professional Wrestling and Contemporary Photography: The Case of Dulce Pinzón’s The Real Story of the Superheroes

    In response to popular attention drawn to the heroes of September 11, 2001, Mexican photographer Dulce Pinzón created the photographic series “The Real Story of the Superheroes” to draw attention to what she considered the heroic exploits performed by Mexican immigrants on a daily basis in New York City. Her images draw heavily upon superhero/pro-wrestling codes to impart meaning. Through analysis of the photographer’s objective and photographs, this paper demonstrates that the correlation between the two cultural forms convolutes understanding and re-emphasizes the polysemy of photography and the difficulty (despite clear anchorage) of imposing a specific understanding – that of Mexican immigrants as heroes.

    Extending the Exposure Score of Web Browsers by Incorporating CVSS

    When browsing the Internet, HTTP headers enable both clients and servers to send extra data in their requests or responses, such as the User-Agent string. This string contains information about the sender’s device, browser, and operating system, and its content differs from one browser to another. Despite the privacy and security risks of User-Agent strings, very few works have tackled this problem. Our previous work proposed assigning relative exposure scores to Internet browsers to help users choose less intrusive ones. The objective of this work is to extend that work by: first, conducting a user study to identify its limitations; second, extending the exposure score by incorporating data from the NVD; and third, providing a full implementation instead of a limited prototype. The proposed system assigns scores to users’ browsers upon visiting our website, suggests alternative safer browsers, and allows updating the back-end database with a click of a button. We applied our method to a data set of more than 52 thousand unique browsers. Our performance and validation analysis show that our solution is accurate and efficient. The source code and data set are publicly available here [4].
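    The abstract does not give the scoring formula. As a rough illustration only, the sketch below shows one way an exposure score could combine the amount of information a User-Agent string leaks with CVSS base scores of known vulnerabilities for that browser version; the field names, weights, and aggregation rule are hypothetical and not taken from the paper.

```python
# Illustrative sketch only: a hypothetical exposure score combining
# User-Agent information leakage with CVSS base scores from the NVD.
# Field names, weights, and the aggregation rule are assumptions,
# not the scoring method described in the paper.

def exposure_score(user_agent_fields: dict, cvss_base_scores: list[float]) -> float:
    """Return a 0-10 exposure score for a browser."""
    # Part 1: how much device/OS/browser detail the User-Agent reveals.
    revealing_fields = ("browser", "browser_version", "os", "os_version", "device")
    leak_ratio = sum(1 for f in revealing_fields if user_agent_fields.get(f)) / len(revealing_fields)

    # Part 2: severity of known vulnerabilities (CVSS v3 base scores, 0-10).
    vuln_severity = max(cvss_base_scores, default=0.0)

    # Weighted combination, clamped to a CVSS-style 0-10 range.
    score = 0.4 * (10 * leak_ratio) + 0.6 * vuln_severity
    return round(min(score, 10.0), 1)


if __name__ == "__main__":
    ua = {"browser": "ExampleBrowser", "browser_version": "99.0", "os": "ExampleOS"}
    print(exposure_score(ua, cvss_base_scores=[7.5, 5.3]))  # -> 6.9
```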

    Accelerating Parallel Verification via Complementary Property Partitioning and Strategy Exploration

    Industrial hardware verification tasks often require checking a large number of properties within a testbench. Verification tools often utilize parallelism in their solving orchestration to improve scalability, either in portfolio mode, where different solver strategies run concurrently, or in partitioning mode, where disjoint property subsets are verified independently. While most tools focus solely upon reducing end-to-end wall-time, reducing overall CPU-time is a comparably important goal influencing power consumption, competition for available machines, and IT costs. Portfolio approaches often degrade into highly redundant work across processes, where similar strategies address properties in nearly identical order. Partitioning should take property affinity into account, atomically verifying high-affinity properties to minimize the redundant work of applying identical strategies to individual properties with nearly identical logic cones. In this paper, we improve multi-property parallel verification with respect to both wall- and CPU-time. We extend affinity-based partitioning to guarantee complete utilization of available processes, with provable partition quality. We propose methods to minimize redundant computation and to dynamically optimize work distribution. We deploy our techniques in a sequential redundancy removal framework, using localization to solve non-inductive properties. Our techniques offer a median 2.4× speedup and yield 18.1% more property solves, as demonstrated by extensive experiments.
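    The abstract does not spell out the affinity metric or the partitioning algorithm. The sketch below is a minimal illustration of the general idea of affinity-based partitioning, grouping properties whose logic cones overlap and then spreading groups over workers; the Jaccard similarity, threshold, and load-balancing rule are assumptions, not the paper's method.

```python
# Illustrative sketch only: group properties by overlap of their logic
# cones (support sets), then spread the resulting groups over worker
# processes.  The similarity metric, affinity threshold, and
# load-balancing rule are assumptions for illustration, not the
# algorithm used in the paper.

def jaccard(a: set, b: set) -> float:
    """Similarity of two cones of influence as overlap of their supports."""
    return len(a & b) / len(a | b) if (a or b) else 0.0

def partition_properties(cones: dict[str, set], n_workers: int, affinity: float = 0.5):
    """Group high-affinity properties, then assign groups to balance load."""
    groups: list[tuple[set, list[str]]] = []  # (union of member cones, property names)
    for prop, cone in cones.items():
        for union, members in groups:
            if jaccard(cone, union) >= affinity:
                members.append(prop)
                union |= cone          # sets mutate in place, so the group is updated
                break
        else:
            groups.append((set(cone), [prop]))

    # Hand out the largest groups first so no worker gets all the heavy cones.
    workers = [[] for _ in range(n_workers)]
    loads = [0] * n_workers
    for union, members in sorted(groups, key=lambda g: len(g[0]), reverse=True):
        lightest = loads.index(min(loads))
        workers[lightest].extend(members)
        loads[lightest] += len(union)
    return workers

if __name__ == "__main__":
    cones = {"p1": {"r1", "r2"}, "p2": {"r1", "r2", "r3"}, "p3": {"r9"}}
    print(partition_properties(cones, n_workers=2))  # -> [['p1', 'p2'], ['p3']]
```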

    Genome-Wide Identification of Human Functional DNA Using a Neutral Indel Model

    It has become clear that a large proportion of functional DNA in the human genome does not code for protein. Identification of this non-coding functional sequence using comparative approaches is proving difficult and has previously been thought to require deep sequencing of multiple vertebrates. Here we introduce a new model and comparative method that, instead of nucleotide substitutions, uses the evolutionary imprint of insertions and deletions (indels) to infer the past consequences of selection. The model predicts the distribution of indels under neutrality, and shows an excellent fit to human–mouse ancestral repeat data. Across the genome, many unusually long ungapped regions are detected that are unaccounted for by the neutral model, and which we predict to be highly enriched in functional DNA that has been subject to purifying selection with respect to indels. We use the model to determine the proportion under indel-purifying selection to be between 2.56% and 3.25% of human euchromatin. Since annotated protein-coding genes comprise only 1.2% of euchromatin, these results lend further weight to the proposition that more than half the functional complement of the human genome is non-protein-coding. The method is surprisingly powerful at identifying selected sequence using only two or three mammalian genomes. Applying the method to the human, mouse, and dog genomes, we identify 90 Mb of human sequence under indel-purifying selection, at a predicted 10% false-discovery rate and 75% sensitivity. As expected, most of the identified sequence represents unannotated material, while the recovered proportions of known protein-coding and microRNA genes closely match the predicted sensitivity of the method. The method's high sensitivity to functional sequence such as microRNAs suggests that as yet unannotated microRNA genes are enriched among the sequences identified. Furthermore, its independence from substitutions allowed us to identify sequence that has been subject to heterogeneous selection, that is, sequence subject to both positive selection with respect to substitutions and purifying selection with respect to indels. The ability to identify elements under heterogeneous selection enables, for the first time, the genome-wide investigation of positive selection on functional elements other than protein-coding genes.
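    The abstract summarizes the idea but not the statistics. As a rough illustration only, the sketch below flags alignment segments that are implausibly long under a simple geometric neutral-indel expectation; the parameter names, the threshold rule, and the example numbers are assumptions for illustration, not the paper's actual procedure.

```python
# Illustrative sketch only: under a neutral indel model, the lengths of
# indel-free ("ungapped") alignment segments are roughly geometrically
# distributed, so segments far longer than the neutral expectation are
# candidates for indel-purifying selection.  The parameter names and the
# simple tail-probability threshold below are assumptions, not the
# paper's statistical procedure.

def neutral_tail_prob(length: int, p_indel: float) -> float:
    """P(an ungapped segment is >= length) under a geometric neutral model."""
    return (1.0 - p_indel) ** length

def flag_conserved_segments(segment_lengths: list[int], p_indel: float,
                            alpha: float = 1e-5) -> list[int]:
    """Indices of segments too long to be plausible under neutrality."""
    return [i for i, length in enumerate(segment_lengths)
            if neutral_tail_prob(length, p_indel) < alpha]

if __name__ == "__main__":
    # p_indel: neutral per-base indel probability, e.g. estimated from
    # ancestral repeats (the value here is made up for the example).
    lengths = [40, 85, 1200, 300, 2500]
    print(flag_conserved_segments(lengths, p_indel=0.01))  # -> [2, 4]
```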