
    The Law and Big Data

    In this Article we critically examine the use of Big Data in the legal system. Big Data is driving a trend towards behavioral optimization and personalized law, in which legal decisions and rules are optimized for best outcomes and where law is tailored to individual consumers based on analysis of past data. Big Data, however, has serious limitations and dangers when applied in the legal context. Advocates of Big Data make theoretically problematic assumptions about the objectivity of data and scientific observation. Law is always theory-laden. Although Big Data strives to be objective, law and data have multiple possible meanings and uses and thus require theory and interpretation in order to be applied. Further, the meanings and uses of law and data are indefinite and continually evolving in ways that cannot be captured or predicted by Big Data. Due to these limitations, the use of Big Data will likely generate unintended consequences in the legal system. Large-scale use of Big Data will create distortions that adversely influence legal decision-making, causing irrational herding behaviors in the law. The centralized nature of the collection and application of Big Data also poses serious threats to legal evolution and democratic accountability. Furthermore, its focus on behavioral optimization necessarily restricts and even eliminates the local variation and heterogeneity that makes the legal system adaptive. In all, though Big Data has legitimate uses, this Article cautions against using Big Data to replace independent legal judgment.

    ISIPTA'07: Proceedings of the Fifth International Symposium on Imprecise Probability: Theories and Applications


    Beyond epistemic democracy: the identification and pooling of information by groups of political agents.

    This thesis addresses the mechanisms by which groups of agents can track the truth, particularly in political situations. I argue that the mechanisms which allow groups of agents to track the truth operate in two stages: firstly, there are search procedures; and secondly, there are aggregation procedures. Search procedures and aggregation procedures work in concert. The search procedures allow agents to extract information from the environment. At the conclusion of a search procedure the information will be dispersed among different agents in the group. Aggregation procedures, such as majority rule, expert dictatorship and negative reliability unanimity rule, then pool these pieces of information into a social choice. The institutional features of both search procedures and aggregation procedures account for the ability of groups to track the truth and amount to social epistemic mechanisms. Large numbers of agents are crucial for the epistemic capacities of both search procedures and aggregation procedures. This thesis makes two main contributions to the literature on social epistemology and epistemic democracy. Firstly, most current accounts focus on the Condorcet Jury Theorem and its extensions as the relevant epistemic mechanism that can operate in groups of political agents. The introduction of search procedures to epistemic democracy is (mostly) new. Secondly, the thesis introduces a two-stage framework to the process of group truth-tracking. In addition to showing how the two procedures of search and aggregation can operate in concert, the framework highlights the complexity of social choice situations. Careful consideration of different types of social choice situation shows that different aggregation procedures will be optimal truth-trackers in different situations. Importantly, there will be some situations in which aggregation procedures other than majority rule will be best at tracking the truth.

    X-Risk Analysis for AI Research

    Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind the potential benefits of AI, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that this could create existential risks (x-risks). To add precision and ground these discussions, we provide a guide for how to analyze AI x-risk, which consists of three parts: First, we review how systems can be made safer today, drawing on time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. Next, we discuss strategies for having long-term impacts on the safety of future systems. Finally, we discuss a crucial concept in making AI systems safer by improving the balance between safety and general capabilities. We hope this document and the presented concepts and tools serve as a useful guide for understanding how to analyze AI x-risk.

    Auditing Symposium XIII: Proceedings of the 1996 Deloitte & Touche/University of Kansas Symposium on Auditing Problems

    Meeting the challenge of technological change -- A standard setter's perspective / James M. Sylph, Gregory P. Shields; Technological change -- A glass half empty or a glass half full: Discussion of Meeting the challenge of technological change, and Business and auditing impacts of new technologies / Urton Anderson; Opportunities for assurance services in the 21st century: A progress report of the Special Committee on Assurance Services / Richard Lea; Model of errors and irregularities as a general framework for risk-based audit planning / Jere R. Francis, Richard A. Grimlund; Discussion of A Model of errors and irregularities as a general framework for risk-based audit planning / Timothy B. Bell; Framing effects and output interference in a concurring partner review context: Theory and exploratory analysis / Karla M. Johnstone, Stanley F. Biggs, Jean C. Bedard; Discussant's comments on Framing effects and output interference in a concurring partner review context: Theory and exploratory analysis / David Plumlee; Implementation and acceptance of expert systems by auditors / Maureen McGowan; Discussion of Opportunities for assurance services in the 21st century: A progress report of the Special Committee on Assurance Services / Katherine Schipper; CPAS/CCM experiences: Perspectives for AI/ES research in accounting / Miklos A. Vasarhelyi; Discussant comments on The CPAS/CCM experiences: Perspectives for AI/ES research in accounting / Eric Denna; Digital analysis and the reduction of auditor litigation risk / Mark Nigrini; Discussion of Digital analysis and the reduction of auditor litigation risk / James E. Searing; Institute of Internal Auditors: Business and auditing impacts of new technologies / Charles H. Le Grand

    2019 Faculty Accomplishments Reception

    Program for the 2019 Faculty Accomplishments Reception, in honor of University of Richmond faculty contributions to scholarship, research and creative work, January 2018 - December 2018. April 5, 2019, 3:30 - 5:00 p.m., Boatwright Memorial Library, Research & Collaborative Study Area, First Floor.

    Consensus and disagreement in small committees


    A Non-Ideal Epistemology of Disagreement: Pragmatism and the Need for Democratic Inquiry

    The aim of this thesis is to provide a non-ideal epistemic account of disagreement, one which explains how epistemic agents can find a rational resolution to disagreement in actual epistemic practice. To do this, this thesis will compare two non-ideal epistemic accounts of disagreement which have been proposed within the contemporary philosophical literature. The first is the evidentialist response to disagreement given within the recent literature on the analytic epistemology of disagreement. According to the evidentialist response to disagreement, an epistemic agent can rationally respond to disagreement by evaluating other epistemic agents as higher-order evidence, and adjusting one's belief accordingly. The second is the pragmatist response to disagreement given within the recent literature on the intersection between American pragmatism and democratic theory. According to the pragmatist response to disagreement, a collective group of epistemic agents can come to a rational resolution of disagreement through a process of social inquiry in which epistemic agents cooperatively exchange ideas, reasons, and objections, and collectively form plans of action which settle collective belief. This thesis will critically examine both of these accounts, and explain how the pragmatist response to disagreement provides a better account of both the epistemic challenges which disagreement poses, and the way in which epistemic agents can come to rationally resolve disagreement in actual epistemic practice.