The Law and Big Data
In this Article we critically examine the use of Big Data in the legal system. Big Data is driving a trend towards behavioral optimization and personalized law, in which legal decisions and rules are optimized for best outcomes and where law is tailored to individual consumers based on analysis of past data. Big Data, however, has serious limitations and dangers when applied in the legal context. Advocates of Big Data make theoretically problematic assumptions about the objectivity of data and scientific observation. Law is always theory-laden. Although Big Data strives to be objective, law and data have multiple possible meanings and uses and thus require theory and interpretation in order to be applied. Further, the meanings and uses of law and data are indefinite and continually evolving in ways that cannot be captured or predicted by Big Data.
Due to these limitations, the use of Big Data will likely generate unintended consequences in the legal system. Large-scale use of Big Data will create distortions that adversely influence legal decision-making, causing irrational herding behaviors in the law. The centralized nature of the collection and application of Big Data also poses serious threats to legal evolution and democratic accountability. Furthermore, its focus on behavioral optimization necessarily restricts and even eliminates the local variation and heterogeneity that makes the legal system adaptive. In all, though Big Data has legitimate uses, this Article cautions against using Big Data to replace independent legal judgment.
Beyond epistemic democracy: the identification and pooling of information by groups of political agents.
This thesis addresses the mechanisms by which groups of agents can track the truth, particularly in political situations.
I argue that the mechanisms which allow groups of agents to track the truth operate in two stages: firstly, there are search procedures; and secondly, there are aggregation procedures. Search procedures and aggregation procedures work in concert. The search procedures allow agents to extract information from the environment. At the conclusion of a search procedure the information will be dispersed among different agents in the group. Aggregation procedures, such as majority rule, expert dictatorship and negative reliability unanimity rule, then pool these pieces of information into a social choice.
The institutional features of both search procedures and aggregation procedures account for the ability of groups to track the truth and amount to social epistemic mechanisms. Large numbers of agents are crucial for the epistemic capacities of both search procedures and aggregation procedures.
This thesis makes two main contributions to the literature on social epistemology and epistemic democracy. Firstly, most current accounts focus on the Condorcet Jury Theorem and its extensions as the relevant epistemic mechanism that can operate in groups of political agents; the introduction of search procedures to epistemic democracy is (mostly) new. Secondly, the thesis introduces a two-stage framework to the process of group truth-tracking. In addition to showing how the two procedures of search and aggregation can operate in concert, the framework highlights the complexity of social choice situations. Careful consideration of different types of social choice situation shows that different aggregation procedures will be optimal truth-trackers in different situations. Importantly, there will be some situations in which aggregation procedures other than majority rule will be best at tracking the truth.
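The claim that large numbers of agents improve the epistemic capacity of aggregation procedures such as majority rule is the core of the Condorcet Jury Theorem. A minimal simulation can illustrate it; this sketch is not drawn from the thesis itself, and the function name, competence parameter, and independence assumptions are illustrative choices.

```python
import random

def majority_tracks_truth(n_agents, competence, n_trials=10_000, seed=0):
    """Estimate how often simple majority rule selects the correct
    binary option, given independent agents who are each right with
    probability `competence` (a jury-theorem-style setup)."""
    rng = random.Random(seed)
    correct = 0
    for _ in range(n_trials):
        votes_for_truth = sum(rng.random() < competence
                              for _ in range(n_agents))
        if votes_for_truth > n_agents / 2:
            correct += 1
    return correct / n_trials

# When individual competence exceeds 0.5, group reliability rises
# with group size, as the jury theorem predicts.
for n in (1, 11, 101):
    print(n, round(majority_tracks_truth(n, 0.6), 3))
```

Under these assumptions the probability of a correct majority verdict climbs steeply with group size; dropping the independence or competence assumptions, as the thesis's discussion of different social choice situations suggests, can reverse this advantage.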
X-Risk Analysis for AI Research
Artificial intelligence (AI) has the potential to greatly improve society, but as with any powerful technology, it comes with heightened risks and responsibilities. Current AI research lacks a systematic discussion of how to manage long-tail risks from AI systems, including speculative long-term risks. Keeping in mind the potential benefits of AI, there is some concern that building ever more intelligent and powerful AI systems could eventually result in systems that are more powerful than us; some say this is like playing with fire and speculate that this could create existential risks (x-risks). To add precision and ground these discussions, we provide a guide for how to analyze AI x-risk, which consists of three parts: First, we review how systems can be made safer today, drawing on time-tested concepts from hazard analysis and systems safety that have been designed to steer large processes in safer directions. Next, we discuss strategies for having long-term impacts on the safety of future systems. Finally, we discuss a crucial concept in making AI systems safer by improving the balance between safety and general capabilities. We hope this document and the presented concepts and tools serve as a useful guide for understanding how to analyze AI x-risk.
Auditing Symposium XIII: Proceedings of the 1996 Deloitte & Touche/University of Kansas Symposium on Auditing Problems
Meeting the challenge of technological change -- A standard setter's perspective / James M. Sylph, Gregory P. Shields; Technological change -- A glass half empty or a glass half full: Discussion of Meeting the challenge of technological change, and Business and auditing impacts of new technologies / Urton Anderson; Opportunities for assurance services in the 21st century: A progress report of the Special Committee on Assurance Services / Richard Lea; Model of errors and irregularities as a general framework for risk-based audit planning / Jere R. Francis, Richard A. Grimlund; Discussion of A Model of errors and irregularities as a general framework for risk-based audit planning / Timothy B. Bell; Framing effects and output interference in a concurring partner review context: Theory and exploratory analysis / Karla M. Johnstone, Stanley F. Biggs, Jean C. Bedard; Discussant's comments on Framing effects and output interference in a concurring partner review context: Theory and exploratory analysis / David Plumlee; Implementation and acceptance of expert systems by auditors / Maureen McGowan; Discussion of Opportunities for assurance services in the 21st century: A progress report of the Special Committee on Assurance Services / Katherine Schipper; CPAS/CCM experiences: Perspectives for AI/ES research in accounting / Miklos A. Vasarhelyi; Discussant comments on The CPAS/CCM experiences: Perspectives for AI/ES research in accounting / Eric Denna; Digital analysis and the reduction of auditor litigation risk / Mark Nigrini; Discussion of Digital analysis and the reduction of auditor litigation risk / James E. Searing; Institute of Internal Auditors: Business and auditing impacts of new technologies / Charles H. Le Grand
2019 Faculty Accomplishments Reception
Program for the 2019 Faculty Accomplishments Reception. In Honor of University of Richmond Faculty Contributions to Scholarship, Research and Creative Work, January 2018 - December 2018. April 5, 2019, 3:30 - 5:00 p.m. Boatwright Memorial Library, Research & Collaborative Study Area, First Floor.
Experimental Democracy - Collective Intelligence for a Diverse and Complex World
My dissertation is motivated by the following observation: while we care very much about the outcomes of the democratic process, there is widespread uncertainty ex ante about how to produce them - and quite often there is also disagreement and uncertainty about what they are in the first place. Consequently, unless we have a definite idea what "better decision-making" might be, it is not obvious which institutional reforms or changes in democratic structures would actually promote it. Democracy is a wide concept, and not all institutional constellations and rules and regulations that can be called democratic function equally well. In this dissertation, therefore, I offer a specific model of democracy - "Experimental Democracy" - that unites the view that the quality of decisions matters with taking into account the circumstances of uncertainty and disagreement that define political problems. On this account, a desirable political mechanism is one that realizes an experimental method of policy-making directed at solving problems, such that we can expect it to make progress over time, even though we cannot rule out that it will get things wrong - possibly even frequently. I also show how democracy may best realize such an experimental method, and which particular institutional features of democracy could serve this purpose. The argument in the dissertation proceeds as follows. In the first part I develop a theory of the justifiability of political authority in the sense outlined above: a theory that is sensitive to the outcome concerns that many people share, but recognizes the fundamental disagreement surrounding this question. I establish that instrumental considerations should be of crucial importance when we evaluate political authority. Here I argue against pure proceduralist theories that see the outcome dimension as secondary.
However, the facts of disagreement and uncertainty about the ends of politics, as well as concrete policy, do seem to pose a problem for any instrumental justification. In response I outline a pragmatic or experimental theory of political authority, which focuses precisely on the capacity of a political procedure to solve political problems under uncertainty. Just as in many other fields of inquiry experimentation and adaptation are seen as the adequate responses to uncertainty, I argue, an experimental and adaptive mode of policy-making is the best response to political uncertainty. In the second part I answer the question of which form of democracy would best realize the ideal of experimental policy-making. Accordingly, we should evaluate democratic institutions mainly by their capacity to enable successful experimentation and adaptation. Here, contrary to popular "wisdom of crowds" arguments, I argue that since no single decision procedure can be expected to be reliable across the board, a justified political system may have to employ a plurality of first-order decision-making mechanisms. However, as I show, for this to work these mechanisms must be subject to effective democratic control. The key function of democratic institutions here is that of feedback, in order to enable successful adaptation. Finally, I offer some concrete examples of how the functional requirements of a successful experimental strategy of policy-making can be institutionally realized within democratic systems.
A Non-Ideal Epistemology of Disagreement: Pragmatism and the Need for Democratic Inquiry
The aim of this thesis is to provide a non-ideal epistemic account of disagreement, one which explains how epistemic agents can find a rational resolution to disagreement in actual epistemic practice. To do this, this thesis will compare two non-ideal epistemic accounts of disagreement which have been proposed within the contemporary philosophical literature. The first is the evidentialist response to disagreement given within the recent literature on the analytic epistemology of disagreement. According to the evidentialist response to disagreement, an epistemic agent can rationally respond to disagreement by evaluating other epistemic agents as higher-order evidence, and adjusting one's belief accordingly. The second is the pragmatist response to disagreement given within the recent literature on the intersection between American pragmatism and democratic theory. According to the pragmatist response to disagreement, a collective group of epistemic agents can come to a rational resolution of disagreement through a process of social inquiry where epistemic agents cooperatively exchange ideas, reasons, and objections, and collectively form plans of action which settle collective belief. This thesis will critically examine both of these accounts, and explain how the pragmatist response to disagreement provides a better account of both the epistemic challenges which disagreement poses, and the method in which epistemic agents can come to rationally resolve disagreement in actual epistemic practice.
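The evidentialist idea of treating a peer's opinion as higher-order evidence is often formalized, in its simplest "equal weight" variant, as averaging credences across peers. The following toy sketch illustrates that variant only; it is not the thesis's own formal model, and the function name and example numbers are illustrative.

```python
def conciliate(credences):
    """Equal-weight response to peer disagreement: each agent adopts
    the mean of all peers' credences in the disputed proposition."""
    mean = sum(credences) / len(credences)
    return [mean] * len(credences)

# Three peers disagree about a proposition; after conciliation each
# holds the group's mean credence.
print([round(c, 2) for c in conciliate([0.9, 0.6, 0.3])])  # → [0.6, 0.6, 0.6]
```

Note how this simple averaging dissolves the disagreement in one step, whereas the pragmatist account described above treats resolution as the outcome of an extended process of cooperative inquiry rather than a single arithmetic update.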