
    Philosophy of Computational Social Science

    Computational social science is an emerging field at the intersection of statistics, computer science, and the social sciences. This paper addresses the philosophical foundations of this new field. Kant and Peirce provide an understanding of scientific objectivity as intersubjective validity. Modern mathematics, and especially the mathematics of algorithms and statistics, gets its objectivity from the intersubjective validity of formal proof. Algorithms implementing statistical inference, or scientific algorithms, are what distinguish computational social science epistemically from other social sciences. This gives computational social science an objective validity that other social sciences do not have. Objections to the scientific realism of this philosophy from the positions of anti-instrumentalism, postmodern interpretivism, and situated epistemology are considered and either incorporated into this philosophy of computational social science or refuted. Speculative predictions for the field are offered in conclusion: computational social science will bring about an end of narrative in the social sciences, contract the field of social scientific knowledge into a narrower, more hierarchical field of expertise, and create a democratic crisis that will only be resolved through universal education in computational statistics.

    Designing Fiduciary Artificial Intelligence

    A fiduciary is a trusted agent with a legal duty to act with loyalty and care towards the principal that employs it. When fiduciary organizations interact with users through a digital interface, or otherwise automate their operations with artificial intelligence, they will need to design these AI systems to be compliant with their duties. This article synthesizes recent work in computer science and law to develop a procedure for designing and auditing Fiduciary AI. The designer of a Fiduciary AI should understand the context of the system, identify its principals, and assess the best interests of those principals. The designer must then be loyal with respect to those interests, and careful in a contextually appropriate way. We connect the steps in this procedure to dimensions of Trustworthy AI, such as privacy and alignment. Fiduciary AI is a promising means to address the incompleteness of data subjects' consent when interacting with complex technical systems.
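
    To make the procedure concrete, the sketch below encodes its steps (understand the context, identify the principals, assess their best interests, then check loyalty and care) as a small Python class. The class, field, and check names are illustrative assumptions for this example, not an API from the article.

        # Hypothetical sketch of the design/audit procedure described above; names
        # and checks are illustrative assumptions, not an API from the article.
        from dataclasses import dataclass, field

        @dataclass
        class FiduciaryAIDesign:
            """Steps of the procedure: context, principals, interests, duties."""
            context: str                                   # social/legal context of the system
            principals: list[str] = field(default_factory=list)
            best_interests: dict[str, list[str]] = field(default_factory=dict)

            def assess_interests(self, principal: str, interests: list[str]) -> None:
                # Record the best interests of an identified principal.
                self.best_interests[principal] = interests

            def audit(self) -> dict[str, bool]:
                # Loyalty: every identified principal's interests have been assessed.
                loyal = all(p in self.best_interests for p in self.principals)
                # Care: at least one concrete interest is recorded per principal
                # (a stand-in check; real care duties are context dependent).
                careful = all(len(v) > 0 for v in self.best_interests.values())
                return {"loyalty": loyal, "care": careful}

        design = FiduciaryAIDesign(context="robo-advisor for retirement savings",
                                   principals=["account holder"])
        design.assess_interests("account holder", ["low fees", "risk tolerance respected"])
        print(design.audit())  # {'loyalty': True, 'care': True}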

    Agent-Based Modeling as a Legal Theory Tool

    Agent-based modeling (ABM) is a versatile social scientific research tool that adapts insights from sociology and physics to study complex social systems. Currently, ABM is nearly absent from the legal literature that evaluates and proposes laws and regulations to achieve various social goals. Instead, quantitative legal scholarship is dominated by the Law and Economics (L&E) approach, which relies on a more limited modeling framework. The time is ripe for wider use of ABM in this scholarship. Recent developments in legal theory have highlighted the complexity of society and law's structural and systemic effects on it. ABM's wide adoption as a method in the social sciences, including recently in economics, demonstrates its ability to address precisely these regulatory design issues.
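
    For readers unfamiliar with the method, the sketch below is a minimal threshold model of regulatory compliance written in the ABM style: heterogeneous agents comply once the observed compliance rate exceeds a personal threshold. The scenario and parameters are assumptions made for illustration, not a model from the paper.

        # Minimal, hypothetical agent-based model: agents comply with a regulation
        # when the population compliance rate exceeds their personal threshold.
        import random

        random.seed(0)

        N_AGENTS = 100
        STEPS = 20
        thresholds = [random.random() for _ in range(N_AGENTS)]       # heterogeneous agents
        complying = [random.random() < 0.1 for _ in range(N_AGENTS)]  # 10% comply at the start

        for step in range(STEPS):
            rate = sum(complying) / N_AGENTS
            # Each agent compares the current compliance rate to its own threshold.
            complying = [rate >= t for t in thresholds]
            print(f"step {step:2d}: compliance rate = {sum(complying) / N_AGENTS:.2f}")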

    Context, Causality, and Information Flow: Implications for Privacy Engineering, Security, and Data Economics

    The creators of technical infrastructure are under social and legal pressure to comply with expectations that can be difficult to translate into computational and business logics. This dissertation bridges this gap through three projects that focus on privacy engineering, information security, and data economics, respectively. These projects culminate in a new formal method for evaluating the strategic and tactical value of data: data games. This method relies on a core theoretical contribution building on the work of Shannon, Dretske, Pearl, Koller, and Nissenbaum: a definition of situated information flow as causal flow in the context of other causal relations and strategic choices.

    The first project studies privacy engineering's use of Contextual Integrity theory (CI), which defines privacy as appropriate information flow according to norms specific to social contexts or spheres. Computer scientists using CI have innovated as they have implemented the theory and blended it with other traditions, such as context-aware computing. This survey examines the computer science literature using Contextual Integrity and discovers, among other results, that technical and social platforms that span social contexts challenge CI's current commitment to normative social spheres. Sociotechnical situations can and do defy social expectations with cross-context clashes, and privacy engineering needs its normative theories to acknowledge and address this fact.

    This concern inspires the second project, which addresses the problem of building computational systems that comply with data flow and security restrictions such as those required by law. Many privacy and data protection policies stipulate restrictions on the flow of information based on that information's original source. We formalize this concept of privacy as Origin Privacy. This formalization shows how information flow security can be represented using causal modeling. Causal modeling of information security leads to general theorems about the limits of privacy by design, as well as a shared language for representing specific privacy concepts such as noninterference, differential privacy, and authorized disclosure.

    The third project uses the causal modeling of information flow to address gaps in the current theory of data economics. Like CI, privacy economics has focused on individual economic contexts and so has been unable to comprehend an information economy that relies on the flow of information across contexts. Data games, an adaptation of Multi-Agent Influence Diagrams for mechanism design, are used to model the well-known economic contexts of principal-agent contracts and price differentiation, as well as new contexts such as personalized expert services and data reuse. This work reveals that information flows are not goods but rather strategic resources, and that trade in information therefore involves market externalities.
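
    To give a feel for the claim that information flows are strategic resources, the sketch below compares seller revenue in a toy price differentiation setting with and without a causal flow from buyer type to price. The scenario, prices, and valuations are assumptions made for this example, not a data game taken from the dissertation.

        # Toy illustration of "information flow as causal flow": the seller's price
        # either does or does not causally depend on the buyer's type.
        import random

        random.seed(1)

        def buyer_valuation(buyer_type: str) -> float:
            return 10.0 if buyer_type == "high" else 4.0

        def simulate(information_flows: bool, n: int = 10_000) -> float:
            """Average seller revenue when the buyer's type does (or does not) reach the seller."""
            revenue = 0.0
            for _ in range(n):
                buyer_type = "high" if random.random() < 0.5 else "low"
                if information_flows:
                    price = buyer_valuation(buyer_type)   # personalized price
                else:
                    price = 4.0                           # one price for everyone
                if buyer_valuation(buyer_type) >= price:  # buyer purchases if willing to pay
                    revenue += price
            return revenue / n

        print("avg revenue with flow:   ", simulate(True))   # roughly 7.0
        print("avg revenue without flow:", simulate(False))  # roughly 4.0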

    PyPI Packages Annotated

    PyPI package data, as a directed graph, in .gexf format. Directed edges are software dependencies. Package metadata includes information about release dates and number of downloads.
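
    A minimal sketch for loading and inspecting the graph with the networkx library is below. The filename and the edge-direction convention (package pointing to its dependencies) are assumptions, since the description does not specify them.

        # Load the dependency graph; the filename here is a placeholder.
        import networkx as nx

        G = nx.read_gexf("pypi_packages.gexf")
        print(G.number_of_nodes(), "packages,", G.number_of_edges(), "dependency edges")

        # Packages that the most other packages depend on (highest in-degree),
        # assuming edges point from a package to its dependencies.
        top = sorted(G.nodes, key=lambda n: G.in_degree(n), reverse=True)[:10]
        for name in top:
            print(name, G.in_degree(name))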