
    AI for the Common Good?! Pitfalls, challenges, and Ethics Pen-Testing

    Recently, many AI researchers and practitioners have embarked on research visions that involve doing AI for "Good". This is part of a general drive towards infusing AI research and practice with ethical thinking. One frequent theme in current ethical guidelines is the requirement that AI be good for all, or: contribute to the Common Good. But what is the Common Good, and is it enough to want to be good? Via four lead questions, I will illustrate challenges and pitfalls when determining, from an AI point of view, what the Common Good is and how it can be enhanced by AI. The questions are: What is the problem / What is a problem?, Who defines the problem?, What is the role of knowledge?, and What are important side effects and dynamics? The illustration will use an example from the domain of "AI for Social Good", more specifically "Data Science for Social Good". Even though the importance of these questions may be known at an abstract level, they do not get asked sufficiently in practice, as shown by an exploratory study of 99 contributions to recent conferences in the field. Turning these challenges and pitfalls into a positive recommendation, I conclude by drawing on another characteristic of computer-science thinking and practice to make these impediments visible and attenuate them: "attacks" as a method for improving design. This results in the proposal of ethics pen-testing as a method for helping AI designs to better contribute to the Common Good.
    Comment: to appear in Paladyn, Journal of Behavioral Robotics; accepted on 27-10-201

    Social simulation for socio-ecological systems: An agent architecture for simulations of policy effects

    Socio-ecological systems (SES) are complex systems in which human society is deeply intertwined with the natural world. Many of our most difficult contemporary problems arise in SES: overfishing, deforestation, damaging tourism, habitat destruction caused by urban and industrial developments, and, of course, climate change. The complexity of SES means that evidence-based policy isn't always the best approach, because the issues that arise in SES don't typically have a single universally recognized solution. Computational models are one of the most useful tools for policy makers, with agent-based modeling (ABM) standing out as particularly well suited to studying policy effects in SES. However, ABM still struggles with a dearth of realistic social and decision models, which is particularly troublesome if ABM is to be included in the policy design process for governing systems in which the human component is crucial to the functioning and behavior of the system. In such models, agents need a decision process that can operate with social norms and values, at least. Taken together, values and social norms form a particularly stable and consistent framework for the decision-making processes of an agent, encompassing both motivations and preferred means of pursuing those motivations. Policy, as another kind of norm, fits into this framework as either supporting/reinforcing (when it promotes the values of an agent or works together with the agent's social norms) or antagonistic/conflicting (when it goes against the values of an agent or conflicts with the agent's social norms). In this work, we present our agent architecture, built to account for human decision making in contexts where norms meet policy, while remaining lightweight enough to be usable in ABM. It provides explicit representations of the cognitive elements involved and realistically replicates the normative deliberation process, while remaining scalable. We also present a modular implementation of the architecture, and a visual model builder that leverages the modularity of the implementation to allow for fast agent and simulation setup. Finally, we demonstrate the use of the architecture by simulating a number of scenarios derived from a real-world instance of fishing policy and its effects. The scenarios cover different assumptions about the reasoning and motivations of the agents (profit-seeking goal-oriented, normative goal-oriented, value-driven self-interested, value-driven community-oriented) and their response to the same policy being introduced during times of abundance or scarcity of resources.
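    As a rough illustration of the decision framework described in this abstract (values plus social norms, with policy entering deliberation as one more norm that either reinforces or conflicts), the following Python sketch shows one way such an agent could be modeled. It is not the authors' architecture or implementation; the class names, weights, and the fishing scenario are hypothetical, chosen only to make the reinforcing/conflicting distinction concrete.

    # Hypothetical sketch, not the authors' architecture: a minimal normative agent
    # whose deliberation weighs value alignment against norm compliance, and treats
    # a policy as just another norm that can reinforce or conflict with its values.
    from dataclasses import dataclass, field


    @dataclass(frozen=True)
    class Norm:
        """A social norm (or a policy) that disapproves of certain actions."""
        name: str
        forbidden_actions: frozenset


    @dataclass
    class NormativeAgent:
        values: dict                 # importance weights, e.g. {"wealth": 0.7, "community": 0.3}
        norms: list = field(default_factory=list)
        norm_weight: float = 0.5     # how strongly a norm violation penalizes an option

        def adopt_policy(self, policy: Norm) -> None:
            # A policy enters deliberation exactly like any other norm.
            self.norms.append(policy)

        def score(self, action: str, value_impact: dict) -> float:
            # Value alignment of the action, minus a penalty per violated norm.
            alignment = sum(self.values.get(v, 0.0) * impact
                            for v, impact in value_impact.items())
            violations = sum(1 for n in self.norms if action in n.forbidden_actions)
            return alignment - self.norm_weight * violations

        def decide(self, options: dict) -> str:
            # options maps an action name to its expected impact on each value.
            return max(options, key=lambda a: self.score(a, options[a]))


    if __name__ == "__main__":
        agent = NormativeAgent(values={"wealth": 0.7, "community": 0.3})
        agent.adopt_policy(Norm("quota", frozenset({"fish_hard"})))   # new fishing policy
        options = {"fish_hard": {"wealth": 1.0, "community": -0.4},
                   "fish_moderately": {"wealth": 0.4, "community": 0.4}}
        print(agent.decide(options))   # policy conflicts with profit seeking -> "fish_moderately"
        agent.norm_weight = 0.1        # illustrative only: norm salience drops, e.g. under scarcity
        print(agent.decide(options))   # the value-driven choice now wins -> "fish_hard"

    In the abundance/scarcity scenarios mentioned in the abstract, some mechanism playing the role of the norm weight above is what would let the same policy produce different behavior across agent types and resource conditions; the actual architecture may handle this quite differently.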