
    Society-in-the-Loop: Programming the Algorithmic Social Contract

    Recent rapid advances in Artificial Intelligence (AI) and Machine Learning have raised many questions about the regulatory and governance mechanisms for autonomous machines. Many commentators, scholars, and policy-makers now call for ensuring that algorithms governing our lives are transparent, fair, and accountable. Here, I propose a conceptual framework for the regulation of AI and algorithmic systems. I argue that we need tools to program, debug and maintain an algorithmic social contract, a pact between various human stakeholders, mediated by machines. To achieve this, we can adapt the concept of human-in-the-loop (HITL) from the fields of modeling and simulation, and interactive machine learning. In particular, I propose an agenda I call society-in-the-loop (SITL), which combines the HITL control paradigm with mechanisms for negotiating the values of various stakeholders affected by AI systems, and monitoring compliance with the agreement. In short, 'SITL = HITL + Social Contract'. Comment: (in press), Ethics of Information Technology, 201

    What Do Citizens Think of AI Adoption in Public Services? Exploratory Research on Citizen Attitudes through a Social Contract Lens

    The adoption of Artificial Intelligence (AI) by the public sector has the potential to improve service delivery. However, the risks related to AI are significant, and citizen concerns have halted several AI initiatives. In this paper we report findings from an empirical study on citizens' attitudes towards AI use in public services in Norway. We found a generally positive attitude and identified three factors contributing to this: a) the high level of trust in government; b) the reassurance provided by having humans in the loop; c) the perceived transparency into processes, the data used for AI models, and the models' inner workings. We interpret these findings through the lens of social contract theory and show how the introduction of AI in public services is subject to the social contract power dynamics. Our study contributes to research by foregrounding the government-citizen relationship and has implications for public sector AI practice.

    FROM COMMERCIAL AGREEMENTS TO THE SOCIAL CONTRACT: HUMAN-CENTERED AI GUIDELINES FOR PUBLIC SERVICES

    Human-centered Artificial Intelligence (HCAI) is a term frequently used in the discourse on how to guide the development and deployment of AI in responsible and trustworthy ways. Major technology actors including Microsoft, Apple and Google are fostering their own AI ecosystems, also providing HCAI guidelines, which operationalize theoretical concepts to inform the practice of AI development. Yet, their commonality seems to be an orientation to commercial contexts. This paper focuses on AI for public services and on the special relationship between governmental organizations and the public. Approaching human-AI interaction through the lens of social contract theory, we identify amendments to improve the suitability of an existing HCAI framework for the public sector. Following the Action Design Research methodological approach, we worked with a public organization to apply, assess, and adapt the “Google PAIR guidelines”, a well-known framework for human-centered AI development. The guidelines informed the design of an interactive prototype for AI in public services, and through this process we revealed gaps and potential enhancements. Specifically, we found that it is important to a) articulate a clear value proposition by weighing the public good against the individual benefit, b) define boundaries for repurposing public data given the relationship between citizens and their government, and c) accommodate user group diversity by considering the different levels of technical and administrative literacy of citizens. We aim to shift the perspective within human-AI interaction, acknowledging that exchanges are not always subject to commercial agreements but can also be based on the mechanisms of a social contract.

    Ethical AI at work: the social contract for Artificial Intelligence and its implications for the workplace psychological contract

    Artificially intelligent (AI) technologies are increasingly being used in many workplaces. It is recognised that there are ethical dimensions to the ways in which organisations implement AI alongside, or substituting for, their human workforces. How will these technologically driven disruptions impact the employee–employer exchange? We provide one way to explore this question by drawing on scholarship linking Integrative Social Contracts Theory (ISCT) to the psychological contract (PC). Using ISCT, we show that the macrosocial contract’s ethical AI norms of beneficence, non-maleficence, autonomy, justice, and explicability can feed into national- and organisational-level microsocial contracts. We also outline the role of employees’ technology frames in this process. We then use an illustrative example to demonstrate how this multilevel normative background helps to inform the content of individuals’ PCs in the context of working with AI technologies.

    Recorded Work Meetings and Algorithmic Tools: Anticipated Boundary Turbulence

    Meeting recordings and algorithmic tools that process and evaluate recorded meeting data may provide many new opportunities for employees, teams, and organizations. Yet, the use of this data raises important consent, data use, and privacy issues. The purpose of this research is to identify key tensions that should be addressed in organizational policymaking about data use from recorded work meetings. Based on interviews with 50 professionals in the United States, China, and Germany, we identify the following five key tensions (anticipated boundary turbulence) that should be addressed in a social contract approach to organizational policymaking for data use of recorded work meetings: disruption versus help in relationships, privacy versus transparency, employee control versus management control, learning versus evaluation, and trust in AI versus trust in people.

    Attracting Commercial Artificial Intelligence Firms to Support National Security through Collaborative Contracts

    The United States Department of Defense (‘DoD’) has determined it is not ready to compete in the Artificial Intelligence (‘AI’) era without significant changes to how it acquires AI. Unlike other military technologies driven by national security needs and developed with federal funding, this ubiquitous technology enabler is predominantly funded and advanced by commercial industry for civilian applications. However, there is a lack of understanding of the reasons commercial AI firms decide to work with the DoD or choose to abstain from the defence market. Although there are several challenges to attracting commercial AI firms to support national security, this thesis argues that the DoD’s contract law and procurement framework are among the most significant obstacles. This research indicates that the commercial AI industry actually views the DoD as an attractive customer. However, this attraction persists despite the obstacles presented by traditional contract law and procurement practices used to solicit and award contracts. Drawing on social exchange theory, this thesis introduces a theoretical framework – ‘optimal buyer theory’ – to understand the factors that influence a commercial AI firm’s decision to engage with the DoD. It develops evidence-based best practices in contract law that reveal how the DoD can become a more attractive customer to commercial AI firms. This research builds upon research at the nexus of national security and defence contracts as it studies business decision-makers from AI firms through an explanatory sequential mixed methods design. In the study’s first phase, participants are surveyed to discover the perceptions, opinions, and preferences at AI firms of all sizes, maturity, locations, and experience within the DoD marketplace. In the second phase of the study, interviews with a sample of the participants explain why the AI industry holds such perceptions, opinions, and preferences about contracts generally and the DoD specifically, in its role as a customer. This thesis concludes that commercial AI firms are attracted to contracts that are consistent with their business and technology considerations. These considerations align with contractual relationships that are collaborative, flexible, negotiated, iterative, and awarded promptly, as opposed to those with fixed requirements and driven by regulations foreign to the commercial market. Additionally, it develops best practices for leveraging existing contract law, primarily other transaction authority, to align the DoD’s contracting practices with commercial preferences and the machine learning development and deployment lifecycle. Armed with this understanding, the DoD can better attract commercial AI firms to support its national security objectives. Thesis (Ph.D.) -- University of Adelaide, Law School, 202

    Post transfer of undertakings psychological contract violation: modelling antecedents and outcomes

    MSc Human Resource Management. The purpose of this study was to test a model of antecedents and outcomes of psychological contract violation based on social exchange theory within the context of an acquisition. A cross-sectional quantitative survey research design was used. A total of 200 office and operational employees who had recently gone through a TUPE transfer process as the result of an acquisition participated in the study. Participants were asked to complete a questionnaire to measure their perceptions of procedural justice and perceived organisational support experienced at the point of TUPE and the resulting psychological contract violation and employee engagement post-TUPE. Multiple regression analysis through SPSS 19.0 was used as the method of analysis. Results indicate that procedural justice and perceived organisational support predict psychological contract violation. Results also indicate that psychological contract violation in turn predicts employee engagement. In addition, psychological contract violation mediates the relationship between procedural justice, perceived organisational support, and employee engagement. Therefore, support has been found for the claim that the psychological contract can be used to explain the relationship between employee perceptions of fairness and support during a TUPE and their post-TUPE reaction of engagement. The study used cross-sectional and self-reported data, which limits the conclusions that can be confirmed about causality and also raises concerns about common method bias. Furthermore, it is acknowledged that various extraneous or confounding variables may have an influence on the variables. The study offers insights into employees' responses within the context of TUPE transfers as explored through the psychological contract within the social exchange theory framework.