
    The institutional logics underpinning organizational AI governance practices

    Recent developments in artificial intelligence (AI) promise significant benefits but also pose novel risks and harms to individuals, organizations, and societies. The rising role of AI necessitates effective AI governance. However, translating AI ethics principles into governance practices remains challenging. Our paper recasts the “AI ethics translation problem” from a unidirectional translation process into a bidirectional interaction between multiple institutional logics and organizational AI governance practices. We conduct a theory adaptation study using the AI governance translation problem as a domain theory and institutional logics and institutional pluralism as method theories. Using this framework, we synthesize key AI governance practices from the literature and outline four central institutional logics: AI ethics principlism, managerial rationalism, IT professionalism, and regulatory oversight. The institutional logics and AI governance practices reciprocally influence one another: logics justify practices, and practices enact logics. We provide an illustrative analysis of the ChatGPT chatbot to demonstrate the framework. For future research, our conceptual study lays out a framework for studying how plural institutional logics drive AI governance practices and how practices can be used to negotiate conflicting and complementary institutional logics.

    A breathless race for breathing space: Critical-analytical futures studies and the contested co-evolution of privacy imaginaries and institutions

    Seeds for countless alternative futures already exist in anticipatory imaginaries and projects, and in possibilities for action. The novel approach of critical-analytical futures studies enables the systematic study of anticipatory future-making processes and possibilities for agency. Critical-analytical futures studies develops the tradition of critical futures studies by incorporating an understanding of historical processes, causal mechanisms, and negotiation among actors with future-oriented projects. Privacy in the digital age seems to be simultaneously a grand challenge and a relatively minor issue. Currently, actors are breathlessly racing to ensure and define breathing space; in other words, they debate the meanings of privacy in a context where datafication seriously undermines privacy. This dissertation investigates the anticipatory co-evolution of imaginaries and institutions in making futures of privacy in Europe. Privacy protection is defined as a social institution at the intersection of three types of anticipatory practices: anticipatory institutional change, surveillance practices, and anticipation in everyday life. By regulating surveillance, privacy rules maintain a societal future orientation that leaves space for creativity, imagination, and human agency. The analytical framework is operationalised through four stages for qualitatively studying anticipatory institutional change: 1) historical context, 2) investigation of actor storylines, 3) analysis of deeper imaginaries, and 4) identification of latent future possibilities. This approach, developed in this dissertation, is termed CASIL (context, actor storylines, imaginaries, and latents). The five original studies develop different aspects of the four methodological stages. The overall temporal landscape features two competing imaginaries: continued growth and tragic loss. Decision-makers in the European Union are navigating between these imaginaries and trying to maintain a positive role for Europe. The discussion section identifies numerous latent possibilities for promoting a systemic understanding of privacy as ‘breathing space for futures’. However, there is a strategic trade-off for privacy advocates between increasing the regulation of surveillance practices and taming the roots of surveillance.

    Responsible Artificial Intelligence Systems: Critical considerations for business model design

    Commercializing responsible artificial intelligence (RAI) involves translating ethical principles for developing, deploying, and using AI into business models. However, prior studies have reported tensions between commercial interests (e.g., development speed or accuracy) and societal interests (e.g., privacy or human rights) that can undermine RAI’s value proposition. Conceptually, we distinguish two business model development perspectives on AI and responsibility: innovating responsible business models that leverage AI and designing RAI business models. Taking the second perspective, we investigate the value proposition of RAI through business model design, employing a two-stage research approach consisting of focus groups and member checking. Empirically, we present the lessons learned from identifying the design elements for RAI business models. These include two themes that can underlie such business models, providing vs. enabling RAI systems, and the observation that the tensions in RAI’s value proposition are paradoxes, not dilemmas. With our conceptual groundwork and empirical insights, we make three contributions that offer critical considerations for RAI business model design. First, we conceptualize two pathways for designing RAI business models: a corner path vs. a direct path to commercialized RAI systems. We argue that these paths have distinct implications for the “responsible” in RAI. Second, we reflect on the sociotechnical nature of RAI systems by emphasizing the criticality of the social for responsibility. Third, we outline a research agenda for developing RAI business models.

    Machine Learning System Development in Information Systems Development Praxis

    Advancements in hardware and software have propelled machine learning (ML) solutions to become vital components of numerous information systems. This calls for research on the integration and evaluation of ML development practices within software companies. To investigate these issues, we conducted expert interviews with software and ML professionals. We structured the interviews around information systems development (ISD) models, which serve as conceptual frameworks that guide stakeholders throughout software projects. Using practice theory, we analyzed how software professionals perceive ML development within the context of ISD models and identified themes that characterize the transformative impact of ML development on these conceptual models. Our findings show that developer-driven conceptual models, such as DevOps and MLOps, have been embraced as common frameworks for developers and management to understand and guide ML development processes. We observed ongoing shifts in predefined developer roles, wherein developers are increasingly adopting ML techniques and tools in their professional work. Overall, our findings underscore that ML technologies are becoming increasingly prominent in software projects across industries and that the incorporation of ML development into ISD models is an ongoing, largely practice-driven process.
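
    To make the conceptual models concrete, the following is a minimal sketch of the kind of train-evaluate-gate loop that MLOps frameworks formalize. It is an illustration only, not taken from the study: the stage functions, the synthetic data, and the 0.8 accuracy threshold are all hypothetical.

```python
# Minimal MLOps-style loop: train a candidate model, evaluate it on
# held-out data, and gate promotion on a quality threshold.
# All names and the 0.8 threshold are hypothetical illustrations.
from sklearn.datasets import make_classification
from sklearn.linear_model import LogisticRegression
from sklearn.metrics import accuracy_score
from sklearn.model_selection import train_test_split

def train_model(X_train, y_train):
    """Training stage: fit a candidate model."""
    return LogisticRegression(max_iter=1000).fit(X_train, y_train)

def evaluate_model(model, X_test, y_test):
    """Evaluation stage: score the candidate on held-out data."""
    return accuracy_score(y_test, model.predict(X_test))

def maybe_register(model, score, threshold=0.8):
    """Deployment gate: only promote models that clear the quality bar."""
    if score >= threshold:
        print(f"Registering model (accuracy={score:.3f})")
        return model
    print(f"Rejected model (accuracy={score:.3f} < {threshold})")
    return None

# Synthetic stand-in for a real training dataset.
X, y = make_classification(n_samples=500, n_features=10, random_state=0)
X_train, X_test, y_train, y_test = train_test_split(X, y, random_state=0)
candidate = train_model(X_train, y_train)
maybe_register(candidate, evaluate_model(candidate, X_test, y_test))
```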

    The anatomy of plausible futures in policy processes: Comparing the cases of data protection and comprehensive security

    Due to rapid change and wicked policy problems, anticipatory policymaking is increasingly important. In addition to methods for producing foresight knowledge, tools are needed to make sense of the increasing amounts of future-oriented argumentation. This article presents a comparative analysis of anticipatory argumentation in two fields: the EU data protection reform and the Finnish concept for comprehensive security. A three-layer heuristic framework is presented for the qualitative analysis of statements on plausible futures. The first layer consists of specific expectations regarding the future. The second layer is the generic anticipatory storyline. The third layer consists of the underlying futures consciousness. The data protection case presents an institutional reform narrative with a short time perspective and relatively high agency, while the comprehensive security case presents a crisis narrative based on a contingency planning orientation, with a long time perspective, relatively developed systems perception, and relatively low agency. In policy foresight with high uncertainty and high aspirations of agency, reflexivity and ethical responsibility are crucial. This article promotes these by providing a tool for structuring anticipatory assumptions. The tool can be used for studying policy documents or during the policy process to craft more rigorous future-oriented policies.
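
    As a concrete illustration of the three-layer framework, the sketch below codes a single future-oriented statement across the three layers. The layer names follow the abstract; the example statement is hypothetical, and its codes are drawn loosely from the data protection case description.

```python
# A minimal data-structure sketch of the three-layer framework for
# coding future-oriented policy statements. Field names follow the
# layers named in the abstract; the example statement is hypothetical.
from dataclasses import dataclass

@dataclass
class AnticipatoryStatement:
    text: str                   # the raw future-oriented claim
    expectation: str            # layer 1: specific expectation about the future
    storyline: str              # layer 2: generic anticipatory storyline
    futures_consciousness: str  # layer 3: underlying futures consciousness

example = AnticipatoryStatement(
    text="Without reform, data breaches will erode citizens' trust by 2030.",
    expectation="rising breach frequency, declining trust",
    storyline="institutional reform narrative",
    futures_consciousness="short time perspective, relatively high agency",
)
print(example.storyline)
```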

    Futures of privacy protection – A framework for creating scenarios of institutional change

    The future of privacy is a topical issue in the context of debates on mass surveillance and the increasing prevalence of social media sites in everyday life. Previous scenario studies on privacy have focused on macro trends and on forecasting technological developments, and claims about causal influences have remained implicit. This article presents an alternative approach for constructing scenarios of privacy protection. The article focuses on privacy protection as a social institution and builds on the theory of gradual institutional change. The article presents a scenario framework with three stages: (1) outlining the dynamics of privacy protection, (2) tracing historical processes and constructing a causal narrative, and (3) creating event-based scenarios. The resulting scenarios are narratives of plausible chains of events based on the results of the previous stages. The key difference from typical scenario approaches is the focus on specific actors and types of event sequences in privacy protection. The argument is that by lowering the level of abstraction in this way, researchers and decision-makers can gain a more profound understanding of possible future challenges in privacy protection and of key leverage points in the institutional change process.

    Roadmap to competitive and socially responsible artificial intelligence

    The roadmap to competitive and socially responsible artificial intelligence (AI) offers an overview of AI governance drivers and tasks. It is intended for organizations using or planning to use information systems that include AI functionalities, such as machine learning, natural language processing, and computer vision. Responsible AI is still an emerging topic, but legal and stakeholder requirements for AI systems to comply with societally agreed standards are growing. In particular, the European Union’s proposed Artificial Intelligence Act is set to introduce new rules for AI systems used in high-risk application domains. However, beyond binding legislation, soft governance, such as guidelines and ethics principles, already seeks to differentiate between socially responsible and irresponsible AI development and use practices. The roadmap report begins by laying out its target group, instructions, and structure and then moves on to definitions. Next, we introduce the institutionalization of AI as a necessary background to the consideration of AI governance. The main roadmap section includes a visual representation and explanation of the six key drivers of competitive and socially responsible AI: 1) movement from AI ethics principles to AI governance, 2) responsible AI commercialization potential and challenges, 3) AI standardization, 4) automation of AI governance, 5) responsible AI business ecosystems, and 6) stakeholder pressure for responsible AI. The roadmap is followed by a future research agenda highlighting five emerging research areas: 1) operational governance mechanisms for complex AI systems, 2) connections to corporate sustainability, 3) automation of AI governance, 4) future of responsible AI ecosystems, and 5) sociotechnical activities to implement responsible AI. Researchers and research funding bodies play a key role in advancing competitive and socially responsible AI by deepening these knowledge areas. Advancing socially responsible AI is important because the benefits of AI technologies can be reaped only if organizations and individuals can trust the technologies to operate fairly, transparently, and according to socially defined rules. This roadmap was developed by the Artificial Intelligence Governance and Auditing (AIGA) co-innovation project, funded by Business Finland during the years 2020 to 2022, and was co-created by researchers, company practitioners, and other AIGA project stakeholders.

    Aware of the future? Adaptation and refinement of the Futures Consciousness Scale

    Introduction: Futures consciousness (FC) refers to the capacity that a person has for understanding, anticipating, and preparing for the future. A psychometric instrument, the FC scale, was recently developed to measure FC as an interindividual difference. However, this initial scale suffered from some shortcomings due to a few underperforming items. Objectives: In this paper, we present and validate the revised FC scale, which aims to address these shortcomings. Methods and Results: Data from a representative sample of N = 1,684 British participants demonstrated good psychometric properties of the revised scale (better than those of the original) as well as good predictive validity. Specifically, individuals’ scores were positively related to self-reported future-oriented behavior, such as engagement in civic collective action and general engagement in politics. The five-dimensional structure of the scale was also replicated. Conclusion: The revised FC scale proves to be a reliable tool that can be used by both researchers and practitioners.
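
    As an illustration of the kind of reliability evidence that usually sits behind claims of “good psychometric properties,” the sketch below computes Cronbach’s alpha for one simulated subscale. The simulated responses and the single generic dimension are hypothetical; only the sample size of 1,684 comes from the abstract.

```python
# A minimal sketch of one common reliability check for a psychometric
# subscale: Cronbach's alpha. The simulated Likert-style responses are
# hypothetical, not actual FC scale data.
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """items: respondents x items matrix of item scores."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1).sum()   # sum of per-item variances
    total_var = items.sum(axis=1).var(ddof=1)     # variance of total scores
    return k / (k - 1) * (1 - item_vars / total_var)

rng = np.random.default_rng(0)
# Simulate 1,684 respondents answering 4 correlated items on one dimension:
# a shared latent trait plus item-level noise.
latent = rng.normal(size=(1684, 1))
responses = latent + rng.normal(scale=0.7, size=(1684, 4))
print(f"alpha = {cronbach_alpha(responses):.2f}")
```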

    Designing an AI governance framework: From research-based premises to meta-requirements

    The development and increasing use of artificial intelligence (AI), particularly in high-risk application areas, call for attention to the governance of AI systems. Organizations and researchers have proposed AI ethics principles, but translating the principles into practice-oriented frameworks has proven difficult. This paper develops meta-requirements for organizational AI governance frameworks to help translate ethical AI principles into practice and align operations with the forthcoming European AI Act. We adopt a design science research approach: we first put forward research-based premises and then report the design method employed in an industry-academia research project. Based on these, we present seven meta-requirements for AI governance frameworks. The paper contributes to the IS research on AI governance by collating knowledge into meta-requirements and by advancing a design approach to AI governance. The study underscores that governance frameworks need to incorporate the characteristics of AI, its contexts, and the different sources of requirements.

    What about investors? ESG analyses as tools for ethics-based AI auditing

    Artificial intelligence (AI) governance and auditing promise to bridge the gap between AI ethics principles and the responsible use of AI systems, but they require assessment mechanisms and metrics. Effective AI governance is not only about legal compliance; organizations can strive to go beyond legal requirements by proactively considering the risks inherent in their AI systems. In the past decade, investors have become increasingly active in advancing corporate social responsibility and sustainability practices. Including nonfinancial information related to environmental, social, and governance (ESG) issues in investment analyses has become mainstream practice among investors. However, the AI auditing literature is mostly silent on the role of investors. The current study addresses two research questions: (1) how companies’ responsible use of AI is included in ESG investment analyses, and (2) what connections can be found between principles of responsible AI and ESG ranking criteria. We conducted a series of expert interviews and analyzed the data using thematic analysis. Awareness of AI issues, measuring AI impacts, and governing AI processes emerged as the three main themes in the analysis. The findings indicate that AI is still a relatively unknown topic for investors and that taking the responsible use of AI into account in ESG analyses is not an established practice. However, AI is recognized as a potentially material issue for various industries and companies, indicating that its incorporation into ESG evaluations may be justified. There is a need for standardized metrics for AI responsibility, and critical bottlenecks and asymmetrical knowledge relations must be tackled.
    • 

    corecore