Maintaining Legitimacy: Controversies, Orders of Worth and Public Justifications
We build on Boltanski and Thévenot's theory of justification to account for the ways in which different stakeholder groups actively engage with discourses and objects to maintain the legitimacy of institutions that are relevant to their activity. We use this framework to analyse a controversy emerging from a nuclear accident which involved a large European energy company and sparked public debate on the legitimacy of nuclear power. Based on the findings, we elaborate a process model of institutional repair that explains the role of agents and the structural constraints they face in attempting to maintain legitimacy. The model enhances institutional understandings of legitimacy maintenance in three main respects: it proposes a view of legitimacy maintenance as a controversy-based process progressing through stakeholders' justifications vis-à-vis a public audience; it demonstrates the role of meta-level 'orders of worth' as multiple modalities for agreement which shape stakeholders' public justifications during controversies; and it highlights the capacities that stakeholders deploy in developing robust justifications out of a plurality of forms of agreement. © 2011 The Authors. Journal of Management Studies © 2011 Blackwell Publishing Ltd and Society for the Advancement of Management Studies
Why do users trust algorithms? A review and conceptualization of initial trust and trust over time
Algorithms are increasingly playing a pivotal role in organizations' day-to-day operations; however, a general distrust of artificial intelligence-based algorithms and automated processes persists. This aversion to algorithms raises questions about the drivers that lead managers to trust or reject their use. This conceptual paper aims to provide an integrated review of how users experience the encounter with AI-based algorithms over time. This is important for two reasons: first, their functional activities change over the course of time through machine learning; and second, users' trust develops with their level of knowledge of a particular algorithm. Based on our review, we propose an integrative framework to explain how users' perceptions of trust change over time. This framework extends current understandings of trust in AI-based algorithms in two areas: First, it distinguishes between the formation of initial trust and trust over time in AI-based algorithms, and specifies the determinants of trust in each phase. Second, it links the transition between initial trust in AI-based algorithms and trust over time to representations of the technology as either human-like or system-like. Finally, it considers the additional determinants that intervene during this transition phase.
Technology Emergence as a Structuring Process: A Complexity Theory Perspective on Blockchain
Drawing on complexity theory, we investigate the structuring processes and underlying mechanisms underpinning the emergence of a new technology. Empirically, we track the emergence of blockchain technology by examining international patents issued between 2009 and 2020. Our results indicate that technology emergence follows an evolutionary trajectory that progresses from disordered to structured interactions among the technological elements, culminating in the formation of a technological core that acts as a pole of attraction for further interactions and delineates boundaries within the technological domain. Technology structuring is fueled by what we term 'technology fitness' and 'self-reinforcing' mechanisms that progressively transform primitive structures into more complex, self-organized configurations. Our study offers a novel framework of technology emergence, highlighting how dispersed bits of technological knowledge gradually aggregate into complex structures that define the specific trajectory of a particular domain.
The AI of the beholder: intra-professional sensemaking of an epistemic technology
New technologies are equivocal, triggering sensemaking responses from the individuals who encounter them. As an 'epistemic technology', AI poses new challenges to the expertise and jurisdictions of professionals. Such challenges may be interpreted quite differently, however, depending on the specialized role identities which develop within the wider professional domain. We explore the sensemaking responses of these intra-professional groupings to the challenges posed by AI through an empirical study of professionals playing different roles (front-line, hybrid and field-level) in the field of radiology within NHS England. We found that these intra-professional groupings sought to make sense of AI through a triadic view focused on the interplay of professional, client and technology. This sensemaking, arising from different jurisdictional contexts, led individual professionals to perceive that their agency was diminished, complemented or enhanced as a result of the introduction of AI. Our findings contribute to the literature on professions and AI by showing how intra-professional differences affect sensemaking responses to AI as a jurisdictional contestant.
On the way to Ithaka [1]: Commemorating the 50th Anniversary of the publication of Karl E. Weick's The Social Psychology of Organizing
Karl E. Weick's The Social Psychology of Organizing has been one of the most influential books in organization studies, providing the theoretical underpinnings of several research programs. Importantly, the book is widely credited with initiating the process turn in the field, leading to the 'gerundizing' of management and organization studies: the persistent effort to understand organizational phenomena as ongoing accomplishments. The emphasis of the book on organizing (rather than on organizations) and its links with sensemaking have made it the most influential treatise on organizational epistemology. In this introduction, we review Weick's magnum opus, underline and assess its key themes, and suggest ways in which several of them may be taken forward.
- …