1,473 research outputs found

    Beyond opening up the black box: Investigating the role of algorithmic systems in Wikipedian organizational culture

    Full text link
    Scholars and practitioners across domains are increasingly concerned with algorithmic transparency and opacity, interrogating the values and assumptions embedded in automated, black-boxed systems, particularly in user-generated content platforms. I report from an ethnography of infrastructure in Wikipedia to discuss an understudied aspect of this topic: the local, contextual, learned expertise involved in participating in a highly automated sociotechnical environment. Today, the organizational culture of Wikipedia is deeply intertwined with various data-driven algorithmic systems, which Wikipedians rely on to help manage and govern the "anyone can edit" encyclopedia at a massive scale. These bots, scripts, tools, plugins, and dashboards make Wikipedia more efficient for those who know how to work with them, but like all organizational culture, newcomers must learn them if they want to participate fully. I illustrate how cultural and organizational expertise is enacted around algorithmic agents through two autoethnographic vignettes, which relate my personal experience as a veteran in Wikipedia. I present thick descriptions of how governance and gatekeeping practices are articulated through and in alignment with these automated infrastructures. Over the past 15 years, Wikipedian veterans and administrators have made specific decisions to support administrative and editorial workflows with automation in particular ways and not others. I use these cases of Wikipedia's bot-supported bureaucracy to discuss several issues in the fields of critical algorithm studies, critical data studies, and fairness, accountability, and transparency in machine learning -- most principally arguing that scholarship and practice must go beyond trying to "open up the black box" of such systems and also examine sociocultural processes like newcomer socialization.

    Bots in Wikipedia: Unfolding their duties

    Get PDF
    The success of crowdsourcing systems such as Wikipedia relies on people participating in them. In this research, however, we reveal the extent to which human and machine intelligence are combined to carry out semi-automatic workflows for complex tasks. In Wikipedia, bots are used to realize this combination of human and machine intelligence. We provide an extensive overview of the edit types bots carry out, based on an analysis of 1,639 approved task requests. We classify existing tasks using an action-object-pair structure and reveal differences in their probability of occurrence depending on the work context investigated. In the context of community services, bots mainly create reports, whereas in the area of guidelines and policies bots are mostly responsible for adding templates to pages. Moreover, the analysis of existing bot tasks yields insights into why Wikipedia's editor community uses bots and how it organizes machine tasks to provide a sustainable service. We conclude by discussing how these insights can lay the foundation for further research.
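    A minimal sketch of how the action-object-pair classification described above might be operationalized; the sample tasks, pair labels, and frequency estimate are illustrative assumptions, not the authors' actual coding scheme or data:

        from collections import Counter

        # Hypothetical action-object pairs coded from approved bot task requests
        # (the study itself coded 1,639 real requests from Wikipedia).
        tasks = [
            ("create", "report"),    # community-service bots compiling reports
            ("add", "template"),     # policy-area bots tagging pages with templates
            ("add", "template"),
            ("fix", "link"),
            ("create", "report"),
        ]

        # Classify tasks by their action-object pair and estimate each pair's
        # probability of occurrence within the sample.
        pair_counts = Counter(tasks)
        total = sum(pair_counts.values())
        for (action, obj), count in pair_counts.most_common():
            print(f"{action}-{obj}: {count}/{total} = {count / total:.2f}")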

    Not all the bots are created equal: the Ordering Turing Test for the labelling of bots in MMORPGs

    Get PDF
    This article contributes to research on bots in social media. It takes as its starting point an emerging perspective which proposes that we should abandon the investigation of the Turing Test and the functional aspects of bots in favor of studying the authentic and cooperative relationship between humans and bots. Contrary to this view, this article argues that Turing Tests are one of the ways in which authentic relationships between humans and bots take place. To understand this, the article introduces the concept of Ordering Turing Tests: a kind of Turing Test proposed by social actors to achieve social order when bots produce deviant behavior. An Ordering Turing Test is a method for labeling deviance, whereby social actors can use the test to tell apart rule-abiding humans from rule-breaking bots. Using examples from Massively Multiplayer Online Role-Playing Games, this article illustrates how Ordering Turing Tests are proposed and justified by players and service providers. Data for the research comes from the scientific literature on machine learning methods proposed for identifying bots, and from game forums and other player-produced paratexts from the case study of the game Runescape.

    What makes Individual I's a Collective We; Coordination mechanisms & costs

    Full text link
    For a collective to become greater than the sum of its parts, individuals' efforts and activities must be coordinated or regulated. Because it is not readily observable or measurable, this aspect often goes unnoticed and understudied in complex systems. Diving into the Wikipedia ecosystem, where people are free to join and voluntarily edit individual pages with no firm rules, we identified and quantified three fundamental coordination mechanisms and found that they scale with the influx of contributors in a remarkably systematic way over three orders of magnitude. First, we found super-linear growth in mutual adjustment (scaling exponent: 1.3), manifested through extensive discussions and activity reversals. Second, the increase in direct supervision (scaling exponent: 0.9), as represented by administrators' activities, is disproportionately limited. Finally, the rate of rule enforcement, reflected by automated bots, exhibits the slowest escalation (scaling exponent: 0.7). The observed scaling exponents are notably robust across topical categories, with minor variations attributable to topic complexity. Our findings suggest that as more people contribute to a project, a self-regulating ecosystem incurs faster growth in mutual adjustment than in direct supervision and rule enforcement. These findings have practical implications for online collaborative communities aiming to enhance their coordination efficiency, and for how we understand human organizations in general.
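    As a reading aid, the three relations reported above can be written as power laws in the number of contributors N; a minimal sketch in LaTeX, where the labels C_mutual, C_supervision, and C_enforcement are illustrative names rather than the authors' notation:

        \[
        C_{\mathrm{mutual}}(N) \propto N^{1.3}, \qquad
        C_{\mathrm{supervision}}(N) \propto N^{0.9}, \qquad
        C_{\mathrm{enforcement}}(N) \propto N^{0.7}
        \]

    Because 1.3 > 1, mutual adjustment outpaces the growth of the contributor pool itself, while the sub-linear exponents mean supervision and enforcement lag behind it: a tenfold influx of contributors implies roughly a 10^{1.3} ≈ 20-fold rise in mutual adjustment but only about a 10^{0.7} ≈ 5-fold rise in rule enforcement.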

    Wikipedia, a locus of (dis)encounters between human and non-human agents?

    Get PDF
    Wikipedia is unavoidable when searching the internet. At the same time, the scale it has reached over its two decades of existence is cyclopean, fulfilling, without a doubt, an old dream of humanity: gathering all of humanity's knowledge in one place. Wikipedia is, in a way, the new Library of Alexandria, with the advantage that it faces no restrictions on storing knowledge, which it holds in approximately 300 languages, although the English-language Wikipedia has reached a higher level of maturity. Another distinguishing factor is that it results from the contributions of volunteers from all over the globe. Today, however, contributions are not limited to human agents but extend to non-human agents as well. This text therefore aims to analyze Wikipedia as a sociotechnical system, framing it in terms of the role of these agents. To this end, we first approach the Wikipedia phenomenon as an encyclopedia and then as a sociotechnical system, focusing on the Portuguese-language Wikipedia, where the role of non-human agents has carried increasing weight. In other words, bots are seen as non-human collaborators that intervene in repetitive, serial edits and that evolve both in their sphere of action and in the sophistication of their actions, no longer restricted to the content of articles but expanding into the socialization of the community's participants.

    Chatbots for Modelling, Modelling of Chatbots

    Full text link
    Unpublished doctoral thesis, defended at the Universidad Autónoma de Madrid, Escuela Politécnica Superior, Departamento de Ingeniería Informática. Defense date: 28-03-202

    Giving games a day job: developing a digital game-based resource for journalism training

    Get PDF
    Computer simulations have long been commonplace in industries such as the military, medicine, and science, and educators are now actively exploring their potential application to a range of disciplines. Educators and trainers have looked to the multi-billion-dollar computer and video game industry for inspiration, and Marc Prensky (2001) has used the phrase digital game-based learning to describe this emerging learning and teaching framework. The purpose of this research project is to produce an Internet-delivered newsgathering/newswriting training package that can be used in an expanding, and increasingly visually literate, tertiary journalism education field. This thesis comprises two parts: a) the written component, which describes the production of the hypertext-based journalism training scenario, and b) a prototype copy of the training scenario on CD-ROM. The Flood scenario depicts the flooding of a fictional city called Lagoon, and is based on real news stories, media releases, and audio-visual material gathered during major floods in the Central West of NSW in August 1990. In its present form, Flood is designed as a multi-path learning narrative, which participants must pursue and unravel in their search for news stories. My intention has been to develop a more engaging activity than many traditional, paper-based approaches to journalism training exercises currently offer. Flood is also specifically designed for flexible delivery via the Internet or CD-ROM, an approach that makes it especially well suited to both on-campus and distance education students. The Flood resource is at this stage a limited prototype designed as a teaching aid. A theoretical framework combining the roles of researcher and producer is discussed in the thesis. An overview of the use of simulations in journalism education contextualises the practical project, and the place of Web-based scenario simulation within the emerging teaching framework of digital game-based learning is considered. There is also an examination of historical precedents for the application of technology in Australian journalism classrooms. The Flood prototype was trialled at Charles Sturt University with on-campus undergraduate students in 2001 and 2002, and with distance education postgraduate students in 2002. Descriptions of these trials, and details of the student feedback, are provided. This project also includes an experimental narrative element: the use of a software artificial intelligence character known as a chatterbot to explore possibilities for providing a more personal and engaging experience. One of the key design intentions of this project has been consideration of ways to allow participants to develop their own lines of questioning, rather than forcing them to simply follow pre-determined paths. The thesis concludes that digital materials such as the Flood package are worthy of future development to complement the face-to-face instruction in reporting tasks, internships, and classroom simulations traditionally used in journalism education and training. Computer simulations are a means of providing students with controlled exposure to the journalistic process. However, simulation and reality are clearly two different experiences, and digital game-based learning in its present form does not provide a complete substitute for journalism as it is practised in the workplace.

    Ethics, Religion, and Spiritual Health

    Get PDF
    What do human enhancement technology (HET) and artificial intelligence (AI) have to do with religion? This book explores the intersection of HET and AI with spiritual health, Christianity, and ethics, strengthening an emergent, robust body of publications on human enhancement ethics. Any account of what it means to make us "better" must also address the potential spiritual implications. Concern for spiritual health promises to make the study of religion and human enhancement ethics increasingly pressing in the public sphere. Some of the most significant possible and probable spiritual impacts of HET and AI are probed. Topics include warfare, robots, chatbots, moral bioenhancement, spiritual psychotherapy, superintelligence, ecology, fasting, and psychedelics. The book comprises two sections: one addresses spirituality in relation to HET and AI, and the other addresses Christianity in relation to HET and AI.

    Encounters with Authority: Tactics and negotiations at the periphery of participatory platforms

    Get PDF
    Digital participatory platforms like Wikipedia are often celebrated as projects that allow anyone to contribute: any user can sign up and start contributing immediately. Similarly, projects that engage volunteers in the production of scientific knowledge create easy points of entry for making contributions. These low barriers to entry are a hallmark of digital participatory labor, limiting the number of hoops a new volunteer has to jump through before they can feel like they are making a difference. Such low barriers to participation at the periphery, or edges, of participatory platforms have presented a problem for organizational scholars, who wonder how such projects can achieve consistent results when opportunities to train and socialize newcomers are constrained by the need for low barriers. As a result, scholarship has approached the question of newcomer learning and socialization by examining how newcomers make sense of their new digital workspaces rather than how institutional constraints are imposed. In this research, I draw on a growing body of scholarship that pushes against the perception of openness and low barriers on digital participatory platforms to unpack the constraints on participation that newcomers confront and, in particular, to show how such constraints resemble institutionalized newcomer onboarding tactics. To approach this question, I conducted 18 months of participant observation and 36 interviews with experts, newcomers, and project leaders from the crowdsourced citizen science platform Planet Hunters and the peer-produced encyclopedia Wikipedia. I analyzed my data using a grounded theory research design sensitized by the theoretical technology of Estrid Sørensen's Forms of Presence, as a way to pay attention to the sociomaterial configurations of newcomer practice, attending to the actors (both human and nonhuman) that play a part in the constraints and affordances of newcomer participation. Drawing on Sørensen's Forms of Presence shifts the analytical focus on the newcomer experience from either the top-down institutional tactics of organizations or the bottom-up individual tactics of newcomers to the characteristics of the relationships newcomers have with other members and platform features, and the effects of these relationships on different opportunities for learning and participation. Focusing on the different ways that learning and participation are made available affords an exploration of how the authority of existing practices in particular settings is imposed on learners despite the presence of low barriers to participation. By paying attention to the sociomaterial configuration of newcomer participation, my findings unpack the tactics that newcomers encounter at the periphery, or edges, of participatory platforms, as well as how they find their work being included in or excluded from the platform. I use the findings to develop a taxonomy of encounters that describes how newcomers can participate in a self-guided experience, as the existing literature describes, but can also experience moments of guided and targeted encounters.
What this taxonomy of encounters suggests is that the periphery of participatory platforms can be at once an open space for exploration and experimentation and a well-managed space where, despite low barriers to initial participation, a newcomer must negotiate what I describe as the guardrails of participation: the constraints and affordances that shape their experience.