The use of software tools and autonomous bots against vandalism: eroding Wikipedia’s moral order?
English-language Wikipedia is constantly plagued by vandalistic contributions on a massive scale. In order to fight them, its volunteer contributors deploy an array of software tools and autonomous bots. After an analysis of their functioning and of the ‘coactivity’ in use between humans and bots, this research ‘discloses’ the moral issues that emerge from the combined patrolling by humans and bots. Administrators provide the stronger tools only to trusted users, thereby creating a new hierarchical layer. Further, surveillance exhibits several troubling features: questionable profiling practices, the use of the controversial measure of reputation, ‘oversurveillance’ where quantity trumps quality, and a prospective loss of the required moral skills whenever bots take over from humans. The most troubling aspect, though, is that Wikipedia has become a Janus-faced institution. One face is the basic platform of MediaWiki software, transparent to all. Its other face is the anti-vandalism system, which, in contrast, is opaque to the average user, in particular as a result of the algorithms and neural networks in use. Finally, it is argued that this secrecy impedes a much-needed discussion from unfolding; a discussion that should focus on a ‘rebalancing’ of the anti-vandalism system and the development of more ethical information practices towards contributors.
The disciplinary power of predictive algorithms: a Foucauldian perspective
Big Data are increasingly used in machine learning in order to create predictive models. How are predictive practices that use such models to be situated? In the field of surveillance studies, many practitioners assert that "governance by discipline" has given way to "governance by risk": the individual is dissolved into his/her constituent data and no longer addressed. I argue that, on the contrary, in most of the contexts where predictive modelling is used, it constitutes Foucauldian discipline. Compliance with a norm occupies centre stage; suspected deviants are subjected to close attention, as the precursor of possible sanctions. The predictive modelling involved uses personal data from both the focal institution and elsewhere ("Polypanopticon"). As a result, the individual re-emerges as the focus of scrutiny. Subsequently, small excursions into Foucauldian texts discuss his discourses on the creation of the "delinquent" and on the governmental approach to smallpox epidemics. It is shown that his insights only mildly resemble prediction based on machine learning; several conceptual steps had to be taken for modern machine learning to evolve. Finally, the options available to those subjected to predictive disciplining are discussed: to what extent can they comply, question, or resist? Through a discussion of the concepts of transparency and "gaming the system" I conclude that our predicament is gloomy, in a Kafkaesque fashion.
Patenting mathematical algorithms: What's the harm? A thought experiment
The patenting of software-related inventions is on the increase, especially in the United States. Mathematical formulas and algorithms, though, are still sacrosanct. Only under special conditions may algorithms qualify as statutory matter: if they are not solely a mathematical exercise but are somehow linked with physical reality. In this article, it is argued that blanket acceptance is to be preferred. Moreover, the best results are obtained if formulas and algorithms are only protected in combination with a proof that supports them. This argument is developed by conducting a thought experiment. After describing the development of algebra from the 16th century up to the 20th (in particular, the solution of the cubic equation), the likely effects on the development of mathematics as a science are analyzed under the postulate that a patent regime protecting mathematical inventions had actually been in force.
Coercion or empowerment? Moderation of content in Wikipedia as 'essentially contested' bureaucratic rules
In communities of user-generated content, systems for the management of content and/or their contributors are usually accepted without much protest. Not so, however, in the case of Wikipedia, where the proposal to introduce a system of review for new edits (in order to counter vandalism) led to heated discussions. This debate is analysed, and the arguments of both supporters and opponents (writing in English, German and French) are extracted from the Wikipedian archives. In order to better understand this division of the minds, an analogy is drawn with theories of bureaucracy as developed for real-life organizations. From these it transpires that bureaucratic rules may be perceived as springing from either a control logic or an enabling logic. In Wikipedia, then, both perceptions were at work, depending on the underlying views of participants. Wikipedians either rejected the proposed scheme (because it was antithetical to their conception of Wikipedia as a community) or endorsed it (because it was consonant with their conception of Wikipedia as an organization with clearly defined boundaries). Are other open-content communities susceptible to the same kind of 'essential contestation'?
Open Source Production of Encyclopedias: Editorial Policies at the Intersection of Organizational and Epistemological Trust
The ideas behind open source software are currently applied to the production of encyclopedias. A sample of six English, text-based, neutral-point-of-view online encyclopedias of this kind is identified: h2g2, Wikipedia, Scholarpedia, Encyclopedia of Earth, Citizendium and Knol. How do these projects deal with the problem of trusting their participants to behave as competent and loyal encyclopedists? Editorial policies for soliciting and processing content are shown to range from high discretion to low discretion; that is, from granting unlimited trust to limited trust. Their conceptions of the proper role for experts are also explored, and it is argued that these conceptions to a great extent determine editorial policies. Subsequently, internal discussions about quality assurance at Wikipedia are rendered. All indications are that review and "super-review" of new edits will become policy, to be performed by Wikipedians with a better reputation. Finally, while for encyclopedias the issue of organizational trust largely coincides with epistemological trust, a link is made with theories about the acceptance of testimony. It is argued that both non-reductionist views (the "acceptance principle" and the "assurance view") and reductionist ones (an appeal to background conditions, and a newly defined "expertise view") have been implemented in editorial strategies over the past decade.
How can contributors to open-source communities be trusted? On the assumption, inference, and substitution of trust
Open-source communities that focus on content rely squarely on the contributions of invisible strangers in cyberspace. How do such communities handle the problem of trusting that strangers have good intentions and adequate competence? This question is explored in relation to communities in which such trust is a vital issue: peer production of software (FreeBSD and Mozilla in particular) and encyclopaedia entries (Wikipedia in particular). In the context of open-source software, it is argued that trust was inferred from an underlying 'hacker ethic', which already existed. The Wikipedian project, by contrast, had to create an appropriate ethic along the way. In the interim, the assumption simply had to be that potential contributors were trustworthy; they were granted 'substantial trust'. Subsequently, projects from both communities introduced rules and regulations which partly substituted for the need to perceive contributors as trustworthy. They faced a design choice along the continuum between a high-discretion design (granting a large amount of trust to contributors) and a low-discretion design (leaving only a small amount of trust to contributors). It is found that open-source designs for software and encyclopaedias are likely to converge in the future towards a mid-level of discretion. In such a design the anonymous user is no longer invested with unquestioning trust.
Developing and operating time critical applications in clouds: the state of the art and the SWITCH approach
Cloud environments can provide virtualized, elastic, controllable and high-quality on-demand services for supporting complex distributed applications. However, the engineering methods and software tools used for developing, deploying and executing classical time-critical applications do not, as yet, account for the programmability and controllability provided by clouds, and so time-critical applications cannot yet benefit from the full potential of cloud technology. This paper reviews the state of the art of technologies involved in developing time-critical cloud applications, and presents the approach of a recently funded EU H2020 project: the Software Workbench for Interactive, Time Critical and Highly self-adaptive cloud applications (SWITCH). SWITCH aims to improve the existing development and execution model of time-critical applications by introducing a novel conceptual model, the application-infrastructure co-programming and control model, in which application QoS and QoE, together with the programmability and controllability of cloud environments, are included in the complete application lifecycle.
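The abstract describes the co-programming and control model only at a conceptual level. As a purely illustrative sketch (none of the class or field names below come from SWITCH itself; they are hypothetical assumptions), the core idea can be pictured as declaring an application component's QoS constraints together with the infrastructure controls that a runtime controller may exercise, so that both travel through the whole lifecycle in one specification:

```python
# Hypothetical sketch of "application-infrastructure co-programming":
# QoS constraints are declared alongside the component and its scaling
# controls, rather than being bolted on after deployment.
# All names are illustrative; this is not the SWITCH API.
from dataclasses import dataclass, field
from typing import List


@dataclass
class QoSConstraint:
    metric: str            # e.g. "end_to_end_latency_ms"
    threshold: float       # bound the runtime controller should respect
    direction: str = "max" # "max" = keep the metric below the threshold


@dataclass
class InfrastructureControl:
    min_instances: int = 1
    max_instances: int = 10
    scale_up_on_violation: bool = True  # react when a constraint is breached


@dataclass
class ComponentSpec:
    name: str
    image: str  # container image to deploy
    qos: List[QoSConstraint] = field(default_factory=list)
    control: InfrastructureControl = field(default_factory=InfrastructureControl)


# A time-critical component with a latency bound declared at design time.
spec = ComponentSpec(
    name="frame-analyzer",
    image="registry.example.org/frame-analyzer:1.0",
    qos=[QoSConstraint(metric="end_to_end_latency_ms", threshold=150.0)],
)
print(spec.qos[0].metric, spec.qos[0].threshold)
```

In such a co-programmed specification, the same document that describes what the application does also tells the cloud controller which QoS targets to monitor and which infrastructure actions it is allowed to take when they are violated.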