The Size Conundrum: Why Online Knowledge Markets Can Fail at Scale
In this paper, we interpret the community question answering websites on the
StackExchange platform as knowledge markets, and analyze how and why these
markets can fail at scale. A knowledge market framing allows site operators to
reason about market failures, and to design policies to prevent them. Our goal
is to provide insights on large-scale knowledge market failures through an
interpretable model. We explore a set of interpretable economic production
models on a large empirical dataset to analyze the dynamics of content
generation in knowledge markets. Amongst these, the Cobb-Douglas model best
explains empirical data and provides an intuitive explanation for content
generation through concepts of elasticity and diminishing returns. Content
generation depends on user participation and also on how specific types of
content (e.g., answers) depend on other types (e.g., questions). We show that
these factors of content generation have constant elasticity---a percentage
increase in any of the inputs leads to a constant percentage increase in the
output. Furthermore, markets exhibit diminishing returns---the marginal output
decreases as the input is incrementally increased. Knowledge markets also vary
on their returns to scale---the increase in output resulting from a
proportionate increase in all inputs. Importantly, many knowledge markets
exhibit diseconomies of scale---measures of market health (e.g., the percentage
of questions with an accepted answer) decrease as a function of number of
participants. The implications of our work are two-fold: site operators ought
to design incentives as a function of system size (number of participants); the
market lens should shed light on complex dependencies amongst different
content types and participant actions in general social networks.
Comment: The 27th International Conference on World Wide Web (WWW), 201
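The properties the abstract attributes to the Cobb-Douglas model can be seen numerically. The sketch below is illustrative only (input names, coefficient values, and exponents are invented for the example, not taken from the paper): output is modeled as Q = A * x1^a * x2^b, where the exponents are the constant elasticities and their sum determines returns to scale.

```python
# Illustrative sketch of a Cobb-Douglas production function, as used
# conceptually in the paper; A, a, b are invented example values, not
# the paper's fitted parameters.

def cobb_douglas(x1, x2, A=1.0, a=0.6, b=0.3):
    """Q = A * x1^a * x2^b; a and b are the constant elasticities."""
    return A * (x1 ** a) * (x2 ** b)

# Constant elasticity: a 1% increase in x1 raises output by roughly
# a percent (here ~0.6%), regardless of the starting level.
q0 = cobb_douglas(100, 100)
q1 = cobb_douglas(101, 100)
elasticity = ((q1 - q0) / q0) / 0.01

# Diminishing returns: each additional unit of x1 adds less output
# than the previous unit, because a < 1.
m1 = cobb_douglas(101, 100) - cobb_douglas(100, 100)
m2 = cobb_douglas(102, 100) - cobb_douglas(101, 100)

# Returns to scale: doubling both inputs scales output by 2**(a + b).
# Here a + b = 0.9 < 1, i.e. decreasing returns to scale, matching the
# "diseconomies of scale" the paper reports for many markets.
scale_ratio = cobb_douglas(200, 200) / cobb_douglas(100, 100)
```

With these example exponents, `elasticity` comes out near 0.6 and `m2 < m1`, making both abstract claims concrete.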
Identifying reputation collectors in community question answering (CQA) sites: Exploring the dark side of social media
This research aims to identify users who post, and encourage others to post, low-quality
and duplicate content on community question answering sites. The good actors, called Caretakers, and
the bad actors, called Reputation Collectors, are characterised by their behaviour, answering patterns, and
reputation points. The proposed system is developed and analysed over the publicly available Stack
Exchange data dump. A graph-based methodology is employed to derive the characteristics of
Reputation Collectors and Caretakers. Results reveal that Reputation Collectors are the primary sources
of low-quality answers as well as answers to duplicate questions posted on the site. Caretakers
answer a limited number of challenging questions and earn maximum reputation from those
questions, whereas Reputation Collectors answer many low-quality and duplicate questions
to gain reputation points. We have developed algorithms to identify the Caretakers and Reputation
Collectors of a site. Our analysis finds that the 1.05% of users identified as Reputation Collectors post 18.88% of low-quality answers. This study extends previous research by identifying Reputation Collectors and how they collect their reputation points.
Software of the Oppressed: Reprogramming the Invisible Discipline
This dissertation offers a critical analysis of software practices within the university and the ways they contribute to a broader status quo of software use, development, and imagination. Through analyzing the history of software practices used in the production and circulation of student and scholarly writing, I argue that this overarching software status quo has oppressive qualities in that it supports the production of passive users, or users who are unable to collectively understand and transform software code for their own interests. I also argue that the university inadvertently normalizes and strengthens the software status quo through what I call its “invisible discipline,” or the conditioning of its community—particularly students, but also faculty, librarians, staff, and other university members—to have little expectation of being able to participate in the governance or development of the software used in their academic settings. This invisible discipline not only fails to prepare students for the political struggles and practical needs of our digital age (while increasing the social divide between those who program digital technology and those who must passively accept it), but also reinforces a lack of awareness of how digital technology powerfully mediates the production, circulation, and reception of knowledge at individual and collective levels. Through this analysis, I hope to show what a liberatory approach to academic technology practices might look like, as well as demonstrate—through a variety of alternative software practices in and beyond the university—the intellectual, political, and social contributions these practices might make to higher education and scholarly knowledge production at large. I conclude the dissertation with suggestions for “reprogramming” our academic technology practices, an approach that I also explored in practice in the production of this dissertation.
As I describe in the Afterword, the genesis of this dissertation, as well as the production, revision, and dissemination of its drafts, took place as part of two digital projects, Social Paper and #SocialDiss, each of which attempted in its own small way to resist the invisible discipline and the ways that conventional academic technology practices structure intellectual work. The goal of this dissertation and its related digital projects is thus to help shine light on the exciting intellectual and political potential of democratizing software development and governance in and through educational institutions.