Interactivity/Reciprocity (Online Discussions/Discussion Quality)
Interactivity (or reciprocity) is a key dimension for assessing the deliberative quality of online discussions. In quantitative content analyses, this dimension measures whether participants engage in dialogue with each other and refer to each other's contributions.
Field of application/Theoretical foundation
Most studies on online discussions draw on deliberative norms to measure the quality of their discourse (e.g., Esau et al., 2017; Friess et al., 2021; Rowe, 2015; Ziegele et al., 2020; Zimmermann, 2017). Deliberation is an important concept for the study of (political) online discussions (Ziegele et al., 2020). It focuses on a free and equal exchange of arguments to bridge social differences and legitimize political decisions (Dryzek et al., 2019; Fishkin, 1991; Habermas, 2015). Interactivity is a key dimension of deliberative quality, since deliberation is always a reciprocal and dialogical process (Goodin, 2000; Zimmermann, 2017). Participants engage in a dialogic exchange with each other, reflecting on other views and perspectives and referring to each other (Friess et al., 2021; Ziegele et al., 2020). This reciprocal process includes both responding and listening (Barber, 1984; Graham, 2009). Interactivity is considered essential for desirable effects of deliberation such as learning, tolerance building, and opinion change (Estlund & Landemore, 2018; Friess et al., 2021).
References/Combination with other methods
Besides quantitative content analyses, the (deliberative) quality of online discussions is examined with qualitative content analyses and discourse analyses (e.g., Graham & Witschge, 2003; Price & Cappella, 2002). Furthermore, participants' perceptions of the quality of online discussions are investigated with qualitative interviews (e.g., Engelke, 2019; Ziegele, 2016) or a combination of qualitative interviews and content analysis (Díaz Noci et al., 2012).
Cross-references
Interactivity is one of five dimensions of deliberative quality covered in this database by the same author. Accordingly, there are overlaps with the entries on inclusivity, rationality, explicit civility, and storytelling regarding the theoretical background, references/combinations with other methods, and some example studies.
Information on Esau et al. (2017)
Authors: Katharina Esau, Dennis Friess, & Christiane Eilders
Research question: "How does platform design affect the level of deliberative quality?" (p. 323)
Object of analysis: "We conducted a quantitative content analysis of user comments left in a news forum, on news websites, and on Facebook news pages concerning the same journalistic content on two topics […] A sample of news articles […] with related user comments, was drawn from the online platforms of four German news media […] The first step of the sampling process consisted of 18 news articles from which 3,341 comments were collected […] In the second step, for each article, up to 100 sequential comments were randomly selected for content analysis, leading to a total sample of 1,801 comments (979 on Facebook, 591 on news websites, and 231 in the news forum)" (p. 331).
Time frame of analysis: December 2015
Info about variables
Level of analysis: individual comment
Variables and reliability: see Table 1
Table 1: Variables and reliability (Esau et al., 2017, pp. 332-333)

| Dimension | Measure | Definition | RCA | Cohen's Kappa |
|---|---|---|---|---|
| Reciprocity | General engagement | This measure captures whether a comment addresses another comment. | .92 | – |
| Reciprocity | Argumentative engagement | This measure captures whether a comment addresses a specific argument made in another comment. | .77 | .542 |
| Reciprocity | Critical engagement | This measure captures whether a comment is critical of another comment. | .89 | – |

Note: n = 40; 12 coders.
Values: Dichotomous measures (yes, no)
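For readers who want to reproduce coefficients like those in Table 1, the following is a minimal sketch of Cohen's kappa for a dichotomous measure. Cohen's kappa is defined for one pair of coders (studies with more coders typically average over pairs), and the codings below are invented for illustration, not the study's data.

```python
from collections import Counter

def cohens_kappa(coder_a, coder_b):
    """Cohen's kappa: chance-corrected agreement between two coders."""
    n = len(coder_a)
    # Observed agreement: share of units both coders labelled identically.
    p_o = sum(a == b for a, b in zip(coder_a, coder_b)) / n
    # Expected agreement from each coder's marginal label distribution.
    marginals_a, marginals_b = Counter(coder_a), Counter(coder_b)
    p_e = sum(marginals_a[c] * marginals_b[c] for c in marginals_a) / (n * n)
    return (p_o - p_e) / (1 - p_e)

# Illustrative codings of ten comments for a yes/no engagement measure
a = ["yes", "yes", "no", "no", "yes", "no", "yes", "no", "no", "no"]
b = ["yes", "no", "no", "no", "yes", "no", "yes", "yes", "no", "no"]
print(round(cohens_kappa(a, b), 3))  # → 0.583
```

Kappa discounts the agreement two coders would reach by labelling at random according to their own marginal distributions, which is why it is preferred over simple percent agreement for skewed categories.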
Information on Heinbach & Wilms (2022)
Authors: Dominique Heinbach & Lena K. Wilms (Codebook by Dominique Heinbach, Marc Ziegele, & Lena K. Wilms)
Research question: Which attributes differentiate moderated from unmoderated comments?
Object of analysis: The quantitative content analysis was based on a stratified random sample of moderated and unmoderated comments (N = 1,682) from the German online participation platform "#meinfernsehen2021" [#myTV2021], a citizen participation platform to discuss the future of public broadcasting in Germany.
Time frame of analysis: November 24, 2020 to March 3, 2021
Info about variables
Level of analysis: User comment
Variables and reliability: see Table 2
Table 2: Variables and reliability (Heinbach & Wilms, 2022)

| Dimension | Measure | Definition | Krippendorff's α |
|---|---|---|---|
| Reciprocity | Reference to other users or to the community | Does the comment refer to at least one other user, a group of users, or all users in the community? | .78 |
| Reciprocity | Reference to the content of other comments | Does the comment refer to content, arguments or positions in other comments? | .78 |
| Reciprocity | Critical reference | Does the comment refer to other comments in a critical manner? | .86 |

Note: n = 159; 3 coders.
Values: All variables were coded on a four-point scale (1 = clearly not present; 2 = rather not present; 3 = rather present; 4 = clearly present). Detailed explanations and examples for each value are provided in the Codebook (in German).
Codebook: in the appendix of this entry (in German)
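Table 2 reports Krippendorff's α. As a rough illustration, the coefficient can be computed from a coincidence matrix; the sketch below covers the nominal case (the study itself used a four-point scale, for which an ordinal difference function would be substituted), and the example data are invented.

```python
from collections import Counter
from itertools import permutations

def krippendorff_alpha_nominal(units):
    """Krippendorff's alpha (nominal level) from per-unit lists of codes.

    `units` holds, for each coded unit, the values all coders assigned to it,
    e.g. [[1, 1], [2, 3], ...]; units with fewer than two codes are
    unpairable and dropped.
    """
    units = [u for u in units if len(u) >= 2]
    coincidences = Counter()  # coincidence matrix o[c, k]
    for u in units:
        m = len(u)
        for i, j in permutations(range(m), 2):
            coincidences[(u[i], u[j])] += 1 / (m - 1)
    marginals = Counter()  # n_c: total coincidences per value
    for (c, _k), w in coincidences.items():
        marginals[c] += w
    n = sum(marginals.values())
    # Observed vs. expected disagreement (nominal difference function)
    d_o = sum(w for (c, k), w in coincidences.items() if c != k)
    d_e = sum(marginals[c] * marginals[k]
              for c in marginals for k in marginals if c != k) / (n - 1)
    return 1.0 - d_o / d_e

# Two coders rating four illustrative units on a nominal variable
units = [[1, 1], [1, 1], [2, 2], [2, 1]]
print(round(krippendorff_alpha_nominal(units), 3))  # → 0.533
```

Unlike Cohen's kappa, α handles any number of coders and missing codings, which is why it is the usual choice when, as here, three coders share the reliability sample.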
Information on Stromer-Galley (2007)
Author: Jennifer Stromer-Galley
Research question: The aim of the paper was to develop a coding scheme that allows academics and practitioners of deliberation to systematically measure what happens during group deliberations (p. 1; p. 7).
Object of analysis: The author conducted a secondary analysis of online group discussions (23 groups with 5-12 participants) in an experiment called "The Virtual Agora Project" at Carnegie Mellon University in Pittsburgh, Pennsylvania. Participants attended the discussions from dormitory rooms that were equipped with a computer, headphones, and a microphone. The group discussions were recorded and transcribed for analysis (pp. 7-8). Although, strictly speaking, the study does not analyze media content, the coding scheme has provided the basis for numerous other studies on the deliberative quality of online discussions (e.g., Rowe, 2015; Stroud et al., 2015; Ziegele et al., 2020).
Time frame of analysis: Three weeks in July 2004 (p. 7).
Info about variables
Level of analysis: Level of the turn: speaking contribution of a participant. Participants had to get "in line" to speak. When a speaker had finished their turn, the software activated the next speaker (max. 3 minutes per turn) (p. 8). Level of the thought: coders segmented each turn into thought units before coding the categories. "A thought is defined as an utterance (from a single sentence to multiple sentences) that expresses an idea on a topic. A change in topic signaled a change in thought. A second indicator of a change in thought was a change in the type of talk. The distinct types of talk that this coding captured were the following: talk about the problem of public schools, talk about the process of the talk, talk about the process of the deliberation, and social talk" (p. 9).
Variables and values: see Table 3
Reliability: "Two coders spent nearly two months developing and training with the coding scheme. The intercoder agreement measures […] were established from coding 3 of the 23 groups, which were randomly selected. […] Cohen's Kappas of the coding elements described above are as follows: thought statements on the problem of public schools, .95; […] turn type (new topic, continuing self, responding to others), .97; meta-talk, 1.0 […]" (pp. 13-14).
Codebook: in the appendix (pp. 22-33)
Table 3: Variables and values of the dimension "engagement" (Stromer-Galley, 2007, p. 12; pp. 24-26)

| Category | Level | Description | Value | Definition |
|---|---|---|---|---|
| Turn-type | Turn | Identify whether and to whom this turn is referring. | Starting a new topic | A new topic (not prompted by the moderator). |
| | | | Respond on topic | A turn that is in response to a prior speaker or is on a topic that has been discussed. Includes responding to multiple speakers. |
| | | | Respond to moderator | A turn that is a response to a prompt or question from the moderator. |
| | | | Continue self | A turn that seems not to respond to anything a prior speaker said but to continue the current speaker's ideas from one of his or her prior turns. |
| Problem | Thought | Talk about the problem is talk that focuses on the issue under consideration. | Question | A genuine question directed to another speaker that is trying to seek information or an opinion from others. |
| Metatalk | Thought | Metatalk is talk about the talk. It attempts to step back and assess what has transpired or is transpiring in the interaction. | Consensus | Consensus metatalk is talk about the speaker's sense of consensus of the group ("I think we all agree that …"), including an explanation for the collective's opinions or the collective's behavior ("We're asking you these questions because …"). |
| | | | Conflict | Highlighting some disagreement or conflict in the group ("I sense some disagreement around …"). |
| | | | Clarify own | Clarify the speaker's own opinion or fact statement ("what I'm trying to say is"). It is an attempt to clarify what the speaker means. This will arise ONLY after they have provided an opinion, NOT a question, and are now trying to clarify their original opinion on the problem, likely because they believe someone has misunderstood them. |
| | | | Clarify other | Clarify someone else's argument/opinion or fact statement ("Sally, so, what you're saying is …"). It is an attempt to clarify what someone else means. Pay attention to the use of another participant's name; that can be a sign of metatalk about another's position. |
Example studies
Esau, K., Fleuß, D. & Nienhaus, S.-M. (2021). Different Arenas, Different Deliberative Quality? Using a Systemic Framework to Evaluate Online Deliberation on Immigration Policy in Germany. Policy & Internet, 13(1), 86–112. https://doi.org/10.1002/poi3.232
Esau, K., Friess, D. & Eilders, C. (2017). Design Matters! An Empirical Analysis of Online Deliberation on Different News Platforms. Policy & Internet, 9(3), 321–342. https://doi.org/10.1002/poi3.154
Esau, K., Frieß, D. & Eilders, C. (2019). Online-Partizipation jenseits klassischer Deliberation: Eine Analyse zum Verhältnis unterschiedlicher Deliberationskonzepte in Nutzerkommentaren auf Facebook-Nachrichtenseiten und Beteiligungsplattformen [Online participation beyond classical deliberation: An analysis of the relationship between different concepts of deliberation in user comments on Facebook news pages and participation platforms]. In I. Engelmann, M. Legrand & H. Marzinkowski (Eds.), Digital Communication Research: Vol. 6. Politische Partizipation im Medienwandel (pp. 221–245).
Friess, D., Ziegele, M. & Heinbach, D. (2021). Collective Civic Moderation for Deliberation? Exploring the Links between Citizens' Organized Engagement in Comment Sections and the Deliberative Quality of Online Discussions. Political Communication, 38(5), 624–646. https://doi.org/10.1080/10584609.2020.1830322
Heinbach, D. & Wilms, L. K. (2022). Der Einsatz von Moderation bei #meinfernsehen2021 [The use of moderation at #meinfernsehen2021]. In F. Gerlach, C. Eilders & K. Schmitz (Eds.), #meinfernsehen2021. Partizipationsverfahren zur Zukunft des öffentlich-rechtlichen Fernsehens. Baden-Baden: Nomos.
Rowe, I. (2015). Deliberation 2.0: Comparing the Deliberative Quality of Online News User Comments Across Platforms. Journal of Broadcasting & Electronic Media, 59(4), 539–555. https://doi.org/10.1080/08838151.2015.1093482
Stromer-Galley, J. (2007). Measuring Deliberation's Content: A Coding Scheme. Journal of Public Deliberation, 3(1), Article 12.
Ziegele, M., Quiring, O., Esau, K. & Friess, D. (2020). Linking News Value Theory With Online Deliberation: How News Factors and Illustration Factors in News Articles Affect the Deliberative Quality of User Discussions in SNS' Comment Sections. Communication Research, 47(6), 860–890. https://doi.org/10.1177/0093650218797884
Zimmermann, T. (2017). Digitale Diskussionen: Über politische Partizipation mittels Online-Leserkommentaren [Digital discussions: On political participation via online reader comments]. Edition Politik: Vol. 44. transcript Verlag. http://www.content-select.com/index.php?id=bib_view&ean=9783839438886
Further references
Barber, B. R. (1984). Strong democracy: Participatory politics for a new age. University of California Press.
Díaz Noci, J., Domingo, D., Masip, P., Micó, J. L. & Ruiz, C. (2012). Comments in news, democracy booster or journalistic nightmare: Assessing the quality and dynamics of citizen debates in Catalan online newspapers. #ISOJ, 2(1), 46–64. https://isoj.org/wp-content/uploads/2016/10/ISOJ_Journal_V2_N1_2012_Spring.pdf#page=46
Dryzek, J. S., Bächtiger, A., Chambers, S., Cohen, J., Druckman, J. N., Felicetti, A., Fishkin, J. S., Farrell, D. M., Fung, A., Gutmann, A., Landemore, H., Mansbridge, J., Marien, S., Neblo, M. A., Niemeyer, S., Setälä, M., Slothuus, R., Suiter, J., Thompson, D. & Warren, M. E. (2019). The crisis of democracy and the science of deliberation. Science, 363(6432), 1144–1146. https://doi.org/10.1126/science.aaw2694
Engelke, K. M. (2019). Enriching the Conversation: Audience Perspectives on the Deliberative Nature and Potential of User Comments for News Media. Digital Journalism, 8(4), 1–20. https://doi.org/10.1080/21670811.2019.1680567
Estlund, D. & Landemore, H. (2018). The epistemic value of democratic deliberation. In A. Bächtiger, J. S. Dryzek, J. J. Mansbridge & M. E. Warren (Eds.), The Oxford handbook of deliberative democracy (pp. 113–131). Oxford University Press.
Fishkin, J. S. (1991). Democracy and deliberation: New directions for democratic reform. Yale University Press. https://doi.org/10.2307/j.ctt1dt006v
Goodin, R. E. (2000). Democratic Deliberation Within. Philosophy & Public Affairs, 29(1), 81–109. https://doi.org/10.1111/j.1088-4963.2000.00081.x
Graham, T. (2009). What's Wife Swap got to do with it? Talking politics in the net-based public sphere. Amsterdam: University of Amsterdam. https://doi.org/10.13140/RG.2.1.3413.0088
Graham, T. & Witschge, T. (2003). In Search of Online Deliberation: Towards a New Method for Examining the Quality of Online Discussions. Communications, 28(2). https://doi.org/10.1515/comm.2003.012
Habermas, J. (2015). Between facts and norms: Contributions to a discourse theory of law and democracy (Reprint). Polity Press.
Price, V. & Cappella, J. N. (2002). Online deliberation and its influence: The Electronic Dialogue Project in Campaign 2000. IT&Society, 1(1), 303–329. https://citeseerx.ist.psu.edu/viewdoc/download?doi=10.1.1.9.5945&rep=rep1&type=pdf
Stroud, N. J., Scacco, J. M., Muddiman, A. & Curry, A. L. (2015). Changing Deliberative Norms on News Organizations' Facebook Sites. Journal of Computer-Mediated Communication, 20(2), 188–203. https://doi.org/10.1111/jcc4.12104
Ziegele, M. (2016). Nutzerkommentare als Anschlusskommunikation: Theorie und qualitative Analyse des Diskussionswerts von Online-Nachrichten [User comments as follow-up communication: Theory and qualitative analysis of the discussion value of online news]. Springer VS.
Like trainer, like bot? Inheritance of bias in algorithmic content moderation
The internet has become a central medium through which `networked publics'
express their opinions and engage in debate. Offensive comments and personal
attacks can inhibit participation in these spaces. Automated content moderation
aims to overcome this problem using machine learning classifiers trained on
large corpora of texts manually annotated for offence. While such systems could
help encourage more civil debate, they must navigate inherently normatively
contestable boundaries, and are subject to the idiosyncratic norms of the human
raters who provide the training data. An important objective for platforms
implementing such measures might be to ensure that they are not unduly biased
towards or against particular norms of offence. This paper provides some
exploratory methods by which the normative biases of algorithmic content
moderation systems can be measured, by way of a case study using an existing
dataset of comments labelled for offence. We train classifiers on comments
labelled by different demographic subsets (men and women) to understand how
differences in conceptions of offence between these groups might affect the
performance of the resulting models on various test sets. We conclude by
discussing some of the ethical choices facing the implementers of algorithmic
moderation systems, given various desired levels of diversity of viewpoints
amongst discussion participants.

Comment: 12 pages, 3 figures. 9th International Conference on Social Informatics (SocInfo 2017), Oxford, UK, 13–15 September 2017 (forthcoming in Springer Lecture Notes in Computer Science).
We Should Not Get Rid of Incivility Online
Incivility and toxicity have become concepts du jour in research about social media. The clear normative implication in much of this research is that incivility is bad and should be eliminated. Extensive research, including some that we have authored, has been dedicated to finding ways to reduce or eliminate incivility from online discussion spaces. In our work as part of the Civic Signals Initiative, we have been thinking carefully about which metrics should be adopted by social media platforms eager to create better spaces for their users. When we tell people about this project, removing incivility from the platforms frequently comes up as a suggested metric. In thinking about incivility, however, we have become less convinced that it is desirable, or even possible, for social media platforms to remove all uncivil content. In this short essay, we discuss research on incivility, our rationale for a more complicated normative stance regarding incivility, and what other orientations may be more useful. We conclude by arguing that we should not abandon research on incivility altogether, but we should recognize the limitations of a concept that is difficult to universalize.
Future directions for online incivility research
This chapter makes a normative argument that incivility scholars should shift directions in exploring aversive online communication. Specifically, it is vital for scholars to consider various subsets of incivility (e.g., profanity or hate speech) rather than treat incivility as a monolith, and to acknowledge that different types are not equally damaging to democracy or interpersonal relations. Furthermore, this chapter calls for more attention to how incivility of all types hurts those from marginalized groups, how and why those with less societal power are more frequent targets of toxicity, and how to protect them. It also proposes that the role of online platforms, like Facebook, WeChat, and WhatsApp, be integrated more fully in regard to incivility, and that incivility be studied in concert with other types of problematic speech, such as misinformation and disinformation.
Online Political Comments: Americans Talk About the Election Through a "Horse-Race" Lens
This study examined whether user-generated comments posted on news stories about the 2016 U.S. presidential campaign focused on candidates' policies or on horse-race elements of the election, such as who is winning or losing. Using a quantitative content analysis (n = 1,881), we found that most comments had neither horse-race nor policy elements, but that horse-race elements were more frequent in comments than policy elements, mirroring what is found in news coverage. The public were more likely to "like" or "upvote" comments that contained either policy or horse-race elements, relative to other comments, although the relationship was slightly stronger for horse-race elements.
The Political Uses and Abuses of Civility and Incivility
After exploring the challenges involved in defining incivility, this chapter addresses the evolution of the concept, notes the dispute over trend lines, and summarizes work on its psychological effects. It then outlines some functions that civility and incivility serve, such as differentiating and mobilizing, marginalizing the powerless, expressing, and deliberating. The use of calls for civility as a means of social control is discussed, and the chapter then flags questions worthy of additional attention.
Medir o agenda-setting nos comentários dos leitores às eleições legislativas de 2015 [Measuring agenda-setting in reader comments on the 2015 legislative elections]
This study proposes using reader comments as an object for measuring agenda-setting phenomena. Word-frequency analysis is the method chosen to operationalize the assessment of agenda-setting. The proposed method was tested on a sample of 741 articles and 52,064 comments from the newspaper Expresso, covering a five-week period between September 4 and October 10, 2015. The word-frequency analysis made it possible not only to assess which of the topics covered by the newspaper resonated most in the comments, but also to study the reverse process, that is, which topics raised by commenters were under-represented in the news.
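The word-frequency comparison described in this abstract can be sketched in a few lines: count content-word frequencies separately in the articles and in the comments, then compare the two distributions. The stopword list and sample sentences below are illustrative placeholders, not the study's materials or its actual tokenization rules.

```python
import re
from collections import Counter

# Illustrative stopword list; a real study would use a full list
# for the corpus language (here, Portuguese for the Expresso data).
STOPWORDS = {"the", "a", "of", "to", "and", "in", "is", "on", "for",
             "that", "were", "was", "not", "only", "about"}

def word_frequencies(texts):
    """Count content-word frequencies across a collection of texts."""
    counts = Counter()
    for text in texts:
        for token in re.findall(r"[a-z]+", text.lower()):
            if token not in STOPWORDS and len(token) > 2:
                counts[token] += 1
    return counts

# Placeholder corpora standing in for news articles and reader comments
articles = ["The refugee crisis dominated the election campaign.",
            "Budget policy and the refugee crisis were debated."]
comments = ["Nobody talks about the budget!",
            "The refugee crisis is not the only issue."]

article_freq = word_frequencies(articles)
comment_freq = word_frequencies(comments)

# Topics prominent in the news but under-represented in the comments
# (positive gap), and vice versa (negative gap).
gap = {w: article_freq[w] - comment_freq[w]
       for w in (article_freq.keys() | comment_freq.keys())}
print(sorted(gap.items(), key=lambda kv: kv[1], reverse=True)[:3])
```

In practice the two frequency distributions would also be normalized by corpus size before comparison, since a 52,064-comment corpus dwarfs its 741 source articles.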
Third Space, Social Media and Everyday Political Talk
Theoretical and empirical research into online politics to date has primarily focused on what might be called formal politics or on how activists and social movements utilize social media to pursue their goals. However, in this chapter, we argue that there is much to be gained by investigating how political talk and engagement emerge in everyday, online, lifestyle communities: i.e., third spaces. Such spaces are not intended for political purposes, but rather, during the course of everyday talk, become political through the connections people make between their everyday lives and the political/social issues of the day. In this chapter, we develop a theoretically informed argument for research that focuses on everyday informal political talk in online third spaces.