
    Research, Literacy, and Communication Education: New Challenges Facing Disinformation

    The volume of information arriving through digital media and social networks keeps growing. This access to an almost unlimited amount of information makes it difficult to select relevant content and to understand it well. It is therefore necessary to produce research that thoroughly analyses the phenomenon of communication and information in the digital age. To that end, this monograph presents research studies that highlight the need for greater media literacy and education in order to prevent the creation and dissemination of fake news. Citizens must know how to deal with disinformation and be able to detect the bad intentions behind a piece of information. People therefore need to be aware of the new communication challenges in order to determine what is important, which media they can trust, and where information has been misused or manipulated. In conclusion, society must be prepared to face new challenges related to misinformation. An educated and digitally literate society will be able to confront these problems and meet the new communication challenges, including interaction with social networks, new audiences, new media, fake news, etc.

    Revista Mediterránea de Comunicación. Vol. 11, n. 2 (2020)


    Conscionable consumption: a feminist grounded theory of porn consumer ethics

    Much scholarship on pornography consumption has revolved around porn harms or porn empowerment discourses. Moving away from pro- and anti-porn agendas, the research presented in this thesis was designed as an exploratory, qualitative investigation of consumer experiences of pornography, using grounded theory in an effort to transcend the polarised porn debates. By means of a two-stage data collection process involving an online group activity and in-depth interviews, this research set out to extend our understanding of how feminists experience, understand and articulate their engagements with porn. Grounded theory’s focus on iterative data collection, structured analysis and inductive theory development lent itself to several key aims for this project: (a) eschewing, as far as possible, commonly held assumptions about the research topic and research subjects; (b) resisting agenda-driven frameworks that seek to validate pro- or anti-porn stances; and (c) allowing the voices of porn consumers themselves to be heard and taken seriously, in a way that has not tended to be prioritised in pornography effects research or in the wider public arena (Mowlabocus and Wood 2015: 119). The iterative approach to data collection advocated by grounded theory also enabled participants to take a more agentive role in determining the direction of the research. As a result, certain elements of the project took unforeseen trajectories, shedding light on additional substantive areas for inquiry beyond those initially intended. In particular, the study provided key insights into the interaction between ethics and practice in porn consumption amongst London feminists. This gave rise to the development of the 'conscionable consumption' model: a theoretical framework for conceptualising the experiences and processes described. Results indicated that feminists’ experiences of porn consumption were heavily influenced by their beliefs about what constituted ‘ethical enough’ (conscionable). These were accompanied by contemplative moments, whose nature tended to correlate with the degree to which the individual felt they had strayed from their own conceptions of conscionable practice, and the degree to which these decisions could be justified or dismissed afterwards. Respondents described an interactive relationship between such reflections and future intentions and/or attitudes, illustrating a cycle of evolving and adapting behaviour complemented by fluctuating definitions of conscionability. In this way, rather than referring to an achieved or failed ‘ethical consumer’ status, the porn ethics project was conceptualised as an ongoing process of ‘conscionable’ negotiation. Such findings enhance our understanding of the ways in which ethics and porn use are woven together and navigated by feminist consumers of pornography, whilst simultaneously extending our knowledge of a demographic hitherto unexplored in both porn studies and consumer ethics research. Keywords: feminism, pornography, consumer ethics, conscionable consumption

    Ocularcentrism and Deepfakes: Should Seeing Be Believing?

    The pernicious effects of misinformation were starkly exposed on January 6, 2021, when a violent mob of protestors stormed the nation’s Capitol, fueled by false claims of election fraud. As policymakers wrestle with various proposals to curb misinformation online, this Article highlights one of the root causes of our vulnerability to misinformation: the epistemological prioritization of sight above all other senses (“ocularcentrism”). The increasing ubiquity of so-called “deepfakes” (hyperrealistic, digitally altered videos of events that never occurred) has further exposed the vulnerabilities of an ocularcentric society, in which technology-mediated sight is synonymous with knowledge. This Article traces the evolution of visual manipulation technologies that have exploited ocularcentrism and evaluates different means of addressing the issues raised by deepfakes, including the use of copyright law.

    Transdisciplinary AI Observatory -- Retrospective Analyses and Future-Oriented Contradistinctions

    In recent years, AI safety has gained international recognition in light of heterogeneous safety-critical and ethical issues that risk overshadowing the broad beneficial impacts of AI. In this context, the implementation of AI observatory endeavors represents one key research direction. This paper motivates the need for an inherently transdisciplinary AI observatory approach integrating diverse retrospective and counterfactual views. We delineate aims and limitations while providing hands-on advice illustrated with concrete practical examples. Distinguishing between unintentionally and intentionally triggered AI risks with diverse socio-psycho-technological impacts, we exemplify a retrospective descriptive analysis followed by a retrospective counterfactual risk analysis. Building on these AI observatory tools, we present near-term transdisciplinary guidelines for AI safety. As a further contribution, we discuss differentiated and tailored long-term directions through the lens of two disparate modern AI safety paradigms, which for simplicity we refer to as artificial stupidity (AS) and eternal creativity (EC). While both AS and EC acknowledge the need for a hybrid cognitive-affective approach to AI safety and overlap with regard to many short-term considerations, they differ fundamentally in the nature of multiple envisaged long-term solution patterns. By compiling relevant underlying contradistinctions, we aim to provide future-oriented incentives for constructive dialectics in practical and theoretical AI safety research.

    Testing Human Ability To Detect Deepfake Images of Human Faces

    Deepfakes are computationally created entities that falsely represent reality. They can take image, video, and audio modalities and pose a threat to many areas of systems and societies, making them a topic of interest across cybersecurity and cybersafety. In 2020, a workshop consulting AI experts from academia, policing, government, the private sector, and state security agencies ranked deepfakes as the most serious AI threat. These experts noted that, since fake material can propagate through many uncontrolled routes, changes in citizen behaviour may be the only effective defence. This study assesses human ability to distinguish deepfake images of human faces (StyleGAN2:FFHQ) from non-deepfake images (FFHQ), and evaluates the effectiveness of simple interventions intended to improve detection accuracy. Using an online survey, 280 participants were randomly allocated to one of four groups: a control group and three assistance interventions. Each participant was shown a sequence of 20 images randomly selected from a pool of 50 deepfake and 50 real images of human faces. Participants were asked whether each image was AI-generated, to report their confidence, and to describe the reasoning behind each response. Overall detection accuracy was only just above chance, and none of the interventions significantly improved it. Participants' confidence in their answers was high and unrelated to accuracy. Assessing the results on a per-image basis reveals that participants consistently found certain images harder to label correctly, but reported similarly high confidence regardless of the image. Thus, although participant accuracy was 62% overall, accuracy across individual images ranged quite evenly between 85% and 30%, falling below 50% for one in every five images. We interpret these findings as an urgent call to action to address this threat.
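
    A hedged illustration of the kind of per-image analysis described above (not the authors' code): given one record per survey response, overall accuracy, per-image accuracy, and the relation between confidence and correctness can be computed as sketched below. The record layout and values are assumptions made for the example.

        from collections import defaultdict
        from statistics import mean

        # Hypothetical response records:
        # (participant, image, image_is_deepfake, answered_deepfake, confidence 1-5)
        responses = [
            ("p001", "img_03", True,  True,  5),
            ("p001", "img_17", False, True,  4),
            ("p002", "img_03", True,  False, 5),
            ("p002", "img_17", False, False, 3),
        ]

        # Overall detection accuracy across all responses.
        overall = mean(truth == answer for _, _, truth, answer, _ in responses)
        print(f"overall accuracy: {overall:.2f}")

        # Per-image accuracy: the study found certain images were consistently mislabelled.
        per_image = defaultdict(list)
        for _, image, truth, answer, _ in responses:
            per_image[image].append(truth == answer)
        for image, outcomes in sorted(per_image.items()):
            print(f"{image}: accuracy={mean(outcomes):.2f} (n={len(outcomes)})")

        # Confidence vs. correctness: confidence that is unrelated to accuracy shows up
        # as similar mean confidence in the two groups below.
        conf_when_correct = [c for _, _, t, a, c in responses if t == a]
        conf_when_wrong = [c for _, _, t, a, c in responses if t != a]
        print("mean confidence when correct:", mean(conf_when_correct))
        print("mean confidence when wrong:  ", mean(conf_when_wrong))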

    Automated editorial control: Responsibility for news personalisation under European media law

    News personalisation allows social and traditional media to show each individual different information that is ‘relevant’ to them. The technology plays an important role in the digital media environment, as it navigates individuals through the vast amounts of content available online. However, determining what news an individual should see involves nuanced editorial judgment. The public and legal debate have highlighted the dangers, ranging from filter bubbles to polarisation, that could result from ignoring the need for such editorial judgment. This dissertation analyses how editorial responsibility should be safeguarded in the context of news personalisation. It argues that a key challenge to the responsible implementation of news personalisation lies in the way it changes the exercise of editorial control. Rather than an editor deciding what news is on the front page, personalisation algorithms’ recommendations are influenced by software engineers, news recipients, business departments, product managers, and/or editors and journalists. The dissertation uses legal and empirical research to analyse the roles and responsibilities of three central actors: traditional media, platforms, and news users. It concludes that law can play an important role by enabling stakeholders to control personalisation in line with editorial values. It can do so, for example, by ensuring the availability of metrics that allow editors to evaluate personalisation algorithms, or by enabling individuals to understand and influence how personalisation shapes their news diet. At the same time, law must ensure an appropriate allocation of responsibility in the face of fragmenting editorial control, including by moving towards cooperative responsibility for platforms and ensuring editors can control the design of personalisation algorithms.
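
    The abstract mentions metrics that would let editors evaluate personalisation algorithms against editorial values. As one illustrative example (an assumption, not drawn from the dissertation), the sketch below scores the topic diversity of a reader's personalised feed with Shannon entropy; the topic labels and feeds are invented. An editor could track such a score across readers or over time to spot feeds narrowing towards a filter bubble.

        import math
        from collections import Counter

        def topic_entropy(recommended_topics):
            """Shannon entropy (bits) of the topic mix in one reader's recommended articles."""
            counts = Counter(recommended_topics)
            total = sum(counts.values())
            return -sum((n / total) * math.log2(n / total) for n in counts.values())

        # Hypothetical feeds: a narrow, sports-heavy slate vs. a broader editorial mix.
        narrow_feed = ["sports", "sports", "sports", "sports", "politics"]
        broad_feed = ["politics", "economy", "sports", "culture", "science"]

        print(f"narrow feed diversity: {topic_entropy(narrow_feed):.2f} bits")  # ~0.72
        print(f"broad feed diversity:  {topic_entropy(broad_feed):.2f} bits")   # ~2.32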

    Deepfake: Definitions, Performance Metrics and Standards, Datasets and Benchmarks, and a Meta-Review

    Recent advancements in AI, especially deep learning, have contributed to a significant increase in the creation of realistic-looking synthetic media (video, image, and audio) and in the manipulation of existing media, which has led to the creation of the new term "deepfake". Based on research literature and resources in both English and Chinese, this paper gives a comprehensive overview of deepfakes, covering multiple important aspects of this emerging concept, including 1) different definitions, 2) commonly used performance metrics and standards, and 3) deepfake-related datasets, challenges, competitions and benchmarks. In addition, the paper reports a meta-review of 12 selected deepfake-related survey papers published in 2020 and 2021, focusing not only on the aspects above but also on the analysis of key challenges and recommendations. We believe that this paper is the most comprehensive review of deepfakes in terms of the aspects covered, and the first to cover both English and Chinese literature and sources.
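
    Among commonly used performance metrics for deepfake detection, equal error rate (EER) is widely reported alongside AUC. As a hedged, self-contained example (toy data, not code or results from the paper), an approximate EER can be computed from detector scores as follows.

        import numpy as np

        def equal_error_rate(scores, labels):
            """Approximate EER: sweep thresholds and return the point where FPR and FNR meet.

            scores: detector outputs, higher means more likely fake; labels: 1 = fake, 0 = real.
            """
            best_gap, eer = float("inf"), 1.0
            for t in np.unique(scores):
                preds = scores >= t
                fpr = np.mean(preds[labels == 0])   # real items wrongly flagged as fake
                fnr = np.mean(~preds[labels == 1])  # fake items the detector missed
                if abs(fpr - fnr) < best_gap:
                    best_gap, eer = abs(fpr - fnr), (fpr + fnr) / 2
            return float(eer)

        # Toy detector scores for four real (0) and four fake (1) samples.
        scores = np.array([0.10, 0.40, 0.35, 0.20, 0.80, 0.30, 0.90, 0.45])
        labels = np.array([0, 0, 0, 0, 1, 1, 1, 1])
        print(f"EER: {equal_error_rate(scores, labels):.2f}")  # 0.25 on this toy data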