
    Perceived Reality of Images of Women in Magazines

    Researchers have posited for decades that media, and magazines in particular, have a negative impact on women's body satisfaction. However, that relationship is a complex one. Several theories, including sociocultural and social comparison theories, have been used to better understand the relationship. Furthermore, with advances in technology, magazine images of women are now fictional portrayals rather than reflections of reality. This project addresses the extent to which women perceive magazine images to be real and whether there is a relationship between that perception of reality and body satisfaction. Perceived reality as it relates to magazine images has been relatively neglected by researchers, so the present study investigated the issue in an exploratory manner. To that end, a new scale was created to measure the perceived reality of magazine images of women. It was implemented in an online survey of women ages 18 to 34, with data gathered from a convenience sample. Three lines of analysis were then conducted. First, after identifying the most effective items, the reliability, face validity, and convergent validity of the perceived reality of magazine images of women scale were largely established. Within the validity analysis, a confirmatory factor analysis of the scale illuminated three factors: real world, factuality, and emotional involvement. A summated version of the new scale was then created for further analysis. In the second line of analysis, inconsistent and weak relationships were found between perceived reality of magazine images of women and body satisfaction; this result is discussed in light of sociocultural and social comparison theories. In the final line of analysis, consistently moderate relationships were found between the new scale and awareness of digital alteration. The implications of this finding for understanding what constitutes reality judgments are discussed.
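    As a worked illustration of the scale construction described above, the sketch below computes a summated scale score and its internal consistency (Cronbach's alpha) from item responses. This is a minimal sketch assuming Likert-style data; the response matrix is invented and does not represent the study's actual instrument or sample.

```python
import numpy as np

def cronbach_alpha(items: np.ndarray) -> float:
    """Cronbach's alpha for an (n_respondents, n_items) response matrix."""
    k = items.shape[1]
    item_vars = items.var(axis=0, ddof=1)      # variance of each item
    total_var = items.sum(axis=1).var(ddof=1)  # variance of summated scores
    return (k / (k - 1)) * (1 - item_vars.sum() / total_var)

# Hypothetical 5-point Likert responses: 6 respondents x 4 scale items.
responses = np.array([
    [4, 5, 4, 5],
    [2, 2, 3, 2],
    [5, 4, 5, 5],
    [3, 3, 2, 3],
    [1, 2, 1, 2],
    [4, 4, 5, 4],
])

summated = responses.sum(axis=1)  # summated perceived-reality score per respondent
print("summated scores:", summated)
print("Cronbach's alpha: %.2f" % cronbach_alpha(responses))
```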

    Veracity Roadmap: Is Big Data Objective, Truthful and Credible?

    This paper argues that big data can possess different characteristics, which affect its quality. Depending on its origin, the data processing technologies, and the methodologies used for data collection and scientific discovery, big data can have biases, ambiguities, and inaccuracies which need to be identified and accounted for to reduce inference errors and improve the accuracy of generated insights. Big data veracity is now being recognized as a necessary property for its utilization, complementing the three previously established quality dimensions (volume, variety, and velocity), but there has been little discussion of the concept of veracity thus far. This paper provides a roadmap for theoretical and empirical definitions of veracity along with its practical implications. We explore veracity across three main dimensions: 1) objectivity/subjectivity, 2) truthfulness/deception, and 3) credibility/implausibility, and propose to operationalize each of these dimensions with either existing computational tools or potential ones, relevant particularly to textual data analytics. We combine the measures of the veracity dimensions into one composite index: the big data veracity index. This newly developed veracity index provides a useful way of assessing systematic variations in big data quality across datasets with textual information. The paper contributes to big data research by categorizing the range of existing tools to measure the suggested dimensions, and to Library and Information Science (LIS) by proposing to account for the heterogeneity of diverse big data and to identify the information quality dimensions important for each big data type.
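    To make the composite-index idea concrete, here is a minimal sketch of combining the three veracity dimensions into a single score. The dimension scores, the [0, 1] scaling, and the equal weights are assumptions for illustration; the paper's actual operationalization would come from the computational text-analytics tools it surveys.

```python
from dataclasses import dataclass

@dataclass
class VeracityScores:
    """Per-dataset scores on [0, 1], where 1 = fully objective/truthful/credible.
    How each dimension is measured (e.g., via text analytics) is assumed here."""
    objectivity: float   # objectivity vs. subjectivity
    truthfulness: float  # truthfulness vs. deception
    credibility: float   # credibility vs. implausibility

def veracity_index(s: VeracityScores, weights=(1/3, 1/3, 1/3)) -> float:
    """Composite big data veracity index as a weighted mean of the dimensions."""
    dims = (s.objectivity, s.truthfulness, s.credibility)
    return sum(w * d for w, d in zip(weights, dims))

# Hypothetical comparison of two textual datasets.
news_corpus = VeracityScores(objectivity=0.7, truthfulness=0.8, credibility=0.9)
social_feed = VeracityScores(objectivity=0.4, truthfulness=0.5, credibility=0.6)

print("news corpus index: %.2f" % veracity_index(news_corpus))
print("social feed index: %.2f" % veracity_index(social_feed))
```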

    Overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims

    We present an overview of the CLEF-2018 CheckThat! Lab on Automatic Identification and Verification of Political Claims. In its first year, the lab featured two tasks. Task 1 asked participants to predict which (potential) claims in a political debate should be prioritized for fact-checking; in particular, given a debate or a political speech, the goal was to produce a ranked list of its sentences based on their worthiness for fact-checking. Task 2 asked participants to assess whether a given check-worthy claim made by a politician in the context of a debate or speech is factually true, half-true, or false. We offered both tasks in English and in Arabic. In terms of data, for both tasks we focused on debates from the 2016 US Presidential Campaign, as well as on some speeches during and after the campaign (we also provided translations in Arabic), and we relied on comments and factuality judgments from factcheck.org and snopes.com, which we further refined manually. A total of 30 teams registered to participate in the lab, and 9 of them actually submitted runs. The evaluation results show that the most successful approaches used various neural networks (especially for Task 1) and evidence retrieval from the Web (especially for Task 2). We release all datasets, the evaluation scripts, and the submissions by the participants, which should enable further research in both check-worthiness estimation and automatic claim verification.
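    To illustrate what a Task 1 pipeline looks like, the sketch below ranks a debate's sentences by a toy check-worthiness score and evaluates the ranking with average precision, a common metric for such ranked lists. The scoring heuristic and example sentences are stand-ins of my own; participating systems used far stronger models (e.g., neural networks), and the lab's official metrics may differ in detail.

```python
def checkworthiness_score(sentence: str) -> float:
    """Toy heuristic: sentences with numbers or factual cue words rank higher.
    A stand-in for a trained model, not any participant's actual system."""
    cues = ["million", "percent", "voted", "signed", "increased"]
    score = float(sum(sentence.lower().count(c) for c in cues))
    score += sum(ch.isdigit() for ch in sentence) * 0.5
    return score

def average_precision(ranked_labels: list) -> float:
    """Average precision over a ranked list of binary check-worthiness labels."""
    hits, precisions = 0, []
    for i, label in enumerate(ranked_labels, start=1):
        if label:
            hits += 1
            precisions.append(hits / i)
    return sum(precisions) / max(hits, 1)

# Hypothetical debate sentences with gold labels (1 = check-worthy).
sentences = [
    ("Unemployment increased by 3 percent last year.", 1),
    ("Thank you all for being here tonight.", 0),
    ("My opponent voted against the bill twice.", 1),
    ("We believe in a brighter future.", 0),
]

ranked = sorted(sentences, key=lambda s: checkworthiness_score(s[0]), reverse=True)
for text, label in ranked:
    print(f"{checkworthiness_score(text):4.1f}  {text}")
print("AP: %.2f" % average_precision([label for _, label in ranked]))
```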

    Global-Liar: Factuality of LLMs over Time and Geographic Regions

    The increasing reliance on AI-driven solutions, particularly Large Language Models (LLMs) like the GPT series, for information retrieval highlights the critical need for their factuality and fairness, especially amidst the rampant spread of misinformation and disinformation online. Our study evaluates the factual accuracy, stability, and biases of widely adopted GPT models, including GPT-3.5 and GPT-4, contributing to the reliability and integrity of AI-mediated information dissemination. We introduce 'Global-Liar,' a dataset uniquely balanced in terms of geographic and temporal representation, facilitating a more nuanced evaluation of LLM biases. Our analysis reveals that newer iterations of GPT models do not always equate to improved performance. Notably, the GPT-4 version from March demonstrates higher factual accuracy than its subsequent June release. Furthermore, a concerning bias is observed, privileging statements from the Global North over the Global South, thus potentially exacerbating existing informational inequities. Regions such as Africa and the Middle East are at a disadvantage, with much lower factual accuracy. The performance fluctuations over time suggest that model updates may not benefit all regions equally. Our study also offers insights into the impact of various LLM configuration settings, such as binary decision forcing, model re-runs, and temperature, on model factuality. Models constrained to binary (true/false) choices exhibit reduced factuality compared to those allowing an 'unclear' option. A single inference at a low temperature setting matches the reliability of majority voting across various configurations. The insights gained highlight the need for culturally diverse and geographically inclusive model training and evaluation. This approach is key to achieving global equity in technology, distributing AI benefits fairly worldwide.
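    The configuration effects reported above (binary forcing, temperature, re-runs with majority voting) can be made concrete with a short sketch using the OpenAI Python SDK. The prompt wording, label set, vote count, and model name below are assumptions for illustration, not the paper's exact protocol.

```python
from collections import Counter
from openai import OpenAI

client = OpenAI()  # assumes OPENAI_API_KEY is set in the environment

def classify(statement: str, allow_unclear: bool, temperature: float) -> str:
    """Ask the model to label a statement; optionally permit an 'unclear' escape."""
    labels = "true, false, or unclear" if allow_unclear else "true or false"
    resp = client.chat.completions.create(
        model="gpt-4",  # hypothetical choice; the paper compares dated snapshots
        messages=[{
            "role": "user",
            "content": f"Answer with one word ({labels}): is this statement "
                       f"factually true?\n\n{statement}",
        }],
        temperature=temperature,
    )
    return resp.choices[0].message.content.strip().lower()

statement = "The Nile is the longest river in Africa."

# Single low-temperature inference (reported to match majority voting).
print("single run:", classify(statement, allow_unclear=True, temperature=0.0))

# Majority vote over re-runs at a higher temperature.
votes = Counter(classify(statement, allow_unclear=True, temperature=1.0)
                for _ in range(5))
print("majority vote:", votes.most_common(1)[0][0])
```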