6 research outputs found

    Assuming Data Integrity and Empirical Evidence to The Contrary

    Background: Not all survey respondents apply their minds to, or understand, the posed questions; such respondents provide answers that lack coherence, which threatens the integrity of the research. Casual inspection and limited research on the 10-item Big Five Inventory (BFI-10), included in the dataset of the World Values Survey (WVS), suggested that random responses may be common. Objective: To specify the percentage of cases in the BFI-10 which include incoherent or contradictory responses, and to test the extent to which removing these cases would improve the quality of the dataset. Method: The WVS data on the BFI-10, measuring the Big Five Personality (B5P) in South Africa (N=3 531), was used. Incoherent or contradictory responses were removed, and the cases in the cleaned-up dataset were then analysed for their theoretical validity. Results: Only 1 612 cases (45.7%) were identified as free of incoherent or contradictory responses. The cleaned-up data did not mirror the B5P structure, as was envisaged. The test for common method bias was negative. Conclusion: In most cases the responses were incoherent. Cleaning up the data did not improve the psychometric properties of the BFI-10. This raises concerns about the quality of the WVS data, the BFI-10, and the universality of B5P theory. Given these results, it would be unwise to use the BFI-10 in South Africa. Researchers are alerted to properly assess the psychometric properties of instruments before using them, particularly in a cross-cultural setting.
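
    As an illustration of the screening step described above, the sketch below flags cases whose answers to a reverse-keyed item and its counterpart point in opposite directions. It is a minimal sketch only: the 1-5 response scale, the column names q1-q10, the reverse-keyed pairings, and the gap threshold are all assumptions, not the rule actually applied in the study.

    import pandas as pd

    # Hypothetical item columns q1..q10 on a 1-5 scale; each trait has one
    # regular and one reverse-keyed item. The pairing below follows the
    # commonly cited BFI-10 key and is an assumption, not the WVS coding.
    TRAIT_PAIRS = {
        "extraversion":      ("q1", "q6"),   # q1 reverse-keyed
        "agreeableness":     ("q7", "q2"),   # q7 reverse-keyed
        "conscientiousness": ("q3", "q8"),   # q3 reverse-keyed
        "neuroticism":       ("q4", "q9"),   # q4 reverse-keyed
        "openness":          ("q5", "q10"),  # q5 reverse-keyed
    }

    def is_contradictory(row, max_scale=5, gap=3):
        """Flag a case when any reverse-keyed item, once recoded, differs from
        its counterpart by at least `gap` scale points."""
        for reversed_item, item in TRAIT_PAIRS.values():
            recoded = max_scale + 1 - row[reversed_item]  # reverse-key the item
            if abs(recoded - row[item]) >= gap:           # answers contradict each other
                return True
        return False

    # Usage: keep only cases without contradictory answers.
    # df = pd.read_csv("wvs_bfi10_za.csv")                # hypothetical file name
    # cleaned = df[~df.apply(is_contradictory, axis=1)]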

    Leading Towards Voice and Innovation: The Role of Psychological Contract

    Background: Empirical evidence generally suggests that psychological contract breach (PCB) leads to negative outcomes. However, some literature argues that, occasionally, PCB leads to positive outcomes. Aim: To empirically determine when these positive outcomes occur, focusing on the role of the psychological contract (PC) and leadership style (LS), and on outcomes such as employee voice (EV) and innovative work behaviour (IWB). Method: A cross-sectional survey design was adopted, using reputable questionnaires on PC, PCB, EV, IWB, and leadership styles. Correlation analyses were used to test direct links within the model, while regression analyses were used to test for moderation effects. Results: Data with acceptable psychometric properties were collected from 11 organisations (N=620). The results revealed that PCB does not lead to substantial changes in IWB. PCB correlated positively with prohibitive EV, but did not influence promotive EV, which was a significant driver of IWB. Leadership styles were weak predictors of EV and IWB, and LS only partially moderated the PCB-EV relationship. Conclusion: PCB did not lead to positive outcomes, nor did LS influence the relationships between PCB and EV or IWB. Further, LS only partially influenced the relationships between the variables, and not in a manner that positively influences IWB.
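
    A moderation test of the kind reported above can be sketched as follows. The variable names (pcb, ls, ev_prohibitive) and the file name are placeholders, and the model is only one possible specification of a moderated regression, not the authors' exact analysis.

    import pandas as pd
    import statsmodels.formula.api as smf

    def moderation_model(df):
        """Test whether leadership style (ls) moderates the effect of
        psychological contract breach (pcb) on prohibitive voice."""
        # Mean-centre the predictors so the interaction term is interpretable.
        for col in ("pcb", "ls"):
            df[col + "_c"] = df[col] - df[col].mean()
        # A significant pcb_c:ls_c coefficient would indicate moderation.
        return smf.ols("ev_prohibitive ~ pcb_c * ls_c", data=df).fit()

    # Usage with one row of scale scores per respondent:
    # df = pd.read_csv("survey_scores.csv")   # placeholder file name
    # print(moderation_model(df).summary())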

    In Crowd Veritas: Leveraging Human Intelligence To Fight Misinformation

    The spread of online misinformation has important effects on the stability of democracy. The sheer size of digital content on the web and social media, and the ability to immediately access and share it, has made it difficult to perform timely fact-checking at scale. Truthfulness judgments are usually made by experts, such as journalists for political statements. A different approach is to rely on a (non-expert) crowd of human judges to perform fact-checking. This leads to the following research question: can such human judges detect and objectively categorize online (mis)information? Several extensive crowdsourcing studies are performed to answer it. Thousands of truthfulness judgments over two datasets are collected by recruiting a crowd of workers from crowdsourcing platforms, and the expert judgments are compared with the crowd ones. The results show that the workers are indeed able to do so. There is limited understanding of the factors that influence worker participation in longitudinal studies across different crowdsourcing marketplaces. A large-scale survey aimed at understanding how these studies are performed using crowdsourcing is run across multiple platforms. The answers collected are analyzed from both a quantitative and a qualitative point of view. A list of recommendations for task requesters to conduct these studies effectively is provided, together with a list of best practices for crowdsourcing platforms. Truthfulness is a subtle matter: statements can be merely biased, imprecise, or wrong, and a unidimensional truth scale cannot account for such differences. The crowd workers are therefore asked to judge seven different dimensions of truthfulness selected based on existing literature. The newly collected crowdsourced judgments show that the workers are indeed reliable when compared to an expert-provided gold standard. Cognitive biases are human processes that often help minimize the cost of making mistakes but can keep assessors from judging information objectively. A review of the cognitive biases which might manifest during the fact-checking process is presented, together with a list of countermeasures that can be adopted. An exploratory study on the previously collected dataset is then performed. The findings are used to formulate hypotheses concerning which individual characteristics of statements or judges, and which cognitive biases, may affect crowd workers' truthfulness judgments. The findings suggest that crowd workers' degree of belief in science has an impact, that they generally overestimate truthfulness, and that their judgments are indeed affected by various cognitive biases. Automated fact-checking systems to combat the spread of misinformation exist; however, their complexity usually makes them opaque to the end user, which makes it difficult to foster trust in the system. The E-BART model is introduced with the hope of making progress on this front. E-BART can provide a truthfulness prediction for a statement and jointly generate a human-readable explanation. An extensive human evaluation of the impact of the explanations generated by the model is conducted, showing that the explanations increase the human ability to spot misinformation. The whole set of data collected and analyzed in this thesis is publicly released to the research community at: https://doi.org/10.17605/OSF.IO/JR6VC.
    The spread of online misinformation has important effects on the stability of democracy. The information that is consumed every day influences human decision-making processes. The sheer size of digital content on the web and social media, and the ability to immediately access and share it, has made it difficult to perform timely fact-checking at scale. Indeed, fact-checking is a complex process that involves several activities. A long-term goal can be building a so-called human-in-the-loop system to cope with (mis)information by measuring truthfulness in real time (e.g., as items appear on social media, news outlets, and so on) using a combination of crowd-powered data, human intelligence, and machine learning techniques. In recent years, crowdsourcing has become a popular method for collecting reliable truthfulness judgments in order to scale up and help study the manual fact-checking effort. Initially, this thesis investigates whether human judges can detect and objectively categorize online (mis)information and which setting yields the best results. Then, the impact of cognitive biases on human assessors while judging information truthfulness is addressed. A categorization of cognitive biases is proposed, together with countermeasures to combat their effects and a bias-aware judgment pipeline for fact-checking. Lastly, an approach that can predict information truthfulness and, at the same time, generate a natural language explanation supporting the prediction itself is proposed. The machine-generated explanations are evaluated to understand whether they help human assessors to better judge the truthfulness of information items. A collaborative process between systems, crowd workers, and expert fact-checkers would provide a scalable and decentralized hybrid mechanism to cope with the increasing volume of online misinformation.
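
    The comparison between crowd and expert judgments described above can be illustrated with a small sketch that aggregates worker scores per statement and correlates them with expert labels. The column names, the mean as the aggregation function, and Spearman's rho as the agreement measure are illustrative assumptions, not the exact analyses reported in the thesis.

    import pandas as pd
    from scipy.stats import spearmanr

    def crowd_vs_expert_agreement(judgments, experts):
        """Correlate aggregated crowd truthfulness scores with expert labels.

        judgments: rows of (statement_id, worker_id, score)
        experts:   rows of (statement_id, expert_score)
        """
        # Aggregate each statement's crowd judgments with the mean score.
        crowd = judgments.groupby("statement_id")["score"].mean().rename("crowd_score")
        merged = experts.set_index("statement_id").join(crowd, how="inner")
        # Rank correlation between the aggregated crowd and the expert judgments.
        rho, p_value = spearmanr(merged["crowd_score"], merged["expert_score"])
        return rho, p_value

    # Usage:
    # judgments = pd.read_csv("crowd_judgments.csv")   # placeholder file names
    # experts = pd.read_csv("expert_labels.csv")
    # print(crowd_vs_expert_agreement(judgments, experts))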

    Task Allocation in Foraging Robot Swarms: The Role of Information Sharing

    Autonomous task allocation is a desirable feature of robot swarms that collect and deliver items in scenarios where congestion, caused by accumulated items or robots, can temporarily interfere with swarm behaviour. In such settings, self-regulation of the workforce can prevent unnecessary energy consumption. We explore two types of self-regulation: non-social, where robots become idle upon experiencing congestion, and social, where robots broadcast information about congestion to their teammates in order to socially inhibit foraging. We show that while both types of self-regulation can lead to improved energy efficiency and increase the amount of resource collected, the speed with which information about congestion flows through a swarm affects the scalability of these algorithms.
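
    A minimal sketch of the two self-regulation rules described above is given below. The class structure, parameter names, and the fixed idle period are illustrative assumptions; the published algorithms and their parameters may differ.

    class ForagerRobot:
        """Illustrative forager with non-social and social self-regulation."""

        def __init__(self, robot_id, social=True, idle_steps=50):
            self.robot_id = robot_id
            self.social = social          # True: broadcast congestion to teammates
            self.idle_steps = idle_steps  # assumed fixed inhibition period
            self.idle_timer = 0

        def step(self, congestion_detected, swarm):
            if self.idle_timer > 0:       # currently inhibited: skip foraging
                self.idle_timer -= 1
                return "idle"
            if congestion_detected:
                self.idle_timer = self.idle_steps            # non-social rule: become idle
                if self.social:
                    swarm.broadcast_congestion(sender=self)  # social rule: inhibit teammates
                return "idle"
            return "forage"

        def receive_congestion_signal(self):
            # Teammates that hear the broadcast also pause foraging for a while.
            self.idle_timer = self.idle_steps

    class Swarm:
        def __init__(self, n_robots, social=True):
            self.robots = [ForagerRobot(i, social=social) for i in range(n_robots)]

        def broadcast_congestion(self, sender):
            for robot in self.robots:
                if robot is not sender:
                    robot.receive_congestion_signal()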