18 research outputs found

    Nuevos enfoques de la investigación sobre credibilidad en redes y medios sociales [New approaches to research on credibility in social networks and social media]

    Since the creation of the Web Credibility Research project at Stanford University in California in 1998, numerous studies have examined credibility in everything related to cyberspace. The internet has changed greatly in twenty years, from the one-directional Web 1.0 to today's social internet, characterized as a medium of horizontal communication rather than mere information thanks to the platforms and mobile applications that have appeared in the last decade (Facebook, YouTube, Twitter and WhatsApp, among others). The aim of this article is to review research on the evaluation of credibility on the internet over roughly the last ten years, since the web became fully mobile and the smartphone began displacing the computer as the main device for online communication. In this scenario, credibility research faces new challenges and approaches, but it also has better analytics tools, advanced data-processing software, new algorithm-based techniques and other resources. Topics now under study include the influence of social interaction on the credibility of users of Twitter and other social networks; the importance of technological competence, or digital literacy, for making sound judgments of online credibility; and credibility in the collaborative internet of tourist accommodation services, transport and self-help forums.

    Veracity Roadmap: Is Big Data Objective, Truthful and Credible?

    This paper argues that big data can possess different characteristics, which affect its quality. Depending on its origin, the data-processing technologies, and the methodologies used for data collection and scientific discovery, big data can have biases, ambiguities, and inaccuracies which need to be identified and accounted for to reduce inference errors and improve the accuracy of generated insights. Big data veracity is now being recognized as a necessary property for its utilization, complementing the three previously established quality dimensions (volume, variety, and velocity), but there has been little discussion of the concept of veracity thus far. This paper provides a roadmap for theoretical and empirical definitions of veracity along with its practical implications. We explore veracity across three main dimensions: 1) objectivity/subjectivity, 2) truthfulness/deception, and 3) credibility/implausibility, and propose to operationalize each of these dimensions with either existing or potential computational tools, particularly those relevant to textual data analytics. We combine the measures of the veracity dimensions into one composite index: the big data veracity index. This newly developed veracity index provides a useful way of assessing systematic variations in big data quality across datasets with textual information. The paper contributes to big data research by categorizing the range of existing tools for measuring the suggested dimensions, and to Library and Information Science (LIS) by proposing to account for the heterogeneity of diverse big data and to identify the information quality dimensions important for each big data type.
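    The composite index described above lends itself to a small illustration. The following is a minimal sketch, assuming each of the three veracity dimensions has already been scored on a [0, 1] scale and that the composite is a plain weighted average; the function name, scale, and equal weights are illustrative assumptions, not the paper's specification.

        # Hypothetical sketch of a composite veracity index. The three
        # dimension names follow the abstract; the [0, 1] scale, equal
        # weights, and weighted-average aggregation are assumptions.
        def veracity_index(objectivity, truthfulness, credibility,
                           weights=(1/3, 1/3, 1/3)):
            scores = (objectivity, truthfulness, credibility)
            if not all(0.0 <= s <= 1.0 for s in scores):
                raise ValueError("dimension scores must lie in [0, 1]")
            return sum(w * s for w, s in zip(weights, scores))

        # A dataset scored as objective (0.9), middling on truthfulness
        # (0.5), and credible (0.8) yields an index of about 0.73.
        print(round(veracity_index(0.9, 0.5, 0.8), 2))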

    Promises and lies: Can observers detect deception in written messages

    Abstract: We design a laboratory experiment to examine predictions of trustworthiness in a novel three-person trust game. We investigate whether and why observers of the game can predict the trustworthiness of hand-written communications. Observers report their perception of the trustworthiness of messages and make predictions about the senders’ behavior. Using observers’ decisions, we are able to classify messages as “promises” or “empty talk.” Drawing on substantial previous research, we hypothesize that certain factors influence whether a sender is likely to honor a message and/or whether an observer perceives the message as likely to be honored: the mention of money, the use of encompassing words, and message length. We find that observers have more trust in longer messages and in “promises”; that promises mentioning money are significantly more likely to be broken; and that observers trust promises that do and do not mention money equally. Overall, observers perform slightly better than chance at predicting whether a message will be honored. We attribute this result to observers’ ability to distinguish promises from empty talk, and to trust promises more than empty talk. However, within each of these two categories, observers are unable to distinguish messages that senders will honor from those that they will not.
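    As a loose illustration of the message cues the authors hypothesize about, the sketch below extracts the three factors named in the abstract from a message's text. The money pattern and the "encompassing" word list are invented for illustration and are not the paper's actual coding scheme.

        import re

        # Illustrative only: the money pattern and the encompassing-word
        # list are assumptions, not the paper's coding of messages.
        ENCOMPASSING = {"always", "never", "everything", "all", "nothing"}
        MONEY = re.compile(r"\b(money|dollar|pay|payment)\b", re.IGNORECASE)

        def message_features(text):
            words = [w.strip(".,!?").lower() for w in text.split()]
            return {
                "mentions_money": bool(MONEY.search(text)),
                "encompassing_words": sum(w in ENCOMPASSING for w in words),
                "length_words": len(words),
            }

        # {'mentions_money': True, 'encompassing_words': 1, 'length_words': 9}
        print(message_features("I will always send the money back, I promise."))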

    Deception Detection and Rumor Debunking for Social Media

    Abstract: The main premise of this chapter is that the time is ripe for more extensive research and development of social media tools that filter out intentionally deceptive information such as deceptive memes, rumors and hoaxes, fake news, or other fake posts, tweets, and fraudulent profiles. Social media users’ awareness of the intentional manipulation of online content appears to be relatively low, while reliance on unverified information (often obtained from strangers) is at an all-time high. I argue that there is a need for content verification, systematic fact-checking, and filtering of social media streams. This literature survey provides a background for understanding current automated deception detection research, rumor debunking, and broader content verification methodologies; suggests a path towards hybrid technologies; and explains why the development and adoption of such tools might still be a significant challenge.

    Presage criteria for blog credibility assessment using Rasch analysis / Sharifah Aliman, Saadiah Yahya and Syed Ahmad Aljunid

    Credibility is a gateway to trust modelling, and its impact on the blogosphere and the real world is substantial. The distinctive features of blogs compared with ordinary websites, the rise of information propagation through blogs, and the scarcity of blog credibility research have motivated this investigation of blog credibility assessment. This paper presents an exploratory study to identify the credibility factors that local blog users apply when assessing blogs. Our survey respondents come from a local university and consist of academic staff and students who are computer and internet literate. Data were analyzed using person-item distribution map (PIDM) Rasch analysis. The results indicate that 75% of the credibility criteria, validated using misfit items, are preferable for assessing blog credibility, but only 46%, validated using person gap differences, are acceptable. The analysis also shows that updated blog content, the reputation of blog authors, and the recognition and performance of the blog site are unsuitable criteria for assessing blog credibility. Nevertheless, most of our credibility criteria are fit to formulate our new blog credibility model. Our study also suggests that Rasch analysis is a worthwhile way to validate criteria, whereas most previous research has relied on descriptive statistics and factor-analytic approaches.
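    For context, person-item distribution maps of the kind used above rest on the Rasch measurement model. The dichotomous form below is the standard textbook formulation, not something quoted from the abstract (the study's survey items may well have been polytomous, but the core idea is the same): the probability that person n endorses item i depends only on the difference between person ability and item difficulty.

        % Dichotomous Rasch model (standard formulation; an assumption
        % for illustration): person ability \beta_n, item difficulty \delta_i.
        P(X_{ni} = 1 \mid \beta_n, \delta_i)
          = \frac{e^{\beta_n - \delta_i}}{1 + e^{\beta_n - \delta_i}}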

    Web Credibility: Features Exploration and Credibility Prediction

    Data Stream Processing (DSP) applications are often modelled as a directed acyclic graph: operators with data streams among them. Inter-operator communication can have a significant impact on the latency of DSP applications, accounting for 86% of the total latency. Despite this impact, there has been relatively little work on optimizing inter-operator communication; existing efforts focus on reducing inter-node traffic without considering inter-process communication (IPC) inside a node, which often incurs high latency due to multiple memory-copy operations. This paper describes the design and implementation of TurboStream, a new DSP system designed specifically to address the high latency caused by inter-operator communication. To achieve this goal, we introduce (1) an improved IPC framework with OSRBuffer, a DSP-oriented buffer that reduces memory-copy operations and the per-message waiting time when transmitting messages between operators inside one node, and (2) a coarse-grained scheduler that consolidates operator instances and assigns them to nodes to reduce inter-node IPC traffic. Using a prototype implementation, we show that the improved IPC framework reduces the end-to-end latency of intra-node IPC by 45.64% to 99.30%, and that TurboStream reduces DSP latency by 83.23% compared to JStorm.
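    The intra-node IPC bottleneck described above is easy to illustrate in miniature. The sketch below shows the general idea that OSRBuffer targets, namely passing a message between two operator processes on one node through a shared-memory region rather than through copy-heavy pipes or sockets; it is a concept sketch in Python, not TurboStream's actual implementation.

        from multiprocessing import Process, shared_memory

        # Concept sketch: two "operators" on one node exchange a payload
        # through a single shared-memory region, avoiding the extra
        # serialize-and-copy steps of pipe- or socket-based IPC.
        def producer(name):
            shm = shared_memory.SharedMemory(name=name)
            shm.buf[:5] = b"tuple"     # write the payload in place
            shm.close()

        def consumer(name):
            shm = shared_memory.SharedMemory(name=name)
            print(bytes(shm.buf[:5]))  # read it back without an extra copy
            shm.close()

        if __name__ == "__main__":
            shm = shared_memory.SharedMemory(create=True, size=64)
            for target in (producer, consumer):
                p = Process(target=target, args=(shm.name,))
                p.start()
                p.join()
            shm.close()
            shm.unlink()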

    Information Quality and Trustworthiness: A Topical State-of-the-Art Review

    The importance and value of information cannot be disputed. It is used as the basis for menial and mission-critical tasks alike. In a society where information is so easily publicised and freely accessible, however, being able to assess information quality and trustworthiness is paramount. With appreciation of this fact, our paper seeks to navigate these two mature fields and define the latest state of the art. The novelty of this work lies in the provision of an up-to-date review, a research survey that considers and links provenance, quality, and trustworthiness, and a literature analysis that includes a first look at some of these aspects within the social media domain. This factor-based review should provide an ideal grounding for future research that assesses the interaction between these three topics, which may then also progress to associations with information assurance and security at large. To demonstrate how some of the factors might be considered, we also examine their application to a commonplace scenario.