
    Beyond the culture effect on credibility perception on microblogs

    We investigated the credibility perception of tweet readers from the USA and from eight Arab countries; our aim was to understand whether credibility perception was affected by country and/or by culture. Results from a crowd-sourcing experiment showed that a wide variety of factors affected credibility perception, including a tweet author's gender, profile image, username style, location, and social network overlap with the reader. We found that culture shapes readers' credibility perception, whereas country has no effect. We discuss the implications of our findings for user interface design and social media systems.

    Computational fact checking from knowledge networks

    Traditional fact checking by expert journalists cannot keep up with the enormous volume of information that is now generated online. Computational fact checking may significantly enhance our ability to evaluate the veracity of dubious information. Here we show that the complexities of human fact checking can be approximated quite well by finding the shortest path between concept nodes under properly defined semantic proximity metrics on knowledge graphs. Framed as a network problem, this approach is feasible with efficient computational techniques. We evaluate it by examining tens of thousands of claims related to history, entertainment, geography, and biographical information using a public knowledge graph extracted from Wikipedia. Statements independently known to be true consistently receive higher support via our method than do false ones. These findings represent a significant step toward scalable computational fact-checking methods that may one day mitigate the spread of harmful misinformation.
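    The shortest-path idea in this abstract can be sketched with a deliberately tiny example: a toy concept graph and a simple inverse-distance support score. The graph contents and the scoring function here are illustrative assumptions; the paper's actual semantic proximity metric over the Wikipedia-derived graph is more elaborate.

    ```python
    from collections import deque

    # Toy undirected knowledge graph: concept -> set of neighboring concepts.
    # These nodes and edges are made up for illustration.
    GRAPH = {
        "Barack Obama": {"Honolulu", "United States", "Politician"},
        "Honolulu": {"Barack Obama", "Hawaii"},
        "Hawaii": {"Honolulu", "United States"},
        "United States": {"Barack Obama", "Hawaii", "Canada"},
        "Canada": {"United States", "Ottawa"},
        "Ottawa": {"Canada"},
        "Politician": {"Barack Obama"},
    }

    def shortest_path_length(graph, source, target):
        """Breadth-first search for the shortest hop count between two concepts."""
        if source == target:
            return 0
        seen = {source}
        queue = deque([(source, 0)])
        while queue:
            node, dist = queue.popleft()
            for neighbour in graph.get(node, ()):
                if neighbour == target:
                    return dist + 1
                if neighbour not in seen:
                    seen.add(neighbour)
                    queue.append((neighbour, dist + 1))
        return None  # no path: the claim gets zero support

    def claim_support(graph, subject, obj):
        """Map path length to a support score in (0, 1]; shorter paths mean more support."""
        d = shortest_path_length(graph, subject, obj)
        return 0.0 if d is None else 1.0 / (1.0 + d)

    # A plausible claim ("Obama - born in - Honolulu") scores higher
    # than a distant pairing ("Obama - born in - Ottawa").
    print(claim_support(GRAPH, "Barack Obama", "Honolulu"))  # 0.5 (1 hop)
    print(claim_support(GRAPH, "Barack Obama", "Ottawa"))    # 0.25 (3 hops)
    ```

    The design point the sketch preserves is that truth assessment reduces to a graph-distance query, which standard search algorithms answer efficiently even on large graphs.
    
    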

    A study on text-score disagreement in online reviews

    In this paper, we focus on online reviews and employ artificial intelligence tools, taken from the cognitive computing field, to help understand the relationship between the textual part of a review and its assigned numerical score. We start from two intuitions: 1) a set of textual reviews expressing different sentiments may feature the same score (and vice versa); and 2) detecting and analyzing mismatches between review content and the actual score may benefit both service providers and consumers, by highlighting specific factors of satisfaction (and dissatisfaction) in the texts. To test these intuitions, we adopt sentiment analysis techniques and concentrate on hotel reviews to find polarity mismatches. In particular, we first train a text classifier on a set of annotated hotel reviews taken from the Booking website. We then analyze a large dataset of around 160k hotel reviews collected from TripAdvisor, with the aim of detecting polarity mismatches, i.e., whether the textual content of a review is in line with its associated score. Using well-established artificial intelligence techniques and analyzing in depth the reviews featuring a mismatch between text polarity and score, we find that, on a five-star scale, reviews with middle scores include a mixture of positive and negative aspects. The approach proposed here, besides acting as a polarity detector, provides an effective selection of reviews from an initially very large dataset, allowing both consumers and providers to focus directly on the subset of reviews featuring a text/score disagreement, which conveys a summary of the positive and negative features of the review target.
    Comment: This is the accepted version of the paper. The final version will be published in the Journal of Cognitive Computation, available at Springer via http://dx.doi.org/10.1007/s12559-017-9496-
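    The polarity-mismatch idea can be illustrated with a minimal sketch: a toy lexicon-based classifier standing in for the trained text classifier the paper uses, plus a simple star-to-polarity mapping. The word lists and the star thresholds are assumptions made for illustration, not the paper's model.

    ```python
    import re

    # Toy sentiment lexicon; these word lists are illustrative assumptions.
    POSITIVE = {"great", "clean", "friendly", "comfortable", "excellent"}
    NEGATIVE = {"dirty", "noisy", "rude", "broken", "terrible"}

    def text_polarity(review):
        """Classify review text as positive, negative, or mixed by lexicon counts."""
        words = re.findall(r"[a-z']+", review.lower())
        score = sum(w in POSITIVE for w in words) - sum(w in NEGATIVE for w in words)
        return "positive" if score > 0 else "negative" if score < 0 else "mixed"

    def score_polarity(stars):
        """On a five-star scale, treat 1-2 as negative, 3 as mixed, 4-5 as positive."""
        return "negative" if stars <= 2 else "positive" if stars >= 4 else "mixed"

    def has_mismatch(review, stars):
        """Flag a review whose text polarity clearly contradicts its star score."""
        t, s = text_polarity(review), score_polarity(stars)
        return t != s and "mixed" not in (t, s)

    print(has_mismatch("rude staff and a dirty, noisy room", 5))  # True
    print(has_mismatch("clean room and friendly staff", 5))       # False
    ```

    Running the mismatch check over a large corpus and keeping only the flagged reviews is exactly the kind of selection the abstract describes: a small subset where text and score disagree, worth a closer look by consumers and providers.
    
    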

    Discovery and integration of Web 2.0 content into geospatial information infrastructures: a use case in wild fire monitoring

    Efficient environmental monitoring has become a major concern for society in order to guarantee sustainable development. For instance, forest fire detection and analysis are important for providing early warning systems and identifying impact. In this environmental context, the availability of up-to-date information is very important for reducing the damage caused. Environmental applications are deployed on top of Geospatial Information Infrastructures (GIIs) to manage information pertaining to our environment. Such infrastructures are traditionally top-down infrastructures that do not consider user participation, which provokes a bottleneck in content publication and therefore a lack of content availability. On the contrary, mainstream IT systems, and in particular the emerging Web 2.0 services, allow active user participation, which is becoming a massive source of dynamic geospatial resources. In this paper, we present a web service that implements a standard interface and offers a unique entry point for spatial data discovery, both in GII services and in Web 2.0 services. We introduce a prototype as a proof of concept in a forest fire scenario, where we illustrate how to leverage scientific data and Web 2.0 content.

    Mitigating risk in ecommerce transactions: perceptions of information credibility and the role of user-generated ratings in product quality and purchase intention

    Although extremely popular, electronic commerce environments often lack information that has traditionally served to ensure trust among exchange partners. Digital technologies, however, have created new forms of "electronic word-of-mouth," which offer new potential for gathering credible information that guides consumer behaviors. We conducted a nationally representative survey and a focused experiment to assess how individuals perceive the credibility of online commercial information, particularly as compared to information available through more traditional channels, and to evaluate the specific aspects of ratings information that affect people's attitudes toward ecommerce. Survey results show that consumers rely heavily on web-based information as compared to other channels, and that ratings information is critical in the evaluation of the credibility of online commercial information. Experimental results indicate that ratings are positively associated with perceptions of product quality and purchase intention, and that people attend to average product ratings but not to the number of ratings or to the combination of the average and the number together. This suggests that, despite valuing the web and ratings as sources of commercial information, people use ratings information suboptimally, potentially privileging small numbers of ratings that could be idiosyncratic. In addition, product quality is shown to mediate the relationship between user ratings and purchase intention. The practical and theoretical implications of these findings are considered for ecommerce scholars, consumers, and vendors. © 2014 Springer Science+Business Media New York

    Investigator experiences with financial conflicts of interest in clinical trials

    Background: Financial conflicts of interest (fCOI) can introduce actions that bias clinical trial results and reduce their objectivity. We obtained information from investigators about adherence to practices that minimize the introduction of such bias in their clinical trials experience.
    Methods: Email survey of clinical trial investigators from Canadian sites to learn about adherence to practices that help maintain research independence across all stages of trial preparation, conduct, and dissemination. The main outcome was the proportion of investigators that reported full adherence to preferred trial practices for all of their trials conducted from 2001-2006, stratified by funding source.
    Results: 844 investigators responded (76%) and 732 (66%) provided useful information. Full adherence to preferred clinical trial practices was highest for institutional review of signed contracts and budgets (82% and 75% of investigators, respectively). Lower rates of full adherence were reported for the other two practices in the trial preparation stage (avoidance of confidentiality clauses, 12%; trial registration after 2005, 39%). Lower rates of full adherence were also reported for 7 practices in the trial conduct (35% to 43%) and dissemination (53% to 64%) stages, particularly in industry-funded trials. 269 investigators personally experienced (n = 85) or witnessed (n = 236) a fCOI; over 70% of these situations related to industry trials.
    Conclusion: Full adherence to practices designed to promote the objectivity of research varied across trial stages and was low overall, particularly for industry-funded trials.

    The use of bibliometrics for assessing research: possibilities, limitations and adverse effects

    Researchers are used to being evaluated: publications, hiring, tenure and funding decisions are all based on the evaluation of research. Traditionally, this evaluation relied on the judgement of peers but, in the light of limited resources and increased bureaucratization of science, peer review is getting more and more replaced or complemented with bibliometric methods. Central to the introduction of bibliometrics in research evaluation was the creation of the Science Citation Index (SCI) in the 1960s, a citation database initially developed for the retrieval of scientific information. Embedded in this database was the Impact Factor, first used as a tool for the selection of journals to cover in the SCI, which then became a synonym for journal quality and academic prestige. Over the last 10 years, this indicator became powerful enough to influence researchers' publication patterns insofar as it became one of the most important criteria in selecting a publication venue. Regardless of its many flaws as a journal metric and its inadequacy as a predictor of citations on the paper level, it became the go-to indicator of research quality and was used and misused by authors, editors, publishers and research policy makers alike. The h-index, introduced as an indicator of both output and impact combined in one simple number, has experienced a similar fate, mainly due to its simplicity and availability. Despite their massive use, these measures are too simple to capture the complexity and multiple dimensions of research output and impact. This chapter provides an overview of bibliometric methods, from the development of citation indexing as a tool for information retrieval to its application in research evaluation, and discusses their misuse and effects on researchers' scholarly communication behavior.
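    The h-index mentioned above combines output and impact in one number: a researcher has index h if h of their papers have at least h citations each. A minimal sketch of that computation, with made-up citation counts for illustration:

    ```python
    def h_index(citations):
        """Largest h such that h papers have at least h citations each."""
        counts = sorted(citations, reverse=True)  # most-cited papers first
        h = 0
        for rank, cites in enumerate(counts, start=1):
            if cites >= rank:
                h = rank  # the top `rank` papers all have >= `rank` citations
            else:
                break
        return h

    print(h_index([10, 8, 5, 4, 3]))  # 4: four papers have at least 4 citations
    print(h_index([25, 8, 5, 3, 3]))  # 3: one very highly cited paper cannot raise h alone
    ```

    The second example shows the one-number compression the chapter criticizes: a 25-citation paper and a 9-citation paper contribute identically to h, so very different output profiles can collapse to the same score.
    
    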