
    Man vs machine – Detecting deception in online reviews

    This study focused on three main research objectives: analyzing the methods used to identify deceptive online consumer reviews, evaluating insights provided by multi-method automated approaches based on individual and aggregated review data, and formulating a review interpretation framework for identifying deception. The theoretical framework is based on two critical deception-related models: information manipulation theory and self-presentation theory. The findings confirm the interchangeable characteristics of the various automated text analysis methods in drawing insights about review characteristics and underline their significant complementary aspects. An integrative multi-method model that approaches the data at the individual and aggregate levels provides richer insights regarding the quantity and quality of review information, its sentiment, cues about its relevance and contextual information, perceptual aspects, and cognitive material.

    Information Overload, Multi-tasking, and the Socially Networked Jury: Why Prosecutors Should Approach the Media Gingerly

    The rise of computer technology, the internet, rapid news dissemination, multi-tasking, and social networking have wrought changes in human psychology that alter how we process news media. More specifically, news coverage of high-profile trials necessarily focuses on emotionally-overwrought, attention-grabbing information disseminated to a public having little ability to process that information critically. The public’s capacity for empathy is likewise reduced, making it harder for trial processes to overcome the unfair prejudice created by the high-profile trial. Market forces magnify these changes. Free speech concerns limit the ability of the law to alter media coverage directly, and the tools available to trial judges to minimize harm to trial fairness are toothless. The usual solution has been lawyers’ ethics rules designed to channel their communications with the press, particularly rules focusing on prosecutors. This piece addresses these concerns, using a recent proposed revision to the American Bar Association Criminal Justice Standards for the Prosecution Function as a jumping off point for the discussion. Those Standards, like most state ethics rules, prohibit prosecutors from making “public statements that the prosecutor reasonably should know will have a substantial likelihood of materially prejudicing a criminal proceeding.” Drawing on cognitive science, behavioral economics, rumor-transmission studies, and jury research, this article argues that a substantial likelihood of material prejudice to criminal proceedings from prosecutor statements to the press will always be present in high profile cases. Accordingly, the rules generally governing prosecutor dealings with the press, including the latest version of those rules embodied in the proposed Standards, are unrealistic. Better rules are theoretically possible. Nevertheless, this article concludes, such rules are not politically realistic. 
Accordingly, this piece recommends modest changes to the proposed standards’ commentary to alert prosecutors to the true nature of the risks arising from their contact with the media, and recommends prosecutor training and internal and external accountability mechanisms to improve prosecutor performance in this area.
Table of Contents:
I. Introduction
II. Information Overload and Its Consequences
  A. A Day Spent in Overdrive
  B. Consequences of Overdrive: A First Look
    1. Affective Consequences
    2. Media Structure
    3. Sources of Judgment Error
III. The Decline of Deep Thought and of Empathy
  A. The Basic Argument
  B. Criticisms and Caveats
  C. Declining Empathy
    1. False Net Rumors and How They Spread
    2. Raced Effects
    3. Difficulties of Responding
    4. Jurors and the Media
IV. Conclusion

    Standardised library instruction assessment: an institution-specific approach

    Introduction: We explore the use of a psychometric model for locally-relevant information literacy assessment, using an online tool for standardised assessment of student learning during discipline-based library instruction sessions. Method: A quantitative approach to data collection and analysis was used, employing standardised multiple-choice survey questions followed by individual cognitive interviews with undergraduate students. The assessment tool was administered to five general education psychology classes during library instruction sessions. Analysis: Descriptive statistics were generated by the assessment tool. Results: The assessment tool proved a feasible means of measuring student learning. While student scores improved on every survey question, improvement from pre-test to post-test was uneven across questions. Conclusion: Student scores showed more improvement for some learning outcomes than others; thus, spending time on fewer concepts during instruction sessions would enable more reliable evaluation of student learning. We recommend using digital learning objects that address basic research skills to enhance library instruction programmes. Future studies will explore different applications of the assessment tool, provide more detailed statistical analysis of the data, and shed additional light on the significance of overall scores.
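    The per-question comparison the study reports amounts to subtracting pre-test from post-test correct-answer rates and ranking the gaps. A minimal sketch of that comparison, with invented rates (the study's actual data are not reproduced here):

```python
# Hypothetical pre/post correct-answer rates per survey question,
# illustrating the kind of per-question improvement comparison the
# study reports (all values invented for this sketch).
pre_test  = {"q1": 0.42, "q2": 0.55, "q3": 0.61}
post_test = {"q1": 0.78, "q2": 0.60, "q3": 0.88}

def improvement(pre, post):
    """Absolute gain per question, largest gain first."""
    gains = {q: round(post[q] - pre[q], 2) for q in pre}
    return dict(sorted(gains.items(), key=lambda kv: -kv[1]))

print(improvement(pre_test, post_test))
```

    Ranking the gains this way makes the "uneven improvement" finding concrete: some questions gain far more than others, which is what motivates covering fewer concepts per session.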

    Layered evaluation of interactive adaptive systems : framework and formative methods

    Peer reviewed. Postprint.

    Datasets, Clues and State-of-the-Arts for Multimedia Forensics: An Extensive Review

    With large volumes of social media data created daily and the parallel rise of realistic multimedia tampering methods, detecting and localising tampering in images and videos has become essential. This survey focusses on approaches to tampering detection in multimedia data using deep learning models. Specifically, it presents a detailed analysis of publicly available benchmark datasets for malicious manipulation detection. It also offers a comprehensive list of tampering clues and commonly used deep learning architectures. Next, it discusses the current state-of-the-art tampering detection methods, categorising them into meaningful types such as deepfake detection, splice tampering detection, and copy-move tampering detection, and discussing their strengths and weaknesses. Top results achieved on benchmark datasets, comparisons of deep learning approaches against traditional methods, and critical insights from recent tampering detection methods are also discussed. Lastly, the research gaps, future directions, and conclusion are discussed to provide an in-depth understanding of the tampering detection research arena.

    Identifying Experts in Question & Answer Portals: A Case Study on Data Science Competencies in Reddit

    The irreplaceable key to the triumph of Question & Answer (Q&A) platforms is their users providing high-quality answers to the challenging questions posted across various topics of interest. Recently, the expert finding problem has attracted much attention in information retrieval research. In this work, we inspect the feasibility of a supervised learning model to identify data science experts in Reddit. Our method is based on manual coding results in which two data science experts labelled comments as expert, non-expert, or out-of-scope. We present a semi-supervised approach using the activity behaviour of every user, combining Natural Language Processing (NLP), crowdsourced, and user feature sets. We conclude that the NLP and user feature sets contribute the most to the identification of these three classes, meaning that the method can generalise well within the domain. Moreover, we present different types of users, which can be helpful for detecting various types of users in the future.
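    The three-class comment classification described above can be illustrated with a toy multinomial naive Bayes over word counts. The labels match the three classes from the manual coding, but the comments and the classifier choice here are invented for the sketch; the paper's actual feature sets (NLP, crowdsourced, user) are richer than plain word counts.

```python
from collections import Counter, defaultdict
import math

# Toy labelled comments standing in for the manually coded Reddit data
# (texts and labels are invented for illustration).
train = [
    ("gradient boosting overfits without regularisation", "expert"),
    ("cross validation gives a better error estimate", "expert"),
    ("i think python is nice", "non-expert"),
    ("anyone tried pandas", "non-expert"),
    ("what movie should i watch", "out-of-scope"),
    ("best pizza in town", "out-of-scope"),
]

def fit(data):
    """Collect per-class word counts, class counts, and the vocabulary."""
    word_counts = defaultdict(Counter)
    label_counts = Counter()
    vocab = set()
    for text, label in data:
        words = text.split()
        word_counts[label].update(words)
        label_counts[label] += 1
        vocab.update(words)
    return word_counts, label_counts, vocab

def predict(model, text):
    """Multinomial naive Bayes with add-one smoothing, in log space."""
    word_counts, label_counts, vocab = model
    total = sum(label_counts.values())
    best, best_lp = None, -math.inf
    for label, n in label_counts.items():
        lp = math.log(n / total)  # class prior
        denom = sum(word_counts[label].values()) + len(vocab)
        for w in text.split():
            lp += math.log((word_counts[label][w] + 1) / denom)
        if lp > best_lp:
            best, best_lp = label, lp
    return best

model = fit(train)
print(predict(model, "regularisation helps cross validation"))
```

    On vocabulary that overlaps the expert-labelled training comments, the smoothed log-likelihoods favour the expert class; out-of-scope chatter lands in its own class the same way.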

    Virtual Reality Games for Motor Rehabilitation

    This paper presents a fuzzy logic based method to track user satisfaction without the need for devices to monitor users’ physiological conditions. User satisfaction is the key to any product’s acceptance; computer applications and video games provide a unique opportunity to provide a tailored environment for each user to better suit their needs. We have implemented a non-adaptive fuzzy logic model of emotion, based on the emotional component of the Fuzzy Logic Adaptive Model of Emotion (FLAME) proposed by El-Nasr, to estimate player emotion in Unreal Tournament 2004. In this paper we describe the implementation of this system and present the results of one of several play tests. Our research contradicts the current literature that suggests physiological measurements are needed. We show that it is possible to use a software-only method to estimate user emotion.
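    A FLAME-style appraisal step can be sketched as fuzzifying in-game events, applying rules, and defuzzifying to a single score. Every variable, threshold, and membership function below is invented for illustration and is not taken from the paper's implementation:

```python
# Minimal sketch of a fuzzy appraisal step in the FLAME spirit:
# fuzzify game-event rates, apply rules, defuzzify to one score.
# All inputs and membership parameters are hypothetical.

def tri(x, a, b, c):
    """Triangular membership function: 0 at a and c, peak 1 at b."""
    if x <= a or x >= c:
        return 0.0
    return (x - a) / (b - a) if x < b else (c - x) / (c - b)

def satisfaction(deaths_per_min, kills_per_min):
    # Fuzzify the inputs into degrees of membership.
    dying_often = tri(deaths_per_min, 0.5, 2.0, 3.5)
    scoring_often = tri(kills_per_min, 0.5, 2.0, 3.5)
    # Rules: scoring raises satisfaction, dying lowers it. With
    # single-antecedent rules the firing strength is the membership.
    frustrated = dying_often
    pleased = scoring_often
    # Defuzzify as a weighted average of rule outputs
    # (0 = frustrated, 1 = pleased).
    if frustrated + pleased == 0:
        return 0.5  # neutral when no rule fires
    return pleased / (frustrated + pleased)

print(round(satisfaction(0.8, 2.2), 2))
```

    The point of the sketch is that only software-observable event rates enter the pipeline; no physiological sensor input is required, which is the claim the paper makes.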

    Multimedia Forensics

    This book is open access. Media forensics has never been more relevant to societal life. Not only does media content represent an ever-increasing share of the data traveling on the net and the preferred means of communication for most users; it has also become an integral part of the most innovative applications in the digital information ecosystem that serves various sectors of society, from entertainment to journalism to politics. Undoubtedly, advances in deep learning and computational imaging contributed significantly to this outcome. The underlying technologies that drive this trend, however, also pose a profound challenge in establishing trust in what we see, hear, and read, and make media content the preferred target of malicious attacks. In this new threat landscape, powered by innovative imaging technologies and sophisticated tools based on autoencoders and generative adversarial networks, this book fills an important gap. It presents a comprehensive review of state-of-the-art forensics capabilities that relate to media attribution, integrity and authenticity verification, and counter-forensics. Its content is developed to provide practitioners, researchers, photo and video enthusiasts, and students a holistic view of the field.

    Emotion Recognition by Video: A review

    Video emotion recognition is an important branch of affective computing, and its solutions can be applied in fields such as human-computer interaction (HCI) and intelligent medical treatment. Although the number of papers published on emotion recognition is increasing, there are few comprehensive literature reviews covering video emotion recognition. This paper therefore selects articles published from 2015 to 2023 to systematise existing trends in video emotion recognition research. We first discuss two typical emotion models, then the databases frequently utilised for video emotion recognition, including unimodal and multimodal databases. Next, we review and classify the specific structure and performance of modern unimodal and multimodal video emotion recognition methods, discuss the benefits and drawbacks of each, and compare them in detail in tables. Further, we summarise the primary difficulties currently faced by video emotion recognition and point out the most promising future directions, such as establishing an open benchmark database and better multimodal fusion strategies. The essential objective of this paper is to help academic and industrial researchers keep up to date with the most recent advances and developments in this fast-moving, high-impact field of video emotion recognition.