
    Fighting Fire with Fire: Can ChatGPT Detect AI-generated Text?

    Large language models (LLMs) such as ChatGPT are increasingly being used for various use cases, including text content generation at scale. Although detection methods for such AI-generated text already exist, we investigate ChatGPT's performance as a detector of such AI-generated text, inspired by works that use ChatGPT as a data labeler or annotator. We evaluate the zero-shot performance of ChatGPT on the task of human-written vs. AI-generated text detection, and perform experiments on publicly available datasets. We empirically investigate whether ChatGPT is symmetrically effective in detecting AI-generated and human-written text. Our findings provide insight into how ChatGPT and similar LLMs may be leveraged in automated detection pipelines by simply focusing on solving a specific aspect of the problem and deriving the rest from that solution. All code and data are available at \url{https://github.com/AmritaBh/ChatGPT-as-Detector}. Comment: to appear in SIGKDD Explorations.
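
    The key idea is to use ChatGPT itself as a zero-shot detector, prompting it to label a passage as human-written or AI-generated. Below is a minimal sketch of such a labeling call, assuming the OpenAI Python client (>=1.0); the prompt wording and model name are illustrative choices, not the paper's exact configuration.

```python
# Minimal sketch of zero-shot AI-text detection with an LLM. The prompt and
# model are assumptions for illustration, not the paper's exact setup.
from openai import OpenAI

client = OpenAI()  # reads OPENAI_API_KEY from the environment


def detect_with_chatgpt(text: str) -> str:
    """Ask the model to label a passage as 'human' or 'ai'."""
    prompt = (
        "Decide whether the following text was written by a human or "
        "generated by an AI language model. Answer with exactly one word: "
        "'human' or 'ai'.\n\nText:\n" + text
    )
    response = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,  # deterministic labeling
    )
    return response.choices[0].message.content.strip().lower()
```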

    Harnessing Artificial Intelligence to Combat Online Hate: Exploring the Challenges and Opportunities of Large Language Models in Hate Speech Detection

    Large language models (LLMs) excel in many diverse applications beyond language generation, e.g., translation, summarization, and sentiment analysis. One intriguing application is text classification. This becomes pertinent in the realm of identifying hateful or toxic speech -- a domain fraught with challenges and ethical dilemmas. Our study has two objectives: first, to offer a literature review of LLMs as classifiers, emphasizing their role in detecting and classifying hateful or toxic content; second, to explore the efficacy of several LLMs in classifying hate speech, identifying which LLMs excel at this task as well as their underlying attributes and training, and providing insight into the factors that contribute to an LLM's proficiency (or lack thereof) in discerning hateful content. By combining a comprehensive literature review with an empirical analysis, our paper strives to shed light on the capabilities and constraints of LLMs in the crucial domain of hate speech detection.
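
    Since the empirical part of the study compares several LLMs as zero-shot hate-speech classifiers, a hedged sketch of such a comparison loop is given below; the prompt, the label set, and the per-model `query_fn` callable are illustrative assumptions rather than the study's exact protocol.

```python
# Hedged sketch of comparing LLMs as zero-shot hate-speech classifiers.
# Each model is wrapped as a `query_fn` (prompt -> completion text); wiring
# those up to real APIs is left to the caller.
from typing import Callable

from sklearn.metrics import accuracy_score, f1_score

PROMPT = (
    "Classify the following post as 'hateful' or 'not hateful'. "
    "Answer with one of those two labels only.\n\nPost: {post}"
)


def evaluate_model(query_fn: Callable[[str], str],
                   posts: list[str], gold: list[str]) -> dict:
    """Run one model over labeled posts and score its predictions."""
    preds = []
    for post in posts:
        answer = query_fn(PROMPT.format(post=post)).strip().lower()
        preds.append("hateful" if answer.startswith("hateful") else "not hateful")
    return {
        "accuracy": accuracy_score(gold, preds),
        "macro_f1": f1_score(gold, preds, average="macro"),
    }

# Usage: results = {name: evaluate_model(fn, posts, gold)
#                   for name, fn in model_apis.items()}
```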

    LLMs as Counterfactual Explanation Modules: Can ChatGPT Explain Black-box Text Classifiers?

    Large language models (LLMs) are increasingly being used for tasks beyond text generation, including complex tasks such as data labeling, information extraction, etc. With the recent surge in research efforts to comprehend the full extent of LLM capabilities, in this work we investigate the role of LLMs as counterfactual explanation modules to explain the decisions of black-box text classifiers. Inspired by causal thinking, we propose a pipeline for using LLMs to generate post-hoc, model-agnostic counterfactual explanations in a principled way via (i) leveraging the textual understanding capabilities of the LLM to identify and extract latent features, and (ii) leveraging the perturbation and generation capabilities of the same LLM to generate a counterfactual explanation by perturbing input features derived from the extracted latent features. We evaluate three variants of our framework, with varying degrees of specificity, on a suite of state-of-the-art LLMs, including ChatGPT and LLaMA 2. We evaluate the effectiveness and quality of the generated counterfactual explanations over a variety of text classification benchmarks. Our results show varied performance of these models in different settings, with the full two-step, feature-extraction-based variant outperforming the others in most cases. Our pipeline can be used in automated explanation systems, potentially reducing human effort.
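
    The pipeline described above has two LLM-driven steps: latent-feature extraction, then feature perturbation to produce a counterfactual. Below is a minimal, hedged sketch of that flow, assuming the OpenAI client; the prompts and the `black_box_predict` placeholder are illustrative, not the paper's exact implementation.

```python
# Hedged two-step counterfactual sketch: (i) extract latent features with an
# LLM, (ii) ask the same LLM to minimally perturb them so the black-box
# classifier's decision should flip. Prompts are illustrative placeholders.
from openai import OpenAI

client = OpenAI()


def ask(prompt: str) -> str:
    """Single deterministic LLM call."""
    resp = client.chat.completions.create(
        model="gpt-3.5-turbo",
        messages=[{"role": "user", "content": prompt}],
        temperature=0,
    )
    return resp.choices[0].message.content.strip()


def black_box_predict(text: str) -> str:
    """Placeholder for the classifier being explained (e.g. 'positive'/'negative')."""
    raise NotImplementedError("plug in the black-box text classifier here")


def counterfactual_explanation(text: str) -> str:
    original_label = black_box_predict(text)
    # Step (i): surface the latent features expressed in the input text.
    features = ask(
        "List the key latent features (topics, sentiment cues, entities) "
        f"expressed in this text, one per line:\n\n{text}"
    )
    # Step (ii): perturb those features with minimal edits to flip the decision.
    candidate = ask(
        f"The text below was classified as '{original_label}'.\n"
        f"Its key features are:\n{features}\n\n"
        "Rewrite the text with minimal edits that change these features so a "
        f"classifier would no longer predict '{original_label}'. "
        f"Return only the rewritten text.\n\nText:\n{text}"
    )
    return candidate
```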

    Consumers’ Preference Leading Purchase Intention toward Manipulation of Form and Transparency for Juice Packaging Design

    Packaging plays a fundamental role in consumers' intention to purchase, as it may be the first contact between the consumer and the product. Product packaging plays a crucial role in attracting consumers, prompting them to choose the product, and acting as a brand communication vehicle. The point of focus is how the elements of the package design affect consumers' perceptions of the product and brand. In this study, to understand the effect of package form and transparency on consumers' pre-purchase preference for juice packaging, participants (N=60) were asked to assess six designs on a 5-point Likert scale. The findings suggest that form and transparency in juice packaging have a significant effect on consumers' purchase intention. In addition, consumers give preference to the functionality of the packaging over novelty when purchasing fruit juice products.

    Detecting Harmful Agendas in News Articles

    Manipulated news online is a growing problem which necessitates the use of automated systems to curtail its spread. We argue that while misinformation and disinformation detection have been studied, there has been a lack of investment in the important open challenge of detecting harmful agendas in news articles; identifying harmful agendas is critical to flagging news campaigns with the greatest potential for real-world harm. Moreover, due to real concerns around censorship, harmful agenda detectors must be interpretable to be effective. In this work, we propose this new task and release a dataset, NewsAgendas, of annotated news articles for agenda identification. We show how interpretable systems can be effective on this task and demonstrate that they can perform comparably to black-box models. Comment: Camera-ready for ACL-WASSA 2023.

    Significance of cyanobacterial diversity in different ecological conditions of Meghalaya, India

    The present study deals with a preliminary investigation of cyanobacterial diversity in Meghalaya. A total of 75 samples were collected from 10 different ecosystems and analyzed. Sixty-five strains of cyanobacteria were isolated under 11 genera, including Nostoc, Anabaena, Calothrix, Cylindrospermum, Gleocapsa, Fischerella, Plectonema, Tolypothrix, Stigonema, Loriella and Westiellopsis; Nostoc was the most abundant. Diversity analysis indicated the maximum Shannon's diversity index (H) in Mawlai. The highest Simpson's diversity index was seen in Sung Valley (0.75). Both Shannon's and Simpson's diversity indices were lowest in Mairang. Richness was highest in Sung Valley and Syntuksiar, with both sites supporting 17 strains each. Although the highest diversity was recorded from Mawlai, the richness recorded at this site was only 11 strains, indicating that richness need not be a function of diversity in this region. This study revealed the cyanobacterial strains that can withstand acidic pH and prevail in the region. A study on colonization also identified some potential biofertilizer strains from the region, such as Nostoc punctiforme, Nostoc muscorum and Anabaena azollae, that could be effective in acidic crop fields.
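
    For reference, the two indices reported above can be computed from per-genus strain counts as in the following sketch; the counts shown are hypothetical (the study's raw abundance data are not reproduced here), and the Gini-Simpson form (1 minus the sum of squared proportions) is one common variant of Simpson's index.

```python
# Worked example of Shannon's and Simpson's diversity indices on hypothetical
# per-genus strain counts for a single site.
import math


def shannon_index(counts):
    """Shannon's H = -sum(p_i * ln p_i) over proportions p_i."""
    total = sum(counts)
    return -sum((c / total) * math.log(c / total) for c in counts if c > 0)


def simpson_diversity(counts):
    """Gini-Simpson form, 1 - sum(p_i^2)."""
    total = sum(counts)
    return 1 - sum((c / total) ** 2 for c in counts)


# Hypothetical counts of strains per genus at one site:
site_counts = [6, 4, 3, 2, 1, 1]
print(round(shannon_index(site_counts), 2), round(simpson_diversity(site_counts), 2))
```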

    J-Guard: Journalism Guided Adversarially Robust Detection of AI-generated News

    The rapid proliferation of AI-generated text online is profoundly reshaping the information landscape. Among various types of AI-generated text, AI-generated news presents a significant threat, as it can be a prominent source of misinformation online. While several recent efforts have focused on detecting AI-generated text in general, these methods require enhanced reliability, given concerns about their vulnerability to simple adversarial attacks. Furthermore, due to the eccentricities of news writing, applying these detection methods to AI-generated news can produce false positives, potentially damaging the reputation of news organizations. To address these challenges, we leverage the expertise of an interdisciplinary team to develop a framework, J-Guard, capable of steering existing supervised AI text detectors toward detecting AI-generated news while boosting adversarial robustness. By incorporating stylistic cues inspired by unique journalistic attributes, J-Guard effectively distinguishes between real-world journalism and AI-generated news articles. Our experiments on news articles generated by a vast array of AI models, including ChatGPT (GPT-3.5), demonstrate the effectiveness of J-Guard in enhancing detection capabilities while keeping the average performance decrease as low as 7% when faced with adversarial attacks. Comment: Accepted to the 13th International Joint Conference on Natural Language Processing and the 3rd Conference of the Asia-Pacific Chapter of the Association for Computational Linguistics (IJCNLP-AACL 2023).
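
    The general idea of steering a supervised detector with journalism-inspired stylistic cues can be sketched as below; the specific features (quote density, sentence length, numeric detail) and the simple late-fusion classifier are illustrative assumptions, not J-Guard's actual architecture.

```python
# Hedged sketch: fuse a base AI-text detector's score with toy journalistic
# style features, then train a small downstream classifier on the fused vector.
import numpy as np
from sklearn.linear_model import LogisticRegression


def stylistic_cues(article: str) -> list[float]:
    """Toy journalism-style features: quote density, avg sentence length, digit ratio."""
    sentences = [s for s in article.split(".") if s.strip()]
    words = article.split()
    return [
        article.count('"') / max(len(words), 1),                      # direct-quotation density
        len(words) / max(len(sentences), 1),                          # average sentence length
        sum(ch.isdigit() for ch in article) / max(len(article), 1),   # numeric detail
    ]


def fuse_features(detector_prob: float, article: str) -> np.ndarray:
    """Concatenate the base detector's probability with the stylistic cues."""
    return np.array([detector_prob] + stylistic_cues(article))


def train_guarded_detector(probs, articles, labels):
    """Fit a simple fused classifier (labels: 1 = AI-generated, 0 = human)."""
    X = np.vstack([fuse_features(p, a) for p, a in zip(probs, articles)])
    return LogisticRegression(max_iter=1000).fit(X, labels)
```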