
    Vietnamese AI Generated Text Detection

    In recent years, Large Language Models (LLMs) have become integrated into our daily lives, serving as invaluable assistants for completing tasks. Because LLMs are so widely embraced, their abuse is inevitable, particularly the use of LLM-generated text for various purposes, which makes it difficult to distinguish text generated by LLMs from text written by humans. In this study, we present a dataset named ViDetect, comprising 6,800 Vietnamese essays, 3,400 authored by humans and the remainder generated by LLMs, for the purpose of detecting AI-generated text. We conducted evaluations using state-of-the-art methods, including ViT5, BartPho, PhoBERT, mDeBERTa V3, and mBERT. The results not only contribute to the growing body of research on detecting AI-generated text but also demonstrate the adaptability and effectiveness of different methods in the Vietnamese-language context. This research lays a foundation for future advances in AI-generated text detection and provides valuable insights for researchers in natural language processing.
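    The abstract's actual methods are fine-tuned transformer classifiers (ViT5, BartPho, PhoBERT, etc.), which are too heavy to reproduce here. As a minimal illustration of the underlying task — binary human-vs-AI text classification — the sketch below uses a simple Naive Bayes bag-of-words baseline instead; all function names and the toy data are this note's own, not the paper's.

```python
from collections import Counter
import math

def train_nb(samples):
    """Train a Naive Bayes model from (text, label) pairs,
    where label is "human" or "ai"."""
    counts = {"human": Counter(), "ai": Counter()}
    docs = Counter()
    for text, label in samples:
        docs[label] += 1
        counts[label].update(text.lower().split())
    vocab = set(counts["human"]) | set(counts["ai"])
    return counts, docs, vocab

def predict(model, text):
    """Return the label with the highest log-posterior,
    using add-one (Laplace) smoothing over the shared vocabulary."""
    counts, docs, vocab = model
    total_docs = sum(docs.values())
    best, best_lp = None, float("-inf")
    for label in counts:
        lp = math.log(docs[label] / total_docs)  # class prior
        total = sum(counts[label].values())
        for w in text.lower().split():
            lp += math.log((counts[label][w] + 1) / (total + len(vocab)))
        if lp > best_lp:
            best, best_lp = label, lp
    return best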

    Generating Counterfactual Explanations with Natural Language

    Natural language explanations of deep neural network decisions provide an intuitive way for an AI agent to articulate its reasoning process. Current textual explanations learn to discuss class-discriminative features in an image. However, it is also helpful to understand which attributes might change a classification decision if present in an image (e.g., "This is not a Scarlet Tanager because it does not have black wings."). We call such textual explanations counterfactual explanations, and propose an intuitive method to generate them by inspecting which evidence in an input is missing but might contribute to a different classification decision if present in the image. To demonstrate our method, we consider a fine-grained image classification task in which we take as input an image and a counterfactual class and output text explaining why the image does not belong to the counterfactual class. We then analyze our generated counterfactual explanations both qualitatively and quantitatively using proposed automatic metrics.
    Comment: presented at the 2018 ICML Workshop on Human Interpretability in Machine Learning (WHI 2018), Stockholm, Sweden.
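    The paper's system uses learned attribute detectors and a neural text generator; as a toy sketch of the core idea only — report the attributes the counterfactual class requires but the image lacks — consider the following. The function name, data layout, and sentence template are this note's illustrative assumptions, not the paper's implementation.

```python
def counterfactual_explanation(image_attrs, cf_class, class_attrs):
    """image_attrs: set of attributes detected in the image.
    class_attrs: mapping from class name to the ordered list of
    attributes that class is expected to exhibit.
    Returns a sentence naming the evidence missing for cf_class."""
    missing = [a for a in class_attrs[cf_class] if a not in image_attrs]
    if not missing:
        return f"This could be a {cf_class}."
    return (f"This is not a {cf_class} because it does not have "
            + " or ".join(missing) + ".")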

    AI-Generated Fashion Designs: Who or What Owns the Goods?

    As artificial intelligence ("AI") becomes an increasingly prevalent tool across a plethora of industries in today's society, analysis of the potential legal implications attached to AI-generated works is becoming more common. One of the industries impacted by AI is fashion. AI tools and devices are currently being used in the fashion industry to create fashion models, fabric designs, and clothing. An AI device's ability to generate fashion designs raises the question of who will own the copyrights to those designs. Will it be the fashion designer who hires or contracts with the AI device programmer? Will it be the programmer? Or will it be the AI device itself? Designers invest a great deal of talent, time, and money into designing and creating each article of clothing and accessory they release to the public; yet, under current copyright standards, designers will likely not be considered the authors of their creations. Ultimately, this Note makes policy proposals for future copyright legislation within the United States, particularly recommending that AI-generated and AI-assisted designs be copyrightable and owned by the designers who purchase the AI device.

    Acquiring Correct Knowledge for Natural Language Generation

    Natural language generation (NLG) systems are computer software systems that produce texts in English and other human languages, often from non-linguistic input data. NLG systems, like most AI systems, need substantial amounts of knowledge. However, our experience in two NLG projects suggests that it is difficult to acquire correct knowledge for NLG systems; indeed, every knowledge acquisition (KA) technique we tried had significant problems. In general terms, these problems were due to the complexity, novelty, and poorly understood nature of the tasks our systems attempted, and were worsened by the fact that people write so differently. This meant in particular that corpus-based KA approaches suffered because it was impossible to assemble a sizable corpus of high-quality, consistent, manually written texts in our domains; and structured, expert-oriented KA techniques suffered because experts disagreed and because we could not get enough information about special and unusual cases to build robust systems. We believe that such problems are likely to affect many other NLG systems as well. In the long term, we hope that new KA techniques may emerge to help NLG system builders. In the shorter term, we believe that understanding how individual KA techniques can fail, and using a mixture of different KA techniques with different strengths and weaknesses, can help developers acquire NLG knowledge that is mostly correct.