The rapid advancement of AI since 2022 has driven its adoption across nearly every aspect of daily life. It has also opened a new era of online threats rooted in misinformation: using AI, bad actors can cheaply generate vast amounts of believable misinformation to manipulate public opinion. In this study, we evaluate several AI-text detection models against circumvention techniques, including DFTFooler, complex paraphrasing, and humanizers, which modify AI-generated text so that it evades detection. We found that even advanced detectors such as GPTZero and ZeroGPT, used by top universities, were weak when challenged by DFTFooler or humanizer models. While current detection methods are effective against simple, unmodified text, they require substantial improvement to meet the challenges of real-world applications.
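To make the evaluation setup concrete, the following is a minimal sketch (not the authors' actual pipeline) of the kind of test the abstract describes: scoring the same passage with an open-source AI-text detector before and after a paraphrasing or "humanizing" pass. It assumes the Hugging Face `transformers` library; the public detector checkpoint and the hand-edited paraphrase stand in for the commercial tools (GPTZero, ZeroGPT) and circumvention methods (DFTFooler, humanizers) studied here.

```python
from transformers import pipeline

# Publicly available detector fine-tuned to flag GPT-2-style generations;
# used here only as a stand-in for the commercial detectors in the study.
detector = pipeline(
    "text-classification",
    model="openai-community/roberta-base-openai-detector",
)

ai_text = (
    "The rapid adoption of artificial intelligence has transformed "
    "how information is produced and shared across online platforms."
)

# Placeholder for a circumvention step (paraphraser, humanizer, or a
# DFTFooler-style word substitution); here the sentence is simply hand-edited.
humanized_text = (
    "AI caught on fast, and honestly it has changed the way people "
    "write and pass around information online."
)

for name, text in [("original", ai_text), ("humanized", humanized_text)]:
    # Each result is a dict with a predicted label and a confidence score;
    # the label names depend on the checkpoint's configuration.
    print(name, detector(text)[0])
```

A drop in the "AI-generated" confidence between the original and the humanized passage is the basic signal of a successful circumvention; the study's experiments repeat this comparison at scale across detectors and attack methods.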