2 research outputs found

    Identifying Authorship Style in Malicious Binaries: Techniques, Challenges & Datasets

    Attributing a piece of malware to its creator typically requires threat intelligence. Binary attribution increases the level of difficulty, as it mostly relies upon the ability to disassemble binaries to identify authorship style. Our survey explores malicious authorship style and the adversarial techniques authors use to remain anonymous. We examine the impact of these adversarial techniques on state-of-the-art attribution methods, identify key findings, and explore the open research challenges. To mitigate the lack of ground-truth datasets in this domain, we publish alongside this survey the largest and most diverse meta-information dataset to date, comprising 15,660 malware samples labelled with 164 threat actor groups.
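    As a rough illustration of the kind of binary authorship-style analysis the survey covers (a sketch under our own assumptions, not the survey's published methods or dataset): opcode sequences recovered by a disassembler can be treated as text, with opcode n-grams serving as stylistic features for attributing samples to threat actor groups. The opcode traces and group labels below are hypothetical, and scikit-learn is assumed to be available.

    from sklearn.feature_extraction.text import TfidfVectorizer
    from sklearn.pipeline import make_pipeline
    from sklearn.svm import LinearSVC

    # Each "document" is the opcode trace of one disassembled binary (hypothetical).
    samples = [
        "push mov call xor ret",
        "push mov call xor xor ret",
        "lea cmp jne mov ret",
        "lea cmp je mov ret",
    ]
    groups = ["group_A", "group_A", "group_B", "group_B"]

    # Opcode bigrams/trigrams act as stylistic features; TF-IDF downweights
    # opcodes that appear in every binary.
    model = make_pipeline(
        TfidfVectorizer(ngram_range=(2, 3), token_pattern=r"\S+"),
        LinearSVC(),
    )
    model.fit(samples, groups)

    # Attribute a previously unseen opcode trace to the closest group.
    print(model.predict(["push mov call xor xor xor ret"]))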

    Security and Authenticity of AI-generated code

    The intersection of security and plagiarism in the context of AI-generated code is a critical theme throughout this study. While our research primarily focuses on evaluating the security aspects of AI-generated code, it is imperative to recognize the interconnectedness of security and plagiarism concerns. On the one hand, we conduct an extensive analysis of the security flaws that might be present in AI-generated code, with a focus on code produced by ChatGPT and Bard. This analysis emphasizes the dangers that can arise if such code is incorporated into software programs, especially if it contains security weaknesses; it directly affects developers, urging caution when integrating AI-generated code so as to protect the security of their applications. On the other hand, our research also covers code plagiarism. In the context of AI-generated code, plagiarism, defined as the reuse of code without proper attribution or in violation of license and copyright restrictions, becomes a significant concern. As open-source software and AI language models proliferate, the risk of plagiarism in AI-generated code increases. Our research combines code attribution techniques to identify the authors of insecure AI-generated code and to determine where the code originated. By addressing both security and plagiarism issues at the same time, our research emphasizes the multidimensional nature of AI-generated code and its wide-ranging repercussions. This combined approach contributes to a deeper understanding of the problems and ethical implications associated with the use of AI in code generation, embracing both security and authorship-related concerns.
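    To make the code-attribution side concrete (a sketch under our own assumptions, not the detection pipeline used in this study): one simple way to flag possible unattributed reuse is to compare k-gram fingerprints of two code snippets, in the spirit of tools such as MOSS. The snippets and function names below are purely illustrative.

    import hashlib
    import re

    def fingerprints(code: str, k: int = 5) -> set:
        """Hash every k-gram of lowercased tokens into a fingerprint set."""
        tokens = re.findall(r"[A-Za-z_]\w*|\d+|\S", code.lower())
        grams = (" ".join(tokens[i:i + k]) for i in range(len(tokens) - k + 1))
        return {hashlib.sha1(g.encode()).hexdigest()[:12] for g in grams}

    def similarity(a: str, b: str) -> float:
        """Jaccard overlap of fingerprint sets; values near 1.0 suggest copied code."""
        fa, fb = fingerprints(a), fingerprints(b)
        return len(fa & fb) / len(fa | fb) if fa | fb else 0.0

    # Hypothetical AI-generated snippet vs. a hypothetical open-source original.
    generated = "def is_admin(user): return user.role == 'admin'"
    original = "def is_admin(u):\n    return u.role == 'admin'"
    print(f"similarity: {similarity(generated, original):.2f}")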