218 research outputs found

    Vulnerabilities mapping based on OWASP-SANS: a survey for Static Application Security Testing (SAST)

    Get PDF
    The delivery of a framework for secure application development is of real value to development teams seeking to integrate security into their development life cycle, especially when a mobile or web application moves past the scanning stage and focuses increasingly on the remediation or mitigation phase based on static application security testing (SAST). For the first time, to the authors' knowledge, the industry-standard Open Web Application Security Project (OWASP) Top 10 vulnerabilities and the CWE/SANS Top 25 most dangerous software errors are synchronised in a matrix with Checkmarx vulnerability queries, producing an application security framework that helps development teams review and address code vulnerabilities and minimise the false positives discovered in static scans and penetration tests, targeting increased accuracy of the findings. A case study is conducted on vulnerability scanning of a proof-of-concept mobile malware detection app. By mapping OWASP/SANS to the Checkmarx vulnerability queries, flaws and vulnerabilities are demonstrated to be mitigated with improved efficiency.
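    A mapping of the kind the abstract describes can be sketched as a small lookup structure. This is a minimal illustration only: the entries shown are a tiny subset, and the query names are hypothetical placeholders standing in for real Checkmarx vulnerability queries.

    ```python
    # Illustrative subset of an OWASP-to-CWE/SANS mapping matrix with
    # scanner query names (query names are hypothetical placeholders).
    OWASP_CWE_MATRIX = {
        "A03:2021 Injection": {
            "cwes": ["CWE-89", "CWE-78"],
            "queries": ["SQL_Injection", "Command_Injection"],
        },
        "A01:2021 Broken Access Control": {
            "cwes": ["CWE-22", "CWE-284"],
            "queries": ["Path_Traversal"],
        },
    }

    def queries_for_cwe(cwe_id):
        """Return the scanner queries linked to a CWE entry via the matrix."""
        return sorted(
            q
            for entry in OWASP_CWE_MATRIX.values()
            if cwe_id in entry["cwes"]
            for q in entry["queries"]
        )
    ```

    A triage script can then route each CWE finding from a static scan to the queries that should be re-run or reviewed, which is the cross-referencing step the framework automates.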

    Vulnerability assessment of Angolan university web applications

    Get PDF
    Vulnerability assessment is one of the technical procedures that can help prevent serious security breaches which, when exploited, can undermine brand credibility and/or the continuity of a business. Universities hold and process sensitive student and staff information that is appealing to attackers and whose disclosure could damage the organisation's credibility. This work presents a study conducted to assess the security status of Angolan universities' web applications, identifying the most frequent security vulnerabilities and their criticality, using the OWASP Top 10 and CWE Top 25 as references to identify and validate the findings discovered during the automated vulnerability assessment process.

    The FormAI Dataset: Generative AI in Software Security Through the Lens of Formal Verification

    Full text link
    This paper presents the FormAI dataset, a large collection of 112,000 AI-generated, compilable, and independent C programs with vulnerability classification. We introduce a dynamic zero-shot prompting technique constructed to spawn diverse programs utilizing Large Language Models (LLMs). The dataset is generated by GPT-3.5-turbo and comprises programs with varying levels of complexity. Some programs handle complicated tasks like network management, table games, or encryption, while others deal with simpler tasks like string manipulation. Every program is labeled with the vulnerabilities found within the source code, indicating the type, line number, and vulnerable function name. This is accomplished by employing a formal verification method using the Efficient SMT-based Bounded Model Checker (ESBMC), which uses model checking, abstract interpretation, constraint programming, and satisfiability modulo theories to reason over safety/security properties in programs. This approach definitively detects vulnerabilities and offers a formal model known as a counterexample, thus eliminating the possibility of generating false positive reports. We have associated the identified vulnerabilities with Common Weakness Enumeration (CWE) numbers. We make the source code available for the 112,000 programs, accompanied by a separate file containing the vulnerabilities detected in each program, making the dataset ideal for training LLMs and machine learning algorithms. Our study unveiled that according to ESBMC, 51.24% of the programs generated by GPT-3.5 contained vulnerabilities, thereby presenting considerable risks to software safety and security. Comment: https://github.com/FormAI-Datase
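    The per-program label the abstract describes (vulnerability type, line number, vulnerable function, CWE number) can be sketched as a small record type. The field names and the CSV-like row format below are assumptions for illustration; the actual FormAI file schema may differ.

    ```python
    from dataclasses import dataclass

    # Sketch of a per-program vulnerability label as described in the abstract
    # (field names and row format are assumptions, not the FormAI schema).
    @dataclass
    class VulnLabel:
        filename: str
        vuln_type: str       # e.g. "buffer overflow on scanf()"
        line_number: int
        function_name: str
        cwe: str             # e.g. "CWE-787"

    def parse_label(row):
        """Parse one row of the form: filename,vuln_type,line,function,cwe."""
        name, vtype, line, func, cwe = row.split(",")
        return VulnLabel(name, vtype, int(line), func, cwe)
    ```

    Records of this shape are what make the dataset directly usable as supervised training data: the label carries both the location (file, line, function) and the CWE class of each defect.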

    Automated CVE Analysis for Threat Prioritization and Impact Prediction

    Full text link
    The Common Vulnerabilities and Exposures (CVE) are pivotal information for proactive cybersecurity measures, including service patching, security hardening, and more. However, CVEs typically offer low-level, product-oriented descriptions of publicly disclosed cybersecurity vulnerabilities, often lacking the essential attack semantic information required for comprehensive weakness characterization and threat impact estimation. This critical insight is essential for CVE prioritization and the identification of potential countermeasures, particularly when dealing with a large number of CVEs. Current industry practices involve manual evaluation of CVEs to assess their attack severities using the Common Vulnerability Scoring System (CVSS) and mapping them to Common Weakness Enumeration (CWE) for potential mitigation identification. Unfortunately, this manual analysis presents a major bottleneck in the vulnerability analysis process, leading to slowdowns in proactive cybersecurity efforts and the potential for inaccuracies due to human errors. In this research, we introduce our novel predictive model and tool (called CVEDrill) which revolutionizes CVE analysis and threat prioritization. CVEDrill accurately estimates the CVSS vector for precise threat mitigation and priority ranking and seamlessly automates the classification of CVEs into the appropriate CWE hierarchy classes. By harnessing CVEDrill, organizations can now implement cybersecurity countermeasure mitigation with unparalleled accuracy and timeliness, surpassing the capabilities of state-of-the-art tools like ChatGPT in this domain.
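    The CVSS vector that a tool of this kind predicts is a standardized metric string. A minimal sketch of parsing such a vector into its metrics is shown below; this only illustrates the vector format defined by the CVSS v3.1 specification and is not CVEDrill's implementation (the numeric scoring step is omitted).

    ```python
    # Minimal sketch: split a CVSS v3.1 vector string into a metric dict.
    # Not CVEDrill's code; just the standard vector format for illustration.
    def parse_cvss_vector(vector):
        """Turn 'CVSS:3.1/AV:N/AC:L/...' into {'AV': 'N', 'AC': 'L', ...}."""
        parts = vector.split("/")
        if not parts[0].startswith("CVSS:"):
            raise ValueError("missing CVSS version prefix")
        return dict(p.split(":") for p in parts[1:])

    metrics = parse_cvss_vector("CVSS:3.1/AV:N/AC:L/PR:N/UI:N/S:U/C:H/I:H/A:H")
    # metrics["AV"] == "N" (network attack vector)
    ```

    Predicting the eight base metrics rather than a single score is what lets downstream tooling reason about the attack semantics (vector, privileges, user interaction) when ranking CVEs.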

    A serious game for teaching Java cybersecurity in the industry with an intelligent coach

    Get PDF
    Cybersecurity has been gaining more and more attention over the past years. Nowadays we continue to see a rise in the number of known vulnerabilities and successful cyber-attacks. Several studies show that one of the causes of these problems is the lack of awareness of software developers. If software developers are not aware of how to write secure code, they can unknowingly add vulnerabilities to software. This research focuses on raising Java developers' cybersecurity awareness through a serious-game approach. Our artifact, the Java Cybersecurity Challenges, consists of programming exercises that intend to give software developers hands-on experience with security-related vulnerabilities in the Java programming language. Our designed solution includes an intelligent coach that aims at helping players understand the vulnerabilities and solve the challenges. The present research was conducted using the Action Design Research methodology, which allowed us to reach a useful solution to the problem encountered by applying an iterative development approach. Our results show that the final artifact is a good method to answer the defined problem and has been accepted and incorporated into an industry training program. This work contributes to researchers and practitioners through a detailed description of the implementation of an automatic code analysis and feedback process to evaluate the security level of the Java Cybersecurity Challenges.
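    The automatic analysis-and-feedback loop the abstract mentions can be sketched as a check that inspects a submission and returns coaching advice. This is a toy illustration only: a real analysis would use a proper parser, and the regex rule below is an assumption for demonstration, not the paper's implementation.

    ```python
    import re

    # Toy sketch of one automated check an intelligent coach could run on a
    # player's Java submission: flag SQL built by string concatenation.
    # (Illustrative regex rule; not the paper's actual analysis.)
    SQL_CONCAT = re.compile(r'executeQuery\s*\(\s*".*"\s*\+')

    def feedback(source):
        """Return coaching feedback for a code submission."""
        if SQL_CONCAT.search(source):
            return "Possible SQL injection: build queries with PreparedStatement."
        return "No issues found by this check."
    ```

    Chaining many such checks, each paired with a remediation hint, is one plausible way to turn static analysis findings into the in-game coaching the artifact provides.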

    LLM for SoC Security: A Paradigm Shift

    Full text link
    As the ubiquity and complexity of system-on-chip (SoC) designs increase across electronic devices, the task of incorporating security into an SoC design flow poses significant challenges. Existing security solutions are inadequate to provide effective verification of modern SoC designs due to their limitations in scalability, comprehensiveness, and adaptability. On the other hand, Large Language Models (LLMs) are celebrated for their remarkable success in natural language understanding, advanced reasoning, and program synthesis tasks. Recognizing an opportunity, our research delves into leveraging the emergent capabilities of Generative Pre-trained Transformers (GPTs) to address the existing gaps in SoC security, aiming for a more efficient, scalable, and adaptable methodology. By integrating LLMs into the SoC security verification paradigm, we open a new frontier of possibilities and challenges to ensure the security of increasingly complex SoCs. This paper offers an in-depth analysis of existing works, showcases practical case studies, demonstrates comprehensive experiments, and provides useful guidelines. We also present the achievements, prospects, and challenges of employing LLMs in different SoC security verification tasks. Comment: 42 pages