4 research outputs found

    Ranking Secure Coding Guidelines for Software Developer Awareness Training in the Industry

    Secure coding guidelines are essential material used to train software developers and raise their awareness of secure software development. In industrial environments, developer time is costly and training and education count as non-productive hours, so it is important to address the most important topics first. In this work, we devise a method, based on publicly available real-world vulnerability databases and secure coding guideline databases, to rank important secure coding guidelines according to defined industry-relevant metrics. The goal is to set priorities for a teaching curriculum that raises software developers' cybersecurity awareness of secure coding guidelines. Furthermore, we conduct a small comparison study in which we ask university computer science students how they rank the importance of secure coding guidelines and compare their answers to our results.
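
    The abstract does not spell out the ranking computation; as one hedged illustration, a guideline could be scored by how often the CWE weaknesses it maps to appear in a vulnerability data set. The guideline names, CWE mappings, and counts in the sketch below are hypothetical placeholders, not data or metrics from the paper.

```python
# Minimal sketch (not the paper's actual method): rank secure coding
# guidelines by how often the CWE weaknesses they map to appear in a
# vulnerability data set. All names and counts here are hypothetical.
from collections import Counter

# Hypothetical mapping: secure coding guideline -> CWE IDs it addresses.
guideline_to_cwes = {
    "Validate all untrusted input": {"CWE-20", "CWE-89", "CWE-79"},
    "Use parameterized queries": {"CWE-89"},
    "Handle errors and exceptions safely": {"CWE-209", "CWE-755"},
}

# Hypothetical CWE IDs observed in real-world vulnerability records
# (e.g., extracted from CVE/NVD entries).
observed_cwes = ["CWE-89", "CWE-79", "CWE-89", "CWE-20", "CWE-755", "CWE-89"]
cwe_counts = Counter(observed_cwes)

# Score each guideline by the number of observed vulnerabilities that fall
# under its mapped CWEs, then rank in descending order of that score.
scores = {
    guideline: sum(cwe_counts[cwe] for cwe in cwes)
    for guideline, cwes in guideline_to_cwes.items()
}
ranking = sorted(scores.items(), key=lambda kv: kv[1], reverse=True)

for rank, (guideline, score) in enumerate(ranking, start=1):
    print(f"{rank}. {guideline} (score: {score})")
```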

    Towards Visual Analytics Dashboards for Provenance-driven Static Application Security Testing

    The use of static code analysis tools for security audits can be time-consuming: the many existing tools focus on different aspects, so development teams often use several of them to keep code quality high and prevent security issues. Displaying the results of multiple tools, such as code smells and security warnings, in a unified interface can help developers get a better overview and prioritize upcoming work. We present visualizations and a dashboard that interactively display results from static code analysis for “interesting” commits during development. With this, we aim to provide an effective visual analytics tool for code security analysis results.
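
    As a hedged illustration of the aggregation step such a dashboard needs (not the authors' implementation), the sketch below merges findings from several hypothetical static analysis tools into one per-commit, per-file summary that a visualization layer could consume. Tool names, commit hashes, and findings are invented for the example.

```python
# Minimal sketch: unify findings from multiple static analysis tools into a
# single per-commit view. All tools, commits, and findings are hypothetical.
from collections import defaultdict

# Hypothetical per-tool results keyed by commit hash.
tool_results = {
    "tool_a": {
        "a1b2c3": [{"file": "auth.py", "rule": "sql-injection", "severity": "high"}],
    },
    "tool_b": {
        "a1b2c3": [
            {"file": "auth.py", "rule": "code-smell:long-function", "severity": "low"},
            {"file": "utils.py", "rule": "hardcoded-secret", "severity": "high"},
        ],
    },
}

# Group findings per commit and per file so a dashboard can show one
# combined list instead of one list per tool.
unified = defaultdict(lambda: defaultdict(list))
for tool, commits in tool_results.items():
    for commit, findings in commits.items():
        for finding in findings:
            unified[commit][finding["file"]].append({**finding, "tool": tool})

for commit, files in unified.items():
    high = sum(1 for fs in files.values() for f in fs if f["severity"] == "high")
    print(f"commit {commit}: {high} high-severity finding(s) across {len(files)} file(s)")
    for path, findings in files.items():
        for f in findings:
            print(f"  {path}: [{f['tool']}] {f['rule']} ({f['severity']})")
```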

    Design of secure and robust cognitive system for malware detection

    Machine-learning-based malware detection techniques rely on grayscale images of malware and tend to classify malware based on the distribution of textures in those images. Despite the advances and promising results shown by machine learning techniques, attackers can exploit their vulnerabilities by generating adversarial samples, which are crafted by intelligently adding perturbations to the input samples. Most existing adversarial attacks and defenses are software-based. To defend against adversaries, existing malware detection based on machine learning and grayscale images requires preprocessing of the adversarial data, which introduces additional overhead and can prolong real-time malware detection. As an alternative, we explore an RRAM (Resistive Random Access Memory) based defense against adversaries. The aim of this thesis is therefore to address the above-mentioned critical system security issues by demonstrating proposed techniques for designing a secure and robust cognitive system. First, a novel technique to detect stealthy malware is proposed. The technique converts malware binaries to images, extracts different features from them, and then employs different ML classifiers on the resulting dataset. Results demonstrate that this technique successfully differentiates classes of malware based on the extracted features. Second, I demonstrate the effects of adversarial attacks on a reconfigurable RRAM neuromorphic architecture with different learning algorithms and device characteristics, and I propose an integrated solution for mitigating the effects of the adversarial attack using the reconfigurable RRAM architecture. (Comment: arXiv admin note: substantial text overlap with arXiv:2104.0665)
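
    The grayscale-image idea in the first contribution can be sketched in a few lines: treat a binary's bytes as pixel intensities, extract simple texture-style features, and train an off-the-shelf classifier. This is an assumption-laden toy example with synthetic random "binaries" and placeholder labels, not the thesis's pipeline, feature set, or dataset.

```python
# Minimal sketch (not the thesis implementation): bytes -> grayscale image ->
# crude texture features -> off-the-shelf classifier. Data is synthetic.
import numpy as np
from sklearn.ensemble import RandomForestClassifier

rng = np.random.default_rng(0)

def binary_to_image(data: bytes, width: int = 64) -> np.ndarray:
    """Reshape raw bytes into a 2-D grayscale image with a fixed width."""
    buf = np.frombuffer(data, dtype=np.uint8)
    rows = len(buf) // width
    return buf[: rows * width].reshape(rows, width)

def image_features(img: np.ndarray) -> np.ndarray:
    """Crude global texture features: byte histogram plus mean and std."""
    hist, _ = np.histogram(img, bins=16, range=(0, 256), density=True)
    return np.concatenate([hist, [img.mean() / 255.0, img.std() / 255.0]])

# Synthetic stand-ins for benign and malicious binaries (hypothetical).
samples = [rng.integers(0, 256, size=4096, dtype=np.uint8).tobytes() for _ in range(40)]
labels = np.array([0] * 20 + [1] * 20)  # 0 = benign, 1 = malware (placeholder)

X = np.array([image_features(binary_to_image(s)) for s in samples])
clf = RandomForestClassifier(n_estimators=100, random_state=0).fit(X, labels)
print("training accuracy (on synthetic data):", clf.score(X, labels))
```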

    Understanding How Reverse Engineers Make Sense of Programs from Assembly Language Representations

    This dissertation develops a theory of the conceptual and procedural aspects of how reverse engineers make sense of executable programs. Software reverse engineering is a complex set of tasks that require a person to understand the structure and functionality of a program from its assembly language representation, typically without access to the program's source code. This dissertation describes the reverse engineering process as a type of sensemaking, in which a person combines reasoning and information foraging behaviors to develop a mental model of the program. The structure of knowledge elements used in making sense of executable programs is elicited from a case study, interviews with subject matter experts, and observational studies with software reverse engineers. The results from this research can be used to improve reverse engineering tools, to develop training requirements for reverse engineers, and to develop robust computational models of human comprehension in complex tasks where sensemaking is required.