
    Bridging the Gap: A Survey and Classification of Research-Informed Ethical Hacking Tools

    The majority of Ethical Hacking (EH) tools utilised in penetration testing are developed by practitioners within the industry or underground communities. Academic researchers have also contributed to developing security tools; however, there appears to be limited awareness among practitioners of these academic contributions, creating a significant gap between industry's and academia's contributions to EH tools. This research paper surveys the current state of EH academic research, focusing primarily on research-informed security tools. We categorise these tools using process-based frameworks (such as PTES and MITRE ATT&CK) and knowledge-based frameworks (such as CyBOK and ACM CCS). This classification provides a comprehensive overview of novel, research-informed tools, considering their functionality and application areas. The analysis covers licensing, release dates, source code availability, development activity, and peer review status, providing valuable insights into the current state of research in this field.

    Guiding Quality Assurance Through Context Aware Learning

    Software testing is a quality control activity that, in addition to finding flaws or bugs, provides confidence in the software's correctness. The quality of the developed software depends on the strength of its test suite. Mutation testing has been shown to effectively guide the improvement of a test suite's strength. Mutation is a test adequacy criterion in which test requirements are represented by mutants. Mutants are slight syntactic modifications of the original program that aim to introduce semantic deviations from it, requiring testers to design tests that kill these mutants, i.e., that distinguish the observable behavior of a mutant from that of the original program. This process of designing tests to kill a mutant is performed iteratively for the entire mutant set, which augments the test suite and hence improves its strength. Although mutation testing is empirically validated, a key issue is that its application is expensive due to the large number of low-utility mutants it introduces. Some mutants cannot even be killed, as they are functionally equivalent to the original program. To reduce the application cost, it is imperative to limit the mutants to those that are actually useful. Since identifying such mutants requires manual analysis and test executions, there is no effective solution to this problem, and it remains unclear how to mutate and test code efficiently. Meanwhile, with the advancement of deep learning, several recent works have applied it to source code to automate many non-trivial tasks, including bug fixing, producing code comments, code completion, and program repair. The increasing use of deep learning stems from a combination of factors: the vast availability of data to learn from, specifically source code in open-source repositories; the availability of inexpensive hardware able to run deep learning infrastructures efficiently; and, most compellingly, its ability to learn the categorization of data from code context through its hidden-layer architecture, making it especially proficient at identifying features automatically. Thus, we explore the possibility of employing deep learning to identify only useful mutants, in order to achieve a good trade-off between the invested effort and test effectiveness. As our first contribution, this dissertation proposes Cerebro, a deep learning approach that statically selects subsuming mutants based on the mutants' surrounding code context. Since subsuming mutants reside at the top of the subsumption hierarchy, test cases designed to kill only this minimal subset of mutants kill all the remaining mutants. Our evaluation of Cerebro demonstrates that it preserves the benefits of mutation testing while limiting the application cost, i.e., reducing all cost factors such as equivalent mutants, mutant executions, and mutants requiring analysis.
    Apart from improving test suite strength, mutation testing has proven useful for inferring software specifications. Software specifications describe the software's intended behavior and can be used to distinguish correct from incorrect behaviors. Specification inference techniques infer assertions by generating and filtering candidate assertions through dynamic test executions and mutation testing. Because mutation testing introduces a large number of mutants, such techniques are also computationally expensive, establishing a need to select the mutants best suited for assertion inference. We refer to such mutants as assertion-inferring mutants. In our analysis, we find that assertion-inferring mutants differ significantly from subsuming mutants. We therefore explored whether deep learning can identify assertion-inferring mutants. As our second contribution, this dissertation proposes Seeker, a deep learning approach to statically select assertion-inferring mutants. Our evaluation demonstrates that Seeker enables an assertion inference capability comparable to full mutation analysis while significantly limiting the execution cost.
    In addition to testing software in general, a few works in the literature attempt to employ mutation testing to tackle security-related issues, owing to the fault-based nature of the technique. These works propose mutation operators that convert non-vulnerable code into vulnerable code by mimicking common security bugs. However, such pattern-based approaches have two major limitations. First, designing security-specific mutation operators is not trivial: it requires manual analysis and comprehension of the vulnerability classes. Second, these mutation operators can alter the program semantics in ways that developers find unconvincing and perceive as unrealistic, hindering the usability of the method. On the other hand, with the release of powerful language models trained on large code corpora, e.g., CodeBERT, a new family of mutation testing tools has arisen with the promise of generating natural mutants. We study the extent to which mutants produced by language models can semantically mimic the behavior of vulnerabilities, which we call vulnerability-mimicking mutants. Test cases designed to fail on these mutants will also catch the mimicked vulnerabilities. In our analysis, we found that only a very small subset of mutants is vulnerability-mimicking, yet this set mimics more than half of the vulnerabilities in our dataset. Since no defined features exist to identify vulnerability-mimicking mutants, as our third contribution, this dissertation introduces Mystique, a deep learning approach that automatically extracts features to identify vulnerability-mimicking mutants. Despite their scarcity, Mystique predicts vulnerability-mimicking mutants with high prediction performance, demonstrating that their features can be learned automatically by deep learning models to statically predict these mutants without investing any effort in defining features.
    Since our vulnerability-mimicking mutants cannot mimic all the vulnerabilities, we perceive that these mutants are not a complete representation of all vulnerabilities, and there remains a need for actual vulnerability prediction approaches. Although many such approaches exist in the literature, their performance is limited by a few factors. First, vulnerabilities are rarer than ordinary software bugs, limiting the information one can learn from and thereby the prediction performance. Second, existing approaches learn from both vulnerable and supposedly non-vulnerable components. This introduces unavoidable noise into the training data, since components with no reported vulnerability are treated as non-vulnerable during training, and hence causes existing approaches to perform poorly. We employed deep learning to automatically capture features related to vulnerabilities and explored whether learning on supposedly non-vulnerable components can be avoided. As our final contribution, this dissertation proposes TROVON, a deep learning approach that learns only from components known to be vulnerable, thereby making no assumptions and bypassing the key problem faced by previous techniques. Our comparison of TROVON with existing techniques on security-critical open-source systems with historical vulnerabilities reported in the National Vulnerability Database (NVD) demonstrates that its prediction capability significantly outperforms existing techniques.
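    To make the terminology above concrete, the following is a minimal, hypothetical sketch (not drawn from the dissertation) of a mutant, a test that kills it, and an equivalent mutant that no test can kill:

        # Toy illustration of the mutation testing terminology used above;
        # the programs and the mutation operators are illustrative assumptions.

        def total_price(price_cents, quantity):
            """Original program: total cost in cents."""
            return price_cents * quantity

        def total_price_mutant(price_cents, quantity):
            """Mutant: '*' replaced by '+', a slight syntactic change that
            introduces a semantic deviation from the original program."""
            return price_cents + quantity

        def test_total_price():
            # This test kills the mutant: the original returns 150 while the
            # mutant returns 53, so their observable behavior differs.
            assert total_price(50, 3) == 150

        def absolute(x):
            """Original program."""
            return x if x >= 0 else -x

        def absolute_mutant(x):
            """Equivalent mutant: '>=' replaced by '>'. For x == 0 both
            versions still return 0, so no test can kill it; analyzing such
            mutants is part of the wasted effort the dissertation targets."""
            return x if x > 0 else -x

    Approaches such as Cerebro, Seeker, and Mystique aim to predict statically, from the code context surrounding mutants like these, which of the many generated mutants are worth keeping.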

    Interdependent Security and Compliance in Service Selection

    Application development today is characterized by ever shorter release cycles and more frequent change requests. Hence, development methods such as service composition are increasingly arousing interest as viable alternative approaches. While employing web services as building blocks rapidly reduces development times, it raises new challenges regarding security and compliance, since their implementation remains a black box which usually cannot be controlled. Security in particular becomes even more challenging, since some applications require domain-specific security objectives such as location privacy. Another important aspect is that security objectives are in general not singletons but are subject to interdependence. Hence, this thesis addresses the question of how to consider interdependent security and compliance in service composition. Current approaches for service composition consider neither interdependent security nor compliance. Selecting suitable services for a composition is a combinatorial problem which is known to be NP-hard. This problem is often solved using genetic algorithms in order to obtain near-optimal solutions in reasonable time, particularly when multiple objectives have to be optimized simultaneously, such as price, runtime, and data encryption strength. Security properties of compositions are usually verified using formal methods. However, none of the available methods supports interdependence effects or the definition of arbitrary security objectives. Similarly, no current approach ensures compliance of service compositions during service selection. Instead, compliance is verified afterwards, which might necessitate repeating the selection process in case of a non-compliant solution. In this thesis, novel approaches for considering interdependent security and compliance in service composition are presented and discussed. Since no formal methods exist that cover interdependence effects for security, this aspect is addressed by means of a security assessment. An assessment method is developed which builds upon the notion of structural decomposition in order to assess the fulfillment of arbitrary security objectives in terms of a utility function. Interdependence effects are modeled as dependencies between utility functions. To enable compliance awareness, an approach is presented which checks the compliance of compositions during service selection and marks non-compliant parts. This makes it possible to repair the corresponding parts during the selection process by replacing the current services, and hence avoids the need to repeat the selection process. It is demonstrated how to embed the presented approaches into a genetic algorithm in order to ease integration with existing approaches for service composition. The developed approaches are compared to state-of-the-art genetic algorithms using simulations.
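    As a rough sketch of how such utility functions and their interdependencies might feed a genetic algorithm's fitness evaluation (all names, weights, and the dependency model below are illustrative assumptions, not the thesis's actual formulation):

        import random

        # Each candidate composition is a list of selected services; each
        # security objective is scored by a utility function in [0, 1].

        def encryption_utility(services):
            # Weakest-link utility: the composition is only as strong as its
            # least strongly encrypted service.
            return min(s["enc_strength"] for s in services)

        def location_privacy_utility(services, enc_utility):
            # Interdependence modeled as a dependency between utility
            # functions: location privacy here depends on encryption strength.
            return min(s["loc_privacy"] for s in services) * enc_utility

        def fitness(services):
            # Non-compliant services are penalized so that the genetic
            # algorithm tends to replace (repair) them in later generations.
            enc = encryption_utility(services)
            priv = location_privacy_utility(services, enc)
            price = sum(s["price"] for s in services)
            violations = sum(1 for s in services if not s["compliant"])
            return enc + priv - 0.01 * price - 1.0 * violations

        def tournament_selection(population, k=3):
            # One building block of a genetic algorithm: pick the fittest of
            # k randomly drawn candidate compositions.
            return max(random.sample(population, k), key=fitness)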

    Tackling the Challenges of Information Security Incident Reporting: A Decentralized Approach

    Information security incident under-reporting is unambiguously a business problem, as identified by a variety of sources, such as ENISA (2012), Symantec (2016), and Newman (2018), among others. This research project identified the underlying issues that cause this problem and proposed a solution, in the form of an innovative artefact, which confronts a number of these issues. The project was conducted according to the requirements of the Design Science Research Methodology (DSRM) by Peffers et al. (2007). The research question set at the beginning of the project probed the feasibility of an incident reporting solution that would increase users' motivation to report incidents by retaining the positive features offered by existing solutions while also providing added value to users. The comprehensive literature review chapter set the stage, identified the reasons for incident under-reporting, and evaluated the existing solutions, determining their advantages and disadvantages. The objectives of the proposed artefact were then set, and the artefact was designed and developed. The output of this development endeavour is "IRDA", the first decentralized incident reporting application (DApp), built on "Quorum", a permissioned blockchain implementation of Ethereum. Its effectiveness was demonstrated when six organizations agreed to use the developed artefact and performed a series of pre-defined actions in order to confirm the platform's intended functionality. The platform was also evaluated using Venable et al.'s (2012) evaluation framework for DSR projects. This research project contributes to knowledge in various ways. It investigates blockchain and incident reporting, two domains which have not been extensively examined and for which the available literature is rather limited. Furthermore, it identifies, compares, and evaluates the conventional reporting platforms available to date. In line with previous findings (e.g., Humphrey, 2017), it also confirms the lack of standard taxonomies for information security incidents. This work further contributes by creating a functional, practical artefact in the blockchain domain, a domain where, according to Taylor et al. (2019), most studies are either experimental proposals or theoretical concepts with limited practicality in solving real-world problems. Through the evaluation activity, and by conducting a series of non-parametric significance tests, it also suggests that IRDA can potentially increase users' motivation to report incidents. This thesis describes an original attempt to utilize the newly emergent blockchain technology, and its inherent characteristics, to address the concerns which actively contribute to the business problem. To the best of the researcher's knowledge, there is currently no other solution offering similar benefits to users and organizations for incident reporting purposes. Through the accomplishment of the project's pre-set objectives, the developed artefact provides a positive answer to the research question. The artefact, featuring increased anonymity, availability, immutability, and transparency, as well as an overall lower cost, has the potential to increase organizations' motivation to report incidents, thus improving the currently dismal statistics of incident under-reporting.
    The structure of this document follows the flow of activities described in the DSRM by Peffers et al. (2007), while also borrowing some elements from the nominal structure of an empirical research process, including the literature review chapter, the description of the selected research methodology, and the "discussion and conclusion" chapter.
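    The abstract does not describe IRDA's interface, but a minimal client-side sketch of how an incident report is typically submitted to a smart contract on a permissioned Ethereum/Quorum node (the contract, ABI, function name, and node URL below are hypothetical illustrations, not IRDA's actual API) could look as follows:

        from web3 import Web3

        # Hypothetical minimal ABI for an incident-reporting contract; IRDA's
        # real contract and its fields are not specified in the abstract.
        INCIDENT_REGISTRY_ABI = [{
            "name": "reportIncident",
            "type": "function",
            "stateMutability": "nonpayable",
            "inputs": [
                {"name": "category", "type": "string"},
                {"name": "occurredAt", "type": "string"},
                {"name": "description", "type": "string"},
            ],
            "outputs": [],
        }]

        # Connect to a (local) permissioned Quorum/Ethereum node.
        w3 = Web3(Web3.HTTPProvider("http://localhost:22000"))

        contract = w3.eth.contract(
            address="0x0000000000000000000000000000000000000000",  # placeholder
            abi=INCIDENT_REGISTRY_ABI,
        )

        # Submitting a report becomes an immutable, timestamped transaction.
        tx_hash = contract.functions.reportIncident(
            "phishing",
            "2020-01-01T00:00:00Z",
            "Credential phishing email reported by the finance department.",
        ).transact({"from": w3.eth.accounts[0]})

        receipt = w3.eth.wait_for_transaction_receipt(tx_hash)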

    The global vulnerability discovery and disclosure system: a thematic system dynamics approach

    Vulnerabilities within software are the fundamental issue that provides both the means and the opportunity for malicious threat actors to compromise critical IT systems (Younis et al., 2016). Consequently, the reduction of vulnerabilities within software should be of paramount importance; however, it is argued that software development practitioners have historically failed to reduce the risks associated with software vulnerabilities. This failure is illustrated by the growth of software vulnerabilities over the past 20 years. This increase, which is both unprecedented and unwelcome, has led to an acknowledgement that novel and radical approaches are needed both to understand the vulnerability discovery and disclosure system (VDDS) and to mitigate software vulnerability centred risk (Bradbury, 2015; Marconato et al., 2012). The findings from this research show that whilst technological mitigations are vital, the social and economic features of the VDDS are of critical importance. For example, hitherto unknown systemic themes identified by this research are key and include: Perception of Punishment; Vendor Interactions; Disclosure Stance; Ethical Considerations; Economic Factors for Discovery and Disclosure; and Emergence of New Vulnerability Markets. Each theme uniquely impacts the system and, ultimately, the scale of vulnerability-based risks. Within the research, each theme within the VDDS is represented by several key variables which interact and shape the system, specifically: Vendor Sentiment; Vulnerability Removal Rate; Time to Fix; Market Share; Participants within the VDDS; Full and Coordinated Disclosure Ratio; and Participant Activity. Each variable is quantified and explored, defining both the parameter space and its progression over time. These variables are utilised within a system dynamics model to simulate differing policy strategies and assess the impact of these policies upon the VDDS. Three simulated vulnerability disclosure futures are hypothesised and presented, characterised as depletion, steady, and exponential, with each scenario dependent upon the parameter space within the key variables.
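    As a highly simplified sketch of how a stock-and-flow system dynamics model can produce three such futures (the variable names, rates, and update rule below are illustrative assumptions, not the thesis's calibrated model):

        # Toy stock-and-flow simulation: a single stock of unresolved
        # vulnerabilities, changed each year by a discovery inflow and a
        # removal outflow. All parameters are illustrative assumptions.

        def simulate(discovery_rate, removal_rate, years=20, stock=1000.0):
            trajectory = [stock]
            for _ in range(years):
                inflow = discovery_rate * stock    # newly disclosed vulnerabilities
                outflow = removal_rate * stock     # vulnerabilities fixed/removed
                stock = max(stock + inflow - outflow, 0.0)
                trajectory.append(stock)
            return trajectory

        # Three stylised policy futures: removal outpaces discovery
        # (depletion), balances it (steady), or lags behind it (exponential).
        depletion   = simulate(discovery_rate=0.05, removal_rate=0.15)
        steady      = simulate(discovery_rate=0.10, removal_rate=0.10)
        exponential = simulate(discovery_rate=0.20, removal_rate=0.05)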

    17th SC@RUG 2020 proceedings 2019-2020


    3D-in-2D Displays for ATC.

    This paper reports on the efforts and accomplishments of the 3D-in-2D Displays for ATC project at the end of Year 1. We describe the invention of 10 novel 3D/2D visualisations, most of which were implemented using the Augmented Reality ARToolkit. These prototype implementations of visualisation and interaction elements can be viewed in the accompanying video. We have identified six candidate design concepts which we will research and develop further. These designs correspond to the early feasibility studies stage of maturity as defined by the NASA Technology Readiness Level framework. We developed the Combination Display Framework from a review of the literature, and used it to analyse display designs in terms of the display techniques used and how they are combined. The insights gained from this framework then guided our inventions and the human-centered innovation process we use to invent iteratively. Our designs are based on an understanding of user work practices. We also developed a simple ATC simulator that we used for rapid experimentation and evaluation of design ideas. We expect that, if this project continues, the effort in Years 2 and 3 will focus on maturing the concepts and deploying them in an operational laboratory setting.