
    Penetration Testing Frameworks and methodologies: A comparison and evaluation

    Cyber security is fast becoming a strategic priority for both governments and private organisations. With technology abundantly available, and with the unbridled growth in the size and complexity of information systems, cyber criminals have a multitude of targets. Cyber security assessments are therefore becoming common practice as concerns about information security grow. Penetration testing is one strategy used to mitigate the risk of cyber-attack: penetration testers attempt to compromise systems using the same tools and techniques as malicious attackers, and thus aim to identify vulnerabilities before an attack occurs. Penetration testing can be complex depending on the scope and domain under investigation; for this reason it is often managed like a project, necessitating the implementation of some framework or methodology. Fortunately, an array of penetration testing methodologies and frameworks is available to facilitate such projects; however, determining what is a framework and what is a methodology in this context can lend itself to uncertainty. Furthermore, little mature work exists on measuring the quality of such frameworks. This research defines the concepts of “methodology” and “framework” within a penetration testing context. In addition, it presents a gap analysis of the theoretical vs. the practical classification of nine penetration testing frameworks and/or methodologies, and subsequently selects two frameworks to undergo quality evaluation using a real-world case study. Quality characteristics were derived from a review of four quality models, building the foundation for a proposed penetration testing quality model: a modified version of an ISO quality model against which the two chosen frameworks were evaluated. Suitable definitions of “framework” and “methodology” were formed by analysing the properties of each category, and a Framework vs. Methodology Characteristics matrix is presented. Extending this nomenclature resolution, a gap analysis was performed to determine whether a framework is actually a framework, i.e., whether it has a sound underlying ontology; in contrast, many “frameworks” appear to be simply collections of tools or techniques. Two frameworks, OWASP’s Testing Guide and the Information System Security Assessment Framework (ISSAF), were then employed to perform penetration tests on a real-world case study to facilitate quality evaluation against the proposed quality model. The research suggests there are various ways in which the quality of penetration testing frameworks can be measured, and concludes that quality evaluation is possible.
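
    The abstract does not reproduce the scoring scheme itself; as a rough illustration of how an ISO-style weighted quality model can be applied to compare two frameworks, the sketch below computes weighted scores over quality characteristics. All characteristic names, weights and ratings are invented for illustration, not taken from the paper's actual model.

    ```python
    # Illustrative only: characteristic names, weights and ratings below are
    # hypothetical, not the paper's actual penetration testing quality model.

    # Weighted quality characteristics, loosely in the style of an ISO 25010
    # quality model adapted to penetration testing. Weights sum to 1.
    CHARACTERISTICS = {
        "completeness": 0.3,     # coverage of the testing life cycle
        "usability": 0.2,        # clarity of guidance for the tester
        "maintainability": 0.2,  # how actively the framework is updated
        "tool_support": 0.3,     # availability of supporting tooling
    }

    def quality_score(ratings: dict[str, float]) -> float:
        """Weighted sum of per-characteristic ratings (each in 0..1)."""
        return sum(CHARACTERISTICS[name] * ratings[name] for name in CHARACTERISTICS)

    # e.g. comparing two frameworks evaluated on the same case study
    owasp = {"completeness": 0.9, "usability": 0.8, "maintainability": 0.9, "tool_support": 0.7}
    issaf = {"completeness": 0.7, "usability": 0.5, "maintainability": 0.2, "tool_support": 0.6}
    print(f"OWASP Testing Guide: {quality_score(owasp):.2f}")
    print(f"ISSAF:               {quality_score(issaf):.2f}")
    ```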

    Model based testing for agent systems

    Although agent technology is gaining worldwide popularity, a hindrance to its uptake is the lack of proper testing mechanisms for agent-based systems. While many traditional software testing methods can be generalized to agent systems, there are many aspects that are different and that require an understanding of the underlying agent paradigm. In this paper we present certain aspects of a testing framework that we have developed for agent-based systems. The testing framework is a model-based approach using the design models of the Prometheus agent development methodology. We focus on unit testing: we identify the appropriate units, present mechanisms for generating suitable test cases and for determining the order in which the units are to be tested, and give a brief overview of the unit testing process together with an example. Although we use the design artefacts from Prometheus, the approach is suitable for any plan- and event-based agent system.
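
    The abstract does not spell out how the test order is computed; a common way to order units for bottom-up testing, sketched below under that assumption, is to topologically sort a dependency graph over the units. The unit names here are hypothetical Prometheus-style plans and events, not taken from the paper.

    ```python
    # A minimal sketch (not necessarily the paper's algorithm): ordering agent
    # units for bottom-up testing by topologically sorting a dependency graph.
    from graphlib import TopologicalSorter

    # Hypothetical units: each maps to the units it depends on, so that
    # dependencies are always tested before the units that use them.
    deps = {
        "BookFlightPlan": {"QueryAvailabilityEvent", "ConfirmBookingEvent"},
        "QueryAvailabilityEvent": set(),
        "ConfirmBookingEvent": {"PaymentPlan"},
        "PaymentPlan": set(),
    }

    # static_order() yields each unit only after all of its dependencies.
    test_order = list(TopologicalSorter(deps).static_order())
    print(test_order)
    # e.g. ['QueryAvailabilityEvent', 'PaymentPlan', 'ConfirmBookingEvent', 'BookFlightPlan']
    ```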

    A Fast and Accurate Cost Model for FPGA Design Space Exploration in HPC Applications

    Heterogeneous High-Performance Computing (HPC) platforms present a significant programming challenge, especially because the key users of HPC resources are scientists, not parallel programmers. We contend that compiler technology has to evolve to automatically create the best program variant by transforming a given original program. We have developed a novel methodology based on type transformations for generating correct-by-construction design variants, and an associated lightweight cost model for evaluating these variants for implementation on FPGAs. In this paper we present a key enabler of our approach: the cost model. We discuss how we are able to quickly derive accurate estimates of performance and resource utilization from the design’s representation in our intermediate language. We show results confirming the accuracy of our cost model by testing it on three different scientific kernels. We conclude with a case study that compares a solution generated by our framework with one from a conventional high-level synthesis tool, showing better performance and power efficiency using our cost-model-based approach.
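
    As a toy illustration of the kind of additive estimate such a lightweight cost model can produce from an intermediate representation: the operation names, cost table and pipelining rule below are invented for this example and are not the paper's intermediate language.

    ```python
    # Illustrative sketch only: a toy additive cost model in the spirit
    # described above, with made-up per-operation costs.

    # Hypothetical per-operation cost table: (cycles, DSP slices, LUTs)
    OP_COSTS = {
        "fmul": (3, 2, 100),
        "fadd": (2, 1, 80),
        "load": (4, 0, 50),
        "store": (4, 0, 50),
    }

    def estimate(ir_ops: list[str], pipeline_depth: int = 1) -> dict[str, float]:
        """Estimate latency and resource use of a straight-line kernel body."""
        cycles = sum(OP_COSTS[op][0] for op in ir_ops)
        dsps = sum(OP_COSTS[op][1] for op in ir_ops)
        luts = sum(OP_COSTS[op][2] for op in ir_ops)
        # A pipelined variant trades replicated resources for lower effective latency.
        return {"cycles": cycles / pipeline_depth,
                "dsp": dsps * pipeline_depth,
                "lut": luts * pipeline_depth}

    # Quickly comparing two design variants of the same kernel:
    kernel = ["load", "fmul", "fadd", "store"]
    print(estimate(kernel, pipeline_depth=1))
    print(estimate(kernel, pipeline_depth=4))
    ```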

    On the power of conditional independence testing under model-X

    For testing conditional independence (CI) of a response Y and a predictor X given covariates Z, the recently introduced model-X (MX) framework has been the subject of active methodological research, especially in the context of MX knockoffs and their successful application to genome-wide association studies. In this paper, we study the power of MX CI tests, yielding quantitative explanations for empirically observed phenomena and novel insights to guide the design of MX methodology. We show that any valid MX CI test must also be valid conditionally on Y and Z; this conditioning allows us to reformulate the problem as testing a point null hypothesis involving the conditional distribution of X. The Neyman-Pearson lemma then implies that the conditional randomization test (CRT) based on a likelihood statistic is the most powerful MX CI test against a point alternative. We also obtain a related optimality result for MX knockoffs. Switching to an asymptotic framework with arbitrarily growing covariate dimension, we derive an expression for the limiting power of the CRT against local semiparametric alternatives in terms of the prediction error of the machine learning algorithm on which its test statistic is based. Finally, we exhibit a resampling-free test with uniform asymptotic Type-I error control under the assumption that only the first two moments of X given Z are known, a significant relaxation of the MX assumption.
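
    A minimal sketch of the CRT idea under the MX assumption that the distribution of X given Z is known, assuming for illustration that it is Gaussian with a linear mean; the test statistic and data-generating model below are choices of this example, not the paper's.

    ```python
    # Conditional randomization test (CRT) sketch: resample X from its known
    # conditional distribution given Z, holding Y and Z fixed, and compare the
    # observed statistic to the resampled ones.
    import numpy as np

    rng = np.random.default_rng(0)
    n, p, B = 500, 5, 999
    beta = rng.normal(size=p)

    Z = rng.normal(size=(n, p))
    X = Z @ beta + rng.normal(size=n)           # known model for X | Z
    Y = 0.2 * X + Z[:, 0] + rng.normal(size=n)  # Y depends on X, so H0 is false

    def stat(x, y, z):
        # Statistic: |correlation| of Y with the part of X not explained by Z.
        resid = x - z @ np.linalg.lstsq(z, x, rcond=None)[0]
        return abs(resid @ y)

    t_obs = stat(X, Y, Z)
    t_null = np.array([stat(Z @ beta + rng.normal(size=n), Y, Z) for _ in range(B)])
    pval = (1 + np.sum(t_null >= t_obs)) / (B + 1)  # valid finite-sample p-value
    print(f"CRT p-value: {pval:.3f}")
    ```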

    Investment in R&D, Costs of Adjustment and Expectations

    This paper proposes a framework which integrates convex costs of adjustment and expectations formation in the determination of investment decisions in R&D at the firm level. The model is based on cost minimization subject to the firm's expectations of the stream of output and the price of R&D, and results in equations for actual and multiple-span planned investment in R&D and for the realization error as functions of these expectations. The model accommodates alternative mechanisms of expectations formation and provides a methodology for testing these hypotheses empirically. We derive estimable equations and testable parameter restrictions for the rational, adaptive and static expectations hypotheses. The empirical results using pooled firm data strongly reject the rational and static expectations hypotheses and generally support adaptive expectations.
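
    For concreteness, the textbook form of the adaptive expectations hypothesis (not necessarily the paper's exact parameterisation) is

    ```latex
    x^{e}_{t} = x^{e}_{t-1} + \lambda\,\bigl(x_{t-1} - x^{e}_{t-1}\bigr),
    \qquad 0 < \lambda \le 1,
    ```

    so the current expectation revises the previous one by a fraction λ of the last forecast error; setting λ = 1 collapses this to static expectations, x^e_t = x_{t-1}, which illustrates the kind of testable parameter restriction such a framework admits.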

    Goodness-of-fit testing in high dimensional generalized linear models

    We propose a family of tests to assess the goodness-of-fit of a high-dimensional generalized linear model. Our framework is flexible: it may be used to construct an omnibus test, be directed at specific non-linearities and interaction effects, or test the significance of groups of variables. The methodology is based on extracting left-over signal from the residuals of an initial generalized linear model fit. This can be achieved by predicting this signal from the residuals using modern flexible regression or machine learning methods such as random forests or boosted trees. Under the null hypothesis that the generalized linear model is correct, no signal is left in the residuals and our test statistic has a Gaussian limiting distribution, translating to asymptotic control of type I error. Under a local alternative, we establish a guarantee on the power of the test. We illustrate the effectiveness of the methodology on simulated and real data examples by testing goodness-of-fit in logistic regression models. Software implementing the methodology is available in the R package 'GRPtests'.
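
    A rough Python sketch of the residual-prediction idea described above; the reference implementation is the R package 'GRPtests', and the sample-splitting scheme, statistic and simulated data below are simplifications chosen for this example.

    ```python
    # Sketch: fit a logistic GLM, then test whether a random forest can predict
    # signal left over in the residuals (lack of fit) on held-out data.
    import numpy as np
    from sklearn.linear_model import LogisticRegression
    from sklearn.ensemble import RandomForestRegressor

    rng = np.random.default_rng(1)
    n, p = 2000, 5
    X = rng.normal(size=(n, p))
    # True model has an interaction the linear-logit GLM omits -> lack of fit.
    logit = X[:, 0] + X[:, 1] + 1.5 * X[:, 0] * X[:, 1]
    Y = rng.binomial(1, 1 / (1 + np.exp(-logit)))

    glm = LogisticRegression(max_iter=1000).fit(X, Y)
    resid = Y - glm.predict_proba(X)[:, 1]  # residuals from the initial GLM fit

    # Sample splitting: learn a direction on one half, test for signal on the other.
    half = n // 2
    rf = RandomForestRegressor(n_estimators=200, random_state=0).fit(X[:half], resid[:half])
    pred = rf.predict(X[half:])

    # Under H0 (GLM correct) the residuals carry no signal, so this normalised
    # inner product is asymptotically standard Gaussian.
    r = resid[half:]
    T = (pred @ r) / np.sqrt(np.sum(pred**2 * r**2))
    print(f"test statistic = {T:.2f} (compare to a N(0,1) upper quantile)")
    ```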