    Static analysis for facilitating secure and reliable software

    Software security and reliability are major concerns for software development enterprises that wish to deliver dependable software to their customers. Several static analysis-based approaches for facilitating the development of secure and reliable software have been proposed over the years. The purpose of the present thesis is to investigate these approaches and to extend their state of the art by addressing open issues that have not yet been sufficiently addressed. To this end, an empirical study was initially conducted to investigate the ability of software metrics (e.g., complexity metrics) to discriminate between different types of vulnerabilities, and to examine whether interdependencies exist between different vulnerability types. The results of the analysis revealed that software metrics can be used only as weak indicators of specific security issues, while important interdependencies may exist between different types of vulnerabilities. The study also verified the capacity of software metrics (including previously uninvestigated metrics) to indicate the existence of vulnerabilities in general. Subsequently, a hierarchical security assessment model is proposed that quantifies the internal security level of software products based on static analysis alerts and software metrics. The model is practical, since it is fully automated and operationalized in the form of individual tools, and sufficiently reliable, since it was built on empirical data and well-accepted sources of information. An extensive evaluation of the model on a large volume of empirical data revealed that it can reliably assess software security at both the product and the class level of granularity, with sufficient discriminative power, and that it may also be used for vulnerability prediction. The experimental results also provide further support for the ability of static analysis alerts and software metrics to indicate the existence of software vulnerabilities. Finally, a mathematical model is proposed for calculating the optimum checkpoint interval, i.e., the checkpoint interval that minimizes the execution time of software programs that adopt the application-level checkpoint and restart (ALCR) mechanism. The optimum checkpoint interval was found to depend on the failure rate of the application, the execution cost of establishing a checkpoint, and the execution cost of restarting a program after failure. Emphasis was placed on programs with loops, and the results are illustrated through several numerical examples.
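
    A minimal sketch of the hierarchical aggregation idea follows: alert densities and metric values are normalized onto [0, 1] and rolled up into security-property scores and a single index through weighted sums. All measurement names, normalization bounds, and weights below are illustrative assumptions; the thesis derives its own calibrated hierarchy.

        # Hierarchical security assessment sketch: normalized static-analysis
        # alert densities and software metrics -> property scores -> one index.
        # Names, bounds, and weights are illustrative assumptions.

        def normalize(value, worst, best):
            """Map a raw measurement onto [0, 1], where 1 is most secure."""
            if worst == best:
                return 1.0
            return max(0.0, min(1.0, (worst - value) / (worst - best)))

        measurements = {                     # hypothetical values for one class
            "null_deref_density": 1.2,       # alerts per KLOC
            "tainted_input_density": 0.4,    # alerts per KLOC
            "cyclomatic_complexity": 18.0,
        }
        bounds = {                           # hypothetical (worst, best) pairs
            "null_deref_density": (5.0, 0.0),
            "tainted_input_density": (3.0, 0.0),
            "cyclomatic_complexity": (40.0, 5.0),
        }
        properties = {                       # hypothetical two-level hierarchy
            "reliability": {"null_deref_density": 0.6, "cyclomatic_complexity": 0.4},
            "confidentiality": {"tainted_input_density": 1.0},
        }
        property_weights = {"reliability": 0.5, "confidentiality": 0.5}

        prop_scores = {
            p: sum(w * normalize(measurements[m], *bounds[m]) for m, w in ws.items())
            for p, ws in properties.items()
        }
        index = sum(property_weights[p] * s for p, s in prop_scores.items())
        print(prop_scores, round(index, 3))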
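    The checkpointing trade-off at the end of the abstract can be illustrated with a first-order approximation. Assuming Poisson failures and amortizing the checkpoint cost over the interval, minimizing the per-unit-time overhead yields Young's classic optimum T = sqrt(2C/lambda); at this order the restart cost contributes only a constant term, whereas the thesis's own model for programs with loops is more detailed. The sketch below illustrates the trade-off only; it is not the thesis's derivation.

        import math

        def expected_overhead(interval, checkpoint_cost, restart_cost, failure_rate):
            """Approximate overhead per unit time when checkpointing every
            `interval` seconds: the checkpoint cost amortized over the interval,
            plus the expected rework and restart cost after a failure
            (first-order approximation, Poisson failures at `failure_rate`)."""
            rework = failure_rate * (interval / 2.0 + restart_cost)
            return checkpoint_cost / interval + rework

        def young_optimum(checkpoint_cost, failure_rate):
            """Young's (1974) first-order optimum: T = sqrt(2 * C / lambda)."""
            return math.sqrt(2.0 * checkpoint_cost / failure_rate)

        # Illustrative numbers: a checkpoint costs 10 s, a restart 30 s,
        # and failures arrive about once every 10 hours.
        C, R, lam = 10.0, 30.0, 1.0 / 36000.0
        t_opt = young_optimum(C, lam)
        print(f"optimum interval ~ {t_opt:.0f} s, "
              f"overhead {expected_overhead(t_opt, C, R, lam):.4f}")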

    Towards Validating Risk Indicators Based on Measurement Theory (Extended version)

    Due to the lack of quantitative information, and for cost-efficiency, most risk assessment methods use partially ordered values (e.g., high, medium, low) as risk indicators. In practice it is common to validate risk indicators by asking stakeholders whether they make sense. This form of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. For instance, in an extended enterprise this may mean over-investing in service level agreements or obtaining a contract that provides a lower security level than the system requires. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk indicators that they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a risk assessment method specially developed for assessing confidentiality risks in networks of organizations.
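
    The measurement-theoretic criterion can be made concrete: a statement about ordinal indicators is meaningful only if its truth value survives every order-preserving recoding of the scale. The short sketch below, with made-up risk profiles and two monotone codings of low < medium < high, shows that a comparison of means fails this test while a comparison of medians passes it.

        # Meaningfulness check for ordinal risk indicators: a claim must stay
        # true under every order-preserving recoding of low < medium < high.
        # The risk profiles below are made-up examples for illustration.
        from statistics import mean, median

        coding_a = {"low": 1, "medium": 2, "high": 3}    # one monotone coding
        coding_b = {"low": 1, "medium": 2, "high": 10}   # another monotone coding

        system_1 = ["low", "low", "high"]                # hypothetical profiles
        system_2 = ["medium", "medium", "medium"]

        for name, coding in [("coding A", coding_a), ("coding B", coding_b)]:
            s1 = [coding[r] for r in system_1]
            s2 = [coding[r] for r in system_2]
            print(name,
                  "| mean says system_1 riskier:", mean(s1) > mean(s2),
                  "| median says system_1 riskier:", median(s1) > median(s2))
        # The mean-based verdict flips between the two codings (not
        # meaningful); the median-based verdict does not.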

    Estimating Impact and Frequency of Risks to Safety and Mission Critical Systems Using CVSS

    Many safety- and mission-critical systems depend on the correct and secure operation of both supportive and core software systems. For example, both the safety of personnel and the effective execution of core missions on an oil platform depend on the correct recording, storing, transfer, and interpretation of data, such as that of the Logging While Drilling (LWD) and Measurement While Drilling (MWD) subsystems. Here, data is recorded on site, packaged, and then transferred to an on-shore operational centre. Today, the data is transferred over dedicated communication channels to ensure a secure and safe transfer, free from deliberate and accidental faults. However, as cost control becomes ever more important, some of these transfers will in the future take place over remotely accessible infrastructure. Thus, communication will be prone to known security vulnerabilities exploitable by outsiders. This paper presents a model that estimates the risk level of known vulnerabilities as a combination of frequency and impact estimates derived from the Common Vulnerability Scoring System (CVSS). The model is implemented as a Bayesian Belief Network (BBN).
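
    A rough sketch of the frequency and impact derivation is given below. The sub-score equations and constants follow the CVSS v2 base-metric specification; the discretization into ordinal bands and the risk matrix standing in for the paper's BBN are illustrative assumptions only.

        # Frequency- and impact-like estimates from CVSS v2 base metrics,
        # combined into a risk level. Banding and the risk matrix are
        # illustrative; the paper's actual BBN is not reproduced here.
        ACCESS_VECTOR = {"L": 0.395, "A": 0.646, "N": 1.0}
        ACCESS_COMPLEXITY = {"H": 0.35, "M": 0.61, "L": 0.71}
        AUTHENTICATION = {"M": 0.45, "S": 0.56, "N": 0.704}
        CIA_IMPACT = {"N": 0.0, "P": 0.275, "C": 0.660}

        def exploitability(av, ac, au):
            # CVSS v2: Exploitability = 20 * AV * AC * Au (range 0..10)
            return 20 * ACCESS_VECTOR[av] * ACCESS_COMPLEXITY[ac] * AUTHENTICATION[au]

        def impact(c, i, a):
            # CVSS v2: Impact = 10.41 * (1 - (1-C)(1-I)(1-A)) (range 0..10)
            return 10.41 * (1 - (1 - CIA_IMPACT[c]) * (1 - CIA_IMPACT[i]) * (1 - CIA_IMPACT[a]))

        def band(score):
            # Illustrative discretization into the ordinal states a BBN could use.
            return "low" if score < 4 else "medium" if score < 7 else "high"

        RISK_MATRIX = {  # illustrative frequency x impact combination
            ("low", "low"): "low", ("low", "medium"): "low", ("low", "high"): "medium",
            ("medium", "low"): "low", ("medium", "medium"): "medium", ("medium", "high"): "high",
            ("high", "low"): "medium", ("high", "medium"): "high", ("high", "high"): "high",
        }

        # Example base vector AV:N/AC:L/Au:N/C:P/I:P/A:C
        freq = band(exploitability("N", "L", "N"))
        imp = band(impact("P", "P", "C"))
        print(f"frequency={freq}, impact={imp}, risk={RISK_MATRIX[(freq, imp)]}")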

    Towards Validating Risk Indicators Based on Measurement Theory

    Due to the lack of quantitative information, and for cost-efficiency, most risk assessment methods use partially ordered values (e.g., high, medium, low) as risk indicators. In practice it is common to validate risk scales by asking stakeholders whether they make sense. This form of validation is subjective and thus error-prone. If the metrics are wrong (not meaningful), they may lead system owners to distribute security investments inefficiently. Therefore, when validating risk assessment methods it is important to validate the meaningfulness of the risk scales that they use. In this paper we investigate how to validate the meaningfulness of risk indicators based on measurement theory. Furthermore, to analyze the applicability of measurement theory to risk indicators, we analyze the indicators used by a particular risk assessment method specially developed for assessing confidentiality risks in networks of organizations.

    Structure Refinement for Vulnerability Estimation Models using Genetic Algorithm Based Model Generators

    In this paper, a method for model structure refinement is proposed and applied to the estimation of the cumulative number of vulnerabilities over time. Security as a quality characteristic is presented and defined. Vulnerabilities are defined and their importance is assessed. Existing models used for estimating the number of vulnerabilities are enumerated and their structure is inspected. The principles of genetic model generators are reviewed. Model structure refinement is defined in contrast with model refinement, and a method for model structure refinement is proposed. A case study shows how the method is applied and presents the obtained results.
    Keywords: model structure refinement, model generators, gene expression programming, software vulnerabilities, performance criteria, software metrics
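
    A minimal sketch of the idea, under simplifying assumptions, is shown below: a chromosome is a bitmask selecting basis terms of the model structure, coefficients of the selected terms are fitted by least squares, and fitness combines the residual error with a size penalty. The basis set, GA parameters, and synthetic data are all illustrative; the paper itself employs gene expression programming rather than this plain bitmask GA.

        # GA-based model structure refinement sketch: evolve which basis terms
        # enter the cumulative-vulnerabilities model V(t); fit coefficients of
        # the selected terms by least squares. Data and parameters are made up.
        import math
        import random
        import numpy as np

        random.seed(1)
        t = np.arange(1, 61, dtype=float)                 # months since release
        true_v = 100 / (1 + np.exp(-0.15 * (t - 30)))     # synthetic S-curve
        data = true_v + np.random.default_rng(1).normal(0, 2, t.size)

        # Candidate basis terms for V(t); the GA picks a subset.
        BASIS = [lambda t: np.ones_like(t), lambda t: t, lambda t: t**2,
                 lambda t: np.log(t), lambda t: 1 / (1 + np.exp(-0.15 * (t - 30)))]

        def fitness(mask):
            cols = [f(t) for f, bit in zip(BASIS, mask) if bit]
            if not cols:
                return math.inf
            X = np.column_stack(cols)
            coef, *_ = np.linalg.lstsq(X, data, rcond=None)
            sse = float(np.sum((X @ coef - data) ** 2))
            return sse + 10.0 * sum(mask)                 # penalize structure size

        def evolve(pop_size=20, generations=40, p_mut=0.1):
            pop = [[random.randint(0, 1) for _ in BASIS] for _ in range(pop_size)]
            for _ in range(generations):
                pop.sort(key=fitness)
                parents = pop[: pop_size // 2]            # truncation selection
                children = []
                while len(children) < pop_size - len(parents):
                    a, b = random.sample(parents, 2)
                    cut = random.randrange(1, len(BASIS))  # one-point crossover
                    child = a[:cut] + b[cut:]
                    child = [bit ^ (random.random() < p_mut) for bit in child]
                    children.append(child)
                pop = parents + children
            return min(pop, key=fitness)

        best = evolve()
        print("selected structure:", best, "fitness:", round(fitness(best), 1))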