
    Measuring software security from the design of software

    The vast majority of our contemporary society owns a mobile phone, which has led to a dramatic rise in the number of networked computers in recent years. Security issues in these computers have followed the same trend, and nearly everyone is now affected by them. How could the situation be improved? For software engineers, an obvious answer is to build software with security in mind. The problem is how to define secure software and how to measure security. This thesis divides the problem into three research questions. First, how can we measure the security of software? Second, what types of tools are available for measuring security? And finally, what do these tools reveal about the security of software? Measuring tools of this kind are commonly called metrics.

    This thesis takes the perspective of software engineers in the software design phase. The focus on the design phase means that code-level semantics and programming-language specifics are not discussed; organizational policy, management issues, and the software development process are also out of scope. The first two research questions were studied through a literature review, while the third was studied through a case study. The target of the case study was Apache James, a Java-based email server whose changelog, security issue history, and source code were all available.

    The research revealed a consensus in the terminology of software security. Security verification activities are commonly divided into evaluation and assurance; the focus of this work was on assurance, which means verifying one's own work. There are 34 metrics available for security measurement, of which five are evaluation metrics and 29 are assurance metrics. We found, however, that the general quality of these metrics was poor. Only three metrics in the design category passed the inspection criteria and could be used in the case study. The metrics claim to give quantitative information on the security of software, but in practice they were limited to comparing different versions of the same software. Apart from being relative, the metrics were unable to detect security issues or point out problems in the design, and interpreting their results was difficult. In conclusion, the general state of software security metrics leaves a lot to be desired. The metrics studied had both theoretical and practical issues and are not suitable for daily engineering workflows. They do, however, provide a basis for further research, since they point out the areas in which security metrics must improve if verification of security from the design is desired.

    Towards a more systematic approach to secure systems design and analysis

    The task of designing secure software systems is fraught with uncertainty: data on uncommon attacks is limited, costs are difficult to estimate, and technology and tools are continually changing. Consequently, experts may interpret the security risks posed to a system in different ways, leading to variation in their assessments. This paper presents research into measuring the variability in decision making between security professionals, with the ultimate goal of improving the quality of security advice given to software system designers. A set of thirty-nine cyber-security experts took part in an exercise in which they independently assessed a realistic system scenario. The study quantifies agreement in the opinions of experts, examines methods of aggregating opinions, and produces an assessment of attacks from ratings of their components. We show that, when aggregated, a coherent consensus view of security emerges which can be used to inform decisions made during systems design.
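    The aggregation step described above can be pictured with a small sketch. The following is a hypothetical illustration, not the paper's actual method: expert ratings of attack components (the names and the 1-5 scale are made up) are combined with a robust central tendency, a disagreement measure quantifies (lack of) consensus, and a composite attack is scored from its component ratings.

```python
# Illustrative sketch (not the paper's method): aggregating independent expert
# ratings of attack components into a per-attack assessment. All names and
# ratings below are hypothetical; the scale is assumed to be 1-5.
from statistics import median, pstdev

# expert -> {attack_component: rating}
ratings = {
    "expert_a": {"phishing_email": 4, "credential_reuse": 5},
    "expert_b": {"phishing_email": 3, "credential_reuse": 5},
    "expert_c": {"phishing_email": 5, "credential_reuse": 4},
}

def aggregate(component: str) -> tuple[float, float]:
    """Return (consensus rating, disagreement) for one attack component."""
    values = [r[component] for r in ratings.values() if component in r]
    return median(values), pstdev(values)  # median resists outlier opinions

def attack_score(components: list[str]) -> float:
    """Score a composite attack from its component ratings (here: the maximum)."""
    return max(aggregate(c)[0] for c in components)

print(aggregate("phishing_email"))                        # (4, 0.816...)
print(attack_score(["phishing_email", "credential_reuse"]))  # 5
```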

    Information Security Risk Management: In Which Security Solutions Is It Worth Investing?

    As companies are increasingly exposed to information security threats, decision makers are permanently forced to pay attention to security issues. Information security risk management provides an approach for measuring security through risk assessment, risk mitigation, and risk evaluation. Although a variety of approaches have been proposed, decision makers lack well-founded techniques that (1) show them what they are getting for their investment, (2) show them whether their investment is efficient, and (3) do not demand in-depth knowledge of the IT security domain. This article defines a methodology for management decision makers that effectively addresses these problems. The work covers the conception and design of the methodology and its implementation in a software solution. The results from two qualitative case studies show the advantages of this methodology over established methodologies.
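    To make "showing what you get for your investment" concrete, here is a minimal sketch of a standard return-on-security-investment (ROSI) calculation; it is one well-known technique, not the article's own methodology, and all figures below are invented.

```python
# A minimal sketch of a standard ROSI calculation, one common way to answer
# "is this security solution worth the money?". All numbers are made up.

def ale(aro: float, sle: float) -> float:
    """Annualized Loss Expectancy = Annual Rate of Occurrence x Single Loss Expectancy."""
    return aro * sle

def rosi(ale_before: float, mitigation_ratio: float, annual_cost: float) -> float:
    """ROSI = (avoided annual loss - annual cost of solution) / annual cost."""
    avoided = ale_before * mitigation_ratio
    return (avoided - annual_cost) / annual_cost

loss = ale(aro=0.5, sle=200_000)   # one incident every two years, 200k per incident
print(rosi(loss, mitigation_ratio=0.8, annual_cost=30_000))  # ~1.67 -> worth investing
```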

    New Approaches to Software Security Metrics and Measurements

    Meaningful metrics and methods for measuring software security would greatly improve the security of software ecosystems. Such means would make security an observable attribute, helping users make informed choices and allowing vendors to 'charge' for it, thus providing strong incentives for more security investment. This dissertation presents three empirical measurement studies introducing new approaches to measuring aspects of software security, focusing on Free/Libre and Open Source Software (FLOSS).

    First, to revisit the fundamental question of whether software is maturing over time, we study the vulnerability rate of packages in stable releases of the Debian GNU/Linux distribution. Measuring the vulnerability rate through the lens of Debian stable (a) provides a natural time frame to test for maturing behavior, (b) reduces noise and bias in the data (only CVEs with a Debian Security Advisory are counted), and (c) provides a best-case assessment of maturity (as the Debian release cycle is rather conservative). Overall, our results do not support the hypothesis that software in Debian is maturing over time, suggesting that vulnerability finding-and-fixing does not scale and that more effort should be invested in significantly reducing the rate at which vulnerabilities are introduced, e.g. via 'security by design' approaches like memory-safe programming languages.

    Second, to gain insights beyond the number of reported vulnerabilities, we study how long vulnerabilities remain in the code of popular FLOSS projects, i.e. their lifetimes. We provide the first method, to the best of our knowledge, for automatically estimating the mean lifetime of a set of vulnerabilities based on information in vulnerability-fixing commits. Using this method, we study the lifetimes of ~6 000 CVEs in 11 popular FLOSS projects. Among other findings, we identify two quantities of particular interest for software security metrics: (a) the spread between mean vulnerability lifetime and mean code age at the time of fix, and (b) the rate of change of that spread.

    Third, to gain insight into the important human aspect of the vulnerability-finding process, we study the characteristics of vulnerability reporters for 4 popular FLOSS projects. We provide the first method, to the best of our knowledge, to create a large dataset of vulnerability reporters (>2 000 reporters for >4 500 CVEs) by combining information from a number of publicly available online sources. We analyze the dataset and identify a number of quantities that, suitably combined, can indicate the health of a project's vulnerability-finding ecosystem.

    Overall, we show that measurement studies carefully designed to target crucial aspects of the software security ecosystem can provide valuable insights into the 'quality of security' of software. However, the road to good security metrics is still long: new approaches covering other important aspects of the process are needed, and the approaches introduced in this dissertation should be further developed and improved.
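    The lifetime-estimation idea in the second study can be sketched as follows. This is a simplified illustration of the general approach (blame the lines a fixing commit touched to approximate when the vulnerability was introduced), not the dissertation's actual implementation; the function names are hypothetical.

```python
# Simplified sketch: approximate a vulnerability's introduction date by blaming
# the lines its fixing commit changed (in the pre-fix revision), then take
# lifetime = fix date - earliest contributing commit date.
import subprocess
from datetime import datetime, timezone

def commit_date(repo: str, sha: str) -> datetime:
    out = subprocess.run(["git", "-C", repo, "show", "-s", "--format=%ct", sha],
                         capture_output=True, text=True, check=True)
    return datetime.fromtimestamp(int(out.stdout.strip()), tz=timezone.utc)

def blamed_commits(repo: str, fix_sha: str, path: str, start: int, end: int) -> set[str]:
    """Commits that last touched the lines the fix changed, before the fix."""
    out = subprocess.run(["git", "-C", repo, "blame", "--porcelain",
                          f"-L{start},{end}", f"{fix_sha}^", "--", path],
                         capture_output=True, text=True, check=True)
    # Porcelain header lines begin with a 40-hex commit id; content lines with a tab.
    return {line.split()[0] for line in out.stdout.splitlines()
            if line and not line.startswith("\t") and len(line.split()[0]) == 40}

def lifetime_days(repo: str, fix_sha: str, path: str, start: int, end: int) -> float:
    introducers = blamed_commits(repo, fix_sha, path, start, end)
    introduced = min(commit_date(repo, c) for c in introducers)
    return (commit_date(repo, fix_sha) - introduced).total_seconds() / 86400
```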

    SIP registration stress test

    This paper deals with benchmarking of SIP infrastructure and refines the methodology of SIP performance evaluation to better fit the design of the SIP testing platform being developed at the VSB – Technical University of Ostrava. By separating registrations from calls, we were able to measure both cases without extensive post-processing to ensure that the data from one case is not affected by the data from the other. Moreover, a security vulnerability of the SIP protocol has been harnessed to allow the measuring software to perform registrations and calls together but in separate processes, which forms the basis of the planned modular design of the platform. In this paper, we present the results of separate registration stress tests and explain the usage of the proposed SIP benchmarking methodology.
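    A minimal sketch of the kind of REGISTER stress driver such a methodology separates from call tests is shown below. It is illustrative only (real benchmarks typically use dedicated tools such as SIPp), and the target address and user names are placeholders.

```python
# Illustrative REGISTER-only stress driver: send a burst of SIP REGISTER
# requests over UDP and report the achieved registration rate. The target
# and host addresses use the documentation range and are placeholders.
import socket, time, uuid

TARGET = ("192.0.2.10", 5060)  # replace with the system under test

def register_msg(user: str, domain: str) -> bytes:
    call_id = uuid.uuid4().hex
    return (
        f"REGISTER sip:{domain} SIP/2.0\r\n"
        f"Via: SIP/2.0/UDP 192.0.2.1:5060;branch=z9hG4bK{call_id[:8]}\r\n"
        f"Max-Forwards: 70\r\n"
        f"From: <sip:{user}@{domain}>;tag={call_id[:10]}\r\n"
        f"To: <sip:{user}@{domain}>\r\n"
        f"Call-ID: {call_id}@192.0.2.1\r\n"
        f"CSeq: 1 REGISTER\r\n"
        f"Contact: <sip:{user}@192.0.2.1:5060>\r\n"
        f"Expires: 3600\r\nContent-Length: 0\r\n\r\n"
    ).encode()

sock = socket.socket(socket.AF_INET, socket.SOCK_DGRAM)
sock.settimeout(2.0)
start = time.monotonic()
for i in range(1000):                      # registration burst
    sock.sendto(register_msg(f"user{i}", "example.com"), TARGET)
    sock.recvfrom(4096)                    # wait for 200 OK / 401 challenge
print(f"{1000 / (time.monotonic() - start):.1f} registrations/s")
```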

    Putting Security on the Table: The Digitalisation of Security Tabletop Games and its Challenging Aftertaste

    IT-security tabletop games for developers have been available in analog format; with the COVID-19 pandemic, interest in collaborative remote security games has increased. In this paper, we propose a methodology to evaluate the impact of a (remote) security game-based intervention on developers. The study design consists of the intervention itself, three questionnaires, and a small open interview guide for a focus group. A validated self-efficacy scale is used as a proxy for measuring effects on participants' ability to develop secure software. We tested this design with 9 participants (expert and novice developers and security experts) as part of a small feasibility study to understand the challenges and limitations of remote tabletop games. We describe how we selected and digitalised three security tabletop games, and report the qualitative findings from our evaluation. Setting up and running the virtual tabletop games turned out to be more challenging and complex, for both moderator and participants, than we expected. Completing the games required patience and persistence, and social interaction was limited. Our findings can help in building and evaluating better, more comprehensive, technically sound and issue-specific game-based training measures for developers. The methodology can be used by researchers to evaluate existing and new game designs.

    BIOLOGICALLY INSPIRED INTRUSION PREVENTION AND SELF-HEALING SYSTEM FOR CRITICAL SERVICES NETWORK

    With the explosive development of critical services network systems and the Internet, the need for network security systems has become ever more critical as information technology pervades everyday life. An Intrusion Prevention System (IPS) provides an in-line mechanism focused on identifying and blocking malicious network activity in real time. This thesis presents a new intrusion prevention and self-healing (SH) system for critical services network security. The design of the proposed system is inspired by the human immune system, integrated with a nonlinear pattern recognition classification algorithm and machine learning. First, current intrusion prevention systems, the biological innate and adaptive immune systems, autonomic computing, and self-healing mechanisms are studied and analyzed. This analysis suggests that artificial immune systems (AIS) should incorporate abstraction models from the innate and adaptive immune systems, pattern recognition, machine learning, and self-healing mechanisms to yield an autonomous IPS with fast, highly accurate detection and prevention performance and with survivability for critical services network systems. Second, a specification language, system design, and mathematical and computational models for the IPS and SH system are established, based on nonlinear classification, prevention predictability trust, analysis, self-adaptation, and self-healing algorithms. Finally, the system is validated through simulation tests, measurement, benchmarking, and comparative studies. New benchmarking metrics for detection capability, prevention predictability trust, and self-healing reliability are introduced as contributions to the measurement and validation of the IPS and SH system. The software system, design theories, AIS features, new nonlinear classification algorithm, and self-healing mechanism show how the presented system can ensure the safety of critical services networks and heal the damage caused by intrusions. This autonomous system improves on the performance of current intrusion prevention systems and maintains system continuity through its self-healing mechanism.
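    As background for the immune-system inspiration, here is a sketch of negative selection, a classic artificial-immune-system technique of the kind such designs build on. It is not this thesis's algorithm; the pattern width and matching radius are arbitrary choices for illustration.

```python
# Illustrative negative-selection sketch: random detectors are censored against
# "self" (benign) traffic patterns; anything a surviving detector matches
# within Hamming radius R is flagged as non-self, i.e. a potential intrusion.
import random

BITS, R = 16, 3  # pattern width and matching radius (arbitrary for the demo)

def hamming(a: int, b: int) -> int:
    return bin(a ^ b).count("1")

def generate_detectors(self_set: set[int], n: int) -> list[int]:
    detectors = []
    while len(detectors) < n:
        candidate = random.getrandbits(BITS)
        if all(hamming(candidate, s) > R for s in self_set):  # censoring step
            detectors.append(candidate)
    return detectors

def is_intrusion(pattern: int, detectors: list[int]) -> bool:
    return any(hamming(pattern, d) <= R for d in detectors)

normal = {random.getrandbits(BITS) for _ in range(50)}  # stand-in for benign traffic
det = generate_detectors(normal, 200)
sample = random.getrandbits(BITS)
print("blocked" if is_intrusion(sample, det) else "allowed")
```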

    S-FaaS: Trustworthy and Accountable Function-as-a-Service using Intel SGX

    Function-as-a-Service (FaaS) is a recent and already very popular paradigm in cloud computing. The function provider need only specify the function to be run, usually in a high-level language like JavaScript, and the service provider orchestrates all the necessary infrastructure and software stacks. The function provider is billed only for the actual computational resources used by the function invocation. Compared to previous cloud paradigms, FaaS requires significantly more fine-grained resource measurement mechanisms, e.g. to measure compute time and memory usage of a single function invocation with sub-second accuracy. Thanks to the short duration and stateless nature of functions, and the availability of multiple open-source frameworks, FaaS enables non-traditional service providers, e.g. individuals or data centers with spare capacity. However, this exacerbates the challenge of ensuring that resource consumption is measured accurately and reported reliably. It also raises the issues of ensuring that computation is done correctly and of minimizing the amount of information leaked to service providers. To address these challenges, we introduce S-FaaS, the first architecture and implementation of FaaS to provide strong security and accountability guarantees backed by Intel SGX. To match the dynamic event-driven nature of FaaS, our design introduces a new key distribution enclave and a novel transitive attestation protocol. A core contribution of S-FaaS is our set of resource measurement mechanisms that securely measure compute time inside an enclave, as well as actual memory allocations. We have integrated S-FaaS into the popular OpenWhisk FaaS framework. We evaluate the security of our architecture, the accuracy of our resource measurement mechanisms, and the performance of our implementation, showing that our resource measurement mechanisms add less than 6.3% latency on standardized benchmarks.
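    To make "per-invocation metering" concrete, the following plain-Python sketch measures compute time and peak memory of a single function call. It carries none of S-FaaS's SGX-backed guarantees and only illustrates the quantities being metered; the handler is a made-up stand-in for a serverless function.

```python
# Illustration of fine-grained, per-invocation resource metering: wall-clock
# compute time (nanoseconds) and peak Python heap allocation for one call.
import time, tracemalloc
from typing import Any, Callable

def metered_invoke(fn: Callable[..., Any], *args: Any) -> tuple[Any, int, int]:
    """Run one function invocation; return (result, ns elapsed, peak bytes)."""
    tracemalloc.start()
    t0 = time.perf_counter_ns()
    result = fn(*args)
    elapsed = time.perf_counter_ns() - t0
    _, peak = tracemalloc.get_traced_memory()
    tracemalloc.stop()
    return result, elapsed, peak

def handler(event: dict) -> int:           # stand-in serverless function
    return sum(range(event["n"]))

result, ns, peak = metered_invoke(handler, {"n": 1_000_000})
print(f"result={result} time={ns / 1e6:.2f} ms peak_mem={peak} bytes")
```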

    Non-functional requirements: size measurement and testing with COSMIC-FFP

    The non-functional requirements (NFRs) of software systems are well known to add a degree of uncertainty to the process of estimating the cost of any project. This paper contributes to more precise project size measurement by incorporating NFRs into the functional size quantification process. We report on an initial solution proposed to deal with the problem of quantitatively assessing the NFR modeling process early in the project, and of generating test cases for NFR verification purposes. The NFR Framework has been chosen for integrating NFRs into the requirements modeling process and for their quantitative assessment. Our proposal is based on the COSMIC-FFP functional size measurement method, adopted in 2003 as the ISO/IEC 19761 standard. We also extend the use of COSMIC-FFP for NFR testing purposes. This is an essential step toward improving NFR development and testing effort estimates, and consequently toward managing the scope of NFRs. We discuss the merits of the proposed approach and the open questions related to its design.
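    As context for the size measurement, here is a minimal sketch of the COSMIC (ISO/IEC 19761) counting rule the paper builds on: functional size is the number of data movements, one COSMIC Function Point (CFP) each, across the four movement types Entry, Exit, Read, and Write. The functional process below is invented for illustration.

```python
# Minimal COSMIC counting sketch: each identified data movement (Entry, Exit,
# Read, Write) contributes exactly 1 CFP to the size of a functional process.
from collections import Counter

MOVEMENT_TYPES = {"E", "X", "R", "W"}

def cosmic_size(movements: list[tuple[str, str]]) -> int:
    """Functional size in CFP = number of identified data movements."""
    assert all(t in MOVEMENT_TYPES for t, _ in movements)
    return len(movements)

# Hypothetical functional process: "register user"
register_user = [
    ("E", "user details from UI"),
    ("R", "existing accounts"),
    ("W", "new account record"),
    ("X", "confirmation message"),
]
print(cosmic_size(register_user), "CFP")      # 4 CFP
print(Counter(t for t, _ in register_user))   # movement-type profile
```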