316 research outputs found

    Arguing Security: A Framework for Analyzing Security Requirements

    When considering the security of a system, the analyst must simultaneously work with two types of properties: those that can be shown to be true, and those that must be argued as being true. The first consists of properties that can be demonstrated conclusively, such as the type of encryption in use or the existence of an authentication scheme. The second consists of things that cannot be so demonstrated but must be considered true for a system to be secure, such as the trustworthiness of a public key infrastructure or the willingness of people to keep their passwords secure. The choices represented by the second case are called trust assumptions, and the analyst should supply arguments explaining why the trust assumptions are valid. This thesis presents three novel contributions: a framework for security requirements elicitation and analysis, based upon the construction of a context for the system; an explicit place and role for trust assumptions in security requirements; and structured satisfaction arguments to validate that a system can satisfy the security requirements. The system context is described using a problem-centered notation and is then validated against the security requirements through construction of a satisfaction argument. The satisfaction argument is in two parts: a formal argument that the system can meet its security requirements, and structured informal arguments supporting the assumptions exposed during argument construction. If a convincing argument cannot be constructed, designers are asked to provide design information to resolve the problems, and another pass is made through the framework to verify that the proposed solution satisfies the requirements. Alternatively, stakeholders are asked to modify the goals for the system so that the problems can be resolved or avoided. The contributions are evaluated by using the framework to carry out a security requirements analysis within an air traffic control technology evaluation project.
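
    To make the structure of the two-part satisfaction argument concrete, the minimal sketch below records a demonstrable formal claim together with the informal trust assumptions it rests on. The class names, fields, and example values are illustrative assumptions added for this listing, not artifacts from the thesis.

# Minimal sketch of a two-part satisfaction argument: a demonstrable formal
# claim about the system plus the informal trust assumptions it depends on.
# All names and example values below are illustrative.
from dataclasses import dataclass, field

@dataclass
class TrustAssumption:
    statement: str        # e.g. "Users keep their passwords secret"
    justification: str    # informal argument for why the assumption holds

@dataclass
class SatisfactionArgument:
    requirement: str      # the security requirement being argued
    formal_claim: str     # property that can be demonstrated conclusively
    trust_assumptions: list = field(default_factory=list)

    def is_convincing(self) -> bool:
        # The argument stands only if every trust assumption has a justification.
        return all(t.justification for t in self.trust_assumptions)

arg = SatisfactionArgument(
    requirement="Only authorised controllers may issue clearances",
    formal_claim="All clearance messages are authenticated using PKI signatures",
    trust_assumptions=[
        TrustAssumption("The PKI is trustworthy",
                        "Certificates are issued and revoked by an audited CA"),
    ],
)
print(arg.is_convincing())  # True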

    Engineering security into distributed systems: a survey of methodologies

    Rapid technological advances in recent years have precipitated a general shift towards software distribution as a central computing paradigm. This has been accompanied by a corresponding increase in the dangers of security breaches, often causing security attributes to become an inhibiting factor for use and adoption. Despite the acknowledged importance of security, especially in the context of open and collaborative environments, there is a growing gap in the survey literature relating to systematic approaches (methodologies) for engineering secure distributed systems. In this paper, we attempt to fill the aforementioned gap by surveying and critically analyzing the state of the art in security methodologies based on some form of abstract modeling (i.e. model-based methodologies) for, or applicable to, distributed systems. Our detailed reviews can be seen as a step towards increasing awareness and appreciation of a range of methodologies, allowing researchers and industry stakeholders to gain a comprehensive view of the field and make informed decisions. Following the comprehensive survey, we propose a number of criteria reflecting the characteristics security methodologies should possess to be adopted in real-life industry scenarios, and evaluate each methodology accordingly. Our results highlight a number of areas for improvement, help to qualify adoption risks, and indicate future research directions.
    Anton V. Uzunov, Eduardo B. Fernandez, Katrina Falkner

    Cyber Infrastructure Protection: Vol. II

    Increased reliance on the Internet and other networked systems raises the risks of cyber attacks that could harm our nation’s cyber infrastructure. The cyber infrastructure encompasses a number of sectors, including the nation’s mass transit and other transportation systems; banking and financial systems; factories; energy systems and the electric power grid; and telecommunications, which increasingly rely on a complex array of computer networks, including the public Internet. However, many of these systems and networks were not built and designed with security in mind. Therefore, our cyber infrastructure contains many holes, risks, and vulnerabilities that may enable an attacker to cause damage or disrupt cyber infrastructure operations. Threats to cyber infrastructure safety and security come from hackers, terrorists, criminal groups, and sophisticated organized crime groups; even nation-states and foreign intelligence services conduct cyber warfare. Cyber attackers can introduce new viruses, worms, and bots capable of defeating many of our efforts. Costs to the economy from these threats are huge and increasing. Government, business, and academia must therefore work together to understand the threat, develop various modes of fighting cyber attacks, and establish and enhance a framework to assess the vulnerability of our cyber infrastructure and provide strategic policy directions for its protection. This book addresses such questions as: How serious is the cyber threat? What technical and policy-based approaches are best suited to securing telecommunications networks and information systems infrastructure? What role will government and the private sector play in homeland defense against cyber attacks on critical civilian infrastructure, financial, and logistical systems? What legal impediments exist concerning efforts to defend the nation against cyber attacks, especially in preventive, preemptive, and retaliatory actions?

    Towards a systematic security evaluation of the automotive Bluetooth interface

    In-cabin connectivity and its enabling technologies have increased dramatically in recent years. Security was not considered an essential property, a mind-set that has shifted significantly due to the appearance of demonstrated vulnerabilities in these connected vehicles. Connectivity allows the possibility that an external attacker may compromise the security - and therefore the safety - of the vehicle. Many exploits have already been demonstrated in the literature. One of the most pervasive connective technologies is Bluetooth, a short-range wireless communication technology. Security issues with this technology are well documented, albeit in other domains. A threat intelligence study was carried out to substantiate this motivation; it found that, while the general trend is towards increasing (relative) security in automotive Bluetooth implementations, there is still a significant technological lag compared to more traditional computing systems. The main contribution of this thesis is a framework for the systematic security evaluation of the automotive Bluetooth interface from a black-box perspective (as technical specifications were loose or absent). Tests were performed through both the vehicle’s native connection and through Bluetooth-enabled aftermarket devices attached to the vehicle. This framework is supported through the use of attack trees and principles outlined in the Penetration Testing Execution Standard. Furthermore, a proof-of-concept tool was developed to implement this framework in a semi-automated manner and to carry out testing on real-world vehicles. The tool also allows for severity classification of the results acquired, as outlined in the SAE J3061 Cybersecurity Guidebook for Cyber-Physical Vehicle Systems. Results of the severity classification are validated through domain expert review. Finally, the thesis explores how formal methods could be integrated into the framework and tool to improve confidence and rigour, and to demonstrate how future design iterations could be improved. In conclusion, there is a need for systematic security testing, based on the findings of the threat intelligence study. The systematic evaluation and the developed tool successfully found weaknesses both in the automotive Bluetooth interface and in the vehicle itself through Bluetooth-enabled aftermarket devices. Furthermore, the results of applying this framework provide a focus for countermeasure development and could be used as evidence in a security assurance case. The systematic evaluation framework also allows formal methods to be introduced for added rigour and confidence, and demonstrations of how this might be done, with case studies, are presented. Future recommendations include using this framework with more test vehicles and expanding on the existing attack trees that form the heart of the evaluation. Further work on the tool chain would also be desirable; this would enable greater accuracy in any testing or modelling required and would take automation of the entire process further.
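
    As an illustration of the attack trees that form the heart of the evaluation, the sketch below evaluates a small AND/OR tree against the set of attack steps demonstrated during testing. The node names and tree structure are invented for illustration and are not taken from the thesis or its tool.

# Minimal AND/OR attack tree sketch: a leaf is an attack step, an inner node
# succeeds if all (AND) or any (OR) of its children succeed. Names are invented.
class AttackNode:
    def __init__(self, name, gate="OR", children=None):
        self.name = name
        self.gate = gate              # "AND" or "OR"
        self.children = children or []

    def achievable(self, demonstrated_steps):
        if not self.children:         # leaf: achievable if the step was demonstrated
            return self.name in demonstrated_steps
        results = [child.achievable(demonstrated_steps) for child in self.children]
        return all(results) if self.gate == "AND" else any(results)

tree = AttackNode("Compromise vehicle via Bluetooth", "OR", [
    AttackNode("Pair without user confirmation", "AND", [
        AttackNode("Discover Bluetooth MAC address"),
        AttackNode("Bypass pairing confirmation"),
    ]),
    AttackNode("Exploit aftermarket dongle firmware"),
])

demonstrated = {"Discover Bluetooth MAC address", "Bypass pairing confirmation"}
print(tree.achievable(demonstrated))  # True: the AND branch is fully demonstrated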

    Security-Driven Software Evolution Using A Model Driven Approach

    A high level of security must be guaranteed in applications in order to mitigate risks when information systems are deployed in open network environments. However, a significant number of legacy systems remain in use, and these pose security risks to the enterprise’s assets because of the outdated technologies they are built on and the lack of attention paid to security when they were designed. Software reengineering offers a systematic way to improve their security. Model-driven engineering is an approach in which models, as defined by their types, direct the execution of the process. The aim of this research is to explore how a model-driven approach can facilitate software reengineering driven by security demands. The research in this thesis involves the following three phases. Firstly, legacy system understanding is performed using reverse engineering techniques. The task of this phase is to reverse engineer the legacy system into UML models, partition the legacy system into subsystems with the help of a model slicing technique, and detect existing security mechanisms to determine whether or not the security provided in the legacy system satisfies the user’s security objectives. Secondly, security requirements are elicited using a risk analysis method, the process of analysing key aspects of the legacy system in terms of security. A new risk assessment method, taking asset, threat, and vulnerability into consideration, is proposed and used to elicit the security requirements; it generates detailed security requirements in a specific format to direct the subsequent security enhancement. Finally, security enhancement of the system is performed using the proposed ontology-based security pattern approach. In this stage, security patterns that are derived from security expertise and fulfil the elicited security requirements are selected and integrated into the legacy system models with the help of the proposed security ontology. The proposed approach is evaluated on a selected case study. Based on the analysis, conclusions are drawn and future research is discussed at the end of this thesis. The results show that this thesis contributes an effective, reusable, and suitable evolution approach for software security.
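
    As a rough illustration of a risk assessment that combines asset, threat, and vulnerability, the sketch below computes a product-style risk score and maps it to a requirement priority. The scales, formula, and thresholds are illustrative assumptions, not the specific assessment method proposed in the thesis.

# Rough sketch: risk as the product of asset value, threat likelihood and
# vulnerability severity, each on a 1 (low) to 5 (high) ordinal scale.
# The scales and thresholds are illustrative, not the thesis's method.
def risk_score(asset_value, threat_likelihood, vulnerability_severity):
    return asset_value * threat_likelihood * vulnerability_severity

def requirement_priority(score):
    # Map the score (1..125) to a priority band for eliciting requirements.
    if score >= 60:
        return "high"
    if score >= 20:
        return "medium"
    return "low"

score = risk_score(asset_value=5, threat_likelihood=4, vulnerability_severity=3)
print(score, requirement_priority(score))  # 60 high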

    Derivation and consistency checking of models in early software product line engineering

    Dissertation presented to obtain the degree of Doctor in Informatics Engineering.
    Software Product Line Engineering (SPLE) should offer the ability to express the derivation of product-specific assets, while checking for their consistency. The derivation of product-specific assets is possible using general-purpose programming languages in combination with techniques such as conditional compilation and code generation. On the other hand, consistency checking can be achieved through consistency rules in the form of architectural and design guidelines, programming conventions, and well-formedness rules. Current approaches present four shortcomings: (1) they focus on code derivation only, (2) they ignore consistency problems between the variability model and the other complementary specification models used in early SPLE, (3) they force developers to learn new, difficult-to-master languages to encode the derivation of assets, and (4) they offer no tool support. This dissertation presents solutions that contribute to tackling these four shortcomings. These solutions are integrated in the approach Derivation and Consistency Checking of models in early SPLE (DCC4SPL) and its corresponding tool support. The two main components of our approach are the Variability Modelling Language for Requirements (VML4RE), a domain-specific language and derivation infrastructure, and the Variability Consistency Checker (VCC), a verification technique and tool. We validate DCC4SPL by demonstrating that it is appropriate for finding inconsistencies in early SPL model-based specifications and for specifying the derivation of product-specific models.
    Funding: European Project AMPLE, contract IST-33710; Fundação para a Ciência e Tecnologia, SFRH/BD/46194/2008.
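
    A toy illustration of the kind of consistency check a tool such as VCC automates is sketched below: validating a product configuration against simple requires/excludes constraints. The feature names and rules are invented; VML4RE and VCC operate on considerably richer requirement and variability models.

# Toy sketch: check a product configuration against requires/excludes
# constraints from a feature model. All feature names and rules are invented.
requires = {"EncryptedSync": {"Networking"}, "Networking": {"Core"}}
excludes = {"OfflineOnly": {"Networking"}}

def check_configuration(selected):
    problems = []
    for feature in selected:
        for needed in requires.get(feature, set()):
            if needed not in selected:
                problems.append(f"{feature} requires {needed}")
        for banned in excludes.get(feature, set()):
            if banned in selected:
                problems.append(f"{feature} excludes {banned}")
    return problems

config = {"Core", "Networking", "EncryptedSync", "OfflineOnly"}
print(check_configuration(config))  # ['OfflineOnly excludes Networking']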

    Experience requirements

    Video game development is a high-risk effort with a low probability of success. The interactive nature of the resulting artifact increases production complexity, often doing so in ways that are unexpected. New methodologies are needed to address issues in this domain. Video game development has two major phases: preproduction and production. During preproduction, the game designer and other members of the creative team create and capture a vision of the intended player experience in the game design document. The game design document tells the story and describes the game - it does not usually explicitly elaborate all of the details of the intended player experience, particularly with respect to how the player is intended to feel as the game progresses. Details of the intended experience tend to be communicated verbally, on an as-needed basis, during iterations of the production effort. During production, the software and media development teams attempt to realize the preproduction vision in a game artifact. However, the game design document is not traditionally intended to capture production-ready requirements, particularly for software development. As a result, there is a communications chasm between preproduction and production efforts that can lead to production issues such as excessive reliance on direct communication with the game designer, difficulty scoping project elements, and difficulty in determining reasonably accurate effort estimates. We posit that defining and capturing the intended player experience in a manner that is influenced and informed by established requirements engineering principles and techniques will help cross the communications chasm between preproduction and production. The proposed experience requirements methodology is a novel contribution composed of: a model for the elements that compose experience requirements, a framework that provides guidance for expressing experience requirements, and an exemplary process for the elicitation, capture, and negotiation of experience requirements. Experience requirements capture the designer’s intent for the user experience; they represent user experience goals for the artifact and constraints upon the implementation, and are not expected to be formal in the mathematical sense. Experience requirements are evolutionary in intent - they incrementally enhance and extend existing practices in a relatively lightweight manner, using language and representations that are intended to be mutually acceptable to preproduction and to production.

    Higher Order Mutation Testing

    Mutation testing is a fault-based software testing technique that has been studied widely for over three decades. To date, work in this field has focused largely on first order mutants because it is believed that higher order mutation testing is too computationally expensive to be practical. This thesis argues that some higher order mutants are potentially better able to simulate real-world faults and to reveal insights into programming bugs than the restricted class of first order mutants. This thesis proposes a higher order mutation testing paradigm which combines valuable higher order mutants and non-trivial first order mutants for mutation testing. To overcome the exponential increase in the number of higher order mutants, a search process that seeks fit mutants (both first and higher order) from the space of all possible mutants is proposed. A fault-based higher order mutant classification scheme is introduced. Based on different types of fault interactions, this approach classifies higher order mutants into four categories: expected, worsening, fault masking, and fault shifting. A search-based approach is then proposed for locating subsuming and strongly subsuming higher order mutants. These mutants are a subset of the fault masking and fault shifting classes of higher order mutants that are more difficult to kill than their constituent first order mutants. Finally, a hybrid test data generation approach is introduced, which combines dynamic symbolic execution and search-based software testing to generate strongly adequate test data that kills first and higher order mutants.
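
    The subsuming and strongly subsuming categories can be illustrated with kill sets, i.e. the tests that kill each mutant. The sketch below follows one common reading of these definitions: a higher order mutant is subsuming if it is killed by fewer tests than the union over its constituent first order mutants, and strongly subsuming if, in addition, every test that kills it also kills both constituents. The test names are illustrative.

# Sketch: classify a second-order mutant from kill sets (the tests that kill
# each mutant). Definitions follow one common reading; test names are invented.
def classify_hom(kills_hom, kills_fom1, kills_fom2):
    union = kills_fom1 | kills_fom2
    intersection = kills_fom1 & kills_fom2
    if not kills_hom:
        return "potentially equivalent (never killed)"
    subsuming = len(kills_hom) < len(union)
    if subsuming and kills_hom <= intersection:
        return "strongly subsuming"
    if subsuming:
        return "subsuming"
    return "non-subsuming"

print(classify_hom({"t3"}, {"t1", "t3"}, {"t2", "t3"}))        # strongly subsuming
print(classify_hom({"t1", "t2"}, {"t1", "t3"}, {"t2", "t4"}))  # subsuming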

    Computational Methods for Medical and Cyber Security

    Over the past decade, computational methods, including machine learning (ML) and deep learning (DL), have grown exponentially in their development of solutions in various domains, especially medicine, cybersecurity, finance, and education. While these applications of machine learning algorithms have proven beneficial in various fields, many shortcomings have also been highlighted, such as the lack of benchmark datasets, the inability to learn from small datasets, the cost of architectures, adversarial attacks, and imbalanced datasets. On the other hand, new and emerging algorithms, such as deep learning, one-shot learning, continuous learning, and generative adversarial networks, have successfully solved various tasks in these fields. Therefore, applying these new methods to life-critical missions is crucial, as is measuring the success of these less-traditional algorithms when they are used in these fields.