
    Efficiency and Automation in Threat Analysis of Software Systems

    Context: Security is a growing concern in many organizations. Industries developing software systems plan for security early on to minimize expensive code refactorings after deployment. In the design phase, teams of experts routinely analyze the system architecture and design to find potential security threats and flaws. After the system is implemented, the source code is often inspected to determine its compliance with the intended functionalities.

    Objective: The goal of this thesis is to improve the performance of security design analysis techniques (in the design and implementation phases) and to support practitioners with automation and tool support.

    Method: We conducted empirical studies to build an in-depth understanding of existing threat analysis techniques (a systematic literature review and controlled experiments). We also conducted empirical case studies with industrial participants to validate our attempt at improving the performance of one technique. Further, we validated our proposal for automating the inspection of security design flaws by organizing workshops with participants (under controlled conditions) and a subsequent performance analysis. Finally, we relied on a series of experimental evaluations to assess the quality of the proposed approach for automating security compliance checks.

    Findings: We found that the eSTRIDE approach can help focus the analysis and produce twice as many high-priority threats in the same time frame. We also found that reasoning about security in an automated fashion requires extending the existing notations with more precise security information. In a formal setting, the minimal model extensions for doing so include security contracts for system nodes handling sensitive information; the formally based analysis can, to some extent, provide completeness guarantees. For a graph-based detection of flaws, the minimal required model extensions include data types and security solutions, and in such a setting the automated analysis can help reduce the number of overlooked security flaws (a simplified sketch follows this abstract). Finally, we suggested defining a correspondence mapping between design model elements and implemented constructs, and found that such a mapping is a key enabler for automatically checking the security compliance of the implemented system with the intended design. The key to achieving this is two-fold. First, a heuristics-based search is paramount to limit the manual effort required to define the mapping. Second, it is important to analyze the implemented data flows and compare them to the data flows stipulated by the design.
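    As an illustration of the kind of graph-based detection described above, the following minimal sketch flags data flows that carry sensitive data across a trust boundary without a declared security solution. All model elements, the "encryption" solution, and the sensitivity labels are hypothetical assumptions for illustration; they are not taken from the thesis.

```python
from dataclasses import dataclass, field

@dataclass
class Node:
    name: str
    solutions: set = field(default_factory=set)     # e.g. {"encryption"}

@dataclass
class Flow:
    source: Node
    target: Node
    data_types: set                                  # e.g. {"credentials"}
    crosses_trust_boundary: bool = False

SENSITIVE = {"credentials", "pii"}                   # assumed sensitivity labels

def detect_flaws(flows):
    """Flag flows that carry sensitive data across a trust boundary while
    neither endpoint declares an 'encryption' solution."""
    flaws = []
    for f in flows:
        carries_sensitive = bool(f.data_types & SENSITIVE)
        protected = "encryption" in (f.source.solutions | f.target.solutions)
        if f.crosses_trust_boundary and carries_sensitive and not protected:
            flaws.append(f"unprotected sensitive flow: {f.source.name} -> {f.target.name}")
    return flaws

# Toy model: a client sends credentials to an API across a trust boundary
client = Node("client")
api = Node("api")                                    # no declared solution
db = Node("db", solutions={"encryption"})
print(detect_flaws([
    Flow(client, api, {"credentials"}, crosses_trust_boundary=True),   # flagged
    Flow(api, db, {"credentials"}),                                    # internal, not flagged
]))
```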

    Software quality attribute measurement and analysis based on class diagram metrics

    Software quality measurement lies at the heart of the quality engineering process. Quality measurement for object-oriented artifacts has become the key to ensuring high-quality software. Both researchers and practitioners are interested in measuring software product quality for improvement. It has recently become more important to consider the quality of products at the early phases, especially at the design level, to ensure that coding and testing can be conducted more quickly and accurately.

    The research work on measuring quality at the design level progressed in a number of steps. The first step was to discover the correct set of metrics to measure design elements at the design level. Chidamber and Kemerer (C&K) formulated the first suite of OO metrics, and other researchers extended this suite with additional metrics. The next step was to collect these metrics using software tools. A number of tools were developed to measure the different suites of metrics; some represent their measurements as ordinary numbers, others in 3D visual form. In recent years, researchers developed software quality models that went a step further by computing quality attributes from the collected design metrics.

    In this research we extend the software quality modelers' work by adding a quality attribute prioritization scheme and a design metric analysis layer. Our work focuses on the class diagram, the most fundamental constituent of any object-oriented design. Using earlier researchers' work, we extract a class diagram's metrics and compute its quality attributes. We then analyze the results and inform the user, presenting our figures and observations in the form of an analysis report. Our target user could be a project manager, a software quality engineer, or a developer who needs to improve the class diagram's quality. We closely examine the design metrics that affect quality attributes, pinpoint the weaknesses in the class diagram based on these metrics, inform the user about the problems that emerge from these classes, and advise him/her on how to improve the overall design quality.

    We consider six basic quality attributes of the whole class diagram: "Reusability", "Functionality", "Understandability", "Flexibility", "Extendibility", and "Effectiveness". We allow the user to set priorities on these quality attributes in a sequential manner based on his/her requirements. Using a geometric series, we calculate a weighted average value for the arranged list of quality attributes; this weighted average indicates the overall quality of the product, the class diagram (see the sketch below). Our experimental work gave us much insight into the meanings of, and dependencies between, design metrics and quality attributes. This helped us refine our analysis technique and give more concrete observations to the user.
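    A minimal sketch of the geometric-series weighting described above. The ratio of 0.5, the normalisation, and the attribute scores are illustrative assumptions, not values taken from the thesis.

```python
def weighted_quality(ordered_scores, ratio=0.5):
    """ordered_scores: quality-attribute scores listed from highest to
    lowest priority; weights follow the geometric series 1, r, r^2, ...
    and are normalised to sum to 1."""
    weights = [ratio ** i for i in range(len(ordered_scores))]
    return sum(w * s for w, s in zip(weights, ordered_scores)) / sum(weights)

# Illustrative scores on a 0..1 scale and an example user priority order
scores = {"Reusability": 0.72, "Flexibility": 0.55, "Understandability": 0.81,
          "Functionality": 0.64, "Extendibility": 0.58, "Effectiveness": 0.69}
priority = ["Reusability", "Flexibility", "Understandability",
            "Functionality", "Extendibility", "Effectiveness"]
print(round(weighted_quality([scores[a] for a in priority]), 3))
```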

    Model based test suite minimization using metaheuristics

    Software testing is one of the most widely used methods for quality assurance and fault detection. However, it is also one of the most expensive, tedious, and time-consuming activities in the software development life cycle. Code-based and specification-based testing have been practiced for almost four decades. Model-based testing (MBT) is a relatively new approach in which software models, as opposed to other artifacts (i.e., source code), are used as the primary source of test cases. Models are simplified representations of a software system and are cheaper to execute than the original or deployed system.

    The main objective of the research presented in this thesis is the development of a framework for improving the efficiency and effectiveness of test suites generated from UML models. It focuses on three activities: transformation of an Activity Diagram (AD) model into a Colored Petri Net (CPN) model, generation and evaluation of an AD-based test suite, and optimization of an AD-based test suite.

    The Unified Modeling Language (UML) is a de facto standard for software system analysis and design. UML models can be categorized into structural and behavioral models. The AD is a behavioral UML model and, since the major revision in UML 2.x, it has a new Petri-net-like semantics. It has a wide application scope, including embedded, workflow, and web-service systems; for this reason the thesis concentrates on AD models. The informal semantics of UML in general, and of the AD in particular, is a major challenge in the development of UML-based verification and validation tools. One solution to this challenge is to transform a UML model into an executable formal model. In this thesis, a three-step transformation methodology is proposed for resolving ambiguities in an AD model and then transforming it into a CPN representation, a well-known formal language with extensive tool support.

    Test case generation is one of the most critical and labor-intensive activities in the testing process. The flow-oriented semantics of the AD suits modeling both sequential and concurrent systems. The thesis presents a novel technique to generate test cases from an AD using a stochastic algorithm. To determine whether the generated test suite is adequate, two test suite adequacy analysis techniques based on structural coverage and mutation are proposed. In terms of structural coverage, two separate coverage criteria are proposed to evaluate the adequacy of the test suite from both the sequential and the concurrent perspective. Mutation analysis is a fault-based technique to determine whether the test suite is adequate for detecting particular types of faults; four categories of mutation operators are defined to seed specific faults into the mutant model.

    Another focus of the thesis is improving test suite efficiency without compromising effectiveness. One way of achieving this is identifying and removing redundant test cases. It has been shown that test suite minimization by removing redundant test cases is a combinatorial optimization problem. An evolutionary computation based test suite minimization technique is developed to address this problem, and its performance is empirically compared with other well-known heuristic algorithms (a toy sketch follows this abstract). Additionally, statistical analysis is performed to characterize the fitness landscape of test suite minimization problems. The proposed test suite minimization solution is then extended to multi-objective minimization. As redundancy is contextual, different criteria and their combinations can significantly change the solution test suite; therefore, the last part of the thesis describes an investigation into multi-objective test suite minimization and optimization algorithms. The proposed framework is demonstrated and evaluated using prototype tools and case study models. Empirical results show that the techniques developed within the framework are effective in model-based test suite generation and optimization.
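    The following is a minimal sketch of one way to cast test suite minimization as an evolutionary search, assuming a set-cover style formulation. The GA parameters, the fitness weighting, and the toy coverage data are assumptions for illustration, not the thesis's configuration.

```python
import random

def minimise(requirements_per_test, generations=200, pop_size=40, seed=0):
    """requirements_per_test: list of sets; entry i holds the coverage
    requirements (e.g. AD edges) satisfied by test case i.  A candidate
    is a bit vector selecting a subset of tests; fitness rewards coverage
    and penalises suite size (the 0.3 weight is an assumption)."""
    rng = random.Random(seed)
    n = len(requirements_per_test)
    all_reqs = set().union(*requirements_per_test)

    def fitness(bits):
        covered = set()
        for i, bit in enumerate(bits):
            if bit:
                covered |= requirements_per_test[i]
        return len(covered) / len(all_reqs) - 0.3 * sum(bits) / n

    # random initial population of bit vectors
    pop = [[rng.randint(0, 1) for _ in range(n)] for _ in range(pop_size)]
    for _ in range(generations):
        pop.sort(key=fitness, reverse=True)
        parents = pop[: pop_size // 2]            # truncation selection
        children = []
        while len(parents) + len(children) < pop_size:
            a, b = rng.sample(parents, 2)
            cut = rng.randrange(1, n)             # one-point crossover
            child = a[:cut] + b[cut:]
            child[rng.randrange(n)] ^= 1          # bit-flip mutation
            children.append(child)
        pop = parents + children
    best = max(pop, key=fitness)
    return [i for i in range(n) if best[i]]

# Toy example: five test cases covering overlapping requirements
tests = [{"e1", "e2"}, {"e2", "e3"}, {"e1", "e3", "e4"}, {"e4"}, {"e5"}]
print(minimise(tests))   # indices of a small subset that preserves coverage
```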

    A Mono- and Multi-objective Approach for Recommending Software Refactoring

    Software systems have become prevalent and important in our society, and there is a constant need for high-quality software. One of the most widely used techniques for improving software quality is refactoring, which improves the design structure of a program while preserving its external behavior. Refactoring promises, if applied well, to improve software readability, maintainability, and extendibility while increasing the speed at which programmers can write and maintain their code. In general, refactoring can be performed at various levels, such as the requirement, design, or code level.

    In this thesis, we mainly focus on the source code level, where automated refactoring recommendation can be performed in two main steps: 1) detection of code fragments that need to be improved or fixed (e.g., code-smells), and 2) identification of refactoring solutions to achieve this goal. For the code-smell detection step, we translate regularities that can be found in code-smell examples into detection rules; to this end, we use genetic programming to automatically generate detection rules from examples of code-smells.

    For the refactoring identification step, a search-based approach is used. The process aims at finding the optimal sequence of refactoring operations that improves software quality by minimizing the number of detected code-smells while prioritizing the most critical ones. In addition, we explore other objectives to optimize using a multi-objective approach: the code changes needed to apply the refactorings, semantics preservation, and consistency with the development change history (a simplified sketch follows this abstract). Reducing code changes allows us to keep as much of the initial design as possible; semantics preservation ensures that the refactored program is semantically coherent and correctly models the domain semantics; and knowledge from historical code changes is used to suggest new refactorings in similar contexts. Furthermore, we introduce a novel multi-objective approach to improve software quality attributes (e.g., flexibility, maintainability), fix "bad" design practices (code-smells), and promote "good" design practices (design patterns).
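    A heavily simplified sketch of how a candidate refactoring sequence might be evaluated against two of the objectives named above (remaining code-smells and amount of code change), together with the Pareto-dominance test used in multi-objective search. The operation catalogue, the cost field, and the detector/apply hooks are hypothetical, not the thesis's implementation.

```python
from dataclasses import dataclass

@dataclass(frozen=True)
class Refactoring:
    name: str          # e.g. "move_method", "extract_class" (hypothetical catalogue)
    change_cost: int   # rough size of the edit the operation implies

def evaluate(sequence, detect_smells, apply_op, model):
    """Apply a candidate refactoring sequence to a design/code model and
    return the two objective values (both to be minimised)."""
    for op in sequence:
        model = apply_op(model, op)
    remaining_smells = len(detect_smells(model))
    total_change = sum(op.change_cost for op in sequence)
    return remaining_smells, total_change

def dominates(a, b):
    """Pareto dominance for minimisation: a is no worse on every objective
    and strictly better on at least one."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

print(dominates((0, 11), (1, 11)))   # True: fewer remaining smells, same change cost
```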

    An Approach for Guiding Developers to Performance and Scalability Solutions

    This thesis proposes an approach that enables developers who are novices in software performance engineering to solve software performance and scalability problems without the assistance of a software performance expert. The contribution of this thesis is the explicit consideration of the implementation level when recommending solutions for software performance and scalability problems. This includes a set of description languages for data representation and human-computer interaction, and a workflow.

    Modeling of Security Measurement (Metrics) in an Information System

    Security metrics and measurement is a sub-field of the broader information security field. The field is not new, but it has received little and sporadic attention, as a result of which it is still in its early stages. The measurement and evaluation of security has become a long-standing challenge for the research community; much of the focus has remained on devising and applying new and updated protection mechanisms. Measurements in general act as a driving force in decision making. As Lord Kelvin stated, "if you cannot measure it then you cannot improve it." This principle also applies to the security measurement of information systems. Even if the necessary protection mechanisms are in place, the level of security remains unknown, which limits the ability to make decisions that improve the security of a system. With the increasing reliance on information systems in general, and software systems in particular, security measurement has become a pressing requirement for promoting and developing security-critical systems in the current networked environment. The resulting security measurement indicators, preferably quantitative indicators, act as a basis for decision making to enhance the security of the overall system.

    Information systems comprise various components such as people, hardware, data, networks, and software. Given the fast-growing reliance on software systems, the research reported in this thesis aims to provide a framework, based on mathematical modeling techniques, for evaluating the security of software systems at the architecture and design phase of the system life cycle, together with security metrics on a controlled scale derived from the proposed framework. The proposed security evaluation framework is independent of the programming language and platform used to develop the system, and it is applicable to anything from a small desktop application to large, complex distributed software. The validation process is the most challenging part of the security metrics field; in this thesis we conducted an exploratory empirical evaluation on a running system to validate the derived security metrics and the measurement results. To ease this task, we transformed the proposed security evaluation into an algorithmic form, which increases the applicability of the proposed framework without requiring expert security knowledge. The motivation of the research is to provide the software development team with a tool to evaluate the level of security of each element of the system, and of the overall system, at the early development stages of the system life cycle. In this regard, three questions are answered in the thesis: "What is to be measured?", "Where (in the system life cycle) to measure?", and "How to measure?".

    Since the field of security metrics and measurement is still in its early stages, the first part of the thesis investigates and analyzes the basic terminologies, taxonomies, and major efforts towards security metrics, based on a literature survey. Answering the second question, "Where (in the system life cycle) to measure security?", the second part of the thesis analyzes the secure software development processes (SSDPs) followed in practice and identifies the key stages of the system's life cycle where the evaluation of security is necessary. Answering questions 1 and 3, "What is to be measured?" and "How to measure?", the third part of the thesis presents a security evaluation framework aimed at the software architecture and design phase, using mathematical modeling techniques. In the proposed framework, component-based architecture and design (CBAD) using UML 2.0 component modeling techniques is adopted (an illustrative aggregation sketch follows this abstract). The third part of the thesis also presents an empirical evaluation of the proposed framework to validate and analyze the applicability and feasibility of the proposed security metrics. Our aim is to get the software development community to focus on security evaluation during the software development process in order to take early decisions regarding the security of the overall system.
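    Purely as an illustration of aggregating component-level security indicators into a system-level score on a controlled 0-to-1 scale: the indicator names, the weights, and the simple averaging scheme below are assumptions and do not reproduce the thesis's mathematical model.

```python
def component_score(indicators):
    """indicators: per-component security ratings on a 0..1 scale,
    e.g. {"input_validation": 0.8, "authentication": 0.6}."""
    return sum(indicators.values()) / len(indicators)

def system_score(components, weights=None):
    """components: {component_name: indicators}; weights: optional
    relative importance per component (e.g. exposure to untrusted input)."""
    weights = weights or {name: 1.0 for name in components}
    total = sum(weights[name] for name in components)
    return sum(weights[name] * component_score(ind)
               for name, ind in components.items()) / total

print(round(system_score({
    "web_ui":  {"input_validation": 0.7, "authentication": 0.9},
    "backend": {"input_validation": 0.8, "data_protection": 0.6},
}, weights={"web_ui": 2.0, "backend": 1.0}), 2))
```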

    Refining Transformation Rules For Converting UML Operations To Z Schema

    The UML (Unified Modeling Language) has its origin in mainstream software engineering and is often used informally by software designers. One of the limitations of UML is the lack of precision in its semantics, which makes its application to safety-critical systems unsuitable. A safety-critical system is one in which any loss or misinterpretation of data could lead to injury, loss of human lives, and/or loss of property. Safety-critical systems are usually specified very precisely and frequently require formal verification. With the continuous use of UML in the software industry, there is a need to address the informality of the software models produced, removing ambiguity and inconsistency so that the models can be verified and validated. To overcome this well-known limitation of UML, formal specification techniques (FSTs), which are mathematically tractable, are often used to represent these models. Formal methods are mathematical techniques that allow software developers to produce software that addresses issues of ambiguity and error in complex and safety-critical systems. By building a mathematically rigorous model of a complex system, it is possible to verify the system's properties in a more thorough fashion than by empirical testing. In this research, the author refines transformation rules for converting aspects of an informally defined UML design into a verifiable form, i.e., a formal specification notation. The specification language used is the Z notation. The rules are applied iteratively to UML class diagram operation signatures to derive Z schema representations of those signatures (a hypothetical example is sketched below). The Z representation may then be analyzed to detect flaws and to determine where the operation signatures need to be defined more precisely. This work is an extension of previous research that lacked sufficient detail to be taken to the next phase: the implementation of a tool for semi-automated transformation.
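    By way of illustration only, and not a rule taken from the thesis, the following shows how a hypothetical UML operation signature withdraw(amount : Integer) on an Account class might be rendered as a Z operation schema (written here with the zed-csp LaTeX environments).

```latex
% Hypothetical example: UML operation Account::withdraw(amount : Integer)
% rendered as a Z state schema plus an operation schema.
\begin{schema}{Account}
  balance : \nat
\end{schema}

\begin{schema}{Withdraw}
  \Delta Account \\
  amount? : \nat
\where
  amount? \leq balance \\          % precondition: sufficient funds
  balance' = balance - amount?     % effect on the state
\end{schema}
```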