1,073 research outputs found

    Leveraging Traceability to Integrate Safety Analysis Artifacts into the Software Development Process

    A safety-critical system's failure or malfunction can cause loss of human life or damage to the physical environment; therefore, continuous safety assessment is crucial for such systems. In many domains this includes the use of safety assurance cases (SACs) as a structured argument that the system is safe for use. SACs can be challenging to maintain during system evolution due to the disconnect between the safety analysis and the system development process. Further, safety analysts often lack the domain knowledge and tool support needed to evaluate the SAC. We propose a solution that leverages software traceability to connect relevant system artifacts to safety analysis models, and then uses these connections to visualize the changes. We elicit design rationales for system changes to help safety stakeholders analyze the impact of system changes on safety. We present new traceability techniques for closer integration of the safety analysis and system development processes, and illustrate the viability of our approach using examples from a cyber-physical system that deploys Unmanned Aerial Vehicles for emergency response.
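    The approach described above hinges on trace links between system artifacts and safety-case elements that can be followed when something changes. As a rough illustration only (not the authors' actual tooling), the Python sketch below walks a small, invented set of trace links to flag which safety-case claims may need re-review after an artifact changes; all artifact names, hazard IDs, and the impacted_claims helper are hypothetical.

```python
from collections import deque

# Hypothetical trace links: each system artifact points to the safety-case
# elements (hazards, claims) or other artifacts that depend on it.
TRACE_LINKS = {
    "uav_altitude_controller.c": ["HAZARD-12"],
    "geofence_config.yaml": ["HAZARD-07", "uav_altitude_controller.c"],
    "HAZARD-12": ["SAC-CLAIM-3"],
    "HAZARD-07": ["SAC-CLAIM-3", "SAC-CLAIM-5"],
}

def impacted_claims(changed_artifacts):
    """Follow trace links transitively and collect safety-case claims
    (elements whose IDs start with 'SAC-') that may need re-review."""
    seen, queue, claims = set(changed_artifacts), deque(changed_artifacts), set()
    while queue:
        node = queue.popleft()
        if node.startswith("SAC-"):
            claims.add(node)
        for succ in TRACE_LINKS.get(node, []):
            if succ not in seen:
                seen.add(succ)
                queue.append(succ)
    return sorted(claims)

if __name__ == "__main__":
    # A change to the geofence configuration flags two claims for review.
    print(impacted_claims(["geofence_config.yaml"]))  # ['SAC-CLAIM-3', 'SAC-CLAIM-5']
```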

    Evolution of security engineering artifacts: a state of the art survey

    Security is an important quality aspect of modern open software systems. However, it is challenging to keep such systems secure because of evolution. Security evolution can only be managed adequately if it is considered for all artifacts throughout the software development lifecycle. This article provides a state-of-the-art survey on the evolution of security engineering artifacts. It covers the state of the art on the evolution of security requirements, security architectures, secure code, security tests, security models, and security risks, as well as security monitoring. For each of these artifacts, the authors give an overview of its evolution and security aspects and discuss the state of the art on its security evolution in detail. Based on this comprehensive survey, they summarize key issues and discuss directions for future research.

    Application of Software Engineering Principles to Synthetic Biology and Emerging Regulatory Concerns

    As the science of synthetic biology matures, engineers have begun to deliver real-world applications that mark the beginning of what could radically transform our lives. Recent progress indicates synthetic biology will produce transformative breakthroughs. Examples include: 1) synthesizing chemicals for medicines that are expensive and difficult to produce; 2) producing protein alternatives; 3) altering genomes to combat deadly diseases; 4) killing antibiotic-resistant pathogens; and 5) speeding up vaccine production. Although synthetic biology promises great benefits, many stakeholders have expressed concerns over safety and security risks from creating biological behavior never seen before in nature. As with any emerging technology, there is the risk of malicious use, known as the dual-use problem. The technology is becoming democratized and de-skilled, and people in do-it-yourself communities can tinker with genetic code, much as programming has become widespread through the ease of using macros in spreadsheets. While such behavior is easy to program, it may be non-trivial to validate. Nevertheless, we must be able to certify that synthetically engineered organisms behave as expected, and be confident they will not harm natural life or the environment. Synthetic biology is an interdisciplinary engineering domain, and interdisciplinary problems require interdisciplinary solutions. Using an interdisciplinary approach, this dissertation lays foundations for verifying, validating, and certifying the safety and security of synthetic biology applications through traditional software engineering concepts about safety, security, and reliability of systems. These techniques can help stakeholders navigate what is currently a confusing regulatory process. The contributions of this dissertation are: 1) creation of domain-specific patterns to help synthetic biologists develop assurance cases using evidence and arguments to validate the safety and security of designs; 2) application of software product lines and feature models to the modular DNA parts of synthetic biology commonly known as BioBricks, making it easier to find safety features during design; 3) a technique for analyzing DNA sequence motifs to help characterize proteins as toxins or non-toxins; 4) a legal investigation into what makes regulating synthetic biology challenging; and 5) a repeatable workflow for leveraging safety and security artifacts to develop assurance cases for synthetic biology systems. Advisers: Myra B. Cohen and Brittany A. Duncan.
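    Contribution 3 above mentions analyzing DNA sequence motifs to help characterize proteins as toxins or non-toxins. The Python sketch below only conveys the general flavor of a motif-count screen and is not the dissertation's technique: the SUSPECT_MOTIFS list, the threshold, and the flag_as_possible_toxin helper are invented for illustration; a real analysis would derive motifs and cutoffs from curated data.

```python
import re

# Hypothetical motifs treated as suspicious; a real analysis would derive
# these from curated databases rather than hard-coding them.
SUSPECT_MOTIFS = ["ATGCGT", "TTAGGG", "CCGGAA"]

def motif_hits(sequence, motifs=SUSPECT_MOTIFS):
    """Count overlapping occurrences of each motif in a DNA sequence."""
    sequence = sequence.upper()
    return {m: len(re.findall(f"(?={m})", sequence)) for m in motifs}

def flag_as_possible_toxin(sequence, threshold=2):
    """Very rough screen: flag a sequence if enough suspect motifs occur.

    The threshold is an illustrative assumption, not a validated cutoff."""
    return sum(motif_hits(sequence).values()) >= threshold

if __name__ == "__main__":
    seq = "ATGCGTTTAGGGATGCGT"
    print(motif_hits(seq))              # {'ATGCGT': 2, 'TTAGGG': 1, 'CCGGAA': 0}
    print(flag_as_possible_toxin(seq))  # True
```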

    Proceedings of the 19th Annual Software Engineering Workshop

    The Software Engineering Laboratory (SEL) is an organization sponsored by NASA/GSFC and created to investigate the effectiveness of software engineering technologies when applied to the development of applications software. The goals of the SEL are: (1) to understand the software development process in the GSFC environment; (2) to measure the effects of various methodologies, tools, and models on this process; and (3) to identify and then to apply successful development practices. The activities, findings, and recommendations of the SEL are recorded in the Software Engineering Laboratory Series, a continuing series of reports that includes this document.

    Supporting the grow-and-prune model for evolving software product lines

    Software Product Lines (SPLs) aim at supporting the development of a whole family of software products through systematic reuse of shared assets. To this end, SPL development is separated into two interrelated processes: (1) domain engineering (DE), where the scope and variability of the system are defined and reusable core assets are developed; and (2) application engineering (AE), where products are derived by selecting core assets and resolving variability. Evolution in SPLs is considered to be more challenging than in traditional systems, as both core assets and products need to co-evolve. The so-called grow-and-prune model has proven to offer great flexibility for incrementally evolving an SPL by letting the products grow, and later pruning the product functionalities deemed useful by refactoring and merging them back into the reusable SPL core-asset base. This thesis aims at supporting the grow-and-prune model in both initiating and enacting the pruning. Initiating the pruning requires SPL engineers to conduct customization analysis, i.e., analyzing how products have changed the core assets. Customization analysis aims at identifying interesting product customizations to be ported to the core-asset base. However, existing tools do not fulfill engineers' needs for conducting this practice. To address this issue, this thesis elaborates on the SPL engineers' needs when conducting customization analysis and proposes a data-warehouse approach to help SPL engineers with the analysis. Once the interesting customizations have been identified, the pruning needs to be enacted. This means that product code needs to be ported to the core-asset realm, while products are upgraded with newer functionalities and bug fixes available in newer core-asset releases. Herein, synchronizing both parties through sync paths is required. However, state-of-the-art tools are not tailored to SPL sync paths, and this hinders synchronizing core assets and products. To address this issue, this thesis proposes to leverage existing Version Control Systems (i.e., git/GitHub) to provide sync operations as first-class constructs.
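    To make the sync-path idea more concrete, the sketch below shows what the two directions of synchronization might look like when expressed as plain git operations driven from Python. It is a minimal illustration under assumed branch names (core-assets, product-a) and a placeholder commit SHA, not the thesis's actual tool or its first-class sync operations.

```python
import subprocess

def run(*args):
    """Thin wrapper around the git CLI; raises if a command fails."""
    subprocess.run(["git", *args], check=True)

def prune_customization(core_branch, commits):
    """'Prune' direction: port selected product commits back into the
    core-asset branch by cherry-picking them."""
    run("checkout", core_branch)
    for sha in commits:
        run("cherry-pick", sha)

def upgrade_product(product_branch, core_branch):
    """'Grow' direction: merge the latest core-asset release into a product
    so it picks up new functionality and bug fixes."""
    run("checkout", product_branch)
    run("merge", core_branch)

if __name__ == "__main__":
    # Branch names and the commit SHA are placeholders; run inside a
    # repository that actually has these branches and commits.
    prune_customization("core-assets", ["abc1234"])
    upgrade_product("product-a", "core-assets")
```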

    Model-Based Regression Testing of Variants and Variant Versions (Modellbasiertes Regressionstesten von Varianten und Variantenversionen)

    The quality assurance of software product lines (SPLs) achieved via testing is a crucial and challenging activity of SPL engineering. In general, the application of single-software testing techniques for SPL testing is not practical, as it leads to the individual testing of a potentially vast number of variants. Testing each variant in isolation further results in redundant testing processes in the form of redundant test-case executions due to the shared commonality. Existing techniques for SPL testing cope with these challenges, e.g., by identifying samples of variants to be tested. However, each variant is still tested separately, without taking the explicit knowledge about the shared commonality and variability into account to reduce the overall testing effort. Furthermore, due to the increasing longevity of software systems, their development has to face software evolution. Hence, quality has also to be assured after SPL evolution by testing the respective versions of variants. In this thesis, we tackle the challenges of testing redundancy as well as evolution by proposing a framework for model-based regression testing of evolving SPLs. The framework facilitates efficient incremental testing of variants and versions of variants by exploiting the commonality and reuse potential of test artifacts and test results. Our contribution is divided into three parts. First, we propose a test-modeling formalism capturing the variability and version information of evolving SPLs in an integrated fashion. The formalism builds the basis for the automatic derivation of reusable test cases and for the application of change impact analysis to guide retest test selection. Second, we introduce two techniques for incremental change impact analysis to identify (1) changing execution dependencies to be retested between subsequently tested variants and versions of variants, and (2) the impact of an evolution step on the variant set in terms of modified, new, and unchanged versions of variants. Third, we define a coverage-driven retest test selection based on a new retest coverage criterion that incorporates the results of the change impact analysis. The retest test selection facilitates the reduction of redundantly executed test cases during incremental testing of variants and versions of variants. The framework is prototypically implemented and evaluated by means of three evolving SPLs, showing that it achieves a reduction of the overall effort for testing evolving SPLs.
    Testing is an important part of the development of software product lines (SPLs). Due to the potentially very large number of variants of an SPL, testing them individually is generally not practical and additionally results in redundant test-case executions caused by the commonalities between variants. Existing SPL testing approaches address these challenges, e.g., by reducing the number of variants to be tested. However, each variant is still tested independently, without exploiting the knowledge about commonality and variability to reduce the testing effort. Furthermore, SPL development has to deal with software evolution. This poses additional challenges for SPL testing, since quality must be assured not only for variants but also for their versions.
    In this thesis, we present a framework for model-based regression testing of evolving SPLs that addresses the challenges of redundant testing and software evolution. The framework combines test modeling, change impact analysis, and automated test-case selection to define an incremental testing process that efficiently tests variants and variant versions by exploiting the knowledge about shared functionality and the reuse potential of test artifacts and test results. For test modeling, we develop an approach that incorporates both the variability and the version information of evolving SPLs. For the change impact analysis, we define two techniques to identify, on the one hand, changes in execution dependencies between the variants and variant versions to be tested and, on the other hand, to determine and classify the impact of an evolution step on the set of variants. For test-case selection, we propose a coverage criterion that incorporates the results of the impact analysis in order to make automated decisions about retesting reusable test cases. The coverage-driven test-case selection thus enables the reduction of redundant test-case executions during incremental testing of variants and variant versions. The framework is prototypically implemented and evaluated on three evolving SPLs. The results show that a reduction of the effort for testing evolving SPLs is achieved.
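    The retest selection step can be pictured as splitting the test cases of a previously tested variant by whether the elements they cover intersect the change impact. The Python sketch below illustrates only that basic idea; the data shapes, the select_retests helper, and the example coverage are assumptions for illustration, not the thesis's formalism or coverage criterion.

```python
def select_retests(test_cases, changed_elements):
    """Split test cases into those whose covered elements intersect the
    change impact (must be re-executed) and those whose previous results
    can be reused for the next variant/version.

    test_cases: dict mapping test-case id -> set of covered model elements
    changed_elements: set of elements flagged by change impact analysis
    """
    retest, reuse = [], []
    for tc_id, covered in test_cases.items():
        (retest if covered & changed_elements else reuse).append(tc_id)
    return retest, reuse

if __name__ == "__main__":
    # Illustrative test-model coverage for one variant.
    coverage = {
        "tc_open_door":  {"Door.open", "Lock.release"},
        "tc_close_door": {"Door.close"},
        "tc_alarm":      {"Alarm.trigger", "Door.open"},
    }
    changed = {"Door.open"}  # flagged by the change impact analysis
    retest, reuse = select_retests(coverage, changed)
    print("re-execute:", retest)  # ['tc_open_door', 'tc_alarm']
    print("reuse:", reuse)        # ['tc_close_door']
```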

    Self-adaptive fitness in evolutionary processes

    Most optimization algorithms or methods in artificial intelligence can be regarded as evolutionary processes. They start from (basically) random guesses and produce increasingly better results with respect to a given target function, which is defined by the process's designer. The value of the achieved results is communicated to the evolutionary process via a fitness function that is usually somewhat correlated with the target function but does not need to be exactly the same. When the values of the fitness function change purely for reasons intrinsic to the evolutionary process, i.e., even though the externally motivated goals (as represented by the target function) remain constant, we call that phenomenon self-adaptive fitness. We trace the phenomenon of self-adaptive fitness back to emergent goals in artificial chemistry systems, for which we develop a new variant based on neural networks. We perform an in-depth analysis of diversity-aware evolutionary algorithms as a prime example of how to effectively integrate self-adaptive fitness into evolutionary processes. We sketch the concept of productive fitness as a new tool to reason about the intrinsic goals of evolution. We introduce the pattern of scenario co-evolution, which we apply to a reinforcement learning agent competing against an evolutionary algorithm to improve performance and generate hard test cases, and which we also consider a more general pattern for software engineering based on a solid formal framework. Multiple connections to related topics in natural computing, quantum computing, and artificial intelligence are discovered and may shape future research in the combined fields.
    Most optimization algorithms and most methods in the field of artificial intelligence can be regarded as evolutionary processes. These start with (in principle) randomly guessed candidate solutions and then produce increasingly improved results for a given target function, which the designer of the overall process has defined. The value of the achieved results is communicated to the evolutionary process through a fitness function, which is usually correlated with the target function to some extent but need not be identical to it. When the values of the fitness function change solely for reasons intrinsic to the evolutionary process, i.e., even when the externally motivated goals (represented by the target function) remain constant, we call this phenomenon self-adaptive fitness. We trace the phenomenon of self-adaptive fitness back to artificial chemistry systems, for which we develop a new variant based on neural networks. We conduct an in-depth analysis of diversity-aware evolutionary algorithms, which we regard as a prime example of the effective integration of self-adaptive fitness into evolutionary processes. We sketch the concept of productive fitness as a new tool for investigating the intrinsic goals of evolution. We introduce the pattern of scenario co-evolution and apply it to an agent that uses reinforcement learning to compete with an evolutionary algorithm in order to increase its performance and to find harder test scenarios. We also recognize this pattern in a more general context as a formal method in software engineering. We discover several connections between the discussed phenomena and research areas such as natural computing, quantum computing, and artificial intelligence, which may shape future research in the combined fields.
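    One way to picture self-adaptive fitness in a diversity-aware evolutionary algorithm is an effective fitness that adds a population-dependent diversity bonus to a fixed external target, so fitness values shift as the population itself changes. The Python sketch below is a minimal toy under that assumption; the diversity measure, the weighting, and the evolve loop are illustrative choices, not the thesis's formulation.

```python
import random

def target(x):
    """External target function (to be maximized): a simple 1-D example."""
    return -(x - 3.0) ** 2

def diversity_bonus(x, population):
    """Population-dependent term: mean distance to the rest of the population."""
    return sum(abs(x - other) for other in population) / len(population)

def effective_fitness(x, population, weight=0.1):
    """Self-adaptive fitness: even with a fixed target, this value changes
    as the population changes, because the diversity term depends on it."""
    return target(x) + weight * diversity_bonus(x, population)

def evolve(generations=50, size=20, sigma=0.3, seed=0):
    rng = random.Random(seed)
    population = [rng.uniform(-10, 10) for _ in range(size)]
    for _ in range(generations):
        ranked = sorted(population,
                        key=lambda x: effective_fitness(x, population),
                        reverse=True)
        parents = ranked[: size // 2]
        # Each surviving parent produces two mutated offspring.
        population = [p + rng.gauss(0, sigma) for p in parents for _ in range(2)]
    return max(population, key=target)

if __name__ == "__main__":
    print(round(evolve(), 2))  # should land near 3.0, the optimum of the target
```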

    The Last Decade in Review: Tracing the Evolution of Safety Assurance Cases through a Comprehensive Bibliometric Analysis

    Safety assurance is of paramount importance across various domains, including automotive, aerospace, and nuclear energy, where the reliability and acceptability of mission-critical systems are imperative. This assurance is effectively realized through the use of safety assurance cases, which allow for verifying the correctness of the created system's capabilities and preventing system failure. The latter may result in loss of life, severe injuries, large-scale environmental damage, property destruction, and major economic loss. Still, the emergence of complex technologies such as cyber-physical systems (CPSs), characterized by their heterogeneity, autonomy, machine learning capabilities, and the uncertainty of their operational environments, poses significant challenges for safety assurance activities. Several papers have tried to propose solutions to tackle these challenges, but to the best of our knowledge, no secondary study investigates the trends, patterns, and relationships characterizing the safety case scientific literature. This makes it difficult to have a holistic view of the safety case landscape and to identify the most promising future research directions. In this paper, we therefore rely on state-of-the-art bibliometric tools (e.g., VOSviewer) to conduct a bibliometric analysis that allows us to generate valuable insights, identify key authors and venues, and gain a bird's-eye view of the current state of research in the safety assurance area. By revealing knowledge gaps and highlighting potential avenues for future research, our analysis provides an essential foundation for researchers, corporate safety analysts, and regulators seeking to embrace or enhance safety practices that align with their specific needs and objectives.
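    The building blocks of such a bibliometric analysis (counting key authors, venues, and keyword co-occurrences over a set of records) can be sketched in a few lines of Python, although real studies would export records from bibliographic databases and feed them to tools such as VOSviewer. The record fields and values below are invented for illustration.

```python
from collections import Counter
from itertools import combinations

# Hypothetical bibliographic records; a real analysis would load these from
# database exports rather than hard-coding them.
RECORDS = [
    {"authors": ["A. Smith", "B. Jones"], "venue": "SAFECOMP",
     "keywords": ["assurance case", "CPS"]},
    {"authors": ["B. Jones"], "venue": "ISSRE",
     "keywords": ["assurance case", "machine learning"]},
    {"authors": ["C. Lee", "A. Smith"], "venue": "SAFECOMP",
     "keywords": ["CPS", "machine learning"]},
]

author_counts = Counter(a for r in RECORDS for a in r["authors"])
venue_counts = Counter(r["venue"] for r in RECORDS)
cooccurrence = Counter(
    pair for r in RECORDS for pair in combinations(sorted(r["keywords"]), 2)
)

print(author_counts.most_common(2))   # key authors
print(venue_counts.most_common(1))    # key venues
print(cooccurrence.most_common(2))    # keyword co-occurrence edges
```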