
    Advances in knowledge-based software engineering

    The underlying hypothesis of this work is that a rigorous and comprehensive software reuse methodology can bring about more effective and efficient utilization of constrained resources in the development of large-scale software systems by both government and industry. It is also believed that correct use of this type of software engineering methodology can significantly contribute to the higher levels of reliability that will be required of future operational systems. An overview and discussion of current research in the development and application of two systems that support a rigorous reuse paradigm are presented: the Knowledge-Based Software Engineering Environment (KBSEE) and the Knowledge Acquisition for the Preservation of Tradeoffs and Underlying Rationales (KAPTUR) systems. Emphasis is on a presentation of operational scenarios which highlight the major functional capabilities of the two systems.

    Sequencing reliability growth tasks using multiattribute utility functions

    In both hardware and software engineering, the reliability of systems improves over the Test, Analyse and Fix (TAAF) cycle as reliability tasks are performed and faults are designed out of the system. There are many possible tasks which could be carried out, and a large number of possible sequences of these tasks. In this paper we consider the sequencing problem, taking into account the fact that testing will be stopped once a reliability target is reached. We solve the problem by maximising the expectation of a two-attribute utility function over cost and time on test. All marginal utilities are set to be risk averse. A reliability growth model based on the underlying engineering process is used. The method is illustrated with an example grounded in work with the aerospace industry.
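
    The sequencing idea in this abstract can be sketched in a few lines: enumerate candidate task orderings, simulate each until the reliability target stops testing, and score the resulting (cost, time) pair with a two-attribute utility. Everything below — the task data, the reliability gains, and the exponential utility form — is illustrative, not the paper's actual model.

```python
import itertools
import math

# Hypothetical task data: (name, cost, time on test, reliability gain).
# The paper derives gains from an engineering-process growth model;
# these numbers are made up for illustration.
TASKS = [("vibration test", 10.0, 5.0, 0.08),
         ("thermal cycling", 6.0, 8.0, 0.05),
         ("code inspection", 4.0, 3.0, 0.04)]

TARGET_GAIN = 0.12   # testing stops once cumulative gain reaches the target

def utility(cost, time, a=0.05, b=0.05, w=0.5):
    """Additive two-attribute utility with exponential marginals,
    decreasing in both cost and time (illustrative parameters)."""
    return w * math.exp(-a * cost) + (1 - w) * math.exp(-b * time)

def sequence_value(order):
    """Expected value of following `order` until the target is hit."""
    cost = time = gain = 0.0
    for _, c, t, g in order:
        cost += c
        time += t
        gain += g
        if gain >= TARGET_GAIN:  # stopping rule: target reached
            break
    return utility(cost, time)

best = max(itertools.permutations(TASKS), key=sequence_value)
print([name for name, *_ in best])
```

    With these numbers, the best orderings front-load the two cheap, fast tasks that jointly reach the target, so the expensive third task is never run.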

    APPLICATION AND REFINEMENTS OF THE REPS THEORY FOR SAFETY CRITICAL SOFTWARE

    With the replacement of old analog control systems by software-based digital control systems, there is an urgent need for a method to quantitatively and accurately assess the reliability of safety-critical software systems. This research proposes a systematic, software metric-based reliability prediction method. The method starts with the measurement of a metric. Measurement results are then either directly linked to software defects through inspections and peer reviews, or indirectly linked to software defects through empirical software engineering models. Three types of defect characteristics can be obtained, namely: 1) the number of defects remaining, 2) the number and exact location of the defects found, and 3) the number and exact location of defects found in an earlier version. Three models, Musa's exponential model, the PIE model, and a mixed Musa-PIE model, are then used to link each of the three categories of defect characteristics with reliability, respectively. In addition, the use of the PIE model requires mapping identified defects to an Extended Finite State Machine (EFSM) model. A procedure that can assist in the construction of the EFSM model and increase its repeatability is also provided. This metric-based software reliability prediction method is then applied to a safety-critical software system used in the nuclear industry, using eleven software metrics. Reliability prediction results are compared with the real reliability assessed using operational failure data. Experiences and lessons learned from the application are discussed. Based on the results and findings, four software metrics are recommended. This dissertation then focuses on one of the four recommended metrics, Test Coverage. A reliability prediction model based on Test Coverage is discussed in detail, and this model is further refined to take into consideration more realistic conditions, such as imperfect debugging and the use of multiple testing phases.
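
    The first of the three links above — from an estimated count of remaining defects to reliability — can be illustrated with the exponential-class assumption usually associated with Musa's model: each remaining defect contributes an equal, constant hazard, so failure intensity is proportional to the remaining defect count. The parameter values below are illustrative, not from the dissertation.

```python
import math

def musa_exponential_reliability(n_remaining, per_fault_hazard, mission_time):
    """Reliability over a mission of length `mission_time`, under the
    exponential-class assumption that each remaining defect contributes
    an equal, constant hazard `per_fault_hazard` (illustrative values)."""
    failure_intensity = per_fault_hazard * n_remaining  # lambda = K * N
    return math.exp(-failure_intensity * mission_time)  # R(t) = exp(-lambda*t)

# e.g. 3 defects estimated remaining, hazard 1e-4 per defect-hour, 10 h mission
print(musa_exponential_reliability(3, 1e-4, 10.0))
```

    A zero-defect estimate yields reliability 1.0, and reliability decays exponentially as either the defect count or the mission length grows, which is why the quality of the defect-count estimate dominates the prediction.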

    A model for developing dependable system using component-based software development approach / Hasan Kahtan Khalaf Al-Ani

    Component-based software development (CBSD) is an emerging technology that focuses on building systems by integrating existing software components. The software industry has adopted CBSD to rapidly build and deploy large, complex software systems with enormous savings in engineering effort, cost, and time. CBSD provides several benefits, such as an improved ability to reuse existing code, reduced development costs for high-quality systems, and shorter development time. However, CBSD encounters issues of security and trust, mainly concerning dependability attributes. A system is considered dependable when it can be depended on to produce the consequences for which it was designed, with no adverse effects in its intended environment. Dependability comprises several attributes: availability, confidentiality, integrity, reliability, safety, and maintainability. Embedding dependability attributes in CBSD is essential for developing dependable component software.

    Who Ate My Memory? Towards Attribution in Memory Management

    To understand applications' memory usage in detail, engineers use instrumented builds and profiling tools. Both approaches are impractical for use in production environments or deployed mobile applications. As a result, developers can gather only high-level memory-related statistics for deployed software. In our experience, the lack of granular field data makes fixing performance- and reliability-related defects complex and time-consuming. The software industry needs lightweight solutions for collecting detailed data about applications' memory usage to increase developer productivity. Current research into memory attribution-related data structures, techniques, and tools is in its early stages and enables several new research avenues.
    Comment: 3 pages. To be published in the 45th International Conference on Software Engineering (ICSE 2023), May 14 - May 20 2023, Melbourne, Australia.
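
    The kind of per-allocation-site attribution the abstract contrasts with high-level statistics can be seen with Python's standard-library `tracemalloc` profiler — exactly the development-time tooling the authors note is too heavyweight for production. This sketch attributes memory to the source lines that allocated it:

```python
import tracemalloc

tracemalloc.start()

# Two allocation sites we want memory attributed to.
big_list = [0] * 100_000
small_dict = {i: str(i) for i in range(1_000)}

# Snapshot current allocations and group them by allocating source line.
snapshot = tracemalloc.take_snapshot()
for stat in snapshot.statistics("lineno")[:2]:
    print(stat)  # allocation site (file:line) with total size and count

tracemalloc.stop()
```

    Each printed line names a file and line number with the bytes it allocated — granular data a deployed application typically cannot afford to collect, which is the gap the paper targets.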

    Data-driven engineering design research: Opportunities using open data

    Engineering Design research relies on quantitative and qualitative data to describe design-related phenomena and prescribe improvements for design practice. Given data availability, privacy requirements and other constraints, most empirical data used in Engineering Design research can be described as “closed”. Keeping such data closed is in many cases necessary and justifiable. However, this closedness also hinders replicability and thus may limit our possibilities to test the validity and reliability of research results in the field. This paper discusses implications and applications of using the already available and continuously growing body of open data sources to create opportunities for research in Engineering Design. Insights are illustrated by an examination of two examples: a study of open source software repositories and an analysis of open business registries in the cleantech industry. We conclude with a discussion of the limitations, challenges and risks of using open data in Engineering Design research and practice.

    Large scale continuous integration and delivery: Making great software better and faster

    Since the inception of continuous integration, and later continuous delivery, the methods of producing software in the industry have changed dramatically over the last two decades. Automated, rapid and frequent compilation, integration, testing, analysis, packaging and delivery of new software versions have become commonplace. This change has had a significant impact not only on software engineering practice, but on the way we as consumers, and indeed as a society, relate to software. Moreover, as we live in an increasingly software-intensive and software-dependent world, the quality and reliability of the systems we use to build, test and deliver that software is a crucial concern. At the same time, it is repeatedly shown that the successful and effective implementation of continuous engineering practices is far from trivial, particularly in a large scale context. This thesis approaches the software engineering practices of continuous integration and delivery from multiple points of view, and is split into three parts accordingly. Part I focuses on understanding the nature of continuous integration and differences in its interpretation and implementation. In order to address this divergence and provide practitioners and researchers alike with better and less ambiguous methods for describing and designing continuous integration and delivery systems, Part II applies the paradigm of system modeling to continuous integration and delivery. Meanwhile, Part III addresses the problem of traceability. Unique challenges to traceability in the context of continuous practices are highlighted, and possible solutions are presented and evaluated.

    A Systematic Mapping Study on Requirements Engineering in Software Ecosystems

    Software ecosystems (SECOs) and open innovation processes have been claimed as a way forward for the software industry. A proper understanding of requirements is as important for these IT systems as for more traditional ones. This paper presents a mapping study on the issues of requirements engineering and quality aspects in SECOs and analyzes emerging ideas. Our findings indicate that among the various phases or subtasks of requirements engineering, most of the SECO-specific research has been accomplished on elicitation, analysis, and modeling. On the other hand, requirements selection, prioritization, verification, and traceability have attracted few published studies. Among the various quality attributes, most of the SECO research has been performed on security, performance, and testability. On the other hand, reliability, safety, maintainability, transparency, and usability have attracted few published studies. The paper provides a review of the academic literature about SECO-related requirements engineering activities, modeling approaches, and quality attributes, positions the source publications in a taxonomy of issues, and identifies gaps where there has been little research.
    Comment: Journal of Information Technology Research (JITR) 11(1)

    Sea Level Requirements as Systems Engineering Size Metrics

    The Constructive Systems Engineering Cost Model (COSYSMO) represents a collaborative effort between industry, government, and academia to develop a general model to estimate systems engineering effort. The model development process has benefited from a diverse group of stakeholders that have contributed their domain expertise and historical project data for the purpose of developing an industry calibration. But the use of multiple stakeholders having diverse perspectives has introduced challenges for the developers of COSYSMO. Among these challenges is ensuring that people have a consistent interpretation of the model’s inputs. A consistent understanding of the inputs enables maximum benefits for its users and contributes to the model’s predictive accuracy. The main premise of this paper is that the reliability of these inputs can be significantly improved with the aid of a sizing framework similar to one developed for writing software use cases. The focus of this paper is the first of four COSYSMO size drivers, # of Systems Requirements, for which counting rules are provided. In addition, two different experiments that used requirements as metrics are compared to illustrate the benefits introduced by counting rules.
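
    The "sea level" framing borrowed from use-case writing suggests a simple shape for such counting rules: classify each requirement statement by level and count only those at the system-interaction ("sea") level, excluding high-level goals and low-level detail. The rules, levels, and requirement texts below are hypothetical, not the paper's actual counting rules.

```python
# Hypothetical level weights in the spirit of use-case levels:
# "cloud" (too abstract) and "underwater" (too detailed) are not counted;
# only "sea"-level requirements contribute to the size driver.
LEVELS = {"cloud": 0, "sea": 1, "underwater": 0}

def count_requirements(reqs):
    """reqs: iterable of (text, level) pairs; returns the weighted count
    used as the # of Systems Requirements size input."""
    return sum(LEVELS[level] for _, level in reqs)

reqs = [
    ("Provide situational awareness to operators", "cloud"),
    ("The system shall report position every 5 s", "sea"),
    ("Message field X uses big-endian encoding", "underwater"),
]
print(count_requirements(reqs))  # only the sea-level requirement counts
```

    Agreeing on the level boundaries is exactly the consistency problem the paper addresses: two estimators applying the same weights to the same specification should produce the same size input.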