
    Reliability prediction in model driven development

    Evaluating the implications of an architecture design early in the software development lifecycle is important in order to reduce development costs. Reliability is an important concern with regard to the correct delivery of software system service. Recently, the UML Profile for Modeling Quality of Service has defined a set of UML extensions to represent dependability concerns (including reliability) and other non-functional requirements in early stages of the software development lifecycle. Our research has shown that these extensions are not comprehensive enough to support reliability analysis for model-driven software engineering, because the description of reliability characteristics in this profile lacks support for certain dynamic aspects that are essential in modeling reliability. In this work, we define a profile for reliability analysis by extending the UML 2.0 specification to support reliability prediction based on scenario specifications. A UML model specified using the profile is translated to a labelled transition system (LTS), which is used for automated reliability prediction and identification of implied scenarios; the results of this analysis are then fed back to the UML model. The result is a comprehensive framework for software reliability modeling, including analysis and evolution of reliability predictions. We exemplify our approach using the Boiler System from previous work and demonstrate how reliability analysis results can be integrated into UML models.
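    The LTS-based prediction step can be illustrated with a small sketch. The following is a minimal example, not the authors' tool: it assumes a hypothetical LTS whose transitions already carry scenario-derived probabilities, and computes reliability as the probability of reaching a designated success state rather than a failure state. State names and probabilities are invented for illustration, not taken from the Boiler System model.

```python
# Minimal sketch: reliability prediction over a labelled transition
# system (LTS) with probabilistic transitions. State names and
# probabilities are hypothetical.

# transitions[state] -> list of (next_state, probability)
transitions = {
    "Init":    [("Running", 0.99), ("Failed", 0.01)],
    "Running": [("Shutdown", 0.95), ("Failed", 0.05)],
}
SUCCESS, FAILURE = "Shutdown", "Failed"

def reliability(state, memo=None):
    """Probability of eventually reaching SUCCESS rather than FAILURE
    (assumes an acyclic LTS; cyclic models need a linear-equation solve)."""
    if memo is None:
        memo = {}
    if state == SUCCESS:
        return 1.0
    if state == FAILURE:
        return 0.0
    if state not in memo:
        memo[state] = sum(p * reliability(nxt, memo)
                          for nxt, p in transitions[state])
    return memo[state]

print(f"Predicted reliability: {reliability('Init'):.4f}")  # 0.9405
```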

    Empirical assessment of architecture-based reliability of open-source software

    A number of analytical models have been proposed for quantifying software reliability. Some of these models estimate the failure behavior of the software using black-box testing, which treats the software as a monolithic whole. With the evolution of component-based software development, the need for white-box approaches has increased. A few architecture-based reliability models, which take a white-box approach, were proposed earlier; they have been validated on several small case studies and shown to be correct. However, there is a dearth of large-scale empirical data for reliability analysis. This thesis enriches the empirical knowledge in software reliability engineering. We use a real, large-scale case study, the GCC compiler, for our experiments. To the best of our knowledge, this is the most comprehensive case study yet used for software reliability analysis. The software is instrumented with a profiler to extract the execution profiles of the test cases. The execution profiles form the basis for building the operational profile of the system, which describes the software usage. The test case failures are traced back to faults in the source code to analyze the failure behavior of the components. These results are used to estimate the reliability of the software, as well as the uncertainty in the reliability analysis using entropy.
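    The quantities involved can be sketched concretely. The following is a hedged illustration, not the thesis's actual pipeline: per-component reliability estimated from test outcomes, a usage-weighted system estimate, and Shannon entropy of the operational profile as an uncertainty measure. Component names and counts are invented, not from the GCC study.

```python
# Illustrative only: reliability from test outcomes plus entropy of an
# operational profile. Components and counts are hypothetical.
import math

# (tests executed, failures observed) per component, from profiler traces
test_data = {"parser": (5000, 12), "optimizer": (3000, 45), "codegen": (4000, 8)}
# operational profile: probability that a run exercises each component
profile = {"parser": 0.5, "optimizer": 0.3, "codegen": 0.2}

# per-component reliability: fraction of executions without failure
reliabilities = {c: 1 - f / n for c, (n, f) in test_data.items()}

# usage-weighted system estimate (a simplification of path-based models)
system_reliability = sum(profile[c] * reliabilities[c] for c in profile)

# Shannon entropy of the profile: higher entropy -> more usage uncertainty
entropy = -sum(p * math.log2(p) for p in profile.values() if p > 0)

print(reliabilities)
print(f"system reliability ~ {system_reliability:.4f}, entropy = {entropy:.3f} bits")
```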

    Reliability model for component-based systems in cosmic (a case study)

    Software component technology has a substantial impact on modern IT evolution. The benefits of this technology, such as reusability, complexity management, time and effort reduction, and increased productivity, have been key drivers of its adoption by industry. One of the main issues in building component-based systems is the reliability of the composed functionality of the assembled components. This paper proposes a reliability assessment model based on the architectural configuration of a component-based system and the reliability of the individual components, which is usage- and testing-independent. The goal of this research is to improve the reliability assessment process for large software component-based systems over time, and to compare alternative component-based system design solutions prior to implementation. The novelty of the proposed reliability assessment model lies in evaluating component reliability from its behavior specifications, and system reliability from its topology; the reliability assessment is performed in the context of the implementation-independent ISO/IEC 19761:2003 International Standard on the COSMIC method, chosen to provide the component's behavior specifications. In essence, each component of the system is modeled as a discrete-time Markov chain derived from its behavior specifications, expressed as extended state machines. A probabilistic analysis by means of Markov chains is then performed to analyze the uncertainty in the component's behavior. Our hypothesis is that the less uncertainty there is in the component's behavior, the greater the reliability of the component. The system reliability assessment is derived from a typical component-based system architecture with composite reliability structures, which may combine serial, parallel, and p-out-of-n reliability structures. The approach of assessing component-based system reliability in the COSMIC context is illustrated with the railroad crossing case study. © 2008 World Scientific Publishing Company
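    The composite structures named above have standard closed forms: a series structure requires every component to work, a parallel structure requires at least one, and a p-out-of-n structure requires at least p of n. A minimal sketch follows, with hypothetical component reliabilities rather than values from the railroad crossing study.

```python
# Standard composite reliability structures; the reliabilities passed
# in below are hypothetical examples, not from the case study.
from math import comb, prod

def series(rs):
    """All components must work: R = r1 * r2 * ... * rn."""
    return prod(rs)

def parallel(rs):
    """At least one component must work: R = 1 - (1-r1)...(1-rn)."""
    return 1 - prod(1 - r for r in rs)

def p_out_of_n(p, r, n):
    """At least p of n identical, independent components must work."""
    return sum(comb(n, k) * r**k * (1 - r)**(n - k) for k in range(p, n + 1))

print(series([0.99, 0.97]))      # 0.9603
print(parallel([0.90, 0.90]))    # 0.99
print(p_out_of_n(2, 0.95, 3))    # ~0.99275
```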

    Design Criteria to Architect Continuous Experimentation for Self-Driving Vehicles

    The software powering today's vehicles surpasses mechatronics as the dominating engineering challenge due to its fast-evolving and innovative nature. In addition, the software and system architecture for upcoming vehicles with automated driving functionality already processes ~750 MB/s, corresponding to over 180 simultaneous 4K video streams from popular video-on-demand services. Hence, self-driving cars will run so much software that they resemble "small data centers on wheels" rather than just transportation vehicles. Continuous Integration, Deployment, and Experimentation have been successfully adopted for software-only products as an enabling methodology for feedback-based software development. For example, a popular search engine conducts ~250 experiments each day to improve its software based on users' behavior. This work investigates design criteria for the software architecture and the corresponding software development and deployment process for complex cyber-physical systems, with the goal of enabling Continuous Experimentation as a way to achieve continuous software evolution. Our research involved reviewing related literature on the topic to extract relevant design requirements. The study concludes by describing the software development and deployment process and software architecture adopted by our self-driving vehicle laboratory, both based on the extracted criteria.
    Comment: Copyright 2017 IEEE. Paper submitted and accepted at the 2017 IEEE International Conference on Software Architecture. 8 pages, 2 figures. Published in IEEE Xplore Digital Library, URL: http://ieeexplore.ieee.org/abstract/document/7930218
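    One mechanical building block of Continuous Experimentation is deterministic assignment of units to experiment arms, so a given vehicle always sees the same variant of a function under test. A hedged sketch of that bucketing step follows; the experiment name and vehicle identifier are invented, and this is not taken from the paper.

```python
# Illustrative experiment bucketing: hash the (experiment, unit) pair so
# assignment is stable across runs. Names and IDs are hypothetical.
import hashlib

def assign_arm(unit_id: str, experiment: str,
               arms=("control", "treatment")) -> str:
    digest = hashlib.sha256(f"{experiment}:{unit_id}".encode()).digest()
    bucket = int.from_bytes(digest[:4], "big") % len(arms)
    return arms[bucket]

print(assign_arm("vehicle-0042", "lane-keeping-v2"))  # same arm every run
```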

    Cyber physical systems implementation for asset management improvement: A framework for the transition

    The transformation of industry through the introduction of recent technologies is an evolving process whose engines are competitiveness and sustainability, understood in the broadest sense (environmental, economic, and social). Given the current state of scientific and technological development, this process faces a new, even more important challenge: the transition from discrete technological solutions that respond to isolated problems to a global conception in which assets, plants, processes, and engineering systems are conceived, designed, and operated as an integrated complex unit. This vision is evolving alongside a set of concepts that, in some way, guide its development: Smart Factories, Cyber-Physical Systems, the Factory of the Future, and Industry 4.0 are examples. The full integration of operation and maintenance (O&M) processes into production systems is a key topic within this new paradigm. Moreover, this evolution necessarily results in the emergence of new O&M processes and needs; that is, O&M itself will undergo a profound transformation. The transition from today's isolated production assets to such an Industry 4.0 setting with CPSs is far from easy. This document presents a proposal for this transition that adapts one iteration of the Model of Maintenance Management (MMM), integrated into ISO 55000, to the complexity of maintaining "System of Systems" CPSs. It involves several stages: identification, prioritization, risk management, planning, scheduling, execution, control, and improvement, supported by systems engineering techniques and agile/concurrent project management.
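    The iteration's stage sequence can be made concrete with a small, purely illustrative sketch; the stage names come from the list above, but the pipeline itself and the asset identifiers are invented placeholders, not the document's formal model.

```python
# Purely illustrative: one MMM iteration as an ordered pipeline over a
# portfolio of CPS assets. Stage behavior is stubbed out; only the
# stage sequence comes from the text.
MMM_STAGES = ("identification", "prioritization", "risk management",
              "planning", "scheduling", "execution", "control", "improvement")

def run_mmm_iteration(assets):
    state = {"assets": list(assets), "log": []}
    for stage in MMM_STAGES:
        # a real implementation would enrich `state` at each stage
        state["log"].append(f"{stage}: {len(state['assets'])} assets processed")
    return state

result = run_mmm_iteration(["crossing-gate-cps", "signal-controller-cps"])
print("\n".join(result["log"]))
```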

    Deferred Action: Theoretical model of process architecture design for emergent business processes

    Get PDF
    E-business modelling and e-business systems development assume fixed company resources, structures, and business processes. Empirical and theoretical evidence suggests that company resources and structures are emergent rather than fixed. Planning business activity in emergent contexts requires flexible e-business models based on better management theories and models. This paper builds and proposes a theoretical model of e-business systems capable of catering for emergent factors that affect business processes. Drawing on the development of theories of the ‘action and design’ class, the Theory of Deferred Action is invoked as the base theory for the theoretical model. A theoretical model of flexible process architecture is presented by identifying its core components and their relationships, and then illustrated with exemplar flexible process architectures capable of responding to emergent factors. Managerial implications of the model are considered, and the model's generic applicability is discussed.
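    Read as a design pattern, deferred action means leaving some process steps unbound at design time and resolving them at run time from emergent context. A hedged sketch of that idea follows; all class and step names are invented for illustration, and this is not the paper's formal model.

```python
# Illustrative deferred-action pattern: some steps are fixed at design
# time, others are left unbound and supplied at run time. All names
# are hypothetical.
from typing import Callable, Dict, List, Optional, Tuple

class FlexibleProcess:
    def __init__(self) -> None:
        self.steps: List[Tuple[str, Optional[Callable[[], None]]]] = []

    def planned(self, name: str, action: Callable[[], None]) -> None:
        self.steps.append((name, action))   # bound at design time

    def deferred(self, name: str) -> None:
        self.steps.append((name, None))     # bound later, from context

    def run(self, context: Dict[str, Callable[[], None]]) -> None:
        for name, action in self.steps:
            (action or context[name])()     # late binding for deferred steps

proc = FlexibleProcess()
proc.planned("receive_order", lambda: print("order received"))
proc.deferred("fulfil_order")               # emergent: decided per order
proc.run({"fulfil_order": lambda: print("drop-ship from partner")})
```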