
    Reliability Estimation Model for Software Components Using CEP

    This paper presents a graphical complexity measure based approach, with an illustration, for estimating the reliability of a software component. It also elucidates how graph-theory concepts are applied in the field of software programming. The control graphs of several actual software components are described, and the correlation between intuitive complexity and graph-theoretic complexity is illustrated. Several properties of graph-theoretic complexity are presented which show that software component complexity depends only on the decision structure. A symbolic reliability model for component-based software systems, derived from the execution paths of software components connected in series, parallel, or mixed network configurations, is presented, together with a crisp account of the factors that influence computation of the overall reliability of component-based software systems. In this paper, a reliability estimation model for software components using Component Execution Paths (CEP) based on graph theory is elucidated.
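
    The series/parallel/mixed combination rules mentioned in the abstract are standard reliability identities. As a minimal sketch (not the paper's CEP model, whose details are not reproduced here), the following Python assumes statistically independent failures and known per-component reliabilities:

```python
def series_reliability(reliabilities):
    """Series: every component must work, so reliabilities multiply."""
    result = 1.0
    for r in reliabilities:
        result *= r
    return result

def parallel_reliability(reliabilities):
    """Parallel: the system fails only if every component fails."""
    prob_all_fail = 1.0
    for r in reliabilities:
        prob_all_fail *= 1.0 - r
    return 1.0 - prob_all_fail

# Mixed configuration: two redundant components in parallel,
# in series with a third component.
block = parallel_reliability([0.90, 0.90])   # 1 - 0.1 * 0.1 = 0.99
system = series_reliability([block, 0.95])   # 0.99 * 0.95  = 0.9405
print(f"system reliability = {system:.4f}")
```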

    Uncertainty Theory Based Reliability-Centric Cyber-Physical System Design

    Cyber-physical systems (CPSs) are built from, and depend upon, the seamless integration of software and hardware components. The most important challenge in CPS design and verification is to make CPSs reliable under a variety of uncertainties, i.e., unanticipated and rapidly evolving environments and disturbances. The cost, delay, and reliability of the designed CPS depend heavily on the software-hardware partitioning chosen during design. The key challenge in partitioning CPSs is that reliability is difficult to formalize in the same way as the uncertain cost and time delay. In this paper, we propose a new CPS design paradigm that assures reliability while coping with uncertainty. Specifically, we develop an uncertain programming model for partitioning, based on uncertainty theory, to support the assured reliability. The uncertain cost and delay time of the components to be implemented are modeled by uncertain variables with uncertainty distributions, and the reliability characterization is derived recursively. We convert the uncertain programming model and customize an improved heuristic to solve the converted model. Experimental results on benchmarks and random graphs show that the uncertain method produces designs with higher reliability. In addition, to demonstrate the effectiveness of our model in coping with uncertainty at the design stage, we apply the uncertain framework and existing deterministic models to the design of a subsystem used in real-world subway control. The system implemented with the uncertain model performs better than the results of the deterministic models. The proposed design paradigm has the potential to be generalized to the design of CPSs for greater assurance of safety and security under a variety of uncertainties.
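
    The paper's model is built on uncertainty theory (uncertain variables with uncertainty distributions), which is distinct from probability theory. Purely to show the shape of a partitioning-under-uncertainty check, the sketch below substitutes plain Monte Carlo sampling over triangular cost ranges; every component name, range, and budget is invented:

```python
import random

# Hypothetical (low, mode, high) cost ranges for implementing each
# component in software vs. hardware; all numbers are invented.
COST = {
    "sensor_fusion": {"sw": (3.0, 4.0, 6.0), "hw": (8.0, 9.0, 12.0)},
    "controller":    {"sw": (2.0, 3.0, 5.0), "hw": (5.0, 6.0, 8.0)},
}

def chance_within_budget(assignment, budget, trials=10_000):
    """Estimate the chance a SW/HW partition stays within a cost budget."""
    hits = 0
    for _ in range(trials):
        total = 0.0
        for component, side in assignment.items():
            low, mode, high = COST[component][side]
            total += random.triangular(low, high, mode)
        hits += total <= budget
    return hits / trials

print(chance_within_budget({"sensor_fusion": "sw", "controller": "hw"},
                           budget=12.0))
```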

    Improving the Operational Reliability Model of the “Nikola Tesla-Block A” Thermal Power Plant System by Applying an Integrated Maintenance Model

    Evaluating the reliability status of complex technical systems is of great importance for their uninterrupted operation at full capacity with a preventive maintenance plan in place. The limited research on the subject indicates a need to improve models of reliability simulation. The goal of this paper is to outline an improved operational reliability model of a thermal power plant, using the power plant block “TENT A” as an example. The model rests on the failure interaction of the system components and on probability theory: the Weibull distribution, Monte Carlo simulation, and established mathematical models of component failure interaction, implemented with new software solutions. The simulation results show the direction that the development of preventive activities should take in the case of failure interaction, which could minimize downtime in future power plant operations.
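
    A minimal sketch of the Weibull/Monte Carlo ingredient named above (not the authors' model, which also captures failure interaction between components): sample Weibull failure times for two components and estimate the probability that the system survives a mission time. Shape and scale parameters, and the series assumption, are invented for illustration:

```python
import random

def survival_probability(t_mission, trials=100_000):
    """Monte Carlo estimate that both components outlive t_mission (hours)."""
    survived = 0
    for _ in range(trials):
        # random.weibullvariate(scale, shape); parameters are assumed.
        t_a = random.weibullvariate(9_000.0, 1.8)
        t_b = random.weibullvariate(12_000.0, 2.2)
        # Series assumption: the system fails at the first component failure.
        survived += min(t_a, t_b) > t_mission
    return survived / trials

print(f"R(6000 h) ~= {survival_probability(6_000.0):.3f}")
```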

    Integrated Application of Active Controls (IAAC) technology to an advanced subsonic transport project: Current and advanced ACT control system definition study, volume 1

    An active controls technology (ACT) system architecture was selected based on current-technology system elements, and optimal control theory was evaluated for use in analyzing and synthesizing ACT multiple control laws. The selected system employs three redundant computers to implement all of the ACT functions, four redundant smaller computers to implement the crucial pitch-augmented stability function, and a separate maintenance and display computer. The reliability objective of a probability of crucial function failure of less than 1 × 10^-9 per 1-hour flight can be met with current-technology system components, provided the software is assumed fault-free and coverage approaching 1.0 can be provided. The optimal control theory approach to ACT control law synthesis yielded comparable control law performance much more systematically and directly than the classical s-domain approach. The ACT control law performance, although somewhat degraded by the inclusion of representative nonlinearities, remained quite effective. Certain high-frequency gust-load alleviation functions may require increased surface rate capability.
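
    The 1 × 10^-9 target is the classic motivation for the redundant-computer arrangement described above. As a back-of-the-envelope illustration (not the study's failure model, which also accounts for coverage), this sketch computes the failure probability of a k-out-of-n arrangement of independent, identical channels; the per-channel failure probability is assumed:

```python
from math import comb

def k_of_n_failure(n, k, p):
    """P(system failure): fewer than k of n channels survive, i.e. more
    than n - k channels fail, with independent failure probability p."""
    return sum(comb(n, m) * p**m * (1 - p)**(n - m)
               for m in range(n - k + 1, n + 1))

p = 1e-4  # assumed per-channel failure probability per flight hour
print(f"single channel: {p:.1e}")
print(f"2-of-3 voting : {k_of_n_failure(3, 2, p):.1e}")  # ~ 3.0e-08
print(f"2-of-4 voting : {k_of_n_failure(4, 2, p):.1e}")  # ~ 4.0e-12
```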

    Estimating the reliability of composite scores

    In situations where multiple tests are administered (such as GCSE subjects), scores from individual tests are frequently combined to produce a composite score. As part of the Ofqual reliability programme, this study, through a review of the literature, attempts to: examine the different approaches employed to form composite scores from component or unit scores; investigate the implications of these approaches for the psychometric properties, particularly the reliability, of the composite scores; and identify procedures commonly used to estimate the reliability of composite scores.
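
    One widely cited result in this literature, shown here as a worked illustration rather than the procedure the review recommends: the reliability of a weighted composite can be computed from the component weights, variances, reliabilities, and the covariance matrix of the component scores. All numbers below are invented:

```python
import numpy as np

w   = np.array([1.0, 1.0, 2.0])        # component weights, assumed
var = np.array([25.0, 16.0, 36.0])     # component score variances
rel = np.array([0.85, 0.80, 0.90])     # component reliabilities
cov = np.array([[25.0, 10.0, 12.0],    # covariance matrix of components
                [10.0, 16.0,  9.0],
                [12.0,  9.0, 36.0]])

composite_var = w @ cov @ w                  # variance of the weighted sum
error_var = np.sum(w**2 * var * (1 - rel))   # summed weighted error variance
print(f"composite reliability = {1 - error_var / composite_var:.3f}")
```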

    Towards the Formal Reliability Analysis of Oil and Gas Pipelines

    It is customary to assess the reliability of underground oil and gas pipelines in the presence of excessive loading and corrosion effects to ensure leak-free transport of hazardous materials. The main idea behind this reliability analysis is to model the given pipeline system as a Reliability Block Diagram (RBD) of segments such that the reliability of an individual pipeline segment can be represented by a random variable. Traditionally, computer simulation is used to perform this reliability analysis, but it provides approximate results and requires an enormous amount of CPU time to attain reasonable estimates. Due to its approximate nature, simulation is not well suited to analyzing safety-critical systems like oil and gas pipelines, where even minor analysis flaws may result in catastrophic consequences. As an accurate alternative, we propose to use a higher-order-logic theorem prover (HOL) for the reliability analysis of pipelines. As a first step towards this idea, this paper provides a higher-order-logic formalization of reliability and the series RBD using the HOL theorem prover. For illustration, we present the formal analysis of a simple pipeline that can be modeled as a series RBD of segments with exponentially distributed failure times.
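
    The paper establishes this result formally in HOL; purely as a numeric illustration of the closed form involved, a series RBD of segments with exponentially distributed failure times has reliability R(t) = exp(-(λ1 + ... + λn)·t), since exponential failure rates simply add in series. The segment rates below are assumed:

```python
from math import exp

def series_rbd_reliability(failure_rates, t):
    """Series RBD with exponential lifetimes: the rates add."""
    return exp(-sum(failure_rates) * t)

segment_rates = [1e-5, 2e-5, 1.5e-5]  # failures per hour, assumed
print(f"R(10000 h) = {series_rbd_reliability(segment_rates, 10_000.0):.4f}")
```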

    A compositional method for reliability analysis of workflows affected by multiple failure modes

    We focus on reliability analysis for systems designed as workflow-based compositions of components. Components are characterized by their failure profiles, which take into account possible multiple failure modes. A compositional calculus is provided to evaluate the failure profile of a composite system, given the failure profiles of its components. The calculus is described as a syntax-driven procedure that synthesizes a workflow's failure profile. The method is intended as a design-time aid that can help software engineers reason about system reliability in the early stages of development. A simple case study is presented to illustrate the proposed approach.
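
    A hypothetical sketch of the flavor of such a calculus (the paper's actual syntax-driven rules cover more workflow constructs than this): compose two steps in sequence, where each step has a success probability and a distribution over failure modes, and the second step runs only if the first succeeds:

```python
def seq_compose(profile_a, profile_b):
    """Each profile maps 'ok' and failure-mode names to probabilities
    summing to 1; returns the profile of running a then b."""
    out = {"ok": profile_a["ok"] * profile_b["ok"]}
    for mode, p in profile_a.items():
        if mode != "ok":  # step a fails outright in this mode
            out[mode] = out.get(mode, 0.0) + p
    for mode, p in profile_b.items():
        if mode != "ok":  # step b fails, but only after a succeeded
            out[mode] = out.get(mode, 0.0) + profile_a["ok"] * p
    return out

step1 = {"ok": 0.95, "timeout": 0.03, "crash": 0.02}
step2 = {"ok": 0.98, "timeout": 0.02}
print(seq_compose(step1, step2))
# {'ok': 0.931, 'timeout': 0.049, 'crash': 0.02}
```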

    Quantitative Verification: Formal Guarantees for Timeliness, Reliability and Performance

    Computerised systems appear in almost all aspects of our daily lives, often in safety-critical scenarios such as embedded control systems in cars and aircraft or medical devices such as pacemakers and sensors. We are thus increasingly reliant on these systems working correctly, despite their often operating in unpredictable or unreliable environments. Designers of such devices need ways to guarantee that they will operate in a reliable and efficient manner. Quantitative verification is a technique for analysing quantitative aspects of a system's design, such as timeliness, reliability or performance. It applies formal methods, based on a rigorous analysis of a mathematical model of the system, to automatically prove precisely specified properties, e.g. "the airbag will always deploy within 20 milliseconds after a crash" or "the probability of both sensors failing simultaneously is less than 0.001". The ability to formally guarantee quantitative properties of this kind is beneficial across a wide range of application domains. For example, in safety-critical systems, it may be essential to establish credible bounds on the probability with which certain failures or combinations of failures can occur. In embedded control systems, it is often important to comply with strict constraints on timing or resources. More generally, being able to derive guarantees on precisely specified levels of performance or efficiency is a valuable tool in the design of, for example, wireless networking protocols, robotic systems or power management algorithms, to name but a few. This report gives a short introduction to quantitative verification, focusing in particular on a widely used technique called model checking, and on its generalisation to the analysis of quantitative aspects of a system such as timing, probabilistic behaviour or resource usage. The intended audience is industrial designers and developers of systems such as those highlighted above who could benefit from the application of quantitative verification but lack expertise in formal verification or modelling.
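
    A toy illustration of the kind of property quoted above (real probabilistic model checkers such as PRISM or Storm compute these exhaustively from a formal model): the probability of reaching a "both sensors failed" state within a bounded number of steps in a small, invented discrete-time Markov chain, which one would then compare against a required threshold such as 0.001:

```python
import numpy as np

# States: 0 = both sensors ok, 1 = one failed, 2 = both failed (absorbing).
# All transition probabilities are invented for illustration.
P = np.array([[0.9990, 0.0009, 0.0001],
              [0.0000, 0.9950, 0.0050],
              [0.0000, 0.0000, 1.0000]])

k = 100  # number of time steps in the mission, assumed
# Since state 2 is absorbing, (P^k)[0, 2] is the probability of having
# reached it within k steps when starting from state 0.
prob = np.linalg.matrix_power(P, k)[0, 2]
print(f"P(both sensors failed within {k} steps) = {prob:.5f}")
```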