
    Fatigue reliability of ship structures

    Today we are sitting on a huge wealth of structural reliability theory, but its application in ship design and construction lags far behind. Researchers and practitioners face the daunting task of dovetailing these theoretical achievements into the established processes of the industry. This research aims to create a computational framework to facilitate fatigue reliability analysis of ship structures. Modelling, transformation and optimization, the three key elements underlying the success of computational mechanics, are adopted as the basic methodology throughout the research, and the whole work is presented in a way that is most suitable for software development. The foundation of the framework consists of reliability methods at the component level. Viewing second-moment reliability theory from a minimum-distance point of view, the author derives a generic set of formulations that incorporates all major first- and second-order reliability methods (FORM, SORM). Practical ways to treat correlation and non-Gaussian variables are discussed in detail. Monte Carlo simulation (MCS) also accounts for a significant part of the research, with emphasis on variance reduction techniques in a proposed Markov chain kernel method. Existing response surface methods (RSM) are reviewed and improved, with much weight given to sampling techniques and determination of the quadratic form. The time-variant problem is touched upon and methods to convert it into nested reliability problems are discussed. In the upper layer of the framework, common fatigue damage models are compared. Random process simulation and rain-flow counting are used to study the effect of wide-banded non-Gaussian processes. At the centre of this layer is spectral fatigue analysis based on the S-N curve and first-principles stress and hydrodynamic analysis. Pseudo-excitation is introduced to obtain a linear equivalent stress RAO in the non-linear ship-wave system. Finally, the response surface method is applied to this model to calculate the probability of failure and design sensitivity in case studies of a double-hull oil tanker and a bulk carrier.
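
    The minimum-distance view mentioned above can be illustrated with a small numerical sketch: in standard normal space the reliability index beta is the distance from the origin to the nearest point on the limit-state surface g(u) = 0, and FORM approximates the failure probability as Pf ≈ Φ(-beta). The sketch below assumes a made-up linear limit state and uses plain Monte Carlo rather than the Markov chain kernel method proposed in the thesis; it illustrates the idea only and is not the author's implementation.

        import numpy as np
        from scipy.optimize import minimize
        from scipy.stats import norm

        def g(u):
            # Hypothetical linear limit state in standard normal space
            # (illustrative only; FORM is exact for a linear g).
            return 3.0 - u[0] - 0.5 * u[1]

        # beta = min ||u|| subject to g(u) = 0 -- the minimum-distance view
        res = minimize(lambda u: np.linalg.norm(u), x0=[1.0, 1.0],
                       constraints=[{"type": "eq", "fun": g}], method="SLSQP")
        beta = np.linalg.norm(res.x)
        pf_form = norm.cdf(-beta)

        # Crude Monte Carlo check (no variance reduction)
        samples = np.random.default_rng(0).standard_normal((200_000, 2))
        pf_mcs = np.mean(g(samples.T) < 0.0)

        print(f"beta = {beta:.3f}  Pf(FORM) = {pf_form:.2e}  Pf(MCS) = {pf_mcs:.2e}")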

    Reliability demonstration of a multi-component Weibull system under zero-failure assumption.

    This dissertation focuses on finding lower confidence limits for the reliability of systems consisting of Weibull components when reliability demonstration testing (RDT) is conducted with zero failures. The usual methods for estimating the parameters of the underlying reliability functions, such as the maximum likelihood estimator (MLE) or the mean squares estimator (MSE), cannot be applied if the test data contain no failures. For single items there exists a methodology to calculate the lower confidence limit (LCL) of reliability at a given confidence level, but there is no comparable method for systems. The dissertation provides a literature review on specific topics within the wide area of reliability engineering. Based on this and additional research work, a first theorem for the LCL of the reliability of systems with Weibull components is formulated. It can be applied if testing is conducted with zero observed failures. This theorem is unique in that it allows for different Weibull shape parameters for the components in the system. The model can also be applied if each component has been exposed to a different test duration, which can result from accelerated life testing (ALT) with test procedures that have different acceleration factors for the various failure modes or components, respectively. A second theorem for the Bx-lifetime, derived from the first theorem, has been formulated as well. The first theorem on the LCL of system reliability is first proven for systems with two components only; the proof is then extended to the general case of n components, with no limitation on the number of components n. The proof of the second theorem on the Bx-lifetime is based on the first proof and utilizes the relation between Bx and reliability. The proven theorem is integrated into a model to analyze the sensitivity of the estimation of the Weibull shape parameter β. This model is also applicable if the Weibull shape parameter is subject to either total uncertainty or uncertainty within a defined range. The proven theorems can be utilized as the core of various models to optimize RDT plans so that the validation targets are achieved most efficiently. The optimization can be conducted with respect to reliability, Bx-lifetime or validation cost. The respective optimization models are mixed-integer and highly non-linear and therefore very difficult to solve; within this research work the software package LINGO™ was utilized to solve them. A proposal is included for how to implement the optimization models for RDT testing into the reliability process in order to iteratively optimize the RDT program based on occurred failures or changing boundary conditions and premises. The dissertation closes with the presentation of a methodology for considering information about customer usage for certain segments, such as market share, annual mileage or component-specific stress level for each segment. This methodology can be combined with the optimization models for RDT plans.
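
    For context, the single-item zero-failure methodology that the dissertation generalizes can be written down in a few lines: with n identical units each tested for a duration T without failure and a known Weibull shape parameter, the classical success-run argument gives a lower confidence limit on reliability, which is then rescaled in time via the Weibull form. The sketch below shows only this classical single-item bound with made-up numbers; the dissertation's system-level theorems are not reproduced here.

        def weibull_zero_failure_lcl(n, T, beta, t, C=0.90):
            """Lower confidence limit on reliability at mission time t.

            Classical single-item result: R_L(T) = (1 - C)**(1/n) from the
            zero-failure binomial argument, rescaled in time through the
            Weibull relation ln R(t) / ln R(T) = (t / T)**beta.
            """
            return (1.0 - C) ** ((t / T) ** beta / n)

        # Illustrative numbers: 10 units, 1,000 h of testing each, no failures,
        # assumed shape parameter beta = 1.5, 90 % confidence.
        print(weibull_zero_failure_lcl(n=10, T=1000.0, beta=1.5, t=500.0, C=0.90))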

    A controlled experiment for the empirical evaluation of safety analysis techniques for safety-critical software

    Context: Today's safety-critical systems are increasingly reliant on software, which has become responsible for most of the critical functions of these systems. Many different safety analysis techniques have been developed to identify system hazards; FTA and FMEA are the most commonly used by safety analysts. Recently, STPA has been proposed with the goal of coping better with complex systems, including software. Objective: This research aimed at comparing these three safety analysis techniques quantitatively with regard to their effectiveness, applicability, understandability, ease of use and efficiency in identifying software safety requirements at the system level. Method: We conducted a controlled experiment with 21 master and bachelor students applying the three techniques to three safety-critical systems: a train door controller, an anti-lock braking system and a traffic collision avoidance system. Results: The results showed no statistically significant difference between the techniques in terms of applicability, understandability and ease of use, but a significant difference in terms of effectiveness and efficiency. Conclusion: We conclude that STPA seems to be an effective method to identify software safety requirements at the system level. In particular, STPA identifies more distinct software safety requirements than the traditional techniques FTA and FMEA, but takes more time to carry out for safety analysts with little or no prior experience. Comment: 10 pages, 1 figure. In Proceedings of the 19th International Conference on Evaluation and Assessment in Software Engineering (EASE '15), ACM, 2015.
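
    The abstract does not state which statistical test was applied, so the following sketch is only a generic illustration of how per-subject effectiveness scores for the three techniques could be compared; the scores and the choice of a non-parametric Kruskal-Wallis test are assumptions, not data from the study.

        from scipy.stats import kruskal

        # Hypothetical effectiveness scores (fraction of reference safety
        # requirements identified) for 21 participants, grouped by technique.
        stpa_scores = [0.71, 0.65, 0.80, 0.68, 0.74, 0.69, 0.77]
        fta_scores  = [0.52, 0.49, 0.61, 0.55, 0.47, 0.58, 0.50]
        fmea_scores = [0.48, 0.55, 0.51, 0.44, 0.57, 0.46, 0.53]

        h_stat, p_value = kruskal(stpa_scores, fta_scores, fmea_scores)
        print(f"H = {h_stat:.2f}, p = {p_value:.4f}")  # small p -> significant difference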

    Assessing the reliability of adaptive power system protection schemes

    Adaptive power system protection can be used to improve the performance of existing protection schemes under certain network conditions. However, its deployment in the field is impeded by its perceived inferior reliability compared with existing protection arrangements. Moreover, its validation can be problematic due to the perceived high likelihood of failure modes or incorrect setting selection occurring under variable network conditions. Reliability (including risk assessment) is one of the decisive measures that can be used in the process of verifying adaptive protection scheme performance. This paper proposes a generic methodology for assessing the reliability of adaptive protection. The method involves identifying the initiating events and scenarios that lead to protection failures and quantifying the probability of occurrence of each failure. A numerical example of the methodology for an adaptive distance protection scheme is provided.
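
    As a rough illustration of the kind of quantification described above, the probability of an adaptive protection failure can be built up from the probabilities of the identified initiating events and the conditional probability that each event defeats the scheme. The events, figures and independence assumption in the sketch below are invented for illustration and are not taken from the paper.

        # (P(initiating event), P(protection fails | event)) -- illustrative only
        scenarios = [
            ("wrong setting group selected",        1e-3, 0.20),
            ("communications link unavailable",     5e-4, 0.50),
            ("measurement or relay hardware fault", 2e-4, 1.00),
        ]

        # Probability that at least one scenario leads to a protection failure,
        # assuming the scenarios occur independently.
        p_ok = 1.0
        for name, p_event, p_fail_given_event in scenarios:
            p_ok *= 1.0 - p_event * p_fail_given_event

        print(f"P(adaptive protection failure) ~ {1.0 - p_ok:.2e}")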

    Introducing the STAMP method in road tunnel safety assessment

    After the severe accidents in European road tunnels over the past decade, many risk assessment methods have been proposed worldwide, most of them based on Quantitative Risk Assessment (QRA). Although QRAs are helpful in addressing the physical aspects and facilities of tunnels, current approaches in the road tunnel field are limited in their ability to model organizational aspects, software behaviour and the adaptation of the tunnel system over time. This paper reviews these limitations and highlights the need to enhance the safety assessment process of these critical infrastructures with a complementary approach that links the organizational factors to the operational and technical issues, analyses software behaviour and models the dynamics of the tunnel system. To achieve this objective, the paper examines the scope for introducing a safety assessment method that is based on the systems-thinking paradigm and draws upon the STAMP model. The proposed method is demonstrated through a case study of a tunnel ventilation system, and the results show that it has the potential to identify scenarios that encompass both the technical system and the organizational structure. However, since the method does not provide quantitative estimations of risk, it is recommended as a complementary approach to traditional risk assessments rather than as an alternative.

    Framework for continuous improvement of production processes

    This research introduces a new approach to using the Six Sigma DMAIC (Define, Measure, Analyse, Improve, Control) methodology. The approach integrates various tools and methods into a single framework consisting of five steps. In the Define step, problems and the main Key Performance Indicators (KPIs) are identified. In the Measure step, the modified Failure Classifier (FC), based on DOE-NE-STD-1004-92, is applied, which makes it possible to specify the types of failures for each operation in the production process. Failure Mode and Effect Analysis (FMEA) is also used to measure the weight of failures by calculating the Risk Priority Number (RPN) value. To indicate the quality level of the process/product, the Process/Product Sigma Performance Level (PSPL) is calculated from the FMEA results. Using the RPN values from the FMEA, the variability of the process by failures, operations and work centres is observed. In addition, the costs of the components are calculated, which makes it possible to measure the impact of failures on the final product cost. A new method of analysis is introduced in which the various charts created in the Measure step are compared. The Analyse step facilitates the subsequent Improve and Control steps, where appropriate changes in the manufacturing process are implemented and sustained. The objective of the new framework is to drive continuous improvement of production processes in a way that enables engineers to discover the critical problems that have a financial impact on the final product. The framework provides new ways of monitoring and eliminating failures for the continuous improvement of production processes, by focusing on the KPIs important for business success. In this paper, the background and key concepts of Six Sigma are described and the proposed Six Sigma DMAIC framework is explained. The implementation of the framework is verified by a computational experiment, followed by a conclusion section.
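
    The Measure-step arithmetic referred to above is standard FMEA and Six Sigma bookkeeping and can be sketched in a few lines; the paper's exact PSPL formula is not reproduced, and the ratings and defect counts below are illustrative assumptions only.

        from scipy.stats import norm

        def rpn(severity, occurrence, detection):
            """Risk Priority Number = S x O x D, each typically rated 1-10."""
            return severity * occurrence * detection

        def sigma_level(defects, opportunities, shift=1.5):
            """Long-term process sigma level from a defect rate,
            including the customary 1.5-sigma shift."""
            dpmo = 1e6 * defects / opportunities
            return norm.ppf(1.0 - dpmo / 1e6) + shift

        # Illustrative failure modes: (name, severity, occurrence, detection)
        for name, s, o, d in [("mis-drilled hole", 7, 4, 3), ("wrong torque", 8, 2, 5)]:
            print(name, "RPN =", rpn(s, o, d))

        print("Sigma level ~", round(sigma_level(defects=230, opportunities=100_000), 2))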