
    A Quality Systems Economic-Risk Design Theoretical Framework

    Quality systems, including control chart theory and sampling plans, have become essential tools for developing business processes. Since 1928, research has been conducted on economic-risk designs for specific types of control charts or sampling plans. However, no theoretical or applied research has attempted to combine these related theories into a synthesized theoretical framework of quality systems economic-risk design. This research proposes to develop such a theoretical framework through a qualitative research synthesis of economic-risk design models for sampling plans and control charts. The framework will be useful in guiding future research into economic-risk quality systems design theory and application.
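    As a rough illustration of the risk quantities such an economic-risk design balances (not taken from the proposal above), the sketch below computes the in-control and out-of-control average run lengths (ARL) of a Shewhart X-bar chart with k-sigma limits; an economic design would trade these run lengths off against sampling and failure costs. The chart parameters, the function name, and the use of scipy are assumptions made purely for illustration.

```python
# Minimal sketch (not from the paper): the risk quantities behind the
# economic-risk design of a Shewhart X-bar chart with k-sigma limits.
from scipy.stats import norm

def xbar_chart_arl(k=3.0, shift=0.0, n=5):
    """Average run length of an X-bar chart.

    k     : control-limit width in sigma units
    shift : process mean shift in units of the process sigma
    n     : subgroup size
    """
    # Probability that a single subgroup mean falls outside the limits
    # when the process mean has shifted by `shift` * sigma.
    delta = shift * n ** 0.5
    p_signal = norm.cdf(-k - delta) + 1.0 - norm.cdf(k - delta)
    return 1.0 / p_signal

if __name__ == "__main__":
    print("in-control ARL:", round(xbar_chart_arl(k=3.0, shift=0.0), 1))          # ~370
    print("ARL after a 1-sigma shift:", round(xbar_chart_arl(k=3.0, shift=1.0), 1))
```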

    Expert Elicitation for Reliable System Design

    This paper reviews the role of expert judgement in supporting reliability assessments within the systems engineering design process. Generic design processes are described to give context, and the nature of the reliability assessments required in the different systems engineering phases is discussed. It is argued that, as far as meeting reliability requirements is concerned, the whole design process is more akin to a statistical control process than to a straightforward statistical problem of assessing an unknown distribution. This leads to features of the expert judgement problem in the design context that are substantially different from those seen, for example, in risk assessment. In particular, the role of experts in problem structuring and in developing failure mitigation options is much more prominent, and there is a need to take into account the reliability potential of future mitigation measures downstream in the system life cycle. An overview is given of the stakeholders typically involved in large-scale systems engineering design projects, and this is used to argue the need for methods that expose potential judgemental biases in order to generate analyses that can be said to provide rational consensus about uncertainties. Finally, a number of key points are developed with the aim of moving toward a framework that provides a holistic method for tracking reliability assessment through the design process. Comment: this paper is commented on in [arXiv:0708.0285], [arXiv:0708.0287] and [arXiv:0708.0288]; rejoinder in [arXiv:0708.0293]. Published at http://dx.doi.org/10.1214/088342306000000510 in Statistical Science (http://www.imstat.org/sts/) by the Institute of Mathematical Statistics (http://www.imstat.org).

    Practical reliability. Volume 3 - Testing

    Application of testing to a hardware program.

    Reliability and Maintainability Sampling Procedures for Life Cycle Cost Evaluation

    The intent of this thesis is to investigate, develop, and apply techniques to determine the reliability and maintainability of populations of items. These techniques are to be used in determining the total lifetime operating costs of the populations so that the items with the lowest lifetime costs can be bought. To do this, the author explores current techniques for determining compliance with a minimum required Mean Time Between Failure (MTBF) in what is referred to as Phase I testing. After the requirements of Phase I testing have been met, testing may be continued at the option of the contractor, and confidence limits constructed about the Bid MTBF to determine compliance with it. Methods by which incentives or penalties may be awarded to or assessed against the contractor as a result of Phase II testing are included. The author next investigates techniques that can be used to determine the maintainability parameters and the accuracy of those parameters. Finally, since the reliability techniques explored are all based on the exponential distribution, techniques are included to test whether the failure rate is exponential; if it is not, discussion is included on how to handle this situation. (85 pages)
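    A minimal sketch of the kind of confidence-limit calculation the thesis refers to, assuming the standard chi-squared lower bound on MTBF for an exponential (constant failure rate) model and a time-truncated test; the function name and the numbers in the example are illustrative and not taken from the thesis.

```python
# Hedged sketch: standard chi-squared lower confidence limit on MTBF under an
# exponential failure model, for a time-truncated test with total test time
# `total_time` and `failures` observed failures.
from scipy.stats import chi2

def mtbf_lower_limit(total_time, failures, confidence=0.90):
    """One-sided lower confidence limit on MTBF (exponential assumption)."""
    dof = 2 * failures + 2              # time-truncated test
    return 2.0 * total_time / chi2.ppf(confidence, dof)

if __name__ == "__main__":
    # e.g. 10,000 operating hours with 4 failures: the point estimate is
    # 2,500 h, and the 90% lower confidence limit is noticeably smaller.
    print(round(mtbf_lower_limit(10_000, 4, confidence=0.90), 1))
```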

    Reliability demonstration of a multi-component Weibull system under zero-failure assumption.

    This dissertation is focused on finding lower confidence limits for the reliability of systems consisting of Weibull components when reliability demonstration testing (RDT) is conducted with zero failures. The usual methods for parameter estimation of the underlying reliability functions, such as the maximum likelihood estimator (MLE) or mean squares estimator (MSE), cannot be applied if the test data contain no failures. For single items there exists a methodology to calculate the lower confidence limit (LCL) of reliability for a given confidence level, but there is no comparable method for systems. This dissertation provides a literature review on specific topics within the wide area of reliability engineering. Based on this and additional research work, a first theorem for the LCL of the system reliability of systems with Weibull components is formulated. It can be applied if testing is conducted with zero observed failures. This theorem is unique in that it allows for different Weibull shape parameters for the components in the system. The model can also be applied if each component has been exposed to a different test duration, which can result from accelerated life testing (ALT) with test procedures that have different acceleration factors for the various failure modes or components. A second theorem for Bx-lifetime, derived from the first theorem, has been formulated as well. The first theorem on the LCL of system reliability is first proven for systems with two components only; the proof is then extended to the general case of n components, with no limitation on the number of components n. The proof of the second theorem on Bx-lifetime is based on the first proof and utilizes the relation between Bx and reliability. The proven theorem is integrated into a model to analyze the sensitivity of the estimation of the Weibull shape parameter β. This model is also applicable if the Weibull parameter is subject to either total uncertainty or uncertainty within a defined range. The proven theorems can be utilized as the core of various models to optimize RDT plans so that the validation targets can be achieved most efficiently. The optimization can be conducted with respect to reliability, Bx-lifetime or validation cost. The respective optimization models are mixed-integer and highly non-linear and therefore very difficult to solve; within this research work the software package LINGO™ was utilized to solve them. A proposal is included on how to implement the optimization models for RDT into the reliability process in order to iteratively optimize the RDT program based on failures that occur or on changing boundary conditions and premises. The dissertation closes with the presentation of a methodology for considering information about customer usage for certain segments, such as market share, annual mileage or component-specific stress level for each segment. This methodology can be combined with the optimization models for RDT plans.
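    For the single-item case the abstract mentions, a commonly used zero-failure ("success run") bound can be sketched as follows, assuming the Weibull shape parameter is known. This is offered only as background on the single-item methodology, not as the dissertation's system-level theorem, and the function name and example numbers are illustrative.

```python
# Hedged sketch (single-item case only): zero-failure lower confidence limit
# on reliability, assuming a Weibull model with known shape parameter beta.
# If n units each survive a test of duration t_test with zero failures, the
# lower limit at confidence C on R(t_mission) is (1 - C)**(1 / m), where
# m = n * (t_test / t_mission)**beta is the number of equivalent missions.

def reliability_lcl_zero_failures(t_mission, t_test, n_units, beta, confidence=0.90):
    """Lower confidence limit on R(t_mission) after a zero-failure test."""
    equivalent_missions = n_units * (t_test / t_mission) ** beta
    return (1.0 - confidence) ** (1.0 / equivalent_missions)

if __name__ == "__main__":
    # 12 units each survive a 1,000 h test; mission time 500 h, beta = 2
    print(round(reliability_lcl_zero_failures(500, 1000, 12, beta=2.0), 4))
```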

    Research reports: 1991 NASA/ASEE Summer Faculty Fellowship Program

    The basic objectives of the programs, which are in the 28th year of operation nationally, are: (1) to further the professional knowledge of qualified engineering and science faculty members; (2) to stimulate an exchange of ideas between participants and NASA; (3) to enrich and refresh the research and teaching activities of the participants' institutions; and (4) to contribute to the research objectives of the NASA Centers. The faculty fellows spent 10 weeks at MSFC engaged in a research project compatible with their interests and background and worked in collaboration with a NASA/MSFC colleague. This is a compilation of their research reports for summer 1991.

    Optimal test case selection for multi-component software system

    The omnipresence of software has forced the industry to produce efficient software in a short time. These requirements can be met through code reusability and software testing. Code reusability is achieved by developing software as components/modules rather than as a single block. Software teams are growing large to meet massive requirements, and large teams can work effectively only if software is developed in a modular fashion. Software that crashes often is of little use; testing makes software more reliable, so both modularity and reliability are needed. Testing is usually carried out using test cases that target a class of software faults or a specific module, and different test cases have distinct effects on the reliability of the software system. The proposed research develops a model to determine the optimal test case selection policy for a modular software system with specific test cases and a stipulated testing time. The model describes the failure behavior of each component with a conditional NHPP (non-homogeneous Poisson process) and the interactions of the components with a CTMC (continuous-time Markov chain). The initial number of bugs and the bug detection rate follow known distributions. Dynamic programming is used as the tool for determining the optimal test case policy, and the complete model is simulated in Matlab. The Markov decision process is computationally intensive, but the implementation of the algorithm is carefully optimized to eliminate repeated calculations, saving roughly 25-40% in processing time across different variations of the problem.
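    A much-simplified, hypothetical stand-in for the kind of model described above: each component's bug detection is treated as a Goel-Okumoto NHPP with mean value function m(t) = a(1 - exp(-bt)), and a small dynamic program allocates a discrete testing-time budget across components to maximize the expected number of detections. The CTMC interaction structure and the full Markov decision process of the thesis are not reproduced here; all parameters are invented for illustration.

```python
# Hedged sketch: NHPP-based expected detections plus a small dynamic program
# for allocating a testing-time budget across software components.
import math

def expected_detections(a, b, t):
    """Expected bugs found in one component after t time units (Goel-Okumoto)."""
    return a * (1.0 - math.exp(-b * t))

def allocate_test_time(components, budget):
    """components: list of (a, b) pairs; budget: integer number of time units.
    Returns (best total expected detections, per-component time allocation)."""
    # dp[k] = (best expected detections using k time units, allocation so far)
    dp = [(0.0, [])] + [(float("-inf"), []) for _ in range(budget)]
    for (a, b) in components:
        new_dp = [(float("-inf"), []) for _ in range(budget + 1)]
        for used in range(budget + 1):
            if dp[used][0] == float("-inf"):
                continue
            for t in range(budget - used + 1):
                value = dp[used][0] + expected_detections(a, b, t)
                if value > new_dp[used + t][0]:
                    new_dp[used + t] = (value, dp[used][1] + [t])
        dp = new_dp
    return max(dp)

if __name__ == "__main__":
    # three hypothetical components: (initial bug content a, detection rate b)
    comps = [(30, 0.05), (12, 0.20), (50, 0.02)]
    best, alloc = allocate_test_time(comps, budget=20)
    print("expected bugs detected:", round(best, 2), "allocation:", alloc)
```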

    An integrated model for asset reliability, risk and production efficiency management in subsea oil and gas operations

    PhD thesis. The global demand for energy has been predicted to rise by 56% between 2010 and 2040 due to industrialization and population growth. This continuous rise in energy demand has prompted oil and gas firms to shift activities from onshore oil fields to tougher terrains such as shallow, deep, ultra-deep and arctic fields. Operations in these domains often require the deployment of unconventional subsea assets and technology. Subsea assets installed offshore are heavily exposed to marine elements and human factors, which increase the risk of failure. While many risk standards, asset integrity and reliability analysis models have been suggested by previous researchers, there is a gap in the capability of predictive reliability models to simultaneously address the impact of corrosion-inducing elements such as temperature, pressure and pH on material wear-out and failure. There is also a gap in the methodology for evaluating capital expenditure and human-factor risk elements and for using historical data to evaluate risk. This thesis aims to contribute original knowledge to help improve production assurance by developing an integrated model which addresses pump-pipe capital expenditure, asset risk and reliability in subsea systems. The key contribution of this research is the development of a practical model linking four sub-models: reliability analysis, asset capital cost, event risk severity analysis, and subsea risk management implementation. Firstly, an accelerated reliability analysis model was developed by incorporating a corrosion covariate stress into a Weibull model of OREDA data; this was applied to a subsea compression system to predict failure times. Secondly, a methodology was developed by enhancing the Hubbert oil production forecast model and using nodal analysis for asset capital cost analysis of a pump-pipe system and for optimal selection of the best option based on physical parameters such as pipeline diameter, power needs, pressure drop and fluid velocity. Thirdly, a risk evaluation method based on the mathematical determinant of historical event magnitude, frequency and influencing factors was developed for estimating the severity of risk in a system. Finally, a survey was conducted among subsea engineers, and the results, along with the previous models, were developed into an integrated assurance model for ensuring asset reliability and risk management in subsea operations. A guide is provided for subsea asset management with due consideration of both technical and operational perspectives. The operational requirements of a subsea system can be measured, analysed and improved using the mix of mathematical, computational, stochastic and logical frameworks recommended in this work.
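    One common way to fold covariates such as temperature, pressure and pH into a Weibull reliability model is an accelerated-failure-time adjustment of the scale parameter, sketched below. This is a generic formulation offered for orientation, not necessarily the covariate model developed in the thesis; the covariate names, coefficients and stress values are invented.

```python
# Hedged sketch: Weibull reliability whose characteristic life (scale) is
# shortened by covariate stresses via an accelerated-failure-time adjustment.
import math

def weibull_reliability(t, eta, beta):
    """Baseline Weibull reliability R(t) = exp(-(t/eta)^beta)."""
    return math.exp(-((t / eta) ** beta))

def covariate_adjusted_reliability(t, eta0, beta, covariates, coefficients):
    """AFT-style adjustment: eta = eta0 * exp(-sum(gamma_j * x_j)).

    covariates   : dict of hypothetical stress values (e.g. normalised
                   temperature, pressure, pH deviation)
    coefficients : dict of regression coefficients gamma_j (assumed known)
    """
    acceleration = math.exp(-sum(coefficients[k] * covariates[k] for k in covariates))
    return weibull_reliability(t, eta0 * acceleration, beta)

if __name__ == "__main__":
    stresses = {"temperature": 1.2, "pressure": 0.8, "ph_deviation": 0.5}
    gammas = {"temperature": 0.30, "pressure": 0.15, "ph_deviation": 0.25}
    print(round(covariate_adjusted_reliability(t=8760, eta0=50_000, beta=1.8,
                                               covariates=stresses,
                                               coefficients=gammas), 4))
```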

    A web-based tool to design and analyze single- and double-stage acceptance sampling plans

    Acceptance sampling plans are used to determine whether production lots can be accepted or rejected. Existing tools provide only limited functionality for the two-point design and the risk analysis of such plans. In this article, a web-based tool is presented to study single- and double-stage sampling plans. In contrast to existing solutions, the tool is an interactive applet that is freely available. Analytic properties are derived to support the development of search strategies for the design of double-stage sampling plans that are more efficient and accurate than existing routines. Several case studies are presented.
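    For orientation, the classical two-point design of a single-stage attribute plan can be computed with a direct search over (n, c), as sketched below. This follows the standard textbook formulation rather than the article's own search strategies, and the AQL/LTPD values in the example are illustrative.

```python
# Hedged sketch: two-point design of a single-stage attribute sampling plan.
# Find the smallest sample size n and acceptance number c such that lots at
# the AQL are accepted with probability >= 1 - alpha (producer's risk) and
# lots at the LTPD with probability <= beta (consumer's risk).
from scipy.stats import binom

def accept_probability(p, n, c):
    """Probability of accepting a lot with defect rate p under plan (n, c)."""
    return binom.cdf(c, n, p)

def design_single_sampling_plan(aql, ltpd, alpha=0.05, beta=0.10, n_max=5000):
    for n in range(1, n_max + 1):
        for c in range(0, n + 1):
            if accept_probability(aql, n, c) >= 1 - alpha:
                # c is the smallest acceptance number meeting the producer's
                # risk; now check the consumer's risk at the LTPD.
                if accept_probability(ltpd, n, c) <= beta:
                    return n, c
                break  # a larger c only raises the acceptance probability at the LTPD
    return None

if __name__ == "__main__":
    print(design_single_sampling_plan(aql=0.01, ltpd=0.05))
```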

    4th International Probabilistic Workshop: 12th-13th October 2006, Berlin, BAM (Federal Institute for Materials Research and Testing)

    PREFACE: Today's world is shaped by high dynamics. A multitude of processes evolves in parallel, partly connected in invisible ways. Globalisation is one such process: here one can observe exponential growth of international connections, from the level of individual people to the level of cultures. Such connections lead us to the term complexity, which is often understood as the product of the number of elements in a system and the number of connections within it. In other words, the world is becoming more complex because the connections increase. Complexity, in turn, is a term for something not fully understood, partly uncontrollable and indeterminate: exactly like a human being. Growing from a single cell, a human will later show behaviour that we can hardly predict in detail; after all, the human brain consists of 10^11 elements (cells). If these dynamic social processes lead to more complexity, we must also expect more indetermination. One can only hope that this indetermination does not affect the foundations of human existence. In the field of technology, by contrast, indetermination and uncertainty are explicitly captured and deliberately dealt with. This holds for all areas, whether natural risk management, the construction and operation of nuclear power plants, civil engineering, or shipping. And however different the fields contributing to this symposium may seem, one thing holds for all of them: people working in these fields have realised that a responsible use of technology requires the consideration of indetermination and uncertainty. Social processes have not yet reached this level. It is the wish of the organisers that in a few years not only civil engineers, mechanical engineers, mathematicians and ship builders take part in such a probabilistic symposium, but also sociologists, managers and even politicians; there is still great room for this symposium to grow. Indetermination need not be negative: it can also be seen as a chance. (From the preface.) NOTE: The full-text document consists of individual contributions with separate page numbering.