
    Reliability applied to maintenance

    The thesis covers studies conducted during 1976-79 under a Science Research Council contract to examine the uses of reliability information in maintenance decision-making in the process industries. After a discussion of the ideal data system, four practical studies of process plants are described, involving both Pareto and distribution analysis. In two of these studies the maintenance policy was changed and the effect on failure modes and frequency was observed. Hyper-exponentially distributed failure intervals were found to be common; after observation of maintenance work practices and development of theory, they were explained as being due to poor workmanship and parts. The fallacy that a constant failure rate necessarily implies the optimality of maintenance only at failure is discussed. Two models for the optimisation of inspection intervals are developed; both assume items give detectable warning of impending failure. The first is based upon a constant risk of failure between successive inspections and a Weibull base failure distribution. Results show that an inspection/on-condition maintenance regime can be cost-effective even when the failure rate is falling, and may be better than periodic renewals when the failure rate is increasing. The second model is first-order Markov. Transition rate matrices are developed and solved to compare continuous monitoring with inspection/on-condition maintenance on a cost basis. The models incorporate the planning delay in starting maintenance after impending failure is detected. The relationships between plant output and maintenance policy, as affected by the presence of redundancy and/or storage between stages, are examined, mainly through the literature but with some original theoretical proposals. It is concluded that reliability techniques have many applications in the improvement of plant maintenance policy.
Techniques abound, but few firms are willing to take the step of faith to set up, even temporarily, the data-collection facilities required to apply them. There are over 350 references, many of which are reviewed in the text, divided into chapter-related sections. Appendices include a review of reliability engineering theory based on the author's draft for BS 5760(2); a discussion of the bath-tub curve's applicability to maintained systems; and the theory connecting hyper-exponentially distributed failures with poor maintenance practices.
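    The hyper-exponential finding above has a simple generative interpretation: if some repairs are sound and some are poor, the observed failure intervals are a mixture of exponentials with different means, and such a mixture has a coefficient of variation above the value of 1 that a single exponential gives. A minimal simulation sketch (the rates and mixing probability are illustrative assumptions, not values from the thesis):

```python
import random
import statistics

def hyperexp_sample(n, p=0.3, rate_bad=1 / 50, rate_good=1 / 1000, seed=1):
    """Draw n failure intervals from a two-phase hyper-exponential:
    with probability p the repair was poor (short mean life 1/rate_bad),
    otherwise it was sound (long mean life 1/rate_good)."""
    rng = random.Random(seed)
    return [rng.expovariate(rate_bad if rng.random() < p else rate_good)
            for _ in range(n)]

def coeff_of_variation(xs):
    return statistics.pstdev(xs) / statistics.mean(xs)

intervals = hyperexp_sample(20000)
# A pure exponential has CV = 1; mixing repair qualities pushes CV above 1,
# the signature the thesis attributes to poor workmanship and parts.
print(round(coeff_of_variation(intervals), 2))
```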

    Maintainability analysis of mining trucks with data analytics.

    The mining industry requires large budgets, and current global economic challenges force the industry to reduce its production expenses. One of the biggest expenditures is maintenance. Thanks to data mining techniques, available historical records of machines' alarms and signals can be used to predict machine failures. This is crucial because repairing machines after failure is less efficient than predictive maintenance. In this case study, the causes of failures appear to be related to the order of signals and alarms, called events, which come from trucks. The trucks ran twenty-four hours a day, seven days a week, and drivers worked twelve-hour shifts during a nine-month period. Sequential pattern mining was implemented as the data mining methodology to discover which failures might be connected to groups of events, and SQL was used to analyze the data. According to the results, several sequential patterns occur in alarms and signals before machine breakdowns. Furthermore, the results vary with the size of the shift window examined: within the last five shifts before a breakdown, a one-hundred-percent detection rate is observed, whereas within the last three shifts the detection rate falls below one hundred percent.
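    The sequential-pattern idea described above can be illustrated with a toy counter of ordered event subsequences that precede failures. This is a simplified stand-in for the SQL-based sequential pattern mining the study used; the event names, window size, and log below are hypothetical:

```python
from collections import Counter
from itertools import combinations

def patterns_before_failure(event_log, failure="FAIL", window=4, max_len=2):
    """Count ordered event subsequences seen in the `window` events
    preceding each failure (a toy form of sequential pattern mining)."""
    counts = Counter()
    for i, ev in enumerate(event_log):
        if ev != failure:
            continue
        prefix = [e for e in event_log[max(0, i - window):i] if e != failure]
        for k in range(1, max_len + 1):
            # itertools.combinations preserves input order, so these
            # tuples are genuine subsequences of the prefix
            counts.update(combinations(prefix, k))
    return counts

log = ["A", "B", "C", "FAIL", "A", "B", "D", "FAIL", "C", "A"]
top = patterns_before_failure(log)
print(top.most_common(3))
```

Patterns that recur before many failures (here, "A then B") are the candidates for predictive maintenance triggers.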

    A cost model for managing producer and consumer risk in availability demonstration testing.

    Evaluation and demonstration of system performance against specified requirements is an essential element of risk reduction during the design, development, and production phases of a product lifecycle. Typical demonstration testing focuses on reliability and maintainability without consideration of availability. One practical reason is that demonstration testing for availability cannot be performed until very late in the product lifecycle, when production-representative units become available and system integration is complete. At this point, the requirement to field the system often takes priority over demonstration of availability performance. Without proper validation testing, the system can be fielded with reduced mission readiness and increased lifecycle cost. The need exists for availability demonstration testing (ADT) with emphasis on managing risk while minimizing the cost to the user. Risk management must ensure a test strategy that adequately considers producer and consumer risk objectives. This research proposes a methodology for ADT that gives managers and decision makers an improved ability to distinguish between high- and low-availability systems. A new availability demonstration test methodology is defined that provides a useful strategy for the consumer to mitigate significant risk without excessive cost or time to field a product or capability. A surface navy electronic system case study supports the practical implementation of this methodology using no more than a simple spreadsheet tool for numerical analysis. Development of this method required three significant components which add to the existing body of knowledge. The first was a comparative performance assessment of existing ADT strategies to understand whether any preferences exist. The next was the development of an approach for ADT design that effectively considers time constraints on the test duration.
The third component was the development of a procedure for ADT design which provides awareness of risk levels in time-constrained ADT and offers an evaluation of alternatives to select the best sub-optimal test plan. Comparison of the different ADT strategies utilized a simulation model to evaluate runs specified by a five-factor, full-factorial design of experiments. Analysis of variance verified that ADT strategies differ significantly with respect to the output responses of decision quality and timeliness. Analysis revealed that the fixed-number-of-failures ADT strategy has the lowest deviation from estimated producer and consumer risk, the measure of quality. The sequential ADT strategy had an average error 3.5 times larger, and fixed-test-time strategies displayed error rates 8.5 to 12.7 times larger than the best. The fixed-test-time strategies had superior performance in timeliness, measured by average test duration; the sequential strategy took 24% longer on average, and the fixed-number-of-failures strategy took 2.5 times longer on average than the best. The research evaluated the application of a time constraint on ADT and determined that producer and consumer risk levels increase when test duration is limited from its optimal value. It also revealed that substitution of a specified time constraint formatted for a specific test strategy produced a pair of dependent relationships between risk levels and the critical test value. These relationships define alternative test plans and can be analyzed in a cost context to compare and select the low-cost alternative test plan. This result led to the specification of a support tool to enable a decision maker to understand the changes to α and β resulting from constraint of the test duration, and to make decisions based on the true risk exposure. The output of this process is a time-constrained test plan with known producer and consumer risk levels.
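    Producer and consumer risk in an availability demonstration test can be estimated by simulation: rejecting a good system is the producer's risk (α), and accepting a bad system is the consumer's risk (β). The Monte Carlo sketch below is not the dissertation's model; the MTBF/MTTR values, cycle count, and critical threshold are illustrative assumptions:

```python
import random

def simulate_observed_availability(mtbf, mttr, n_cycles, rng):
    """One fixed-number-of-failures test: run n_cycles up/down cycles with
    exponential times and return observed availability = uptime / total."""
    up = sum(rng.expovariate(1 / mtbf) for _ in range(n_cycles))
    down = sum(rng.expovariate(1 / mttr) for _ in range(n_cycles))
    return up / (up + down)

def risks(critical, n_cycles, a_good=(100, 5), a_bad=(100, 15),
          runs=4000, seed=7):
    rng = random.Random(seed)
    # Producer risk: a good system (true availability ~0.952) is rejected.
    alpha = sum(simulate_observed_availability(*a_good, n_cycles, rng) < critical
                for _ in range(runs)) / runs
    # Consumer risk: a bad system (true availability ~0.870) is accepted.
    beta = sum(simulate_observed_availability(*a_bad, n_cycles, rng) >= critical
               for _ in range(runs)) / runs
    return alpha, beta

alpha, beta = risks(critical=0.92, n_cycles=20)
print(alpha, beta)
```

Sweeping `critical` and `n_cycles` traces out the trade-off between α and β that a time constraint on test duration disturbs.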

    Product lifecycle optimization using dynamic degradation models


    On condition-based maintenance for machine components

    The goal of condition-based maintenance (CBM) is to base decisions on whether or not to perform maintenance on information collected from the machine or component of interest. A condition-based maintenance tool should be able to diagnose whether the component of interest is in a state of failure, but the ultimate goal of a CBM tool is to estimate time until failure, either as remaining useful life (RUL) or estimated time to failure (ETTF). A CBM tool should therefore have both diagnostic and prognostic features. This master's thesis was carried out at a company in the packaging industry, and the goal was to implement a CBM tool able to estimate RUL for a set of critical components, which could serve as a base for further development within the company. The selection of components to focus on was also part of the thesis. The process of implementing CBM with prognostic functionality was more difficult than expected, and the goal of estimating RUL was not met for any of the components, but the work that has been done forms a basis for further development. Thus, this thesis serves as a pre-study on developing CBM and describes what is required in order to be successful.
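    Although the thesis did not reach a working RUL estimate, a common baseline prognostic of the kind it aimed for is to fit a trend line through a condition indicator and extrapolate to a failure threshold. A minimal sketch; the vibration readings and threshold below are invented for illustration:

```python
def estimate_rul(times, condition, threshold):
    """Least-squares line through (time, condition) readings, then
    extrapolation to the failure threshold gives remaining useful life."""
    n = len(times)
    mt = sum(times) / n
    mc = sum(condition) / n
    slope = (sum((t - mt) * (c - mc) for t, c in zip(times, condition))
             / sum((t - mt) ** 2 for t in times))
    intercept = mc - slope * mt
    if slope <= 0:
        return None  # indicator not trending toward the threshold
    t_fail = (threshold - intercept) / slope
    return max(0.0, t_fail - times[-1])

# Hypothetical vibration amplitude readings drifting toward a limit of 10.
hours = [0, 100, 200, 300, 400]
vib = [2.0, 3.1, 3.9, 5.2, 6.0]
print(estimate_rul(hours, vib, threshold=10.0))
```

Real prognostics replace the straight line with degradation models and confidence bounds, but the diagnose-then-extrapolate structure is the same.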

    Quantitative methods for data driven reliability optimization of engineered systems

    Particle accelerators, such as the Large Hadron Collider at CERN, are among the largest and most complex engineered systems to date. Future generations of particle accelerators are expected to increase in size, complexity, and cost. Among the many obstacles, this introduces unprecedented reliability challenges and requires new reliability optimization approaches. With the increasing digitalization of technical infrastructures, the rate and granularity of operational data collection is rapidly growing. These data contain valuable information for system reliability optimization, which can be extracted and processed with data-science methods and algorithms. However, many existing data-driven reliability optimization methods fail to exploit these data because they make overly simplistic assumptions about system behavior, do not consider organizational contexts for cost-effectiveness, and build on specific monitoring data that are too expensive to record. To address these limitations in realistic scenarios, a tailored methodology based on CRISP-DM (CRoss-Industry Standard Process for Data Mining) is proposed for developing data-driven reliability optimization methods. For three realistic scenarios, the developed methods use the available operational data to learn interpretable or explainable failure models from which permanent and generally applicable reliability improvements can be derived. Firstly, novel explainable deep learning methods predict future alarms accurately from few logged alarm examples and support root-cause identification. Secondly, novel parametric reliability models allow expert knowledge to be included for an improved quantification of the failure behavior of a fleet of systems with heterogeneous operating conditions, and yield optimal operational strategies for novel usage scenarios.
    Thirdly, Bayesian models trained on data from a range of comparable systems predict field reliability accurately and reveal the influence of non-technical factors on reliability. An evaluation of the methods applied to the three scenarios confirms that the tailored CRISP-DM methodology advances the state of the art in data-driven reliability optimization and overcomes many existing limitations. However, the quality of the collected operational data remains crucial for the success of such approaches. Hence, adaptations of routine data collection procedures are suggested to enhance data quality and to increase the success rate of reliability optimization projects. With the developed methods and findings, future generations of particle accelerators can be constructed and operated cost-effectively, ensuring high levels of reliability despite growing system complexity.
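    One simple way parametric models can include expert knowledge, as described above, is a conjugate Bayesian update: an expert's prior belief about a failure rate is combined with observed fleet data. This is an illustrative sketch, not the thesis's actual models; the prior parameters and observed counts are assumptions:

```python
def posterior_failure_rate(prior_shape, prior_rate, n_failures, total_time):
    """Conjugate Gamma update for an exponential failure rate: the prior
    (expert knowledge) is Gamma(shape, rate); observing n_failures over
    total_time hours yields the posterior Gamma(shape + n, rate + T)."""
    shape = prior_shape + n_failures
    rate = prior_rate + total_time
    return shape / rate  # posterior mean of the failure rate (per hour)

# Expert belief: about 2 failures per 1000 h, encoded as Gamma(2, 1000).
prior_mean = 2 / 1000
# Fleet data pooled across comparable systems: 9 failures over 12,000 h.
post_mean = posterior_failure_rate(2, 1000, 9, 12000)
print(prior_mean, round(post_mean, 5))
```

The posterior mean sits between the expert's prior and the raw data rate, weighted by how much evidence each side carries.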

    Quality control and improvement of the aluminum alloy castings for the next generation of engine block cast components.

    This research focuses on the quality control and improvement of the W319 aluminum alloy engine blocks produced at the NEMAK Windsor Aluminum Plant (WAP). The present WAP Quality Control (QC) system was critically evaluated using a cause-and-effect diagram, and a novel Plant-Wide Quality Control (PWQC) system is therefore proposed. This new QC system presents novel tools for off-line as well as on-line quality control. The off-line tool uses heating curve analysis for the grading of ingot suppliers. The on-line tool utilizes Tukey control charts of the Thermal Analysis (TA) parameters for statistical process control. An Artificial Neural Network (ANN) model has also been developed for the on-line prediction and control of the Silicon Modification Level (SiML). Student's t-test analysis has shown that even small-scale variations in the Fe and Mn levels significantly affect the shrink porosity level of the 3.0L V6 engine block bulkhead. When the Fe and Mn levels are close to their upper specification limits (0.4 wt.% and 0.3 wt.%, respectively), the probability of low bulkhead shrink porosity is as high as 0.73. Elevated levels of Sn (~0.04 wt.%) and Pb (~0.03 wt.%) were found to lower the Brinell Hardness (HB) of the V6 bulkhead after the Thermal Sand Removal (TSR) and Artificial Aging (AA) processes. Therefore, Sn and Pb levels must be kept below 0.0050 wt.% and 0.02 wt.%, respectively, to satisfy the bulkhead HB requirements. The Cosworth electromagnetic pump reliability studies indicated that the life of the pump increased from 19,505 castings to 43,904 castings (a 225% increase) after the implementation of preventive maintenance. The optimum preventive maintenance period of the pump was calculated to be 43,000 castings. The solution treatment parameters (temperature and time) of the Novel Solution Treatment during Solidification (NSTS) process were optimized using ANN and the Simulated Annealing (SA) algorithm.
The optimal NSTS process (516°C and 66 minutes) would significantly reduce the present Thermal Sand Removal (TSR) time (4 hours) and would avoid the problem of incipient melting without sacrificing mechanical properties. In order to improve the cast component characteristics and to lower the alloy price, a new alloy, Al 332 (Si = 10.5 wt.% and Cu = 2 wt.%), was developed by optimizing the Si and Cu levels of 3XX Al alloys as a replacement for the W319 alloy. The predicted as-cast characteristics of the new alloy were found to satisfy the requirements of Ford engineering specification WSE-M2A-151-A2/A4. *This dissertation is a compound document (contains both a paper copy and a CD as part of the dissertation). Dept. of Industrial and Manufacturing Systems Engineering. Paper copy at Leddy Library: Theses & Major Papers - Basement, West Bldg. / Call Number: Thesis2005 .F735. Source: Dissertation Abstracts International, Volume 66-11, Section B, page 6201. Thesis (Ph.D.), University of Windsor (Canada), 2005.
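    A Tukey control chart, as used above for the on-line TA parameters, sets its control limits from the quartiles of the data rather than from the mean and standard deviation, which makes it robust to non-normal readings. A minimal sketch; the readings and the parameter name are hypothetical:

```python
def tukey_control_limits(samples, k=1.5):
    """Tukey control chart limits: fences at Q1 - k*IQR and Q3 + k*IQR,
    computed from quartiles rather than mean and sigma."""
    xs = sorted(samples)

    def quantile(q):
        # linear interpolation between order statistics
        pos = q * (len(xs) - 1)
        lo = int(pos)
        frac = pos - lo
        return xs[lo] + frac * (xs[min(lo + 1, len(xs) - 1)] - xs[lo])

    q1, q3 = quantile(0.25), quantile(0.75)
    iqr = q3 - q1
    return q1 - k * iqr, q3 + k * iqr

# Hypothetical thermal-analysis parameter readings (e.g. a temperature, °C).
readings = [562.1, 562.4, 562.0, 562.3, 561.9, 562.2, 565.0, 562.1]
lcl, ucl = tukey_control_limits(readings)
out_of_control = [x for x in readings if not lcl <= x <= ucl]
print(out_of_control)
```

The single drifting reading is flagged without the outlier itself inflating the limits, which is the chart's advantage over a mean-and-sigma chart on small samples.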

    Development of a Life-Cycle Cost Analysis Tool for Improved Maintenance and Management of Bridges

    The Moving Ahead for Progress in the 21st Century Act (MAP-21) of 2012 requires states to develop and implement a transportation asset management plan (TAMP) for their respective portions of the National Highway System (NHS). Life-cycle cost and risk management analyses must be included in a state's TAMP. As defined in the 1998 Transportation Equity Act for the 21st Century (TEA-21), life-cycle cost analysis (LCCA) is "a process for evaluating the total economic worth of a usable project segment by analyzing initial costs and discounted future costs, such as maintenance, user costs, and reconstruction, rehabilitation, restoring, and resurfacing costs, over the life of the project segment." The main objective of this research project was to develop an LCCA tool for Iowa's bridges based on survival analysis of condition ratings. The tool was designed to cover the most common types of bridges in Iowa while integrating historical data from maintenance crews, contractors, and past inspections into predictive models that account for the costs of maintenance and repair during a bridge's service life. The tool developed in this project provides a user-friendly way to evaluate and compare maintenance costs for bridge decks over the lifetime of a bridge. With this information, transportation investment decisions can be made in consideration of all the maintenance costs incurred during the period over which the maintenance alternatives are being compared.
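    At its core, the LCCA comparison described above discounts each future maintenance cost back to present value so that alternatives with different timing can be compared on one number. A minimal sketch; the strategies, costs, and discount rate below are invented for illustration:

```python
def life_cycle_cost(initial, actions, discount_rate=0.03):
    """Discounted life-cycle cost: initial cost plus each future
    maintenance action's cost discounted back to present value."""
    pv = initial
    for year, cost in actions:
        pv += cost / (1 + discount_rate) ** year
    return pv

# Hypothetical deck strategies over a 50-year horizon: (year, cost in $k).
preventive = [(10, 50), (20, 50), (30, 50), (40, 50)]   # frequent, cheap
reactive = [(25, 400), (45, 300)]                        # rare, expensive
lcc_prev = life_cycle_cost(800, preventive)
lcc_react = life_cycle_cost(800, reactive)
print(round(lcc_prev), round(lcc_react))
```

In a full tool, the action years would come from survival models of condition ratings rather than being fixed in advance.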

    Subsea inspection and monitoring challenges

    Master's thesis in Offshore Technology: Industrial Asset Management. This thesis uncovers, and suggests solutions for, the challenges of controlling change over time more reliably and cost-effectively. Front-end concept engineering, design, inspection and monitoring strategies, technologies, systems, and methods for Life-of-Field are recommended. Autonomous underwater vehicles (AUVs) are identified as a possible cost-efficient opportunity to reduce the cost of inspection and monitoring operations while safeguarding asset integrity. A recognized design-spiral methodology is used to perform a front-end concept evaluation of an AUV system. Key technological limitations and new developments within underwater communication, energy storage, and wireless power transmission are investigated; these enable opportunities such as an AUV recharging station on the seafloor for better utilization. One major learning point came from the use of numerical models, the outcome being a better and more hydrodynamically efficient hull design. This work may aid collaborating partners in their design work.
