2,626 research outputs found

    Supporting group maintenance through prognostics-enhanced dynamic dependability prediction

    Condition-based maintenance strategies adapt maintenance planning through the integration of online condition monitoring of assets. The accuracy and cost-effectiveness of these strategies can be improved by integrating prognostics predictions and by grouping maintenance actions, respectively. In complex industrial systems, however, effective condition-based maintenance is intricate. Such systems are composed of repairable assets which can fail in different ways, with various effects, and are typically governed by dynamics which include time-dependent and conditional events. In this context, system reliability prediction is complex, and effective maintenance planning is virtually impossible prior to system deployment and hard even in the case of condition-based maintenance. Addressing these issues, this paper presents an online system maintenance method that takes the system dynamics into account. The method employs an online predictive diagnosis algorithm to distinguish between critical and non-critical assets. A prognostics-updated method for predicting the system health is then employed to yield well-informed, more accurate, condition-based suggestions for the maintenance of critical assets and for the group-based reactive repair of non-critical assets. The cost-effectiveness of the approach is discussed in a case study from the power industry.
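    As a rough illustration of the kind of asset classification the abstract describes, the sketch below splits a fleet into condition-based and group-reactive maintenance pools using prognostic remaining-useful-life (RUL) estimates. The asset names, RUL values and threshold are hypothetical, and the rule is only a simplified stand-in for the paper's algorithm.

```python
from dataclasses import dataclass

@dataclass
class Asset:
    name: str
    rul_hours: float  # prognostic remaining-useful-life estimate (hypothetical)

def plan_maintenance(assets, critical_rul_hours=500.0):
    """Split a fleet into condition-based and group-reactive pools.

    Hypothetical rule: assets whose predicted RUL falls below the threshold
    receive individual condition-based maintenance suggestions; the rest are
    pooled so that their reactive repairs can be grouped.
    """
    critical = sorted((a for a in assets if a.rul_hours < critical_rul_hours),
                      key=lambda a: a.rul_hours)
    non_critical = [a for a in assets if a.rul_hours >= critical_rul_hours]
    return {"condition_based": critical, "group_reactive": non_critical}

fleet = [Asset("pump-1", 120.0), Asset("valve-3", 2400.0), Asset("fan-7", 80.0)]
print(plan_maintenance(fleet))
```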

    Warranty Data Analysis: A Review

    Warranty claims and supplementary data contain useful information about product quality and reliability. Analysing such data can therefore benefit manufacturers by identifying early warnings of abnormalities in their products, providing useful information about failure modes to aid design modification, estimating product reliability to support decisions on warranty policy, and forecasting the future warranty claims needed for preparing fiscal plans. In the last two decades, considerable research has been conducted in warranty data analysis (WDA) from several different perspectives. This article attempts to summarise and review the research and developments in WDA, with emphasis on models, methods and applications. It concludes with a brief discussion of current practices and possible future trends in WDA.
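    As a toy example of one WDA task mentioned above (estimating product reliability from claim data), the sketch below fits a Weibull model to hypothetical times-to-claim. Real warranty data would also require handling units still in the field (censoring), sales lag and reporting delays, none of which is modelled here.

```python
import numpy as np
from scipy.stats import weibull_min

# Hypothetical times-to-first-claim (in months) for returned units.
claim_times = np.array([3.2, 5.1, 7.8, 9.0, 12.4, 14.9, 18.3, 21.7])

# Fit a two-parameter Weibull (location fixed at 0) as a simple reliability model.
shape, loc, scale = weibull_min.fit(claim_times, floc=0)

# Estimated probability that a unit survives a 12-month warranty period.
survival_12m = weibull_min.sf(12, shape, loc=loc, scale=scale)
print(f"shape={shape:.2f}, scale={scale:.1f}, "
      f"P(no claim by 12 months)={survival_12m:.2f}")
```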

    Intelligent Sensing for Robotic Re-Manufacturing in Aerospace - An Industry 4.0 Design Based Prototype

    Emerging from an industry-academia collaboration between the University of Sheffield and VBC Instrument Engineering Ltd, a proposed robotic solution for the remanufacturing of jet engine compressor blades is under ongoing development and is producing the first tangible results for evaluation. With concept adaptation, funding mechanisms, design processes, and research and development trials successfully behind it, the project has entered the stage of concept optimization and end-user application. A variety of new challenges is emerging, with multiple parameters requiring control and intelligence. An interlinked collaboration between operational controllers, Quality Assurance (QA) and Quality Control (QC) systems, databases, and safety and monitoring systems is creating a complex network, transforming the traditional manual re-manufacturing method into an advanced, intelligent, modern smart factory. Incorporating machine vision systems for characterization, inspection and fault detection, alongside advanced real-time sensor data acquisition for monitoring and evaluating the welding process, produces a huge amount of valuable industrial data. Information regarding each individual blade is combined with data acquired from the system, embedding data analytics and the concept of the "Internet of Things" (IoT) into the aerospace re-manufacturing industry. The aim of this paper is to give a first insight into the challenges of developing an Industry 4.0 prototype system and an evaluation of the first results of the operational prototype.

    Finding Benefits of Utilizing RFID Technology in Skanska Maskin AB

    In this paper, we discuss the benefits of RFID technology that Skanska Maskin AB at Linnarhult, Gothenburg, in Sweden has planned to utilize in the future. We used a case study methodology in this research to find the benefits of RFID at Skanska Maskin AB. We gathered qualitative data on the present system using mediating tools such as observation and interviews. We compared the present barcode system with the proposed future RFID system to see the benefits and effects of the RFID technology. The store environment of Skanska Maskin AB is complex and difficult to manage. We used Soft Systems Methodology (SSM) to analyze and identify both the present barcode and the future RFID systems in such a complex and messy situation. SSM uses "systems thinking" in learning and reflection to help understand the various perceptions that come from the minds of the different people involved in the problem situation. SSM also considers the importance of cultural, social and political attributes in the present system. We discussed the results of the system comparison, illustrating the benefits of the future RFID system.

    Development of an Enhanced Agility Assessment Model for Legacy Information System

    The decision on when to end the lifecycle of an information system is often not exhaustively studied. It is essential for an organisation to know when to end the life cycle of its legacy information system once it is no longer able to perform and comply with the changes the organisation desires. Prolonging the length of an information system lifecycle could lead to a reduction in software cost. Most of the metrics presented in the literature on agility measurement, such as Cost, Time, Robustness and Scope of changes (CTRS) and Simplicity, Speed and Scope of changes (3S), and the evaluation methods used by researchers, e.g., the Analytic Hierarchy Process (AHP) and fuzzy mathematics, are qualitative and usually need to be evaluated subjectively by domain experts. This study therefore developed an enhanced agility assessment model to measure a legacy information system quantitatively, using the agility factors Speed, Robustness and Complexity, in an educational institution. The adoption of a quantitative metrics methodology will lead to an accurate measurement of the student information system. A stand-alone online assessment system based on the agility factors and satisfying the maximum metrics benchmark requirements was used for the model implementation. The results were: Complexity of the largest module = 96, Robustness = 547.5 hours and Speed = 0.5 minutes. Modules whose complexity exceeded 20 can be fixed by splitting their control constructs into submodules, each with a complexity no greater than 20. The results obtained indicated that the student information system was still agile; thus, management should continue with the system.
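    A minimal sketch of how such an agility check might look, assuming the per-module complexity limit of 20 reported above; the other benchmark values and the metric names are placeholders, not the thresholds used in the study.

```python
# Benchmarks for the three agility factors. Only the complexity limit of
# 20 per module comes from the abstract; the other two are placeholders.
BENCHMARKS = {
    "complexity": 20,          # max allowed complexity per module
    "robustness_hours": 1000,  # placeholder budget for the robustness metric
    "speed_minutes": 5.0,      # placeholder budget for the speed metric
}

def assess_agility(metrics, benchmarks=BENCHMARKS):
    """Return an overall flag plus a pass/fail verdict per agility factor.

    A failing factor (e.g. an over-complex module) marks where remediation
    is needed, such as splitting the module into smaller submodules.
    """
    checks = {
        "complexity": metrics["largest_module_complexity"] <= benchmarks["complexity"],
        "robustness": metrics["robustness_hours"] <= benchmarks["robustness_hours"],
        "speed": metrics["speed_minutes"] <= benchmarks["speed_minutes"],
    }
    return all(checks.values()), checks

# Figures reported for the student information system in the abstract.
reported = {"largest_module_complexity": 96,
            "robustness_hours": 547.5,
            "speed_minutes": 0.5}
print(assess_agility(reported))  # flags the over-complex module for splitting
```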

    Development of an RFID-based traceability system : experiences and lessons learned from an aircraft engineering company

    Author names used in this publication: E. W. T. Ngai, T. C. E. Cheng, Kee-hung Lai. 2007-2008 > Academic research: refereed > Publication in refereed journal. Version of Record. Published

    A2THOS: Availability Analysis and Optimisation in SLAs

    IT service availability is at the core of customer satisfaction and business success for today’s organisations. Many medium-to-large organisations outsource part of their IT services to external providers, with Service Level Agreements (SLAs) describing the agreed availability of the outsourced service components. Availability management of partially outsourced IT services is a non-trivial task, since classic approaches for calculating availability are not applicable and IT managers can only rely on their expertise to fulfil this task. This often leads to the adoption of non-optimal solutions. In this paper we present A2THOS, a framework to calculate the availability of partially outsourced IT services in the presence of SLAs and to achieve a cost-optimal choice of availability levels for outsourced IT components while guaranteeing a target availability level for the service.
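    To make the optimisation problem concrete, here is a brute-force sketch of choosing the cheapest combination of SLA availability levels that still meets an end-to-end target for a series composition of components. The catalogue, prices and target are invented for illustration, and this exhaustive search is not the A2THOS method itself.

```python
from itertools import product

# Hypothetical SLA catalogue: per outsourced component, the availability
# levels a provider offers and their yearly cost (illustrative numbers).
sla_options = {
    "network": [(0.990, 10_000), (0.999, 18_000)],
    "storage": [(0.995, 12_000), (0.9999, 30_000)],
    "hosting": [(0.990, 8_000), (0.999, 15_000)],
}

def service_availability(levels):
    """Availability of components arranged in series (all must be up)."""
    a = 1.0
    for availability, _cost in levels:
        a *= availability
    return a

def cheapest_meeting_target(options, target):
    """Brute-force the cost-optimal combination of SLA levels that still
    meets the end-to-end availability target. Fine for a handful of
    components; a real framework would need something smarter."""
    best = None
    for combo in product(*options.values()):
        if service_availability(combo) >= target:
            cost = sum(c for _a, c in combo)
            if best is None or cost < best[0]:
                best = (cost, dict(zip(options.keys(), combo)))
    return best

print(cheapest_meeting_target(sla_options, target=0.985))
```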

    Improved dynamic dependability assessment through integration with prognostics

    The use of average data for dependability assessments results in an outdated system-level dependability estimation, which can lead to incorrect design decisions. With the increasing availability of online data, there is room to improve traditional dependability assessment techniques. In particular, prognostics is an emerging field which provides asset-specific failure information that can be used to improve the system-level failure estimation. This paper presents a framework for prognostics-updated dynamic dependability assessment. The dynamic behaviour comes from runtime-updated information, asset inter-dependencies, and time-dependent system behaviour. A case study from the power generation industry is analysed, and the results confirm the validity of the approach for improved near real-time unavailability estimations.
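    A simplified sketch of the kind of update the abstract motivates: replacing generic (handbook) failure rates with prognostics-derived rates for individual assets and recomputing a system-level unavailability figure. The steady-state formula, the series structure and all rate values are illustrative assumptions, not the paper's model.

```python
def component_unavailability(failure_rate, repair_rate):
    """Steady-state unavailability q = lambda / (lambda + mu) of a
    repairable component with constant failure and repair rates."""
    return failure_rate / (failure_rate + repair_rate)

def system_unavailability(components):
    """Series system: the service is down if any component is down."""
    availability = 1.0
    for lam, mu in components:
        availability *= 1.0 - component_unavailability(lam, mu)
    return 1.0 - availability

# (failure rate per hour, repair rate per hour) pairs -- all values are
# placeholders. 'generic' uses handbook averages; 'updated' swaps in
# prognostics-derived rates for the same assets at runtime.
generic = [(1e-4, 1e-2), (5e-5, 1e-2), (2e-4, 1e-2)]
updated = [(3e-4, 1e-2), (5e-5, 1e-2), (8e-5, 1e-2)]

print(f"average data : {system_unavailability(generic):.4f}")
print(f"prognostics  : {system_unavailability(updated):.4f}")
```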

    Nonparametric bootstrapping of the reliability function for multiple copies of a repairable item modeled by a birth process

    Nonparametric bootstrap inference is developed for the reliability function estimated from censored, nonstationary failure time data for multiple copies of repairable items. We assume that each copy has a known, but not necessarily the same, observation period, and that upon failure of one copy, design modifications are implemented for all copies operating at that time to prevent further failures arising from the same fault. This implies that, at any point in time, all operating copies will contain the same set of faults. Failures are modeled as a birth process because there is a reduction in the rate of occurrence at each failure. The data structure comprises a mix of deterministic and random censoring mechanisms, corresponding to the known observation period of each copy and the random censoring time of each fault. Hence, bootstrap confidence intervals and regions for the reliability function measure the length of time a fault can remain within the item until its realization as a failure in one of the copies. Explicit formulae derived for the re-sampling probabilities greatly reduce the dependency on Monte Carlo simulation. Investigations show a small bias arising in re-sampling that can be quantified and corrected. The variability generated by the re-sampling approach approximates the variability in the underlying birth process, and so supports appropriate inference. An illustrative example describes the application to a problem and discusses the validity of the modeling assumptions within industrial practice.
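    For orientation, the sketch below shows a plain Monte Carlo nonparametric bootstrap of a Kaplan-Meier reliability estimate from censored data. The data are invented, and the paper's contribution is precisely to replace this kind of resampling loop with explicit re-sampling probabilities derived for the birth-process model.

```python
import numpy as np

rng = np.random.default_rng(0)

# Hypothetical censored fault-realisation times (hours): (time, observed)
# rows, observed = 1 if the fault surfaced as a failure, 0 if censored.
data = np.array([(120, 1), (340, 0), (460, 1), (500, 0), (610, 1),
                 (700, 0), (820, 1), (900, 0)], dtype=float)

def km_reliability_at(t, sample):
    """Kaplan-Meier estimate of R(t) from (time, observed) rows."""
    r = 1.0
    for ti in np.unique(sample[sample[:, 1] == 1, 0]):
        if ti > t:
            break
        at_risk = np.sum(sample[:, 0] >= ti)
        failures = np.sum((sample[:, 0] == ti) & (sample[:, 1] == 1))
        r *= 1.0 - failures / at_risk
    return r

# Plain Monte Carlo bootstrap of R(500): resample rows with replacement.
boot = [km_reliability_at(500, data[rng.integers(0, len(data), len(data))])
        for _ in range(2000)]
lo, hi = np.percentile(boot, [2.5, 97.5])
print(f"R(500) ~ {km_reliability_at(500, data):.2f}, "
      f"95% bootstrap interval [{lo:.2f}, {hi:.2f}]")
```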