
    Dynamic safety capability and management systems: An assessment tool to evaluate the “fitness-to-operate” in high-risk industrial environments

    Aim: The paper outlines a systemic approach to understanding and assessing safety capability in high-risk industries such as offshore oil and gas and chemical operations. The "Fitness to Operate" (FTO) framework (Griffin et al., 2014) defines safety capability in terms of three enabling capitals: organizational capital, social capital, and human capital. Each type of capital is further specified by dimensions drawn from current theories of safety, management, and organizational processes. This paper presents a multidimensional assessment tool that offers a comprehensive picture of the safety capability of real industrial operators in order to understand and evaluate their fitness-to-operate. Method: The paper describes the multi-phase development of the FTO assessment tool in the format of a multidimensional survey questionnaire. A) In the first phase, a large prototype pool of about 200 items was generated, covering the 27 dimensions of the conceptual representation of the FTO framework. This initial pool was developed deductively by a team of academic researchers in light of the original FTO conceptualization defined by Griffin and colleagues (2014). B) In the second phase, the initial item pool was re-examined by a separate group of academic researchers, who assessed content quality and refined the extended prototype by eliminating redundant and inadequate items. C) In the third phase, structured interviews with industrial experts (senior safety managers and senior executives) were used to evaluate and rank the prototype items and so define a shorter version of the instrument. All items were assessed by the experts against criteria of: i) relevance, ii) clarity, iii) verifiability, iv) specificity, and v) ease of answer. Implications: Overall, the FTO assessment tool provides comprehensive coverage of factors that influence short-term and long-term safety outcomes. It may help safety regulators and industrial operators to understand, assess, and ultimately improve safety capability and fitness-to-operate in complex industrial and organizational contexts.
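
    As an illustration of the expert-review step (phase C) only, the sketch below shows one way ratings against the five criteria could be aggregated to rank prototype items and shortlist them; the language (Python), item names, scores and cut-off are hypothetical and are not taken from the paper.

        # Illustrative only: aggregate hypothetical expert ratings over the five
        # review criteria to rank prototype items and retain a shorter list.
        from statistics import mean

        CRITERIA = ["relevance", "clarity", "verifiability", "specificity", "ease_of_answer"]

        # Hypothetical ratings: item -> criterion -> list of expert scores (1-5 scale).
        ratings = {
            "item_001": {"relevance": [5, 4], "clarity": [4, 4], "verifiability": [3, 4],
                         "specificity": [4, 5], "ease_of_answer": [4, 3]},
            "item_002": {"relevance": [2, 3], "clarity": [3, 2], "verifiability": [2, 2],
                         "specificity": [3, 2], "ease_of_answer": [3, 3]},
        }

        def item_score(expert_scores):
            """Average each criterion across experts, then average across criteria."""
            return mean(mean(expert_scores[c]) for c in CRITERIA)

        ranked = sorted(ratings, key=lambda item: item_score(ratings[item]), reverse=True)
        shortlist = [item for item in ranked if item_score(ratings[item]) >= 3.5]  # assumed cut-off
        print("ranked:", ranked)
        print("shortlist:", shortlist)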

    A vibration cavitation sensitivity parameter based on spectral and statistical methods

    Cavitation is one of the main problems reducing the longevity of centrifugal pumps in industry today. If the pump cannot be kept operating near its best efficiency point, it can be subject to conditions that lead to vaporisation or flashing in the pipes upstream of the pump. The implosion of these vapour bubbles in the impeller or volute damages the pump. A new method of vibration-based cavitation detection is proposed in this paper, based on adaptive octave band analysis, principal component analysis and statistical metrics. Full-scale industrial pump efficiency testing data were used to determine the initial cavitation parameters for the analysis. The method was then tested using vibration measured from several pumps operating in the water industry. Results were compared with the known condition of each pump and its classification according to ISO 10816.
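
    The abstract names adaptive octave band analysis, principal component analysis and statistical metrics; the sketch below is a generic illustration of that kind of pipeline on a vibration record, not the authors' implementation. The band edges, sampling rate, frame length and the use of per-frame RMS and kurtosis are assumptions.

        # Generic sketch: band-limited per-frame energies from a vibration record,
        # reduced with PCA and summarised with a simple statistic. Illustrative only.
        import numpy as np
        from scipy.signal import butter, sosfiltfilt
        from scipy.stats import kurtosis
        from sklearn.decomposition import PCA

        fs = 25600                                 # assumed sampling rate (Hz)
        rng = np.random.default_rng(0)
        signal = rng.standard_normal(fs * 10)      # placeholder for measured vibration

        # Octave-style bands (Hz); a real analysis would adapt these to the pump.
        bands = [(500, 1000), (1000, 2000), (2000, 4000), (4000, 8000)]

        def band_features(x, fs, bands, frame=25600):
            feats = []
            for lo, hi in bands:
                sos = butter(4, [lo, hi], btype="bandpass", fs=fs, output="sos")
                y = sosfiltfilt(sos, x)
                frames = y[: len(y) // frame * frame].reshape(-1, frame)
                feats.append(np.sqrt(np.mean(frames**2, axis=1)))  # per-frame RMS
            return np.column_stack(feats)                          # (n_frames, n_bands)

        X = band_features(signal, fs, bands)
        scores = PCA(n_components=2).fit_transform(X)
        print("kurtosis of first principal component:", kurtosis(scores[:, 0]))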

    Data-driven reliability analysis of Boeing 787 Dreamliner

    The Boeing 787 Dreamliner, launched in 2011, was presented as a game changer in air travel. With the aim of producing an efficient, mid-size, wide-body plane, Boeing initiated innovations in product and process design, supply chain operation, and risk management. Nevertheless, there were reliability issues from the start, and the plane was grounded by the U.S. Federal Aviation Administration (FAA) in 2013, due to safety problems associated with Li-ion battery fires. This paper chronicles events associated with the aircraft's initial reliability challenges. The manufacturing, supply chain, and organizational factors that contributed to these problems are assessed based on FAA data. Recommendations and lessons learned are provided for the benefit of engineers and managers who will be engaged in future complex systems development.

    The role of organizational factors in achieving reliability in the design and manufacture of subsea equipment

    Failures of equipment used in deepwater oil and gas production are potentially hazardous, difficult and costly to rectify, and damaging to the environment; a high degree of reliability over many years of continuous operation is therefore an essential requirement of subsea systems. Although technical issues have been widely investigated, less is known about the organizational factors that promote high reliability in the design, manufacture, and installation of these systems. This review draws on studies of high-reliability manufacturing and process industries to examine the roles of intraorganizational factors (particularly organizational culture) that may promote or detract from the achievement of high reliability in subsea systems. External factors, such as supply chain coordination, are also considered. Studies of organizational change designed to enhance the reliability of design and manufacturing processes are rare in the subsea industry, but relevant issues arising from change initiatives in other organizational settings are discussed. Finally, several areas are identified in which systematic industry-based research could contribute to identifying critical elements in the development and operation of subsea systems and, hence, reduce the risk of failures.

    Are you sure you want me to follow this? A study of procedure management, user perceptions and compliance behaviour

    Adherence to procedures is critical to the safety and performance of maintenance tasks; however, few studies of procedure compliance among maintenance personnel have been reported. The present study evaluated a theoretical model in which management approaches to procedure compliance were linked to compliance outcomes through user perceptions of positive and negative procedure attributes. New scales were developed to assess these variables; hypotheses derived from the model were tested in survey data collected from maintainers in the mining industry (N = 176). A structural equation model showed acceptable fit statistics; findings were broadly consistent with the initial hypotheses. As predicted, positive and negative dimensions of procedure attributes and compliance/non-compliance were perceived as distinct constructs, and were implicated in different pathways of the model. Also supporting the initial hypotheses, user involvement and managers’ learning-oriented responses to non-compliance were linked to favourable compliance outcomes through perceived procedure attributes. Learning-oriented responses were also directly associated with greater compliance. In addition, and contrary to prediction, punitive management responses positively predicted compliance. As discussed in the paper, these findings contribute new insights, relevant in both research and industry contexts, to understanding procedure compliance among maintainers.
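
    For readers unfamiliar with how such a path model can be specified and fitted, the sketch below uses the semopy package on synthetic data; the package choice, variable names, path structure and coefficients are assumptions for illustration and are not the study's actual model or tooling.

        # Illustrative path-model specification on synthetic data using semopy.
        # Variable names are hypothetical stand-ins for the survey scales.
        import numpy as np
        import pandas as pd
        import semopy

        rng = np.random.default_rng(0)
        n = 176  # matches the reported sample size
        user_involvement = rng.normal(size=n)
        learning_response = rng.normal(size=n)
        punitive_response = rng.normal(size=n)
        positive_attributes = 0.5 * user_involvement + 0.4 * learning_response + rng.normal(scale=0.8, size=n)
        negative_attributes = 0.5 * punitive_response + rng.normal(scale=0.8, size=n)
        compliance = (0.4 * positive_attributes - 0.3 * negative_attributes
                      + 0.3 * learning_response + rng.normal(scale=0.8, size=n))

        df = pd.DataFrame({
            "user_involvement": user_involvement,
            "learning_response": learning_response,
            "punitive_response": punitive_response,
            "positive_attributes": positive_attributes,
            "negative_attributes": negative_attributes,
            "compliance": compliance,
        })

        model_desc = """
        positive_attributes ~ user_involvement + learning_response
        negative_attributes ~ punitive_response
        compliance ~ positive_attributes + negative_attributes + learning_response
        """

        model = semopy.Model(model_desc)
        model.fit(df)
        print(model.inspect())            # path estimates
        print(semopy.calc_stats(model))   # fit indices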

    Data-driven approach for labelling process plant event data

    An essential requirement in any data analysis is a response variable representing the aim of the analysis. Much academic work is based on laboratory or simulated data, where the experiment is controlled and the ground truth is clearly defined. This is seldom the reality for equipment performance in an industrial environment, and issues with the response variable are common in industry settings. We discuss this matter using a case study in which the problem is to detect an asset event (failure) from available data for which no ground truth exists in historical records. Our data frame contains measurements from 14 sensors, recorded every minute over a three-year period, from a process control system and four current motors on the asset of interest. In this situation, how to label the event of interest is of fundamental importance. Different labelling strategies will generate different models, with a direct impact on the in-service fault detection efficacy of the resulting model. We discuss a data-driven approach to labelling a binary response variable (fault/anomaly detection) and compare it to a rule-based approach. Labelling of the time series was performed using dynamic time warping followed by agglomerative hierarchical clustering to group events with similar dynamics. Both data sets have significant imbalance, with 1,200,000 non-event observations but only 150 events in the rule-based data set and 64 events in the data-driven data set. We study the performance of models based on these two labelling strategies, treating each data set independently. We describe decisions made in window-size selection, managing imbalance, hyper-parameter tuning, and training and test selection, and use two models, logistic regression and random forest, for event detection. We estimate useful models for both data sets; by useful, we mean that events could be detected for the first four months of the test set. However, as the months progressed the performance of both models deteriorated, with an increasing number of false positives, reflecting possible changes in the dynamics of the system. This work raises questions such as "what are we detecting?" and "is there a right way to label?", and presents a data-driven approach to support labelling of historical events in process plant data for event detection in the absence of ground truth.
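
    As a sketch of the labelling pipeline described (DTW distances, agglomerative hierarchical clustering, then imbalance-aware classification), the code below uses tslearn, scipy and scikit-learn on synthetic data; the window length, cluster count, feature choice and variable names are assumptions, not values from the paper.

        # Sketch: pairwise DTW distances between candidate event windows, hierarchical
        # clustering to group similar dynamics, then an imbalance-aware classifier on
        # per-window summary features. All parameters here are illustrative.
        import numpy as np
        from tslearn.metrics import cdist_dtw
        from scipy.cluster.hierarchy import linkage, fcluster
        from scipy.spatial.distance import squareform
        from sklearn.ensemble import RandomForestClassifier

        rng = np.random.default_rng(42)
        windows = rng.standard_normal((60, 120, 14))   # 60 windows, 120 minutes, 14 sensors

        # 1) DTW distance matrix (tslearn handles multivariate series directly).
        dist = cdist_dtw(windows)

        # 2) Agglomerative hierarchical clustering on the condensed distance matrix.
        Z = linkage(squareform(dist, checks=False), method="average")
        clusters = fcluster(Z, t=4, criterion="maxclust")   # assumed number of clusters

        # 3) Treat one cluster as the "event" class and train an imbalance-aware model
        #    on per-window summary features (mean and std of each sensor).
        y = (clusters == 1).astype(int)                      # assumed event cluster
        X = np.hstack([windows.mean(axis=1), windows.std(axis=1)])
        clf = RandomForestClassifier(n_estimators=200, class_weight="balanced", random_state=0)
        clf.fit(X, y)
        print("training accuracy:", clf.score(X, y))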

    Asset planning performance measurement framework

    The international asset management standard ISO 55001, introduced in early 2014, outlines the requirements for an effective Asset Management System. Asset Management practitioners are seeking guidance on implementing one of the key requirements of the standard: a "line of sight" between corporate and Asset Management objectives and their relevant performance measures. This alignment supports regulatory compliance, improved communication, informed asset investment decisions, managed risks and increased operational effectiveness. This paper demonstrates that a line of sight is achievable through the application of the Balanced Scorecard approach, using the Asset Management function at the Water Corporation as an example. The approach is deployed across two phases: the development of Asset Management objectives through a consultative Asset Strategy Mapping exercise, and the selection of a balanced set of performance measures that link to the Strategy Map. The result is the Asset Planning Performance Measurement Framework. This framework is tested using water utility data, resulting in the realisation of a line of sight between asset performance measures and corporate objectives.
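
    As a minimal sketch of what such a "line of sight" mapping could look like as a data structure, the Python example below links each performance measure to an asset management objective, a corporate objective and a Balanced Scorecard perspective; all names are hypothetical examples, not entries from the Water Corporation framework.

        # Minimal sketch of a "line of sight" mapping: each performance measure points
        # to an asset management objective, which rolls up to a corporate objective and
        # a Balanced Scorecard perspective. All names below are hypothetical.
        from dataclasses import dataclass

        @dataclass
        class Measure:
            name: str
            am_objective: str         # asset management objective it supports
            corporate_objective: str  # corporate objective the AM objective rolls up to
            perspective: str          # Balanced Scorecard perspective

        measures = [
            Measure("Unplanned supply interruptions per 1000 properties",
                    "Maintain asset service levels", "Reliable customer service", "Customer"),
            Measure("Capital projects delivered within budget (%)",
                    "Optimise asset investment", "Financial sustainability", "Financial"),
        ]

        def line_of_sight(m: Measure) -> str:
            return f"{m.name} -> {m.am_objective} -> {m.corporate_objective} ({m.perspective})"

        for m in measures:
            print(line_of_sight(m))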

    Cavitation sensitivity parameter analysis for centrifugal pumps based on spectral methods

    Cavitation is a major problem facing centrifugal pumps in industry today. When operating conditions cannot be constantly maintained around the best efficiency point, centrifugal pumps are subject to conditions that may lead to vaporisation or flashing in the pipes upstream of the pump. The implosion of these vapour bubbles in the impeller or volute damages the pump. A new method of cavitation detection based on spectral methods is proposed in this paper. The data used to determine the parameters were obtained under ideal conditions, while the method was tested using data acquired from industry. Results were compared with the known condition of the pump and its classification according to ISO 10816.
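
    As a generic illustration of a spectrum-based cavitation parameter (not the authors' specific formulation), the sketch below compares high-frequency band power in a suspect record against a baseline spectrum estimated with Welch's method; the band limits, sampling rate and baseline are assumptions.

        # Generic sketch: a spectrum-based cavitation sensitivity parameter computed as
        # the ratio of high-frequency band power to the same band power in a baseline
        # (non-cavitating) record. Band limits and data here are illustrative only.
        import numpy as np
        from scipy.signal import welch

        fs = 25600                                   # assumed sampling rate (Hz)
        rng = np.random.default_rng(1)
        baseline = rng.standard_normal(fs * 5)       # placeholder: healthy-condition vibration
        current = 1.5 * rng.standard_normal(fs * 5)  # placeholder: suspect-condition vibration

        def band_power(x, fs, f_lo, f_hi):
            f, pxx = welch(x, fs=fs, nperseg=4096)
            mask = (f >= f_lo) & (f <= f_hi)
            return np.sum(pxx[mask])                 # proportional band power (resolution cancels in the ratio)

        # Broadband energy in an assumed 2-10 kHz band, normalised by the baseline.
        sensitivity = band_power(current, fs, 2000, 10000) / band_power(baseline, fs, 2000, 10000)
        print(f"cavitation sensitivity parameter: {sensitivity:.2f} (>1 suggests elevated broadband energy)")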

    A single cavitation indicator based on statistical parameters for a centrifugal pump

    Cavitation is one of the major problems associated with the operation of centrifugal pumps. Cavitation occurs when vapour bubbles, formed due to a drop in pressure in the pipes upstream of the pump, implode under the increased pressure within the volute of the pump. These implosions wear away the impeller, and sometimes the volute itself, which, if left unchecked, would render the pump inoperable. Much research has been done on the detection of cavitation through indicators in certain audible frequencies, a drop in the net positive suction head, visual inspection using a transparent casing and a stroboscopic light, paint erosion inside the volute and on the impeller, changes in pressure within the flow or volute, and vibration within certain frequency ranges. Vibration-based detection is considered one of the more difficult methods because other structural and environmental factors may influence which frequencies are present during the onset of cavitation. Vibration is, however, easily measured and readily deployable in an automated condition monitoring scenario. It is proposed that an increasing trend in a set of statistical parameters, rather than a fixed threshold on a single parameter, provides a robust indication of the onset of cavitation. Trends in these statistical parameters were obtained from data collected on a pump forced to cavitate under several different operating conditions. A single cavitation indicator is outlined that utilizes these statistical parameters and can quantify the level of cavitation in a centrifugal pump.
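
    To make the idea of fusing several statistical parameters into one indicator concrete, the sketch below computes a few common vibration statistics per record and averages their deviations from a baseline population into a single score; the choice of statistics (RMS, kurtosis, crest factor) and the fusion rule are assumptions, not the paper's definition.

        # Sketch: combine several per-record vibration statistics into a single
        # indicator by averaging their z-scores against a set of baseline records.
        import numpy as np
        from scipy.stats import kurtosis

        def stats_vector(x):
            rms = np.sqrt(np.mean(x**2))
            crest = np.max(np.abs(x)) / rms
            return np.array([rms, kurtosis(x), crest])

        rng = np.random.default_rng(2)
        baseline_records = [rng.standard_normal(20000) for _ in range(20)]  # healthy pump records
        new_record = 1.3 * rng.standard_normal(20000)                       # record under test

        B = np.array([stats_vector(r) for r in baseline_records])
        mu, sigma = B.mean(axis=0), B.std(axis=0)

        def cavitation_indicator(x):
            """Mean z-score of the statistics relative to the baseline population."""
            return float(np.mean((stats_vector(x) - mu) / sigma))

        print(f"indicator: {cavitation_indicator(new_record):.2f} (larger values suggest onset)")

    Averaging z-scores across several statistics, rather than thresholding any single one, is one simple way to reflect the abstract's emphasis on trends in a set of parameters.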