9 research outputs found

    A generalized model for fuel channel bore estimation in AGR cores

    One of the major life-limiting factors of an Advanced Gas-cooled Reactor (AGR) nuclear power station is the graphite core: it cannot be repaired or replaced, so detailed information about the health of the core is vital for continued safe operation. The graphite bricks that comprise the core experience gradual degradation during operation as a result of irradiation. Routine physical inspection of the graphite core fuel channels is performed by specialist inspection equipment during outages every 12 months to 3 years. It has also been shown to be advantageous to supplement this periodic inspection information with analysis of operational data, which can provide additional insights into core health. One such approach is the use of online monitoring data called the Fuel Grab Load Trace (FGLT). An FGLT is a measure of the perceived load of the fuel assembly, with contributions from aerodynamic and frictional forces, which is related to bore diameter. This paper describes enhancements to the existing analysis of FGLT data, which to date has focussed solely on using data from a single reactor at a time to build bore estimation models, by considering data from multiple reactors to produce a generalised bore estimation model. The paper initially describes the process of producing a bore estimate from an FGLT by isolating the contribution that relates to the fuel channel bore, and then discusses the limitations of the existing bore estimation model. Improvements to the bore estimation model are then proposed, and a detailed assessment is undertaken to understand the effect of each of these proposed improvements. In addition, the effect of introducing non-linear regression models to further enhance the bore estimation is explored. The existing model is trained on data from one reactor in the UK, and the results it produces are therefore only applicable to that reactor. However, of the remaining 13 nuclear reactors currently in operation, 3 have a similar construction to the reactor the model is trained on and should all produce similar FGLT data. A generalised model is therefore proposed that produces bore estimations for reactors at four AGR stations, compared with one previously. It is shown that this approach offers an improved overall bore estimation model.
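
    A minimal sketch of the kind of pooled, multi-reactor regression described above, assuming scikit-learn and hypothetical column names (friction_feature, reactor_id, bore_mm) that are not specified in the abstract; the random-forest regressor simply stands in for "a non-linear regression model":

```python
# Sketch only: pools FGLT friction features from several reactors into one
# training set and compares a linear fit with a non-linear alternative.
# Column names and the random-forest choice are illustrative assumptions.
import pandas as pd
from sklearn.ensemble import RandomForestRegressor
from sklearn.linear_model import LinearRegression
from sklearn.model_selection import cross_val_score

def train_generalised_model(frames):
    """frames: list of per-reactor DataFrames with 'friction_feature',
    'reactor_id' and the inspected 'bore_mm' target."""
    data = pd.concat(frames, ignore_index=True)
    X = pd.get_dummies(data[["friction_feature", "reactor_id"]],
                       columns=["reactor_id"])   # reactor identity as a categorical input
    y = data["bore_mm"]
    for name, model in [("linear", LinearRegression()),
                        ("non-linear", RandomForestRegressor(n_estimators=200))]:
        score = cross_val_score(model, X, y, cv=5, scoring="neg_mean_absolute_error")
        print(f"{name}: MAE {-score.mean():.3f} mm")
    return RandomForestRegressor(n_estimators=200).fit(X, y)
```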

    A data analytic approach to automatic fault diagnosis and prognosis for distribution automation

    Distribution Automation (DA) is deployed to reduce outages and to rapidly reconnect customers following network faults. Recent developments in DA equipment have enabled the logging of load and fault event data, referred to as ‘pick-up activity’. This pick-up activity provides a picture of the underlying circuit activity occurring between successive DA operations over a period of time and has the potential to be accessed remotely for off-line or on-line analysis. The application of data analytics and automated analysis of this data supports reactive fault management and post-fault investigation into anomalous network behaviour. It also supports predictive capabilities that identify when potential network faults are evolving, offering the opportunity to take action in advance to mitigate outages. This paper details the design of a novel decision support system to achieve fault diagnosis and prognosis for DA schemes. It combines detailed data from a specific DA device with rule-based, data mining and clustering techniques to deliver the diagnostic and prognostic functions. These are applied to 11 kV distribution network data captured from Pole Mounted Auto-Reclosers (PMARs) provided by a leading UK network operator. The automated analysis system diagnoses the nature of a circuit’s previous fault activity, identifies underlying anomalous circuit activity, and highlights indications of problematic events gradually evolving into full-scale circuit faults. The novel contributions include the handling of ‘semi-permanent faults’ and a re-usable methodology for applying data analytics to any DA device data set in order to provide diagnostic decisions and mitigate potential fault scenarios.
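
    An illustrative sketch of how pick-up activity might be clustered and then screened with simple rules, as the abstract describes; the event features, thresholds and choice of DBSCAN are assumptions for illustration, not the paper's actual design:

```python
# Illustrative only: clusters PMAR pick-up events and applies simple rules.
# Feature columns and thresholds are assumptions, not the paper's values.
import numpy as np
from sklearn.cluster import DBSCAN
from sklearn.preprocessing import StandardScaler

def diagnose_pickup_activity(events):
    """events: array of shape (n, 3) -> [magnitude_A, duration_s, gap_to_next_h]"""
    X = StandardScaler().fit_transform(events)
    labels = DBSCAN(eps=0.7, min_samples=5).fit_predict(X)
    findings = []
    for label in set(labels):
        cluster = events[labels == label]
        if label == -1:                               # points DBSCAN could not group
            findings.append(("anomalous activity", len(cluster)))
        elif np.median(cluster[:, 2]) < 24:           # recurring within a day
            findings.append(("possible semi-permanent fault", len(cluster)))
        else:
            findings.append(("normal load pick-up", len(cluster)))
    return findings
```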

    A Gaussian process based fleet lifetime predictor model for unmonitored power network assets

    This paper proposes the use of Gaussian Process Regression to automatically identify relevant predictor variables in the formulation of a remaining useful life model for unmonitored, low-value power network assets. Reclosers are used as a proxy for evaluating the efficacy of this method. Distribution network reclosers are typically high-volume assets without on-line monitoring, leading to an insufficient understanding of which factors drive their failures. The ubiquity of reclosers and their lack of monitoring prevent the tracking of their individual remaining life, and confirm their suitability for validating the proposed process. As an alternative to monitoring, periodic inspection data is used to evaluate asset risk level, which is then used in a predictive model of remaining useful life. Inspection data is often variable in quality, with a number of features missing from records. Accordingly, missing inputs are imputed by the proposed process using samples drawn from an advanced form of joint distribution learned from test records and reduced to its conditional form. This work is validated on operational data provided by a regional distribution network operator, but is conceptually applicable to unmonitored fleets of assets on any power network.
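
    A sketch, under stated assumptions, of the two ingredients the abstract combines: imputation of missing inspection features from a conditional distribution, and a Gaussian process regressor for remaining useful life. The joint distribution here is a plain multivariate Gaussian (the paper's "advanced form of joint distribution" is not specified) and all variable names are hypothetical:

```python
# Sketch under assumptions: inspection features are treated as jointly Gaussian
# so missing entries can be imputed from the conditional mean, then a Gaussian
# process maps the completed records to a remaining-useful-life estimate.
import numpy as np
from sklearn.gaussian_process import GaussianProcessRegressor
from sklearn.gaussian_process.kernels import RBF, WhiteKernel

def impute_missing(x, mu, cov):
    """Fill NaNs in one inspection record x using the conditional Gaussian mean."""
    miss = np.isnan(x)
    if not miss.any():
        return x
    obs = ~miss
    cond = cov[np.ix_(miss, obs)] @ np.linalg.solve(cov[np.ix_(obs, obs)],
                                                    x[obs] - mu[obs])
    x = x.copy()
    x[miss] = mu[miss] + cond
    return x

# mu, cov would be learned from complete inspection records; X_train, rul_train
# (inspection features and observed remaining life) are assumed to be available:
# gpr = GaussianProcessRegressor(kernel=RBF() + WhiteKernel()).fit(X_train, rul_train)
# rul_pred, rul_std = gpr.predict(np.vstack([impute_missing(x, mu, cov)
#                                            for x in X_new]), return_std=True)
```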

    Improved explicability for pump diagnostics in nuclear power plants

    To ensure the continued safe operation of many of the UK's fleet of advanced gas-cooled reactors (AGRs), effective and reliable monitoring of several key plant items is essential. A significant proportion of these key items are rotating plant assets, and one asset in particular that is crucial to the operation of the station is the boiler feed pump (BFP). The BFPs in an AGR station move water from a condenser into a boiler; the water is then heated to produce steam, and this steam turns the electricity-generating turbines. Currently, the operator of the AGR stations employs a time-based maintenance strategy for BFP assets: after a defined amount of time each asset is removed, replaced with a rotated spare, and a complete overhaul is performed on the removed asset. This procedure can result in the removal of an asset before any significant wear has occurred, increasing maintenance and generation costs. Conversely, a component failure before the scheduled replacement can lead to an unplanned outage, reducing the power output of the station and hence the revenue of the operator. Because these pumps are essential for the generation of electricity, several pressure, temperature, vibration and speed parameters are constantly monitored during the operation of this asset. Currently, data analysts have to manually analyse all of this data by following a set diagnosis process, and the consequent time burden on the analyst is extremely high. Data-driven approaches to this problem, and other similar problems, can produce results of accuracy similar to what the analysts achieve, in a fraction of the time. However, the majority of these techniques are black-box techniques and lack explicability, which is often a requirement for problems involving critical assets in the nuclear industry. The main outcome of this work is to address the time burden placed on the analysts by automating elements of the existing diagnosis process through the implementation of an intelligent rule-based expert system that provides adequate explicability to the user to satisfy these requirements. Additionally, a recurring problem in the design of expert systems for industry is the cost involved in the knowledge elicitation process. Here we propose a questionnaire-style approach, similar to what the domain experts currently use, to extract this knowledge without the need for a structured interview. Using this information, a signal-to-symbol transformation algorithm is designed that assigns to time periods symbols relating to the various rules defined by the domain experts. The final system combines the data-driven signal-to-symbol transformation algorithm and the rule-based expert system to produce a hybrid system that can classify defects based on a set of rules and also explain to the user the reasoning behind its conclusions.
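
    A toy fragment showing how a rule-based expert system of the kind described could consume symbols produced by a signal-to-symbol step and return an explained diagnosis; the rules, parameter names and defect labels are hypothetical stand-ins for the knowledge elicited from the analysts:

```python
# Illustrative rule-based fragment: rules, symbols and defect names are
# hypothetical, not the elicited knowledge described in the abstract.
RULES = [
    ({"de_bearing_vibration": "rise", "de_bearing_temperature": "rise"},
     "suspected drive-end bearing degradation"),
    ({"shaft_speed": "stable", "discharge_pressure": "fall"},
     "suspected hydraulic performance loss"),
]

def diagnose(symbols):
    """symbols: dict mapping monitored parameter -> symbol for the current window."""
    for conditions, conclusion in RULES:
        if all(symbols.get(param) == value for param, value in conditions.items()):
            evidence = ", ".join(f"{p} shows '{v}'" for p, v in conditions.items())
            return f"{conclusion} (because {evidence})"
    return "no rule fired: behaviour consistent with normal operation"

print(diagnose({"de_bearing_vibration": "rise", "de_bearing_temperature": "rise"}))
```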

    Determining appropriate data analytics for transformer health monitoring

    Transformers are vital assets for the safe, reliable and cost-effective operation of nuclear power plants. The unexpected failure of a transformer can lead to consequences ranging from a lack of export capability, with the corresponding economic penalties, to catastrophic failure, with the associated health, safety and economic effects. Condition monitoring techniques examine the health of the transformer periodically, with the aim of identifying early indicators of anomalies. However, many transformer failures occur because diagnostic and monitoring models do not identify degraded conditions in time. Health monitoring is therefore an essential component of transformer lifecycle management. Existing tools for transformer health monitoring use traditional dissolved gas analysis based diagnostic techniques. With the advance of prognostics and health management (PHM) applications, traditional transformer health monitoring techniques can be enhanced with PHM analytics. The design of an appropriate data analytics system requires a multi-stage design process including: (i) specification of engineering requirements; (ii) characterization of existing data sources and analytics to identify complementary techniques; (iii) development of the functional specification of the analytics suite to formalize its behavior; and finally (iv) deployment, validation, and verification of the functional requirements in the final platform. Accordingly, in this paper we propose a transformer analytics suite which incorporates anomaly detection, diagnostics, and prognostics modules to complement existing tools for transformer health monitoring.
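
    A minimal sketch of what the anomaly-detection module of such a suite might look like, assuming dissolved-gas concentrations as inputs; the gas list, the isolation-forest choice and the contamination level are illustrative assumptions rather than the proposed design:

```python
# Sketch only: an anomaly-detection module screening dissolved-gas trends
# before conventional ratio-based DGA diagnostics are applied. Gas columns
# and parameters are assumptions for illustration.
import pandas as pd
from sklearn.ensemble import IsolationForest

GASES = ["H2", "CH4", "C2H2", "C2H4", "C2H6", "CO"]   # ppm columns assumed

def flag_anomalous_samples(history: pd.DataFrame) -> pd.Series:
    """Return True for DGA samples whose gas profile departs from past behaviour."""
    model = IsolationForest(contamination=0.05, random_state=0)
    return pd.Series(model.fit_predict(history[GASES]) == -1, index=history.index)

# Samples flagged here would then be passed to the diagnostic (e.g. gas-ratio)
# and prognostic modules of the analytics suite for closer examination.
```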

    Combining models of behavior with operational data to provide enhanced condition monitoring of AGR cores

    Installation of new monitoring equipment in Nuclear Power Plants (NPPs) is often difficult and expensive, and maximising the information that can be extracted from existing monitoring equipment is therefore highly desirable. This paper describes the process of combining models derived from laboratory experimentation with current operational plant data to infer an underlying measure of health. A demonstration of this process is provided in which the fuel channel bore profile, a measure of core health, is inferred from data gathered during the refueling process of an Advanced Gas-cooled Reactor (AGR) nuclear power plant core. Laboratory simulation was used to generate a model of the interaction between the fuel assembly and the core. This model is used to isolate a single frictional component from a noisy input signal and to use this friction component as a measure of health to assess the current condition of the graphite bricks that comprise the core. In addition, the model is used to generate an expected refueling response (the noisy input signal) for a given set of channel bore diameter measurements, for either the insertion of new fuel or the removal of spent fuel, providing validation of the model. The benefit of this work is that it provides a greater understanding of the health of the graphite core, which is important for the continued and extended operation of the AGR plants in the UK.
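
    A schematic sketch of the combination described above, assuming the laboratory-derived model exposes an aerodynamic load term; the function and variable names are illustrative only:

```python
# Minimal sketch, assuming the laboratory-derived model supplies an aerodynamic
# load component as a function of channel height; names are illustrative.
import numpy as np

def friction_component(height_m, measured_load_kg, aero_model):
    """Subtract the modelled aerodynamic contribution from the measured FGLT
    so the residual can be interpreted as the frictional (bore-related) term."""
    aero = aero_model(height_m)               # lab-derived aerodynamic load
    return measured_load_kg - aero

def expected_fglt(height_m, bore_profile_mm, aero_model, friction_model):
    """Forward use of the same models: synthesise the refueling response
    expected for a given bore profile, for validation against plant data."""
    return aero_model(height_m) + friction_model(height_m, bore_profile_mm)
```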

    Data Analytics to Support Operational Distribution Network Monitoring

    The operation of distribution networks has become more challenging in recent years, with increasing levels of embedded generation and other low carbon technologies pushing these networks towards their design limits. To identify the nature and extent of these challenges, network operators are deploying monitoring equipment on low voltage feeders, leading to new insights into fault behaviour and usage characterisation. With this heightened level of observability comes the additional challenge of finding models that translate raw data streams into outputs on which operational decisions can be based or supported. In this paper, operational low voltage substation and feeder monitoring data from a UK distribution network is used to identify relationships between fault occurrence and localised meteorological data, characterise the localised network sensitivities of demand dynamics, and infer the effects of embedded generation not visible to the network operator. These case studies are then used to show how additional operational context can be provided to the network operator through the application of analytics.
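
    One plausible shape for the first of these case studies, relating fault occurrence to localised weather; the column names and daily aggregation are assumptions, not the paper's method:

```python
# Illustrative only: daily LV feeder fault counts joined to local weather
# observations to look for the kind of relationship described above.
import pandas as pd

def fault_weather_correlation(faults: pd.DataFrame, weather: pd.DataFrame) -> pd.Series:
    """faults: one row per fault with a 'timestamp' column;
    weather: daily records indexed by date (e.g. rainfall_mm, wind_gust_mph)."""
    daily_faults = (faults.set_index("timestamp")
                          .resample("D").size()
                          .rename("fault_count"))
    joined = weather.join(daily_faults, how="left").fillna({"fault_count": 0})
    return joined.corr()["fault_count"].drop("fault_count")
```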

    Parameterisation of domain knowledge for rapid and iterative prototyping of knowledge-based systems

    In critical infrastructure applications, timely and consistent fault detection and diagnosis is an increasingly important operational process, especially in the energy sector where safety is of the utmost importance. To realise this, engineers have to manually analyse data acquired from several assets using predefined diagnostic processes, a time-consuming task requiring significant amounts of specialist expert knowledge. Data-driven approaches to support fault detection and diagnosis, and other similar problems, can produce accurate results comparable to what the engineers can achieve, in a fraction of the time. However, the majority of these data-driven techniques are black-box techniques and lack the explainability that is often necessary for explaining decisions about critical assets in the power generation industry. Knowledge-based systems, such as rule-based expert systems, have been shown to provide not only accurate decisions but also the explanation and reasoning behind those decisions in related applications. However, there is a significant time cost associated with the development of knowledge-based systems, in particular with the knowledge elicitation process in which the domain expert’s knowledge is formalised and encoded into the system. This challenge is commonly referred to as the knowledge elicitation bottleneck. In this paper, we present a novel approach to performing knowledge elicitation using a set of symbolic primitives (rise, fall, fluctuate, and stable) to parameterise typical time-series condition monitoring data. The knowledge is represented using a common language that can easily be communicated with (and from) the domain experts. This allows not only the quick and accurate elicitation of the domain experts’ knowledge, but also the formalisation and implementation of that knowledge into a rapidly produced diagnostic system. Further, because the knowledge is parameterised, it is possible to iteratively improve the knowledge base by updating these parameters based on new unseen data. This approach was applied to the Tennessee Eastman dataset, which is simulated data of a real-world industrial process. It was found that this approach made it possible to accurately and quickly capture the knowledge required to detect several faults within the case study dataset, while also providing fully explained reasons why each fault was detected by relating the explanations to the symbolic primitives previously defined.
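
    A sketch of a parameterised signal-to-symbol step built around the four primitives named above (rise, fall, fluctuate, stable); the slope and variability thresholds illustrate the kind of parameters that could be re-tuned on new data, and their values here are assumptions:

```python
# Sketch of a parameterised signal-to-symbol transformation using the four
# primitives the paper names; threshold values are illustrative assumptions.
import numpy as np

def to_symbol(window, slope_threshold=0.01, noise_threshold=0.05):
    """Map one time-series window to 'rise', 'fall', 'fluctuate' or 'stable'."""
    t = np.arange(len(window))
    slope, intercept = np.polyfit(t, window, 1)        # linear trend of the window
    residual_std = np.std(window - (slope * t + intercept))
    if residual_std > noise_threshold:
        return "fluctuate"
    if slope > slope_threshold:
        return "rise"
    if slope < -slope_threshold:
        return "fall"
    return "stable"

# Example: symbolise ten consecutive windows of a slowly varying signal.
symbols = [to_symbol(w) for w in np.array_split(np.sin(np.linspace(0, 3, 300)), 10)]
```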

    Managing remote online partial discharge data

    The volume of data produced by existing partial discharge monitoring systems is often too large for engineers to examine in detail, leading to data being ignored and useful indicators of health being missed. The case study reported in this paper recorded 21,839 events around an HVDC reactor over a six-day period. We estimate that it takes 1 min to check whether an event requires detailed study, leading to over two man-months of effort to locate important events in a dataset of this size. Additionally, online monitoring data are stored on site and may require an engineer's visit for collection. This paper presents an approach to remote partial discharge monitoring, supported by automated data interpretation and prioritization, which enables engineers to remotely find and download important data. Results from the case study are used to illustrate these concepts.
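
    A back-of-envelope check of the effort figure quoted above, together with a toy prioritisation score; the event fields and scoring rule are hypothetical:

```python
# Rough check of the manual-effort estimate, plus an illustrative priority score.
events, minutes_per_event, working_hours_per_month = 21_839, 1, 160
effort_months = events * minutes_per_event / 60 / working_hours_per_month
print(f"{effort_months:.1f} person-months of manual review")   # ~2.3

def priority(event):
    """Rank an event for engineer review; the fields used here are hypothetical."""
    return event["peak_magnitude_pC"] * event["pulses_per_cycle"]
```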