The Role of Futureproofing in the Management of Infrastructural Assets
Ensuring long-term value from infrastructure is essential for a sustainable economy. In this context, futureproofing
involves addressing two broad issues:
i. Ensuring that infrastructure is resilient to unexpected or uncontrollable events, e.g. extreme weather events; and
ii. Ensuring that infrastructure can adapt to required future changes in its structure and/or operations, e.g. expansion of capacity or changes in usage mode or volumes.
Increasingly, in their respective roles, infrastructure designers/builders and owners/operators are being required to develop
strategies for futureproofing as part of the life cycle planning for key assets and systems that make up infrastructure.
In this paper, we report on a preliminary set of studies exploring the following questions related to infrastructure and infrastructure systems:
• What is meant by the futureproofing of infrastructural assets?
• Why and when should critical infrastructure be futureproofed?
• How can infrastructure assets and systems be prepared for uncertain futures?
• How can futureproofing be incorporated into asset management practice?
To address these questions, the Cambridge Centre for Smart Infrastructure and Construction
(CSIC) has conducted two industrial workshops bringing together leading practitioners in the UK infrastructure
and construction sectors, along with government policy makers. This paper provides an initial summary of the
findings from the workshops (part presentation, part working sessions), and proposes a simple framework for linking
futureproofing into broader asset management considerations.
To begin, an overview of futureproofing is given and the need for futureproofing infrastructure assets is motivated.
Following this, an approach to futureproofing infrastructure portfolios that organisations in the infrastructure sector can use is presented. Key barriers to futureproofing are also presented, before the ISO 55001 asset management standard is examined to highlight the interplay between futureproofing and infrastructural asset management. Finally, different ways by which an effective futureproofing strategy can enhance the value of infrastructure are examined.
A Markovian model for power transformer maintenance
The condition of the insulation paper is one of the key determinants of the lifetime of a power transformer. The winding
insulation paper may deteriorate rapidly and result in the unexpected failure of power transformers, especially in the presence of high moisture, oxygen, and metal contaminants. Such scenarios can be prevented if the deterioration is detected in time. Various condition monitoring techniques, such as dissolved gas analysis and frequency response analysis, have been developed to track transformer condition. These techniques are non-intrusive and provide early warning of accelerated chemical and mechanical deterioration. However, their accuracy is imperfect, which means periodic
inspection is still indispensable. In this paper, we discuss the value of continuous condition monitoring for power transformers
and present a way to estimate this value. Towards this, a continuous-time Markov decision model is presented to optimize
periodic inspections, so that the cost is minimized and the availability is maximized. We also analyze the performance based
on information from both discrete inspection and continuous condition monitoring. The results show that dissolved gas analysis can improve both availability and operating cost, while frequency response analysis improves only the availability of power transformers.
Multi-agent system architectures for collaborative prognostics
This paper provides a methodology to assess the optimal multi-agent architecture for collaborative prognostics in modern fleets of assets. The use of multi-agent systems has been shown to improve the ability to predict equipment failures by enabling machines with communication and collaborative learning capabilities. Different architectures have been postulated for industrial multi-agent systems in general. A rigorous analysis of the implications of their implementation for collaborative prognostics is essential to guide industrial deployment. In this paper, we investigate the cost and reliability implications of using different multi-agent system architectures for collaborative failure prediction and maintenance optimization in large fleets of industrial assets. Results show that purely distributed architectures are optimal for high-value assets, while hierarchical architectures optimize communication costs for low-value assets. This enables asset managers to design and implement multi-agent systems for predictive maintenance that significantly decrease the whole-life cost of their assets.
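The architectural trade-off reported above can be illustrated with a back-of-envelope model: peer-to-peer messaging grows roughly quadratically with fleet size while a hierarchy grows linearly, and the value of collaboration scales with asset value. The cost function and every figure below are hypothetical, not the paper's.

```python
# Hypothetical cost model: peer-to-peer collaboration needs O(N^2) messages,
# a hierarchy routes via mediators in O(N); collaboration saves a fraction
# of asset value through avoided failures.

def total_cost(n_assets, asset_value, arch, c_msg=1.0, benefit=0.05):
    """Net whole-life cost: communication cost minus prognosis savings."""
    if arch == "distributed":
        comm = c_msg * n_assets * (n_assets - 1)         # all-to-all messaging
        saving = benefit * asset_value * n_assets        # full collaboration
    else:                                                # hierarchical
        comm = c_msg * 2 * n_assets                      # up/down the tree
        saving = 0.8 * benefit * asset_value * n_assets  # slightly degraded
    return comm - saving

for value in (1e2, 1e5):                       # low- vs high-value fleet of 100
    d = total_cost(100, value, "distributed")
    h = total_cost(100, value, "hierarchical")
    best = "distributed" if d < h else "hierarchical"
    print(f"asset value {value:>8.0f}: best architecture = {best}")
```

Under these toy numbers the hierarchy wins for the low-value fleet (messaging dominates) and the distributed architecture wins for the high-value fleet, matching the qualitative result in the abstract.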
A risk based model for quantifying the impact of information quality
Information quality is one of the key determinants of information system success. When information quality is poor, it can create a variety of risks in an organization. To manage resources for information quality improvement effectively, it is necessary to understand where, how, and how much information quality impacts an organization's ability to deliver its objectives. Existing approaches have mostly focused on the measurement of information quality, but not adequately on the impact that poor information quality causes. This paper presents a model that uses a risk-based approach to quantify the business impact arising from poor information quality in an organization. It hence addresses the inherent uncertainty in the relationship between information quality and organizational impact. The model can help information managers obtain quantitative figures that can be used to build reliable and convincing business cases for information quality improvement.
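A minimal sketch of what such a risk-based quantification might look like, with entirely hypothetical processes and figures: expected annual loss is summed over business processes as failure probability (a function of the information error rate) times the cost of a failure.

```python
# Hypothetical business processes: (name, baseline failure probability,
# extra failure probability per unit of information error rate, failure cost).
processes = [
    ("work-order dispatch",  0.010, 0.20,  5_000),
    ("spare-part ordering",  0.020, 0.10,  2_000),
    ("regulatory reporting", 0.001, 0.05, 50_000),
]

def expected_impact(iq_error_rate):
    """Expected annual loss for a given information error rate in [0, 1]."""
    total = 0.0
    for _name, p0, slope, cost in processes:
        p_fail = min(1.0, p0 + slope * iq_error_rate)  # risk rises with errors
        total += p_fail * cost                         # probability x impact
    return total

current, improved = expected_impact(0.15), expected_impact(0.05)
print(f"expected annual loss now: {current:,.0f}")
print(f"after IQ improvement:     {improved:,.0f}")
print(f"business-case headroom:   {current - improved:,.0f}")
```

The difference between the two figures is the kind of quantitative business case for information quality improvement that the abstract describes.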
Prioritization of responsive maintenance tasks via machine learning-based inference
Maintenance task prioritization is essential for allocating resources; it is estimated that almost one third of maintenance cost is wasted on unnecessary activities. Task prioritization is based on a risk assessment that takes into account the probability of failure and the criticality of an asset. The criticality analysis is defined by the asset owner based on several parameters, among them safety, downtime cost, and productivity, whilst the probability of failure is determined from deterioration models, regular manual inspections, or installed sensors. Currently, the latter is an extremely complicated and labour-intensive procedure when multiple, different types of assets need to be managed. This paper proposes a method that exploits advances in mobile communications, social networking, the Internet of Things, and machine learning to address this shortcoming. The approach brings building elements and assets online using asset tags, each linked to an online ‘asset profile’. Users of assets can scan these tags with a mobile phone app not only to see information about those assets, but also to enter ‘comments’ describing issues and problems on the profiles. These comments are processed through machine learning-based inference methods to estimate the probability that a failure has occurred. The paper validates the proposed method using historical data collected from Estate Management at the University of Cambridge.
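As an illustration of the inference step, the sketch below trains a toy text classifier (TF-IDF plus logistic regression, which is an assumed method, not necessarily the paper's) on invented comments, and reads the predicted class probability as an estimate that a failure has occurred.

```python
# Toy sketch: classify free-text user comments into "fault reported" vs not.
# The training comments and labels here are invented for illustration.
from sklearn.feature_extraction.text import TfidfVectorizer
from sklearn.linear_model import LogisticRegression
from sklearn.pipeline import make_pipeline

comments = [
    "radiator leaking water onto the floor", "room too cold again",
    "light flickering constantly", "door handle loose",
    "everything fine after the service", "no issues to report",
    "works as expected", "clean and tidy, nothing wrong",
]
labels = [1, 1, 1, 1, 0, 0, 0, 0]   # 1 = comment describes a fault

model = make_pipeline(TfidfVectorizer(), LogisticRegression())
model.fit(comments, labels)

new_comment = ["heating not working and there is a strange noise"]
p_failure = model.predict_proba(new_comment)[0][1]
print(f"estimated probability a failure has occurred: {p_failure:.2f}")
```

In a real deployment this probability would feed the risk assessment alongside the owner-defined criticality, so that tasks raised from comments can be ranked.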
A value-based approach to optimizing long-term maintenance plans for a multi-asset k-out-of-N system
Devising a long-term maintenance plan for a system of large infrastructure assets is an exacting task. Any maintenance activity that induces system downtime can incur a massive production or service loss. This problem becomes increasingly challenging for a system whose performance is based on the collective output of its assets. Current approaches that optimise each asset in isolation, or that consider a binary performance relationship, insufficiently address this issue because neglecting performance interactions among assets results in inaccurate cost estimation. To overcome these hurdles, we formulate a mathematical model that explicitly captures the dynamic risk of production loss as a function of the system's aggregate output. Further, we propose an integrated solution method that couples a finite loop search with a genetic algorithm. Application of our model to a real-world case study shows that it strikes a balance between cost and risk. Validated by Monte Carlo simulation, the proposed model outperforms existing approaches. By systematically scheduling maintenance actions over the planning horizon, the resultant strategy offers considerable maintenance cost savings and significantly prolongs average asset life. Sensitivity analyses also demonstrate the robustness of the proposed model under volatility in key parameters.
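The k-out-of-N performance interaction at the heart of the model can be shown with a short calculation: the probability that the system meets demand depends on how many assets are simultaneously withdrawn for maintenance, which per-asset optimization cannot see. All figures below are illustrative only.

```python
# Risk of production loss for a k-out-of-N system as assets are withdrawn
# for maintenance, assuming independent assets with availability p_up.
from math import comb

def p_meet_demand(n_in_service, k, p_up=0.95):
    """P(at least k of the in-service assets are running at once)."""
    return sum(comb(n_in_service, j) * p_up**j * (1 - p_up)**(n_in_service - j)
               for j in range(k, n_in_service + 1))

N, k, loss_per_hour = 10, 8, 20_000          # an 8-out-of-10 system
for in_maintenance in (0, 1, 2):
    in_service = N - in_maintenance
    risk = (1 - p_meet_demand(in_service, k)) * loss_per_hour
    print(f"{in_maintenance} asset(s) withdrawn: "
          f"expected production loss = {risk:,.0f} per hour")
```

The expected loss grows sharply with each additional asset withdrawn, which is exactly the dynamic risk a joint schedule (here optimized in the paper via a genetic algorithm) must trade off against maintenance cost.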
Comparison of Agent Deployment Strategies for Collaborative Prognosis
Collaborative prognosis is a technique that enables industrial assets to learn from similar assets in a fleet and improve their data-driven prognosis models. When collaborative prognosis is implemented in a computationally distributed framework, each asset is monitored by its corresponding Digital Twin agent. Distributed collaborative prognosis is particularly beneficial for high-value assets, where communication and processing costs are negligible compared to maintenance costs. This paper analyses the effects of Digital Twin deployment strategies on the effectiveness of predictive maintenance activities relying on distributed collaborative prognosis. Distributed and heterarchical multi-agent system architectures are analysed for large fleets of assets, with varying failure rates and noise levels in the failure data. The results show that no single architecture or deployment strategy can be deemed best across all failure rates and noise levels. The conclusions derived in this paper provide guidance to asset owners in choosing the most suitable combination for a given application.
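One intuition behind "no single strategy is best" can be sketched with a toy Monte Carlo (invented numbers, not the paper's experiment): at low sensor noise each asset is better off trusting its own history, while at high noise pooling data across a heterogeneous fleet wins despite the bias it introduces.

```python
# Bias-variance toy model: per-asset estimation vs fleet-pooled estimation
# of time-to-failure, swept over sensor noise. All parameters are invented.
import numpy as np

rng = np.random.default_rng(1)

def rmse(noise, spread, n_assets=50, n_obs=5, trials=200):
    err_own, err_pool = [], []
    for _ in range(trials):
        mu = rng.normal(1000, spread, n_assets)          # true failure times
        obs = mu[:, None] + rng.normal(0, noise, (n_assets, n_obs))
        own = obs.mean(axis=1)                           # per-asset estimate
        pool = np.full(n_assets, obs.mean())             # fleet-pooled estimate
        err_own.append(np.mean((own - mu) ** 2))
        err_pool.append(np.mean((pool - mu) ** 2))
    return np.sqrt(np.mean(err_own)), np.sqrt(np.mean(err_pool))

for noise in (10, 200):
    own, pool = rmse(noise, spread=50)
    best = "per-asset (distributed)" if own < pool else "pooled (heterarchical)"
    print(f"sensor noise {noise:3d}: own={own:6.1f}, pooled={pool:6.1f} -> {best}")
```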
An Industrial Multi Agent System for real-time distributed collaborative prognostics
Despite increasing interest, real-time prognostics (failure prediction) is still not widespread in industry due to the difficulties existing systems have in adapting to the dynamic and heterogeneous properties of real asset fleets. To address this, we present an Industrial Multi-Agent System for real-time distributed collaborative prognostics. Our system fulfils all six core properties of Advanced Multi-Agent Systems: Distribution, Flexibility, Adaptability, Scalability, Leanness, and Resilience. Experimental examples of each are provided for the case of prognostics using the C-MAPSS engine degradation data set and data from a fleet of industrial gas turbines. Prognostics are performed using the Weibull Time-To-Event Recurrent Neural Network (WTTE-RNN) algorithm. Collaboration is achieved by sharing information between agents in the system. We conclude that distributed collaborative prognostics is especially pertinent for systems with sensor faults, limited computing capabilities, or significant fleet heterogeneity.
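The WTTE-RNN algorithm named above trains a network to output Weibull parameters and maximises a censored time-to-event likelihood. A minimal numpy version of that likelihood is sketched below, for the continuous-Weibull case and with the RNN itself omitted; the example times and parameters are invented.

```python
import numpy as np

def weibull_loglik(t, uncensored, alpha, beta):
    """Log-likelihood of time-to-event t under Weibull(alpha, beta).

    uncensored == 1: the failure was observed at t  (use the density).
    uncensored == 0: the asset survived past t      (use the survival fn).
    """
    hazard = (t / alpha) ** beta                       # cumulative hazard
    log_pdf_part = np.log(beta / alpha) + (beta - 1) * np.log(t / alpha)
    return uncensored * log_pdf_part - hazard

# One observed failure at t=120 and one asset still alive at t=150.
t = np.array([120.0, 150.0])
u = np.array([1.0, 0.0])
print(weibull_loglik(t, u, alpha=130.0, beta=2.0).sum())
```

In the full algorithm a recurrent network maps each asset's sensor history to (alpha, beta), and this log-likelihood (negated) is the training loss.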
Towards Dynamic Criticality-Based Maintenance Strategy for Industrial Assets
An asset’s risk is a useful indicator for determining the optimal time of repair or replacement so as to minimise the operational cost of maintenance. For a successful asset management practice, asset-intensive organisations must understand the risk profile associated with their asset portfolio and how it will change over time. Unfortunately, in many risk-based asset management approaches, the only element of an asset's risk profile that is allowed to change is the likelihood (or probability) of failure. The criticality (or consequence of failure) of the asset is assumed to be fixed and is treated as a more or less static quantity that is not updated with sufficient frequency as the operating environment changes. This paper proposes a dynamic criticality-based maintenance approach in which asset criticality is modelled as a dynamic quantity, and changes in an asset's criticality are used to optimize maintenance plans (e.g. determining the optimal repair time or replacement age for an asset over its life cycle) for better risk management and cost savings. An illustrative example demonstrates the effect of implementing dynamic criticality in determining the optimal time of repair for a bridge infrastructure. It is shown that capturing changes in the criticality of the bridge over time, and using this understanding in the risk analysis of the bridge, provides the opportunity for better maintenance planning, resulting in a reduction of the total risk.
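A minimal sketch of the idea, with invented numbers: let risk(t) be the product of a growing failure probability and a criticality that itself evolves (e.g. traffic over the bridge grows), and trigger repair when risk crosses a tolerable threshold. Treating criticality as dynamic brings the repair date forward.

```python
# Compare a static-criticality risk profile with a dynamic one.
# Deterioration curve, growth rate, and threshold are all hypothetical.
import numpy as np

years = np.arange(0, 31)
p_fail = 1 - np.exp(-(years / 40.0) ** 2.5)           # growing failure prob.
crit_static = np.full(len(years), 1e6)                 # fixed consequence
crit_dynamic = 1e6 * 1.04 ** years                     # +4%/yr traffic growth

threshold = 2e5                                        # tolerable risk level
for name, crit in (("static", crit_static), ("dynamic", crit_dynamic)):
    risk = p_fail * crit                               # risk = prob x consequence
    t_repair = years[np.argmax(risk > threshold)]      # first year over threshold
    print(f"{name:7s} criticality: repair due at year {t_repair}")
```

Under these toy figures the dynamic profile triggers repair several years earlier than the static one, which is the planning difference the paper's bridge example illustrates.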
Federated Data Modeling for Built Environment Digital Twins
The digital twin (DT) approach is an enabler for data-driven decision making in architecture, engineering, construction, and operations. Various open data models that can potentially support DT developments, at different scales and in different application domains, can be found in the literature. However, many implementations are based on organization-specific information management processes and proprietary data models, hindering interoperability. This article presents the process and information management approaches developed to generate a federated open data model supporting DT applications. Business process modeling notation and transaction and interaction modeling techniques are applied to formalize the federated DT data modeling framework, which is organized in three main phases: requirements definition; federation; and validation and improvement. The proposed framework is developed following cross-disciplinary and multiscale principles. It is validated through the development of the federated building-level DT data model for the West Cambridge Campus DT research facility. The federated data model is used to enable DT-based asset management applications at the building and built environment levels.
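As a purely hypothetical illustration of federation by linking (the schema below is invented, not the paper's data model): each discipline keeps its own record type, and the federated layer joins them through shared asset identifiers rather than one merged proprietary schema.

```python
# Hypothetical minimal sketch of federating discipline-specific data models
# through a shared identifier, instead of merging them into one schema.
from dataclasses import dataclass

@dataclass
class BIMElement:            # design/construction view
    guid: str
    ifc_type: str

@dataclass
class SensorStream:          # operations view
    guid: str                # the same identifier links the views
    channel: str
    unit: str

@dataclass
class MaintenanceRecord:     # asset-management view
    guid: str
    action: str
    date: str

def federate(guid, *sources):
    """Join all records about one asset across the discipline models."""
    return [r for src in sources for r in src if r.guid == guid]

bim = [BIMElement("pump-01", "IfcPump")]
sensors = [SensorStream("pump-01", "vibration", "mm/s")]
logs = [MaintenanceRecord("pump-01", "bearing replaced", "2021-06-14")]
print(federate("pump-01", bim, sensors, logs))
```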