232 research outputs found

    Detection and Localisation of Pipe Bursts in a District Metered Area Using an Online Hydraulic Model

    This thesis presents the development of a new methodology for near-real-time detection and localisation of pipe bursts in a Water Distribution System (WDS) at the District Metered Area (DMA) level. The methodology makes use of an online hydraulic model coupled with a demand forecasting methodology and several statistical techniques to process the hydraulic meter data (i.e., flows and pressures) coming from the field at regular time intervals (i.e., every 15 minutes). Once the detection part of the methodology identifies a potential burst occurrence in a system, it raises an alarm. This is followed by the application of the burst localisation methodology to approximately locate the event within the DMA. The online hydraulic model is based on a data assimilation methodology coupled with a short-term Water Demand Forecasting Model (WDFM) based on Multi-Linear Regression. Three data assimilation methods were tested in the thesis, namely the iterative Kalman Filter method, the Ensemble Kalman Filter method and the Particle Filter method. The iterative Kalman Filter (i-KF) method was eventually chosen for the online hydraulic model based on the best overall trade-off between water system state prediction accuracy and computational efficiency. The online hydraulic model created this way was coupled with the Statistical Process Control (SPC) technique and a newly developed burst detection metric based on the moving average of residuals between the predicted and observed hydraulic states (flows/pressures). Two new SPC-based charts, with an associated generic set of control rules for analysing burst detection metric values over consecutive time steps, were introduced to raise burst alarms in a reliable and timely fashion. The SPC rules and relevant thresholds were determined offline by performing appropriate statistical analysis of residuals. The above was followed by the development of the new methodology for online burst localisation. The methodology integrates the information on burst detection metric values obtained during the detection stage with a new sensitivity matrix developed offline and hydraulic model runs used to simulate potential bursts, in order to identify the most likely burst location in the pipe network. A new algorithm for estimating the 'normal' DMA demand and the burst flow during the burst period was developed and used for localisation. A new algorithm for statistical analysis of flow and pressure data was also developed and used to determine the approximate burst area by producing a list of the top ten suspected burst location nodes. The above novel methodologies for burst detection and localisation were applied to two real-life District Metered Areas in the United Kingdom (UK) with artificially generated flow and pressure observations and assumed bursts. The results obtained this way show that the developed methodology detects pipe bursts in a reliable and timely fashion, provides a good estimate of the burst flow and approximately locates the burst within a DMA. In addition, the results show the potential of the methodology described here for online burst detection and localisation in assisting Water Companies (WCs) to conserve water, save energy and money. It can also enhance the UK WCs' profile and customer satisfaction, improve operational efficiency and improve OFWAT's Service Incentive Mechanism (SIM) scores. This STREAM project is funded by the Engineering and Physical Sciences Research Council and the Industrial Collaborator, United Utilities.
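The burst detection metric and SPC control rules are specified in detail in the thesis; purely as an illustration of the general idea, a minimal sketch of a moving-average-of-residuals metric checked against a simple control limit might look as follows (the window length, threshold multiplier and run length are illustrative assumptions, not the thesis values):

```python
import numpy as np

def burst_detection_metric(observed, predicted, window=8):
    """Moving average of residuals between observed and model-predicted flows.
    The window (number of 15-minute steps) is an illustrative choice."""
    residuals = np.asarray(observed, float) - np.asarray(predicted, float)
    kernel = np.ones(window) / window
    return np.convolve(residuals, kernel, mode="valid")

def spc_alarm(metric, centre, sigma, k=3.0, run_length=3):
    """Raise an alarm when the metric stays above centre + k*sigma for several
    consecutive time steps (a simplified stand-in for the thesis's SPC charts
    and control rules)."""
    above = metric > centre + k * sigma
    run = 0
    for t, flag in enumerate(above):
        run = run + 1 if flag else 0
        if run >= run_length:
            return t  # time-step index at which the alarm is raised
    return None
```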

    A framework for automation of data recording, modelling, and optimal statistical control of production lines

    Unarguably, the automation of data collection and subsequent statistical treatment enhances the quality of industrial management systems. The rise of accessible digital technologies has enabled the introduction of the Industry 4.0 pillars in local companies of the Cariri region. In particular, such practice contributes positively to the triple bottom line of sustainable development: People, Environment, and Economy. The present work aims to provide a general automated framework for data recording and statistical control of conveyor belts in production lines. The software has been developed in four layers: a graphical user interface, in PHP; database collection, search, and safeguarding, in MySQL; computational statistics, in R; and hardware control, in C. The computational statistics are based on the combination of artificial neural networks and autoregressive integrated moving average (ARIMA) models via a minimum-variance method. The hardware components are composed of open-source hardware such as Arduino-based boards and modular or industrial sensors. Specifically, the embedded system is designed to constantly monitor and record a number of measurable characteristics of the conveyor belts (e.g. electric consumption and temperature) via a number of sensors, allowing both the computation of statistical control metrics and the evaluation of the quality of the production system. As a case study, the project makes use of a laminated limestone production line located at the Mineral Technology Center, Nova Olinda, Ceará state, Brazil.
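The abstract does not spell out the minimum-variance combination of the neural-network and ARIMA forecasts; a common Bates-Granger-style sketch, assuming unbiased forecasts with uncorrelated errors, would be:

```python
import numpy as np

def min_variance_weight(errors_nn, errors_arima):
    """Minimum-variance weight for the neural-net forecast, assuming unbiased
    forecasts with uncorrelated errors (an illustrative simplification of the
    paper's minimum-variance method)."""
    var_nn = np.var(errors_nn)
    var_arima = np.var(errors_arima)
    return var_arima / (var_nn + var_arima)

def combine(forecast_nn, forecast_arima, w_nn):
    """Convex combination of the two forecasts using the weight above."""
    return w_nn * forecast_nn + (1.0 - w_nn) * forecast_arima
```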

    Improvement of the demand forecasting methods for vehicle parts at an international automotive company.

    This study aims to improve the forecasting accuracy for the monthly material flows of an area-forwarding-based inbound logistics network for an international automotive company. Due to human errors, short-term changes in material requirements or database desynchronisation, the Material Requirements Planning (MRP) cannot be directly derived from the Master Production Schedule (MPS). Therefore, the inbound logistics flows are forecast. The current research extends the scope of forecasting methods already applied by the company, namely Naïve, ARIMA, Neural Networks, Exponential Smoothing and Ensemble Forecast (an average of the first four methods), by implementing three new algorithms: the Prophet algorithm, Vector Autoregression (multivariate time series) and an Automated Simple Moving Average, and two new data cleaning methods: Automated Outlier Detection and Linear Interpolation. All the methods are structured in software using the programming language R. The results show that, as of April 2018, 80.1% of all material flows have a Mean Absolute Percentage Error (MAPE) of less than or equal to 20%, compared with the 58.6% of all material flows which had the same behavior in the original software in February 2018. Furthermore, the three new algorithms now account for 29% of all forecasts. All the analyses in this research were carried out with actual data from the company, and the upgraded software was approved by the logistics analysts for all future material flow forecasts.
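The 20% acceptance threshold is expressed in terms of MAPE, and the Ensemble Forecast is described as the average of the first four methods; a minimal sketch of both computations (array names and shapes are assumptions) is:

```python
import numpy as np

def mape(actual, forecast):
    """Mean Absolute Percentage Error, in percent (actuals assumed non-zero)."""
    actual = np.asarray(actual, float)
    forecast = np.asarray(forecast, float)
    return 100.0 * np.mean(np.abs((actual - forecast) / actual))

def ensemble_forecast(*forecasts):
    """Simple ensemble: element-wise average of the individual forecasts
    (e.g. Naive, ARIMA, Neural Network and Exponential Smoothing)."""
    return np.mean(np.vstack(forecasts), axis=0)
```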

    Integrating Multiobjective Optimization With The Six Sigma Methodology For Online Process Control

    Over the past two decades, the Define-Measure-Analyze-Improve-Control (DMAIC) framework of the Six Sigma methodology and a host of statistical tools have been brought to bear on process improvement efforts in today's businesses. However, a major challenge of implementing the Six Sigma methodology is maintaining the process improvements and providing real-time performance feedback and control after solutions are implemented, especially in the presence of multiple process performance objectives. The consideration of a multiplicity of objectives in business and process improvement is commonplace and, quite frankly, necessary. However, balancing the collection of objectives is challenging, as the objectives are inextricably linked and oftentimes in conflict. Previous studies have reported varied success in enhancing the Six Sigma methodology by integrating optimization methods in order to reduce variability. These studies focus such enhancements primarily within the Improve phase of the Six Sigma methodology, optimizing a single objective. The current research and practice of using the Six Sigma methodology and optimization methods do little to address real-time feedback and control for online process control in the case of multiple objectives. This research proposes an innovative integrated Six Sigma multiobjective optimization (SSMO) approach for online process control. It integrates the Six Sigma DMAIC framework with a nature-inspired optimization procedure that iteratively perturbs a set of decision variables providing feedback to the online process, eventually converging to a set of tradeoff process configurations that improves and maintains process stability. For proof of concept, the approach is applied to a general business process model (a well-known inventory management model) that is formally defined and specifies various process costs as objective functions. The proposed SSMO approach and the business process model are programmed and incorporated into a software platform. Computational experiments are performed using both three sigma (3σ)-based and six sigma (6σ)-based process control, and the results reveal that the proposed SSMO approach performs far better than the traditional approaches in improving the stability of the process. This research investigation shows that the benefits of enhancing the Six Sigma method for multiobjective optimization and for online process control are immense
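The "set of tradeoff process configurations" that the SSMO approach converges to is, in effect, a non-dominated (Pareto) set over the competing objectives. A generic non-dominated filter, which is not the SSMO algorithm itself and whose configuration structure is an illustrative assumption, can be sketched as:

```python
def dominates(a, b):
    """True if cost vector a Pareto-dominates b (all objectives minimised)."""
    return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))

def pareto_front(configurations):
    """Keep only non-dominated (tradeoff) configurations.
    Each configuration is a (decision_variables, cost_vector) pair."""
    return [
        (vars_i, costs_i)
        for i, (vars_i, costs_i) in enumerate(configurations)
        if not any(dominates(costs_j, costs_i)
                   for j, (_, costs_j) in enumerate(configurations) if j != i)
    ]
```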

    Temporally adaptive monitoring procedures with applications in enterprise cyber-security

    Due to the perpetual threat of cyber-attacks, enterprises must employ and develop new methods of detection as attack vectors evolve and advance. Enterprise computer networks produce a large volume and variety of data, including univariate data streams, time series and network graph streams. Motivated by cyber-security, this thesis develops adaptive monitoring tools for univariate and network graph data streams; they are not, however, limited to this domain. In all domains, real data streams present several challenges for monitoring, including trend, periodicity and change points. Streams often also have high volume and frequency. To deal with the non-stationarity in the data, the methods applied must be adaptive. Adaptability in the procedures proposed throughout the thesis is introduced using forgetting factors, weighting the data according to recency. Secondly, the methods applied must be computationally fast, with a small or fixed computational burden and fixed storage requirements for timely processing. Throughout this thesis, sequential or sliding window approaches are employed to achieve this. The first part of the thesis is centred around univariate monitoring procedures. A sequential adaptive parameter estimator is proposed using a Bayesian framework. This procedure is then extended for multiple change point detection, where, unlike existing change point procedures, the proposed method is capable of detecting abrupt changes in the presence of trend. We additionally present a time series model which combines short-term and long-term behaviours of a series for improved anomaly detection. Unlike existing methods, which primarily focus on point anomaly detection (extreme outliers), our method is also capable of detecting contextual anomalies, where the data deviate from persistent patterns of the series such as seasonality. Finally, a novel multi-type relational clustering methodology is proposed. As multiple relations exist between the different entities within a network (computers, users and ports), multiple network graphs can be generated. We propose simultaneously clustering over all graphs to produce a single clustering for each entity using Non-Negative Matrix Tri-Factorisation. Through simplifications, the proposed procedure is fast and scalable to large network graphs. Additionally, this methodology is extended for graph streams. This thesis provides an assortment of tools for enterprise network monitoring, with a focus on adaptability and scalability, making them suitable for intrusion detection and situational awareness.
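The thesis's Bayesian formulation and change-point logic are more involved, but the basic forgetting-factor mechanism, down-weighting older observations so that estimates track non-stationary streams with fixed storage, can be sketched as follows (the decay value is an illustrative assumption):

```python
class ForgettingFactorMean:
    """Exponentially weighted (forgetting-factor) estimate of a stream's mean
    and variance; lam close to 1 forgets slowly, smaller values adapt faster."""

    def __init__(self, lam=0.98):
        self.lam = lam
        self.w = 0.0      # effective number of observations
        self.mean = 0.0
        self.s = 0.0      # decayed sum of squared deviations

    def update(self, x):
        # Decay the effective sample size, then fold in the new observation.
        self.w = self.lam * self.w + 1.0
        delta = x - self.mean
        self.mean += delta / self.w
        self.s = self.lam * self.s + delta * (x - self.mean)
        return self.mean

    @property
    def variance(self):
        return self.s / self.w if self.w > 1.0 else 0.0
```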

    A Survey on Concept Drift Adaptation

    Concept drift primarily refers to an online supervised learning scenario in which the relation between the input data and the target variable changes over time. Assuming a general knowledge of supervised learning, in this paper we characterize the adaptive learning process, categorize existing strategies for handling concept drift, discuss the most representative, distinct and popular techniques and algorithms, discuss the evaluation methodology of adaptive algorithms, and present a set of illustrative applications. This introduction to concept drift adaptation presents state-of-the-art techniques and a collection of benchmarks for researchers, industry analysts and practitioners. The survey aims at covering the different facets of concept drift in an integrated way to reflect on the existing, scattered state of the art.

    A study of comparative forecasting.

    During the past two decades, there has been an increasing number of comparative forecasting studies. The objective of these studies is to compare different forecasting methodologies with the hope of finding the best one. These studies have led to conflicting reports and controversies. This dissertation examines almost all published comparative studies and delineates a list of fallacies occurring in comparative forecasting studies; these fallacies most commonly give rise to the existing controversies. Since the controversies in forecasting stem from comparisons of the various approaches, a brief synopsis of the most commonly employed univariate and multivariate methodologies is presented.

    The econometrics of structural change: statistical analysis and forecasting in the context of the South African economy

    One of the assumptions of conventional regression analysis is that the parameters are constant over all observations. It has often been suggested that this may not be a valid assumption to make, particularly if the econometric model is to be used for economic forecasting. Apart from this, it is also found that econometric models, in particular, are used to investigate the underlying interrelationships of the system under consideration in order to understand and explain relevant phenomena in structural analysis. The prerequisite of such use of econometrics is that the regression parameters of the model are assumed to be constant over time or across different cross-sectional units.
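One informal way to probe the constant-parameter assumption discussed above is to refit the regression over a sliding window and inspect how the coefficients evolve; a minimal sketch, in which the window length and data layout are assumptions, follows:

```python
import numpy as np

def rolling_ols_coefficients(X, y, window=40):
    """Ordinary least squares refitted over a sliding window of observations
    (rows of X are observations). Substantial drift in the coefficient paths
    is informal evidence against parameter constancy."""
    coefs = []
    for start in range(len(y) - window + 1):
        Xw, yw = X[start:start + window], y[start:start + window]
        beta, *_ = np.linalg.lstsq(Xw, yw, rcond=None)
        coefs.append(beta)
    return np.array(coefs)  # one coefficient vector per window
```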

    Statistical Analysis and Forecasting of Economic Structural Change

    In 1984, the University of Bonn (FRG) and IIASA created a joint research group to analyze the relationship between economic growth and structural change. The research team was to examine the commodity composition as well as the size and direction of commodity and credit flows among countries and regions. Krelle (1988) reports on the results of this "Bonn-IIASA" research project. At the same time, an informal IIASA Working Group was initiated to deal with problems of the statistical analysis of economic data in the context of structural change: What tools do we have to identify nonconstancy of model parameters? What type of models are particularly applicable to nonconstant structure? How is forecasting affected by the presence of nonconstant structure? What problems should be anticipated in applying these tools and models? Some 50 experts, mainly statisticians or econometricians from about 15 countries, came together in Lodz, Poland (May 1985); Berlin, GDR (June 1986); and Sulejov, Poland (September 1986) to present and discuss their findings. This volume contains a selected set of those conference contributions as well as several specially invited chapters. The introductory chapter "What can statistics contribute to the analysis of economic structural change?", discusses not only the role of statistics in the detection and assimilation of structural changes, but also the relevance of respective methods in the evaluation of econometric models. Trends in the development of these methods are indicated, and the contributions to the present volume are put into a broader context of empirical economics to help to bridge the gap between economists and statisticians. The chapters in the first section are concerned with the detection of parameter nonconstancy. The procedures discussed range from classical methods, such as the CUSUM test, to new concepts, particularly those based on nonparametric statistics. Several chapters assess the conditions under which these methods can be applied and their robustness under such conditions. The second section addresses models that are in some sense generalizations of nonconstant-parameter models, so that they can assimilate structural changes. The last section deals with real-life structural change situations
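Among the classical procedures mentioned, the CUSUM test of Brown, Durbin and Evans checks the cumulative sum of (recursive) residuals against expanding boundaries; a simplified sketch, which ignores the adjustment for the number of regressors and uses the standard 5% critical value, might look like this:

```python
import numpy as np

def cusum_path(residuals):
    """Cumulative sum of standardised residuals (a simplified sketch; the
    Brown-Durbin-Evans test uses recursive residuals)."""
    r = np.asarray(residuals, float)
    return np.cumsum(r / r.std(ddof=1))

def crosses_boundary(path, a=0.948):
    """Check the path against +/- a*(sqrt(n) + 2*t/sqrt(n)) boundaries;
    a = 0.948 is the conventional 5% critical value for the CUSUM test."""
    n = len(path)
    t = np.arange(1, n + 1)
    bound = a * (np.sqrt(n) + 2.0 * t / np.sqrt(n))
    return bool(np.any(np.abs(path) > bound))
```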