
    APPLICATION OF TIME SERIES ANALYSIS IN FORECASTING FUTURE PERFORMANCE OF A RESERVOIR UNDER WATER INJECTION

    Decline Curve Analysis (DCA) and History Matching (HM) are classical methods for predicting reservoir performance. While both methods are widely used, each has its strengths and limitations. DCA is applicable only to reservoirs under a primary drive mechanism and assumes that all mechanical conditions of a well remain constant. HM, on the other hand, is complex, time-consuming, and requires experience. Hence, a simpler and faster technique is needed. In this work, a technique called Time Series Analysis (TSA) is proposed for predicting reservoir performance. Time series analysis is widely used to predict future patterns in economics and weather forecasting, where the factors influencing the output are too numerous to consider individually. Other applications of time series analysis include equipment prognostics and quality-control processes. Two types of TSA were tested: Output-Error (OE) and Box-Jenkins (BJ). Eight models were developed by varying the order of each model. The two best models were chosen based on the resulting normalized root mean square error (NRMSE) and compared with the conventional reservoir forecasting methods. The NRMSE of the selected models, OE (1-2-1) and BJ (1-2-1-2-1), was comparable with that of DCA and HM. The results of this study show that TSA has very good potential for predicting reservoir performance under water injection and can therefore be used as an alternative reservoir forecasting tool.
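    The model ranking above hinges on the NRMSE score. As a minimal sketch (the abstract does not state which normalization variant is used; normalization by the observed range is assumed here), the metric could be computed as:

```python
import numpy as np

def nrmse(observed, predicted):
    # root-mean-square error normalized by the range of the observed
    # series (one common convention; other variants divide by the
    # mean or the standard deviation instead)
    observed = np.asarray(observed, dtype=float)
    predicted = np.asarray(predicted, dtype=float)
    rmse = np.sqrt(np.mean((observed - predicted) ** 2))
    return rmse / (observed.max() - observed.min())
```

    A perfect forecast scores 0; candidate OE and BJ model orders can then be ranked by this score on a held-out segment of the production history.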

    Tunnel Engineering

    This volume presents a selection of chapters covering a wide range of tunnel engineering topics. The scope was to present reviews of established methods and new approaches in construction practice and in digital technology tools such as building information modeling. The book is divided into four sections dealing with geological aspects of tunneling, analysis and design, new challenges in tunnel construction, and tunneling in the digital era. Topics ranging from site investigation and rock mass failure mechanisms, through analysis and design approaches, to innovations in tunnel construction enabled by digital tools are covered in 10 chapters. The references provided will be useful for further reading.

    Deep Space Network information system architecture study

    The purpose of this article is to describe an architecture for the Deep Space Network (DSN) information system in the years 2000-2010 and to provide guidelines for its evolution during the 1990s. The study scope extends from the front-end areas at the antennas to the end users (spacecraft teams, principal investigators, archival storage systems, and non-NASA partners). The architectural vision provides guidance for major DSN implementation efforts during the next decade. A strong motivation for the study is the expected dramatic improvement in information-system technologies, such as computer processing, automation technology (including knowledge-based systems), networking and data transport, software and hardware engineering, and human-interface technology. The proposed Ground Information System has the following major features: a unified architecture from the front-end area to the end user; open-systems standards to achieve interoperability; DSN production of level 0 data; delivery of level 0 data from the Deep Space Communications Complex, if desired; dedicated telemetry processors for each receiver; security against unauthorized access and errors; and highly automated monitoring and control.

    Fault Prediction Based on Leakage Current in Contaminated Insulators Using Enhanced Time Series Forecasting Models

    To improve the monitoring of the electrical power grid, it is necessary to evaluate the influence of contamination on leakage current and its progression toward a disruptive discharge. In this paper, insulators were tested in a saline chamber to simulate the increase of salt contamination on their surface. From time series forecasting of the leakage current, it is possible to evaluate the development of the fault before a flashover occurs. For a complete evaluation, the long short-term memory (LSTM), group method of data handling (GMDH), adaptive neuro-fuzzy inference system (ANFIS), bootstrap aggregation (bagging), sequential learning (boosting), random subspace, and stacked generalization (stacking) ensemble learning models are analyzed. From the best structure of each model, the hyperparameters are evaluated and the wavelet transform is used to obtain an enhanced model. The contribution of this paper is the improvement of well-established models using the wavelet transform, thus obtaining hybrid models that can be used for several applications. The results showed that using the wavelet transform leads to an improvement in all the models considered, especially the wavelet ANFIS model, which had a mean RMSE of 1.58 × 10⁻³, the best result obtained. Furthermore, its standard deviation was 2.18 × 10⁻¹⁹, showing that the model is stable and robust for the application under study. Future work can address other components of the distribution power grid susceptible to contamination because they are installed outdoors.
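    The wavelet enhancement described above decomposes the leakage-current series into sub-bands before forecasting. A minimal sketch of a one-level Haar decomposition (the abstract does not state the wavelet family; Haar is assumed here for simplicity) illustrates the idea:

```python
import numpy as np

def haar_dwt(signal):
    # one-level Haar decomposition: pairwise averages give the smooth
    # approximation band, pairwise differences give the detail band
    s = np.asarray(signal, dtype=float)
    approx = (s[0::2] + s[1::2]) / np.sqrt(2)
    detail = (s[0::2] - s[1::2]) / np.sqrt(2)
    return approx, detail

def haar_idwt(approx, detail):
    # inverse transform: interleave the reconstructed even/odd samples
    out = np.empty(2 * len(approx))
    out[0::2] = (approx + detail) / np.sqrt(2)
    out[1::2] = (approx - detail) / np.sqrt(2)
    return out
```

    In the hybrid models, the smoother approximation band and the noisier detail band would each be forecast by the base model (ANFIS, LSTM, etc.) and the sub-band predictions recombined with the inverse transform.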

    An Analog VLSI Deep Machine Learning Implementation

    Machine learning systems provide automated data processing and see a wide range of applications. Direct processing of raw high-dimensional data such as images and video by machine learning systems is impractical, both due to prohibitive power consumption and the “curse of dimensionality,” which makes learning tasks exponentially more difficult as dimension increases. Deep machine learning (DML) mimics the hierarchical presentation of information in the human brain to achieve robust automated feature extraction, reducing the dimension of such data. However, the computational complexity of DML systems limits large-scale implementations on standard digital computers. Custom analog signal processing (ASP) can yield much higher energy efficiency than digital signal processing (DSP), presenting a means of overcoming these limitations. The purpose of this work is to develop an analog implementation of a DML system. First, an analog memory is proposed as an essential component of the learning system. It uses charge trapped on a floating gate to store analog values in a non-volatile way. The memory is compatible with a standard digital CMOS process and allows randomly accessible bi-directional updates without the need for an on-chip charge pump or high-voltage switch. Second, architecture and circuits are developed to realize an online k-means clustering algorithm in analog signal processing. It achieves automatic recognition of underlying data patterns and online extraction of data statistical parameters. This unsupervised learning system constitutes the computation node in the deep machine learning hierarchy. Third, a 3-layer, 7-node analog deep machine learning engine is designed featuring online unsupervised trainability and non-volatile floating-gate analog storage. It utilizes a massively parallel reconfigurable current-mode analog architecture to realize efficient computation, and algorithm-level feedback is leveraged to provide robustness to circuit imperfections in analog signal processing. At a processing speed of 8300 input vectors per second, it achieves a peak energy efficiency of 1 × 10¹² operations per second per watt. In addition, an ultra-low-power tunable bump circuit is presented to provide similarity measures in analog signal processing. It incorporates a novel wide-input-range tunable pseudo-differential transconductor. The circuit demonstrates tunability of bump center, width, and height with a power consumption significantly lower than previous works.
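    The unsupervised computation node described above implements online k-means in analog circuitry. A software sketch of the same per-sample update rule (digital and purely illustrative; seeding the centroids from the first k samples is an assumption, not the chip's initialization scheme):

```python
import numpy as np

def online_kmeans(stream, k):
    # sequential k-means: each incoming vector nudges its nearest
    # centroid toward itself with a step of 1/count, so each centroid
    # tracks the running mean of the samples assigned to it
    stream = np.asarray(stream, dtype=float)
    centroids = stream[:k].copy()  # simple choice: seed from first k samples
    counts = np.ones(k)
    for x in stream[k:]:
        j = int(np.argmin(np.linalg.norm(centroids - x, axis=1)))
        counts[j] += 1
        centroids[j] += (x - centroids[j]) / counts[j]
    return centroids
```

    Because each update touches only one centroid and uses a single subtract-scale-accumulate step, the rule maps naturally onto parallel current-mode analog hardware with floating-gate storage for the centroid values.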

    Knowledge-defined networking: a machine learning based approach for network and traffic modeling

    The research community has previously considered the application of Machine Learning (ML) techniques to control and operate networks. A notable example is the Knowledge Plane proposed by D. Clark et al. However, such techniques have not been extensively prototyped or deployed in the field yet. In this thesis, we explore the reasons for this lack of adoption and posit that the rise of two recent paradigms, Software-Defined Networking (SDN) and Network Analytics (NA), will facilitate the adoption of ML techniques in the context of network operation and control. We describe a new paradigm that accommodates and exploits SDN, NA, and ML, and present relevant use cases that illustrate its applicability and benefits. We refer to this new paradigm as Knowledge-Defined Networking (KDN). In this context, ML can be used as a network modeling technique to build models that estimate network performance. Network modeling is a technique central to many networking functions, for instance in the field of optimization. One of the objectives of this thesis is to answer the following question: can neural networks accurately model the performance of a computer network as a function of the input traffic? We focus mainly on modeling the average delay, but also on estimating the jitter and packet loss. For this, we treat the network as a black box whose input is a traffic matrix and whose output is the desired performance matrix. We then train different regressors, including deep neural networks, and evaluate their accuracy under different fundamental network characteristics: topology, size, traffic intensity, and routing. Moreover, we also study the impact of having multiple traffic flows between each pair of nodes. We also explore the use of ML techniques in other network-related fields. One relevant application is traffic forecasting.
    Accurate forecasting enables scaling resources up or down to efficiently accommodate the traffic load. Such models are typically based on traditional ARMA or ARIMA time series models. We propose a new methodology that combines an ARIMA model with an ANN. The neural network greatly improves the ARIMA estimation by modeling complex and nonlinear dependencies, particularly for outliers. To train the neural network and improve the estimation of outliers, we use external information: weather, events, holidays, etc. The main hypothesis is that network traffic depends on the behavior of the end users, which in turn depends on external factors. We evaluate the accuracy of our methodology using real-world data from an egress Internet link of a campus network. The analysis shows that the model works remarkably well, outperforming standard ARIMA models. Another relevant application is Network Function Virtualization (NFV). The NFV paradigm makes networks more flexible by using Virtual Network Functions (VNFs) instead of dedicated hardware. The main advantage is the flexibility offered by these virtual elements. However, the use of virtual nodes increases the difficulty of modeling such networks. This problem may be addressed by the use of ML techniques to model or to control such networks. As a first step, we focus on modeling the performance of single VNFs as a function of the input traffic. In this thesis, we demonstrate that the CPU consumption of a VNF can be estimated as a function of the input traffic characteristics alone.
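    The ARIMA-plus-ANN combination above splits the forecast into a linear time series component and a residual correction driven by external information. A minimal structural sketch (a least-squares corrector stands in for the thesis's neural network, and the AR order and single binary feature are illustrative assumptions):

```python
import numpy as np

def fit_ar(y, p):
    # least-squares fit of an AR(p) model: y[t] ≈ sum_i a_i * y[t-i]
    y = np.asarray(y, dtype=float)
    Y = np.array([y[t] for t in range(p, len(y))])
    X = np.array([y[t - p:t][::-1] for t in range(p, len(y))])
    coef, *_ = np.linalg.lstsq(X, Y, rcond=None)
    return coef

def hybrid_predict(y, exog, p=2):
    # hybrid forecast: the AR(p) part captures linear structure, then a
    # second model fitted on external features (weather, holidays, ...)
    # corrects the residuals; a linear corrector replaces the ANN here
    # purely to show the decomposition
    y = np.asarray(y, dtype=float)
    exog = np.asarray(exog, dtype=float)
    coef = fit_ar(y, p)
    ar_pred = np.array([y[t - p:t][::-1] @ coef for t in range(p, len(y))])
    resid = y[p:] - ar_pred
    E = np.column_stack([exog[p:], np.ones(len(resid))])
    w, *_ = np.linalg.lstsq(E, resid, rcond=None)
    return ar_pred + E @ w
```

    The same structure carries over when the corrector is a trained neural network: the ARIMA part handles the regular daily pattern, and the network absorbs the outliers that correlate with external events.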

    Forecasting project success in the construction industry using adaptive neuro-fuzzy inference system

    Project managers often find it a challenge to successfully manage construction projects. As a result, understanding, evaluating, and achieving project success are critical for sponsors to control projects. In practice, determining key success factors and criteria to assess the performance of construction projects and forecast the success of new projects is difficult. To address these concerns, our objective is to go beyond the efficiency-oriented project success criteria by considering both efficiency- and effectiveness-oriented measures to evaluate project success. This paper contributes to existing knowledge by identifying a holistic and multidimensional set of project success factors and criteria using a two-round Delphi technique. We developed a decision support system using the Adaptive Neuro-Fuzzy Inference System (ANFIS) to forecast the success of mid- and large-sized construction projects. We gathered data from 142 project managers in Australia and New Zealand to implement the developed ANFIS. We then validated the constructed ANFIS using the K-fold cross-validation procedure and a real case study of a large construction project in Western Australia. The forecasting accuracy measures R² = 0.97461, MAPE = 2.57912%, MAE = 1.88425, RMSE = 2.3610, RRMSE = 0.03149, and PI = 0.01589 suggest that the developed ANFIS is a very good predictor of project success.
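    The K-fold validation step above can be sketched generically: each fold is held out once while the model is trained on the rest, and the per-fold errors are averaged. Here an ordinary least-squares regressor stands in for the ANFIS (an assumption for illustration only):

```python
import numpy as np

def kfold_rmse(X, y, fit, predict, k=5):
    # K-fold cross-validation: every fold serves once as the test set
    # while the model is fitted on the remaining folds; the fold RMSEs
    # are averaged into a single accuracy estimate
    idx = np.arange(len(y))
    scores = []
    for fold in np.array_split(idx, k):
        train = np.setdiff1d(idx, fold)
        model = fit(X[train], y[train])
        err = y[fold] - predict(model, X[fold])
        scores.append(np.sqrt(np.mean(err ** 2)))
    return float(np.mean(scores))

def ls_fit(X, y):
    # stand-in for ANFIS training: linear least squares with intercept
    A = np.column_stack([X, np.ones(len(y))])
    return np.linalg.lstsq(A, y, rcond=None)[0]

def ls_predict(w, X):
    return np.column_stack([X, np.ones(len(X))]) @ w
```

    Swapping `ls_fit`/`ls_predict` for an ANFIS trainer and evaluator reproduces the validation protocol, with the fold-averaged RMSE and MAPE playing the role of the accuracy measures reported above.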