
    Adaptive swarm optimisation assisted surrogate model for pipeline leak detection and characterisation.

    Pipelines are often subject to leakage due to ageing, corrosion and weld defects, and leakage is difficult to avoid because its sources are diverse. Various pipeline leakage detection methods, including fibre optics, pressure point analysis and numerical modelling, have been proposed over the last decades. One major issue with these methods is distinguishing the leak signal without raising false alarms. Since the data obtained by these traditional methods are digital in nature, machine learning models have been adopted to improve the accuracy of pipeline leakage detection. However, most of these methods rely on a large training dataset to train accurate models, and experimental data for such training are difficult to obtain. Reasons include the high cost of an experimental setup covering all possible scenarios, poor accessibility to remote pipelines, and labour-intensive experiments. Moreover, datasets constructed from data acquired in laboratory or field tests are usually imbalanced, as leakage data samples are generated from artificial leaks. Computational fluid dynamics (CFD) offers detailed and accurate pipeline leakage modelling that may be difficult to obtain experimentally or with analytical approaches. However, CFD simulation is typically time-consuming and computationally expensive, limiting its applicability in real-time settings. To alleviate the high computational cost of CFD modelling, this study proposes a novel data sampling optimisation algorithm, the Adaptive Particle Swarm Optimisation Assisted Surrogate Model (PSOASM), which systematically selects simulation scenarios in an adaptive and optimised manner. The algorithm places each new sample in poorly sampled regions of the parameter space of parametrised leakage scenarios, which uniform sampling methods can easily miss. This is achieved using two criteria: the population density of the training dataset and the model prediction fitness value. The prediction fitness value enhances the global exploration capability of the surrogate model, while the population density of training samples improves its local accuracy. The proposed PSOASM was compared with four conventional sequential sampling approaches and tested on six benchmark functions commonly used in the literature. Different machine learning algorithms were explored with the developed model, and the effect of the initial sample size on surrogate model performance was evaluated. Pipeline leakage detection analysis, with emphasis on a multiphase flow system, was then investigated to identify the flow field parameters that provide pertinent indicators for pipeline leakage detection and characterisation. Plausible leak scenarios that may occur in the field were simulated for a gas-liquid pipeline using a three-dimensional RANS CFD model. The perturbation of the pertinent flow field indicators for different leak scenarios is reported, which is expected to improve the understanding of multiphase flow behaviour induced by leaks. The simulation results were validated against the latest experimental and numerical data reported in the literature. The proposed surrogate model was then applied to pipeline leak detection and characterisation.
The CFD modelling results showed that fluid flow parameters are pertinent indicators for pipeline leak detection. Upstream pipeline pressure was observed to serve as a critical indicator for detecting leakage, even when the leak size is small, whereas the downstream flow rate is the dominant indicator when flow rate monitoring is chosen for leak detection. The results also reveal that when two leaks of different sizes co-occur in a single pipe, detecting the smaller leak becomes difficult if its size is below 25% of the larger leak. However, for a double leak with equal dimensions, the leak closer to the upstream end of the pipe is easier to detect. The results of all the analyses demonstrate the PSOASM algorithm's superiority over the well-known sequential sampling schemes used for comparison. The test results show that the PSOASM algorithm can be applied to pipeline leak detection with limited training datasets and provides a general framework for improving computational efficiency through adaptive surrogate modelling in various real-life applications.
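
    As an illustration of the two sampling criteria named in the abstract (population density of the training data and model prediction fitness), the sketch below shows a generic density- and fitness-guided adaptive sampling loop on a toy 1-D function. The polynomial surrogate, the candidate pool and the 0.3 weighting are illustrative assumptions, not the authors' PSOASM.
```python
# Minimal sketch: density- and fitness-guided adaptive sampling for a
# surrogate model (illustrative only; weights and surrogate are assumptions).
import numpy as np

def benchmark(x):
    # A simple 1-D test function standing in for an expensive CFD run.
    return np.sin(3 * x) + 0.5 * x**2

rng = np.random.default_rng(0)
X = rng.uniform(-2.0, 2.0, 5)          # small initial sample
y = benchmark(X)

for _ in range(10):
    coeffs = np.polyfit(X, y, deg=3)   # cheap polynomial surrogate
    candidates = rng.uniform(-2.0, 2.0, 200)

    # Criterion 1: population density -- distance to the nearest existing sample.
    density = np.min(np.abs(candidates[:, None] - X[None, :]), axis=1)
    # Criterion 2: predicted fitness -- lower surrogate value is more promising.
    fitness = np.polyval(coeffs, candidates)

    # Combine exploration (sparse regions) with exploitation (good fitness).
    score = density - 0.3 * fitness    # the 0.3 weight is an arbitrary illustration
    x_new = candidates[np.argmax(score)]

    X = np.append(X, x_new)
    y = np.append(y, benchmark(x_new)) # "run the simulation" at the new point

print(f"best sampled point: x={X[np.argmin(y)]:.3f}, f={y.min():.3f}")
```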

    Pipeline leakage detection and characterisation with adaptive surrogate modelling using particle swarm optimisation.

    Pipelines are often subject to leakage due to ageing, corrosion and weld defects, and leakage is difficult to avoid because its sources are diverse. Several studies have demonstrated the applicability of machine learning models for the timely prediction of pipeline leakage. However, most of these studies rely on a large training dataset to train accurate models. Collecting experimental data for model training is very costly, while generating simulation data is computationally expensive and time-consuming. To tackle this problem, the present study proposes a novel data sampling optimisation method, named the adaptive particle swarm optimisation (PSO) assisted surrogate model, which trains machine learning models with a limited dataset while achieving good accuracy. The proposed model incorporates the population density of training data samples and the model prediction fitness to determine new data samples for improved model fitting accuracy. The method is applied to 3-D pipeline leakage detection and characterisation, and the results show that the predicted leak sizes and locations match the actual leaks. The significance of this study is two-fold: the practical application allows pipeline leak prediction with limited training samples, and it provides a general framework for improving computational efficiency using adaptive surrogate modelling in various real-life applications.
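
    A minimal sketch of the surrogate idea applied to leak characterisation follows: a cheap model is fitted to map flow-field indicators to leak size and location, then evaluated on a new observation. The synthetic data-generating relationship, the indicator names (dp_up, dq_down) and the linear least-squares surrogate are assumptions for illustration only, not the paper's model.
```python
# Minimal sketch: surrogate that maps flow-field indicators to leak parameters.
# The synthetic data-generating relationship below is purely illustrative.
import numpy as np

rng = np.random.default_rng(1)
n = 200
leak_size = rng.uniform(0.01, 0.10, n)   # hypothetical leak diameter (m)
leak_pos = rng.uniform(0.0, 100.0, n)    # hypothetical location along the pipe (m)

# Assumed indicators: upstream pressure drop and downstream flow-rate change.
dp_up = 50 * leak_size + 0.02 * leak_pos + rng.normal(0, 0.05, n)
dq_down = 80 * leak_size - 0.01 * leak_pos + rng.normal(0, 0.05, n)

# Fit a linear surrogate (least squares) from indicators to leak parameters.
A = np.column_stack([dp_up, dq_down, np.ones(n)])
targets = np.column_stack([leak_size, leak_pos])
coeffs, *_ = np.linalg.lstsq(A, targets, rcond=None)

# Characterise a new (simulated) observation with known size 0.05 m at 40 m.
obs = np.array([50 * 0.05 + 0.02 * 40.0, 80 * 0.05 - 0.01 * 40.0, 1.0])
size_pred, pos_pred = obs @ coeffs
print(f"predicted leak size ~{size_pred:.3f} m, location ~{pos_pred:.1f} m")
```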

    An Embedded Fuzzy Logic Based Application for Density Traffic Control System

    Traffic density at cross-junction roads is usually controlled either by human effort or by automatic traffic light systems. These approaches have proven inefficient and present several challenges. The major constraint is the inability of most traffic control systems to assign appropriate waiting times to vehicles based on lane density, with little or no consideration for pedestrians or the priorities of emergency and security agents. In view of this, an intelligent density traffic control system using fuzzy logic, capable of giving priority to road users based on density and emergency situations, was developed and is presented in this paper. The system estimates the number of vehicles and the presence of pedestrians on each lane with the help of infrared (IR) sensors, together with a siren detection system for emergency and security road users. Its working principle depends on the logic input rules supplied to the processing unit by the sensors (S1 and S2), from which the system generates a timing sequence that best suits the number of vehicles and pedestrians on each lane at any point in time.
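
    The Sugeno-style sketch below illustrates how fuzzy rules of the kind described above can map lane density to a green-light duration. The membership breakpoints and rule outputs are hypothetical values chosen for illustration and are not taken from the paper.
```python
# Minimal Sugeno-style fuzzy sketch: lane density -> green-light duration.
# Membership breakpoints and rule consequents are hypothetical illustrations.
import numpy as np

def tri(x, a, b, c):
    """Triangular membership function over breakpoints a < b < c."""
    return np.maximum(np.minimum((x - a) / (b - a), (c - x) / (c - b)), 0.0)

def green_time(vehicle_count):
    low = tri(vehicle_count, -1, 0, 10)
    medium = tri(vehicle_count, 5, 15, 25)
    high = tri(vehicle_count, 20, 40, 60)

    # Rule consequents (seconds of green time) -- illustrative values.
    rules = {"low": (low, 15), "medium": (medium, 35), "high": (high, 60)}
    num = sum(w * t for w, t in rules.values())
    den = sum(w for w, _ in rules.values())
    return num / den if den > 0 else 15

for count in (3, 18, 45):
    print(f"{count} vehicles -> green for ~{green_time(count):.0f} s")
```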

    Recent Advances in Pipeline Monitoring and Oil Leakage Detection Technologies: Principles and Approaches

    Pipelines are widely used for the transportation of hydrocarbon fluids over millions of miles all over the world. Pipeline structures are designed to withstand several environmental loading conditions to ensure safe and reliable distribution from the point of production to the shore or distribution depot. However, leaks in pipeline networks are one of the major causes of losses for pipeline operators and the environment. Incidents of pipeline failure can result in serious ecological disasters, human casualties and financial loss. To avoid such hazards and maintain safe and reliable pipeline infrastructure, substantial research effort has been devoted to pipeline leak detection and localisation using different approaches. This paper discusses pipeline leakage detection technologies and summarises the state-of-the-art achievements. Different leakage detection and localisation approaches for pipeline systems are reviewed and their strengths and weaknesses are highlighted. A comparative performance analysis is performed to provide a guide for determining which leak detection method is appropriate for particular operating settings. In addition, research gaps and open issues in the development of reliable pipeline leakage detection systems are discussed.

    PERFORMANCE EVALUATION OF MULTIPLE TRANSFORM WATERMARKING SYSTEM FOR PRIVACY PROTECTION OF MEDICAL DATA USING PSNR AND NC

    This paper presents the performance evaluation of a developed multiple transform watermarking system for privacy protection of medical data using Peak Signal to Noise Ratio (PSNR) and Normalized Cross-Correlation (NC). The PSNR was used to evaluate the imperceptibility of the system, while the NC was used to evaluate its robustness under various attacks, including Gaussian noise, salt-and-pepper noise, sharpening, image cutting, image compression, low-pass filtering and image rotation. The results showed that the similarity between the original image and the watermarked image reached a PSNR of 52.4595 dB, compared with 50.0285 dB for the existing system. This indicates that the proposed scheme conceals the watermark better while retaining image quality.
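
    For reference, the two metrics can be computed as in the sketch below. The NC form shown is one common normalised cross-correlation definition and may differ from the exact variant used in the paper; the toy data stand in for medical images and watermarks.
```python
# Minimal sketch of the two reported metrics (PSNR in dB, NC in [-1, 1]).
import numpy as np

def psnr(original, processed, peak=255.0):
    """Peak Signal-to-Noise Ratio between two images of equal shape."""
    mse = np.mean((original.astype(np.float64) - processed.astype(np.float64)) ** 2)
    return float("inf") if mse == 0 else 10.0 * np.log10(peak**2 / mse)

def nc(watermark, extracted):
    """Normalised cross-correlation between embedded and extracted watermarks."""
    w = watermark.astype(np.float64).ravel()
    e = extracted.astype(np.float64).ravel()
    return float(np.dot(w, e) / (np.linalg.norm(w) * np.linalg.norm(e)))

# Toy usage with random data standing in for a medical image and a watermark.
rng = np.random.default_rng(2)
img = rng.integers(0, 256, (64, 64))
noisy = np.clip(img + rng.normal(0, 2, (64, 64)), 0, 255)
wm = rng.integers(0, 2, (32, 32))
print(f"PSNR = {psnr(img, noisy):.2f} dB, NC = {nc(wm, wm):.3f}")
```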

    EXPERIMENTAL PERFORMANCE EVALUATION AND FEASIBILITY STUDY OF 6LoWPAN BASED INTERNET OF THINGS

    Nowadays, the demand for low-power, small, mobile and flexible interconnected computing machines is growing rapidly. This study presents an Internet of Things (IoT) model for sensor node discovery within an IPv6 framework using 6LoWPAN. The Contiki network simulator (Cooja) was used to examine the performance of the proposed network; it was chosen because it provides a good graphical user interface, allows rapid simulation setup and is well suited to simulating 6LoWPAN networks. Three experiments were carried out with network topologies of 3, 7 and 5 motes, respectively. The parameters considered in the simulation were throughput and packet loss, examined using packet generation rates of 1 to 50 packets/sec with a constant delay. GET requests were sent to the humidity and temperature sensor motes running CoAP servers, and the corresponding throughput was observed in each experiment; an increase of about 10 packets per second was observed before throughput finally dropped because of packet loss caused by the increase in traffic. GET requests were also sent to the motes to measure packet loss, which was determined from the packets that were not acknowledged. The performance of the proposed model in terms of throughput and packet loss was studied, and the results are expected to aid in planning 6LoWPAN networks. A transition flow diagram was developed to represent the packet routing process.
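
    The two reported metrics can be derived from simple GET-request counts, as in the sketch below; the example numbers are placeholders rather than the study's measurements.
```python
# Minimal sketch: deriving throughput and packet loss from GET-request counts.
def throughput(packets_acked, duration_s):
    """Successfully delivered packets per second over the observation window."""
    return packets_acked / duration_s

def packet_loss(packets_sent, packets_acked):
    """Fraction of GET requests that were never acknowledged."""
    return (packets_sent - packets_acked) / packets_sent

sent, acked, duration = 500, 430, 60.0   # hypothetical per-experiment counts
print(f"throughput = {throughput(acked, duration):.2f} packets/s")
print(f"packet loss = {packet_loss(sent, acked):.1%}")
```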

    Modeling and implementation of smart home and self-control window using FPGA and Petri Net.

    The function of a window is to provide comfort for householders by regulating the indoor environment, yet most residential windows are still controlled manually. Although a number of automated windows based on the Internet of Things (IoT) have been proposed in the literature, smart window control with multiple objective functions related to environmental factors remains an open gap. This paper contributes to this area by developing a cyber-physical system (CPS) for smart room and window automation control (SRWAC). A set of rules generated from a Petri Net simulator determines the system response to input data sensed by the indoor and outdoor sensing units (temperature, dust, rain and carbon monoxide sensors) and is coded on an ATmega328 controller. The window controller is also simulated on a Field Programmable Gate Array (FPGA) in Xilinx ISE. Test results show that an automated response to indoor and outdoor conditions can be achieved spontaneously.
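
    A minimal rule-table sketch of the kind of window control logic described above is shown below; the thresholds and priority ordering are illustrative assumptions and do not reproduce the paper's Petri Net rules.
```python
# Minimal sketch of a rule table for the window actuator. Thresholds and
# priority ordering are illustrative assumptions, not the paper's rules.
def window_command(temp_c, dust_high, raining, co_ppm):
    """Return 'open' or 'close' for the window based on sensed conditions."""
    if co_ppm > 35:           # safety first: ventilate on high carbon monoxide
        return "open"
    if raining or dust_high:  # keep rain and dust out
        return "close"
    if temp_c > 27:           # ventilate a warm room
        return "open"
    return "close"

print(window_command(temp_c=30, dust_high=False, raining=False, co_ppm=5))  # open
print(window_command(temp_c=30, dust_high=False, raining=True, co_ppm=5))   # close
```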

    Incorporating Intelligence in Fish Feeding System for Dispensing Feed Based on Fish Feeding Intensity

    The amount of feed dispensed to match fish appetite plays a significant role in improving fish cultivation. However, measuring the quantity of fish feed intake remains a critical challenge. To address this problem, this paper proposes an intelligent fish feeding regime system using fish behavioral vibration analysis and artificial neural networks. The model was developed using acceleration and angular velocity data obtained through a data logger incorporating a triaxial accelerometer, magnetometer and gyroscope for predicting fish behavioral activities. To improve the system accuracy, we developed a novel 8-directional Chain Code generator algorithm that extracts the vectors representing escape, swimming and feeding activities. The extracted sequence vectors were further processed using the Discrete Fourier Transform, and the Fourier Descriptors of the individual activity representations were computed. These Fourier Descriptors are fed into an artificial neural network, whose results are evaluated and compared with those of Fourier Descriptors obtained directly from the acceleration and angular velocity data. The results show that the model using Fourier Descriptors obtained from the Chain Code has an accuracy of 100%, whereas the classifier using Fourier Descriptors obtained directly from the fish movement acceleration and angular velocity data has an accuracy of 35.60%. These results show that the proposed system could dispense feed successfully without human intervention based on fish requirements.
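
    The sketch below illustrates one way an 8-directional chain code can be converted into Fourier Descriptors via the Discrete Fourier Transform. The example sequence, the direction convention and the normalisation are assumptions for illustration, not the paper's exact feature pipeline.
```python
# Minimal sketch: turning an 8-directional chain code into Fourier Descriptors.
# The example chain code is made up; codes 0-7 are mapped to 45-degree steps,
# which follows a common convention and may differ from the paper's.
import numpy as np

def chain_code_to_fd(chain_code, n_descriptors=8):
    # Each code 0-7 becomes a unit step in the complex plane.
    angles = np.array(chain_code) * np.pi / 4.0
    steps = np.exp(1j * angles)
    # Reconstruct the trajectory and take its discrete Fourier transform.
    trajectory = np.cumsum(steps)
    spectrum = np.fft.fft(trajectory)
    # Magnitudes, skipping the DC term and normalising by the first harmonic.
    mags = np.abs(spectrum[1:n_descriptors + 1])
    return mags / mags[0]

example_code = [0, 1, 1, 2, 3, 3, 4, 5, 6, 6, 7, 0]  # hypothetical activity segment
print(np.round(chain_code_to_fd(example_code), 3))
```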

    Simple deterministic selection-based genetic algorithm for hyperparameter tuning of machine learning models.

    Hyperparameter tuning is a critical function necessary for the effective deployment of most machine learning (ML) algorithms. It is used to find the optimal hyperparameter settings of an ML algorithm in order to improve its overall output performance. To this effect, several optimization strategies have been studied for fine-tuning the hyperparameters of many ML algorithms, especially in the absence of model-specific information. However, because most ML training procedures need a significant amount of computational time and memory, it is frequently necessary to build an optimization technique that converges within a small number of fitness evaluations. As a result, a simple deterministic selection genetic algorithm (SDSGA) is proposed in this article. The SDSGA was realized by ensuring that both chromosomes and their accompanying fitness values in the original genetic algorithm are selected in an elitist-like way. We assessed the SDSGA over a variety of mathematical test functions. It was then used to optimize the hyperparameters of two well-known machine learning models, namely, the convolutional neural network (CNN) and the random forest (RF) algorithm, with application on the MNIST and UCI classification datasets. The SDSGA’s efficiency was compared to that of the Bayesian Optimization (BO) and three other popular metaheuristic optimization algorithms (MOAs), namely, the genetic algorithm (GA), particle swarm optimization (PSO) and biogeography-based optimization (BBO) algorithms. The results obtained reveal that the SDSGA performed better than the other MOAs in solving 11 of the 17 known benchmark functions considered in our study. While optimizing the hyperparameters of the two ML models, it performed marginally better in terms of accuracy than the other methods while taking less time to compute.
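
    A minimal sketch of an elitist, deterministic selection step inside a genetic algorithm for hyperparameter tuning is given below. The toy objective, population size and variation operators are illustrative assumptions and do not reproduce the authors' SDSGA implementation.
```python
# Minimal sketch of an elitist-selection genetic algorithm for tuning two
# hyperparameters. The toy objective and GA settings are illustrative only.
import numpy as np

rng = np.random.default_rng(3)
BOUNDS = np.array([[1e-4, 1e-1],   # e.g. learning rate
                   [8, 256]])      # e.g. hidden units

def objective(params):
    # Stand-in for a validation loss; lower is better.
    lr, units = params
    return (np.log10(lr) + 2.5) ** 2 + ((units - 96) / 64) ** 2

pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(20, 2))
for generation in range(30):
    fitness = np.array([objective(ind) for ind in pop])
    elite = pop[np.argsort(fitness)[:10]]            # deterministic, elitist selection

    parents = elite[rng.integers(0, len(elite), (10, 2))]
    alpha = rng.random((10, 1))
    children = alpha * parents[:, 0] + (1 - alpha) * parents[:, 1]  # blend crossover
    children += rng.normal(0, 0.02, children.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0])
    children = np.clip(children, BOUNDS[:, 0], BOUNDS[:, 1])        # mutation + bounds

    pop = np.vstack([elite, children])               # elites survive unchanged

best = pop[np.argmin([objective(ind) for ind in pop])]
print(f"best hyperparameters: lr={best[0]:.4f}, units={best[1]:.0f}")
```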