Oil and Gas flow Anomaly Detection on offshore naturally flowing wells using Deep Neural Networks
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science

The Oil and Gas industry faces multiple challenges as never before. It is impugned as dirty and polluting, which fuels the demand for green alternatives. Nevertheless, the world still has to rely heavily on hydrocarbons, which remain the most traditional and stable source of energy compared with the extensively promoted hydro, solar and wind power. Major operators are challenged to produce oil more efficiently to counteract the newly arising energy sources, with a smaller climate footprint and more closely scrutinized expenditure, while facing high skepticism regarding the industry's future. It has to become greener, and hence to act in ways not required previously.
While most of the tools used by the hydrocarbon E&P industry are expensive and have been in use for many years, it is paramount for the industry's survival and prosperity to apply predictive maintenance technologies that can foresee potential failures, making production safer, lowering downtime, increasing productivity and diminishing maintenance costs. Many efforts have been made to define the most accurate and effective predictive methods; however, data scarcity limits the speed and capacity for further experimentation. Since it would be highly beneficial for the industry to invest in Artificial Intelligence, this research explores, in depth, the subject of Anomaly Detection, using the open public data from Petrobras that was prepared by experts.
For this research, deep recurrent neural networks with LSTM and GRU backbones were implemented for multi-class classification of undesirable events on naturally flowing wells. Further, several hyperparameter optimization tools were explored, focusing mainly on Genetic Algorithms as among the most advanced methods for such tasks.
The best performing model used 2 stacked GRU layers with the hyperparameter vector [1, 47, 40, 14], which stands for a timestep of 1, 47 hidden units, 40 epochs and a batch size of 14, producing an F1 score of 0.97.
As the world faces many issues, one of which is the detrimental effect of heavy industry on the environment and, as a result, adverse global climate change, this project is an attempt to contribute to the field of applying Artificial Intelligence in the Oil and Gas industry, with the intention of making it more efficient, transparent and sustainable.
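The stacked-GRU classifier above can be illustrated with a minimal NumPy sketch of the GRU recurrence. The two-layer, 47-unit configuration mirrors the reported best model, but the weight initialisation, the 8-channel input dimension and all helper names are illustrative assumptions, not the dissertation's implementation (which would also need a softmax classification head and a training loop).

```python
import numpy as np

def sigmoid(x):
    return 1.0 / (1.0 + np.exp(-x))

def gru_cell(x, h, params):
    """One GRU step: update gate z, reset gate r, candidate state h_tilde."""
    Wz, Uz, bz, Wr, Ur, br, Wh, Uh, bh = params
    z = sigmoid(Wz @ x + Uz @ h + bz)               # update gate
    r = sigmoid(Wr @ x + Ur @ h + br)               # reset gate
    h_tilde = np.tanh(Wh @ x + Uh @ (r * h) + bh)   # candidate state
    return (1.0 - z) * h + z * h_tilde              # interpolate old and new state

def init_params(n_in, n_hidden, rng):
    """Small random weights for the three gates (illustrative initialisation)."""
    p = []
    for _ in range(3):  # z, r, candidate
        p += [rng.standard_normal((n_hidden, n_in)) * 0.1,
              rng.standard_normal((n_hidden, n_hidden)) * 0.1,
              np.zeros(n_hidden)]
    return p

def run_stacked_gru(sequence, layers):
    """Feed a sequence through stacked GRU layers; return the last top-layer state."""
    states = [np.zeros(len(p[2])) for p in layers]
    for x in sequence:
        inp = x
        for i, p in enumerate(layers):
            states[i] = gru_cell(inp, states[i], p)
            inp = states[i]
    return states[-1]

# 2 stacked GRU layers with 47 hidden units, as in the reported best model.
rng = np.random.default_rng(0)
layers = [init_params(8, 47, rng), init_params(47, 47, rng)]
window = rng.standard_normal((1, 8))  # timestep (window length) of 1
features = run_stacked_gru(window, layers)
```

In practice the final state would feed a softmax layer over the event classes, trained for 40 epochs with a batch size of 14 per the reported hyperparameters.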
Computational Intelligence for Modeling, Control, Optimization, Forecasting and Diagnostics in Photovoltaic Applications
This book is a Special Issue Reprint edited by Prof. Massimo Vitelli and Dr. Luigi Costanzo. It contains original research articles covering, but not limited to, the following topics: maximum power point tracking techniques; forecasting techniques; sizing and optimization of PV components and systems; PV modeling; reconfiguration algorithms; fault diagnosis; mismatching detection; decision processes for grid operators
A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications
Particle swarm optimization (PSO) is a heuristic global optimization method, proposed originally by Kennedy and Eberhart in 1995. It is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On the one hand, we review advances in PSO, including its modifications (quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, Tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analyses (parameter selection and tuning, and convergence analysis), and parallel implementations (on multicore, multiprocessor, GPU, and cloud computing platforms). On the other hand, we survey applications of PSO in the following eight fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
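The canonical global-best PSO on which all of these variants build can be sketched in a few lines. The inertia weight and acceleration coefficients below are typical textbook values, not parameters prescribed by the survey.

```python
import numpy as np

def pso(f, dim, n_particles=30, iters=200, bounds=(-5.0, 5.0),
        w=0.7, c1=1.5, c2=1.5, seed=0):
    """Minimal global-best PSO minimising f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_particles, dim))     # positions
    v = np.zeros((n_particles, dim))                # velocities
    pbest = x.copy()                                # personal bests
    pbest_val = np.apply_along_axis(f, 1, x)
    g = pbest[pbest_val.argmin()].copy()            # global best
    g_val = pbest_val.min()
    for _ in range(iters):
        r1, r2 = rng.random((2, n_particles, dim))
        # velocity: inertia + cognitive pull (toward pbest) + social pull (toward g)
        v = w * v + c1 * r1 * (pbest - x) + c2 * r2 * (g - x)
        x = np.clip(x + v, lo, hi)
        vals = np.apply_along_axis(f, 1, x)
        improved = vals < pbest_val
        pbest[improved], pbest_val[improved] = x[improved], vals[improved]
        if vals.min() < g_val:
            g_val, g = vals.min(), x[vals.argmin()].copy()
    return g, g_val

# sanity check on the sphere function, whose minimum is 0 at the origin
best, best_val = pso(lambda p: float((p ** 2).sum()), dim=2)
```

The modifications the survey covers (topologies, hybridizations, constraints) all vary one of these three ingredients: the neighbourhood that defines `g`, the velocity update, or the feasibility handling.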
IoT in Smart Communities: Technologies and Applications
The Internet of Things is a system that integrates different devices and technologies, removing the necessity of human intervention. This enables the capacity for smart (or smarter) cities around the world. By hosting different technologies and allowing interactions between them, the Internet of Things has spearheaded the development of smart city systems for sustainable living, increased comfort and productivity for citizens. The Internet of Things (IoT) for smart cities has many different domains and draws upon various underlying systems for its operation. In this work, we provide a holistic coverage of the Internet of Things in smart cities by discussing the fundamental components that make up the IoT smart city landscape, the technologies that enable these domains to exist, the most prevalent practices and techniques used in these domains, as well as the challenges that deployment of IoT systems for smart cities encounters and that need to be addressed for ubiquitous use of smart city applications. It also presents a coverage of optimization methods and applications from a smart city perspective enabled by the Internet of Things. Towards this end, a mapping is provided for the most commonly encountered applications of computational optimization within IoT smart cities for five popular optimization methods: ant colony optimization, genetic algorithms, particle swarm optimization, artificial bee colony optimization and differential evolution. For each application identified, the algorithms used, the objectives considered, the nature of the formulation and the constraints taken into account are specified and discussed. Lastly, the data setup used by each covered work is also mentioned and directions for future work are identified. Within the smart health domain of IoT smart cities, human activity recognition has been a key study topic in the development of cyber-physical systems and assisted living applications.
In particular, inertial sensor based systems have become increasingly popular because they do not restrict users' movement and are relatively simple to implement compared to other approaches. Fall detection is one of the most important tasks in human activity recognition. With an increasingly ageing world population and an inclination by the elderly to live alone, the need to incorporate dependable fall detection schemes into smart devices such as phones and watches has gained momentum. Therefore, differentiating between falls and activities of daily living (ADLs) has been the focus of researchers in recent years, with very good results. However, one aspect of fall detection that has not been investigated much is direction- and severity-aware fall detection. Since a fall detection system aims to detect falls in people and notify medical personnel, it could be of added value for health professionals tending to a patient who has suffered a fall to know the nature of the accident. In this regard, as a case study for smart health, four different experiments have been conducted for the task of fall detection with direction and severity consideration on two publicly available datasets. These experiments not only tackle the problem at increasingly complicated levels (the first considers a fall-only scenario and the others a combined activity-of-daily-living and fall scenario) but also present methodologies which outperform state-of-the-art techniques, as discussed. Lastly, future recommendations are also provided for researchers.
An overview on structural health monitoring: From the current state-of-the-art to new bio-inspired sensing paradigms
In the last decades, the field of structural health monitoring (SHM) has grown exponentially. Yet, several technical constraints persist, which are preventing full realization of its potential. To upgrade current state-of-the-art technologies, researchers have started to look at nature’s creations giving rise to a new field called ‘biomimetics’, which operates across the border between living and non-living systems. The highly optimised and time-tested performance of biological assemblies keeps on inspiring the development of bio-inspired artificial counterparts that can potentially outperform conventional systems. After a critical appraisal on the current status of SHM, this paper presents a review of selected works related to neural, cochlea and immune-inspired algorithms implemented in the field of SHM, including a brief survey of the advancements of bio-inspired sensor technology for the purpose of SHM. In parallel to this engineering progress, a more in-depth understanding of the most suitable biological patterns to be transferred into multimodal SHM systems is fundamental to foster new scientific breakthroughs. Hence, grounded in the dissection of three selected human biological systems, a framework for new bio-inspired sensing paradigms aimed at guiding the identification of tailored attributes to transplant from nature to SHM is outlined.
Industry 4.0—from Smart Factory to Cognitive Cyberphysical Production System and Cloud Manufacturing
This book focuses on recent developments in new industrial platforms, with Industry 4.0 on its way to becoming Industry 5.0. The book covers smart decision support systems for green and sustainable machining, microscale machining, cyber-physical production networks, and the optimization of assembly lines. The modern multiobjective algorithms and multicriteria decision-making methods are applied to various real-world industrial problems. The emerging problem of cybersecurity in advanced technologies is addressed as well
Multi particle swarm optimisation algorithm applied to supervisory power control systems
This thesis was submitted for the degree of Doctor of Philosophy and awarded by Brunel University London.

Power quality problems come in numerous forms (commonly spikes, surges, sags, outages and harmonics) and their resolution can cost from a few hundred to millions of pounds, depending on the size and type of problem experienced by the power network. They are commonly experienced as burnt-out motors, corrupt data on hard drives, unnecessary downtime and increased maintenance costs. In order to minimise such events, the network can be monitored and controlled with a specific control regime to deal with particular faults. This study developed a control and optimisation system and applied it to the stability of electrical power networks using artificial intelligence techniques. An intelligent controller was designed to control and optimise simulated models of electrical power system stability. A fuzzy logic controller controlled power generation, while particle swarm optimisation (PSO) techniques optimised the system's power quality in normal operating conditions and after faults. Different types of PSO were tested, then a multi-swarm (M-PSO) system was developed to give better optimisation results in terms of accuracy and convergence speed. The developed optimisation algorithm was tested on seven benchmarks and compared to other types of single PSO.
The developed controller and optimisation algorithm were applied to power system stability control. Two electrical power network models were used (with two and four generators), controlled by fuzzy logic controllers tuned using the optimisation algorithm. The system selected the optimal controller parameters automatically for normal and fault conditions during the operation of the power network. A multi-objective cost function was used, based on minimising the recovery time, overshoot and steady-state error. A supervisory control layer was introduced to detect and diagnose faults and then apply the correct controller parameters. Different fault scenarios were used to test the system's performance. The results indicate the great potential of the proposed power system stabiliser as a superior tool compared to conventional control systems.
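A multi-objective cost of the kind described (minimising recovery time, overshoot and steady-state error) can be sketched as a weighted sum over a measured step response. The metric definitions, tolerance band and weights below are illustrative assumptions, not the thesis's exact formulation.

```python
import numpy as np

def step_metrics(y, t, setpoint=1.0, tol=0.02):
    """Recovery (settling) time, overshoot and steady-state error of a step response."""
    overshoot = max(float(y.max()) - setpoint, 0.0)
    sse = abs(float(y[-1]) - setpoint)
    outside = np.abs(y - setpoint) > tol        # samples outside the tolerance band
    if not outside.any():
        settle = float(t[0])                    # already settled
    elif outside[-1]:
        settle = float(t[-1])                   # never settles within the horizon
    else:
        settle = float(t[np.where(outside)[0][-1] + 1])
    return settle, overshoot, sse

def control_cost(y, t, setpoint=1.0, weights=(1.0, 10.0, 100.0)):
    """Weighted-sum cost an optimiser such as M-PSO could minimise when tuning."""
    settle, overshoot, sse = step_metrics(y, t, setpoint)
    return weights[0] * settle + weights[1] * overshoot + weights[2] * sse

t = np.linspace(0.0, 10.0, 1001)
damped = 1.0 - np.exp(-3.0 * t)                      # well-damped response
ringing = 1.0 - np.exp(-0.5 * t) * np.cos(4.0 * t)   # oscillatory response
```

A tuner would evaluate candidate fuzzy-controller parameters by simulating the network and scoring each step response this way, so the well-damped response receives the lower cost.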
Analysis of physiological signals using machine learning methods
Technological advances in data collection enable scientists to suggest novel approaches, such as machine learning algorithms, to process and make sense of this information. However, during the process of collection, data loss and damage can occur for reasons such as faulty device sensors or miscommunication. In the context of time-series data such as multi-channel bio-signals, there is a possibility of losing a whole channel. In such cases, existing research suggests imputing the missing parts when the majority of the data is available. One way of understanding and classifying complex signals is by using deep neural networks, whose parameters have been optimised using backpropagation. Over time, improvements have been suggested to enhance this algorithm; however, an essential drawback of backpropagation is its sensitivity to noisy data. This thesis proposes two novel approaches to address the missing data challenge and the drawbacks of backpropagation. First, it suggests a gradient-free model for discovering the optimal hyper-parameters of a deep neural network. The complexity of deep networks and their high-dimensional optimisation parameters make it challenging to find a suitable network structure and hyper-parameter configuration. This thesis proposes the use of a minimalist swarm optimiser, Dispersive Flies Optimisation (DFO), to enable the selected model to achieve better results than the traditional backpropagation algorithm under certain conditions, such as a limited number of training samples. The DFO algorithm offers a robust search process for finding and determining hyper-parameter configurations. Second, it imputes whole missing bio-signals within a multi-channel sample. This approach comprises two experiments, namely the two-signal and five-signal imputation models. The first experiment implements and evaluates the performance of a model mapping bio-signals from A to B and vice versa.
Conceptually, this is an extension of transfer learning using Cycle-Consistent Generative Adversarial Networks (CycleGANs). The second experiment suggests a mechanism for imputing missing signals in instances where multiple data channels are available for each sample. The capability to map to a target signal through multiple source domains achieves a more accurate estimate of the target domain. The results of the experiments indicate that in certain circumstances, such as having a limited number of samples, finding the optimal hyper-parameters of a neural network using gradient-free algorithms outperforms traditional gradient-based algorithms, leading to more accurate classification results. In addition, Generative Adversarial Networks can be used to impute missing data channels in multi-channel bio-signals, and the generated data can be used for further analysis and classification tasks.
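Dispersive Flies Optimisation itself admits a very small implementation. The sketch below follows common descriptions of DFO (move toward the best ring neighbour and the swarm best, with a per-dimension disturbance that restarts components at random), but the exact update rule, parameter values and the elitism step are assumptions, and it is demonstrated on a toy continuous function rather than on network hyper-parameters as in the thesis.

```python
import numpy as np

def dfo(f, dim, n_flies=20, iters=300, bounds=(-5.0, 5.0), dt=0.1, seed=0):
    """Simplified Dispersive Flies Optimisation minimising f over a box."""
    rng = np.random.default_rng(seed)
    lo, hi = bounds
    x = rng.uniform(lo, hi, (n_flies, dim))
    for _ in range(iters):
        fit = np.apply_along_axis(f, 1, x)
        best_idx = int(fit.argmin())
        best = x[best_idx].copy()
        for i in range(n_flies):
            # move relative to the better of the two ring neighbours
            left, right = x[(i - 1) % n_flies], x[(i + 1) % n_flies]
            nb = left if f(left) < f(right) else right
            new = nb + rng.random(dim) * (best - x[i])
            # disturbance: restart each dimension with probability dt
            # to keep diversity in the swarm
            mask = rng.random(dim) < dt
            new[mask] = rng.uniform(lo, hi, int(mask.sum()))
            x[i] = np.clip(new, lo, hi)
        x[best_idx] = best  # keep the best fly (elitism)
    fit = np.apply_along_axis(f, 1, x)
    return x[fit.argmin()], float(fit.min())

found, found_val = dfo(lambda p: float((p ** 2).sum()), dim=2)
```

For hyper-parameter search, each dimension would encode a network choice (e.g. layer width, learning rate) and `f` would return a validation loss instead of an analytic function value.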
Monte Carlo Method with Heuristic Adjustment for Irregularly Shaped Food Product Volume Measurement
Volume measurement plays an important role in the production and processing of food products. Various methods have been proposed to measure the volume of irregularly shaped food products based on 3D reconstruction. However, 3D reconstruction comes at a high computational cost, and some volume measurement methods based on it have low accuracy. Another approach measures the volume of an object using the Monte Carlo method, which relies on random points: it only requires information on whether each random point falls inside or outside the object, and does not require a 3D reconstruction. This paper proposes volume measurement for irregularly shaped food products using a computer vision system, without 3D reconstruction, based on the Monte Carlo method with heuristic adjustment. Five images of each food product were captured using five cameras and processed to produce binary images. Monte Carlo integration with heuristic adjustment was performed to measure the volume based on the information extracted from the binary images. The experimental results show that the proposed method provides high accuracy and precision compared to the water displacement method. In addition, the proposed method is more accurate and faster than the space carving method.
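The inside/outside principle the paper relies on can be illustrated with a plain Monte Carlo volume estimate. Here an analytic unit sphere stands in for the camera-derived binary images, and the function names are illustrative.

```python
import numpy as np

def mc_volume(inside, bounds, n=200_000, seed=0):
    """Estimate an object's volume from an inside/outside test alone:
    sample uniform random points in a bounding box and scale the hit
    fraction by the box volume (no 3D reconstruction required)."""
    rng = np.random.default_rng(seed)
    lo = np.array([b[0] for b in bounds], dtype=float)
    hi = np.array([b[1] for b in bounds], dtype=float)
    pts = rng.uniform(lo, hi, (n, len(bounds)))
    hit_fraction = np.count_nonzero(inside(pts)) / n
    return float(np.prod(hi - lo)) * hit_fraction

# stand-in object: unit sphere, true volume 4/3 * pi ~= 4.18879
in_sphere = lambda p: (p ** 2).sum(axis=1) <= 1.0
estimate = mc_volume(in_sphere, [(-1.0, 1.0)] * 3)
```

In the paper's setting the inside/outside test would be derived from the five binary camera images rather than from a formula, with the heuristic adjustment correcting the bias of that image-based test.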