84 research outputs found

    Colony-Enhanced Recurrent Neural Architecture Search: Collaborative Ant-Based Optimization

    Crafting neural network architectures manually is a formidable challenge that often leads to suboptimal and inefficient structures. The pursuit of an ideal neural configuration is a complex task, prompting the need for metaheuristic approaches such as Neural Architecture Search (NAS). Drawing inspiration from the ingenious mechanisms of nature, this paper introduces Collaborative Ant-based Neural Topology Search (CANTS-N), pushing the boundaries of NAS and Neural Evolution (NE). In this approach, ant-inspired agents construct neural network structures, adapting within a changing environment much like their natural counterparts. Guided by Particle Swarm Optimization (PSO), CANTS-N's colonies optimize architecture searches, achieving remarkable improvements in mean squared error (MSE) over established methods, including BP-free CANTS, BP CANTS, and ANTS. Scalable, adaptable, and forward-looking, CANTS-N has the potential to reshape the landscape of NAS and NE. This paper provides detailed insights into its methodology, results, and far-reaching implications.
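The abstract does not detail the ant-based construction at the heart of CANTS-N, but the general principle (ants sampling discrete choices in proportion to pheromone, with evaporation and reinforcement of good solutions) can be sketched in a few lines. Everything below (the layered choice structure, scoring function, and all constants) is invented for the demo; it illustrates generic ant-colony construction, not the CANTS-N algorithm itself.

```python
import random

# Toy ant-colony construction sketch. Ants build a "topology" as a sequence of
# discrete choices, sampling each choice in proportion to pheromone; the best
# path per iteration deposits pheromone, and all pheromone evaporates over time.
# Layer sizes, the scoring function and every constant here are invented for
# the demo -- this is the general ant-based principle, not CANTS-N itself.
random.seed(1)
LAYERS, CHOICES = 4, 3          # 4 decision points, 3 options each (option 0 is "best")
pher = [[1.0] * CHOICES for _ in range(LAYERS)]

def build_path():
    """One ant samples one choice per layer, proportionally to pheromone."""
    path = []
    for layer in range(LAYERS):
        r, acc = random.random() * sum(pher[layer]), 0.0
        for c in range(CHOICES):
            acc += pher[layer][c]
            if r <= acc:
                path.append(c)
                break
    return path

def score(path):
    """Synthetic quality: paths closer to all-zeros score closer to 1."""
    return 1.0 / (1.0 + sum(path))

for _ in range(200):                              # colony iterations
    paths = [build_path() for _ in range(10)]     # 10 ants per iteration
    for layer in range(LAYERS):                   # evaporation, with a small floor
        pher[layer] = [max(0.05, 0.9 * p) for p in pher[layer]]
    best = max(paths, key=score)
    for layer, c in enumerate(best):              # reinforce the iteration-best path
        pher[layer][c] += score(best)

best_path = [max(range(CHOICES), key=lambda c: pher[layer][c]) for layer in range(LAYERS)]
```

With reinforcement concentrating pheromone on low-sum paths, `best_path` tends toward all zeros as the colony converges.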

    A Comprehensive Survey on Particle Swarm Optimization Algorithm and Its Applications

    Particle swarm optimization (PSO) is a heuristic global optimization method originally proposed by Kennedy and Eberhart in 1995, and is now one of the most commonly used optimization techniques. This survey presents a comprehensive investigation of PSO. On one hand, we review advances in PSO, including its modifications (such as quantum-behaved PSO, bare-bones PSO, chaotic PSO, and fuzzy PSO), population topologies (fully connected, von Neumann, ring, star, random, etc.), hybridizations (with genetic algorithms, simulated annealing, tabu search, artificial immune systems, ant colony algorithms, artificial bee colony, differential evolution, harmonic search, and biogeography-based optimization), extensions (to multiobjective, constrained, discrete, and binary optimization), theoretical analysis (parameter selection and tuning, and convergence analysis), and parallel implementations (in multicore, multiprocessor, GPU, and cloud computing forms). On the other hand, we survey applications of PSO in the following fields: electrical and electronic engineering, automation control systems, communication theory, operations research, mechanical engineering, fuel and energy, medicine, chemistry, and biology. It is hoped that this survey will be beneficial for researchers studying PSO algorithms.
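The canonical global-best PSO update surveyed here (inertia plus cognitive and social pulls) fits in a short sketch. The parameter values below (w = 0.7, c1 = c2 = 1.5) are common textbook defaults, not values prescribed by this survey.

```python
import random

def pso(f, dim, n_particles=30, iters=200, w=0.7, c1=1.5, c2=1.5, bounds=(-5.0, 5.0)):
    """Minimise f over a box using the canonical global-best PSO update."""
    lo, hi = bounds
    pos = [[random.uniform(lo, hi) for _ in range(dim)] for _ in range(n_particles)]
    vel = [[0.0] * dim for _ in range(n_particles)]
    pbest = [p[:] for p in pos]                    # each particle's best position
    pbest_val = [f(p) for p in pos]
    g = min(range(n_particles), key=lambda i: pbest_val[i])
    gbest, gbest_val = pbest[g][:], pbest_val[g]   # swarm's best position so far
    for _ in range(iters):
        for i in range(n_particles):
            for d in range(dim):
                r1, r2 = random.random(), random.random()
                # velocity = inertia + cognitive pull + social pull
                vel[i][d] = (w * vel[i][d]
                             + c1 * r1 * (pbest[i][d] - pos[i][d])
                             + c2 * r2 * (gbest[d] - pos[i][d]))
                pos[i][d] += vel[i][d]
            val = f(pos[i])
            if val < pbest_val[i]:
                pbest[i], pbest_val[i] = pos[i][:], val
                if val < gbest_val:
                    gbest, gbest_val = pos[i][:], val
    return gbest, gbest_val

# usage: minimise the sphere function in 3 dimensions
random.seed(0)
best, best_val = pso(lambda x: sum(v * v for v in x), dim=3)
```

Most of the modifications the survey lists (quantum-behaved, bare-bones, chaotic PSO, alternative topologies) replace or restructure exactly the velocity update shown here.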

    Voltage optimization in active distribution networks—utilizing analytical and computational approaches in high renewable energy penetration environments

    This review paper synthesizes recent advancements in voltage regulation techniques for active distribution networks (ADNs), particularly in contexts with high renewable energy source (RES) penetration, using photovoltaics (PVs) as a highlighted example. It provides a comprehensive analysis of innovative strategies and optimization algorithms aimed at mitigating voltage fluctuations, optimizing network performance, and integrating smart technologies such as smart inverters and energy storage systems (ESSs). The review highlights key developments in decentralized control algorithms, multi-objective optimization techniques, and the integration of advanced technologies such as soft open points (SOPs) to enhance grid stability and efficiency, and categorizes these strategies into two main types: analytical methods and computational methods. In conclusion, the review underscores the critical need for advanced analytical and computational methods in the voltage regulation of ADNs with high renewable energy penetration, highlighting the potential for significant improvements in grid stability and efficiency.
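Many of the optimization formulations such reviews cover minimize some measure of voltage deviation subject to bus-voltage limits. As a purely illustrative example (not a formulation taken from the paper), a simple penalized scalar objective over per-unit bus voltages might look like:

```python
def voltage_objective(v_pu, v_ref=1.0, v_min=0.95, v_max=1.05, penalty=10.0):
    """Sum of squared per-unit deviations from the reference voltage, plus a
    fixed penalty for each bus outside its limits. An illustrative objective
    only; the surveyed papers each use their own formulations."""
    dev = sum((v - v_ref) ** 2 for v in v_pu)
    viol = sum(1 for v in v_pu if not (v_min <= v <= v_max))
    return dev + penalty * viol

# usage: feeder bus voltages in p.u.; PV injection pushes one bus above 1.05
buses = [1.07, 0.98, 1.02]
obj = voltage_objective(buses)
```

An optimizer (PSO, a multi-objective method, etc.) would then adjust controls such as inverter setpoints or ESS dispatch to drive this objective down.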

    Classification and Performance Study of Task Scheduling Algorithms in Cloud Computing Environment

    Cloud computing has become very common in recent years and is growing rapidly owing to attractive benefits and features such as resource pooling, accessibility, availability, scalability, reliability, cost saving, security, flexibility, on-demand and pay-per-use services, use from anywhere, quality of service, and resilience. With this rapid growth, many users may require services or need to execute their tasks simultaneously on the resources provided by service providers. To deliver these services with the best performance and with minimum cost, response time, and makespan, and with effective use of resources, an intelligent and efficient task scheduling technique is required; task scheduling is considered one of the main and essential issues in the cloud computing environment, as it allocates tasks to the proper cloud resources and optimizes overall system performance. To this end, researchers have put huge efforts into developing several classes of scheduling algorithms suitable for various computing environments and for the needs of various types of individuals and organizations. This article provides a classification of proposed scheduling strategies and developed algorithms in the cloud computing environment, along with an evaluation of their performance. A comparison of the performance of these algorithms with existing ones is also given, and the future research work in the reviewed articles (if available) is pointed out. This work reviews 88 task scheduling algorithms in the cloud computing environment, distributed over the seven scheduling classes suggested in this study. Each reviewed article deals with a novel scheduling technique and the performance improvement it introduces compared with previously existing task scheduling algorithms.
Keywords: Cloud computing, Task scheduling, Load balancing, Makespan, Energy-aware, Turnaround time, Response time, Cost of task, QoS, Multi-objective. DOI: 10.7176/IKM/12-5-03. Publication date: September 30, 2022
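Among the classic heuristics that task-scheduling surveys of this kind cover is min-min scheduling, which repeatedly assigns the unscheduled task with the smallest earliest completion time to the machine achieving it, then reports the makespan. The sketch below is a generic illustration, not an algorithm taken from the article.

```python
def min_min_schedule(exec_time):
    """Min-min heuristic. exec_time[t][m] is the runtime of task t on
    machine m. Returns (assignment, makespan)."""
    n_tasks, n_machines = len(exec_time), len(exec_time[0])
    ready = [0.0] * n_machines          # time at which each machine becomes free
    unscheduled = set(range(n_tasks))
    assignment = {}
    while unscheduled:
        # find the (task, machine) pair with the smallest completion time
        best = None
        for t in unscheduled:
            for m in range(n_machines):
                ct = ready[m] + exec_time[t][m]
                if best is None or ct < best[0]:
                    best = (ct, t, m)
        ct, t, m = best
        assignment[t] = m               # commit the task to that machine
        ready[m] = ct
        unscheduled.remove(t)
    return assignment, max(ready)

# usage: 3 tasks on 2 machines
times = [[4.0, 6.0], [3.0, 2.0], [5.0, 4.0]]
assign, makespan = min_min_schedule(times)
```

Alternatives such as max-min or suffrage differ only in which pair they commit at each step, which is why surveys group them into a common class.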

    Exploring the adoption of a conceptual data analytics framework for subsurface energy production systems: a study of predictive maintenance, multi-phase flow estimation, and production optimization

    As technology continues to advance and become more integrated in the oil and gas industry, a vast amount of data is now prevalent across various scientific disciplines, providing new opportunities to gain insightful and actionable information. The convergence of digital transformation with the physics of fluid flow through porous media and pipelines has driven the advancement and application of machine learning (ML) techniques to extract further value from this data. As a result, digital transformation and its associated machine-learning applications have become a new area of scientific investigation. The transformation of brownfields into digital oilfields can aid in energy production by accomplishing various objectives, including increased operational efficiency, production optimization, collaboration, data integration, decision support, and workflow automation.
This work aims to present a framework of these applications, specifically through the implementation of virtual sensing, predictive analytics using predictive maintenance on production hydraulic systems (with a focus on electrical submersible pumps), and prescriptive analytics for production optimization in steam and waterflooding projects. In terms of virtual sensing, the accurate estimation of multi-phase flow rates is crucial for monitoring and improving production processes. This study presents a data-driven approach for calculating multi-phase flow rates using sensor measurements from electrical submersible pump wells. An exhaustive exploratory data analysis is conducted, including a univariate study of the target outputs (liquid rate and water cut), a multivariate study of the relationships between inputs and outputs, and data grouping based on principal component projections and clustering algorithms. Feature prioritization experiments are performed to identify the most influential parameters in the prediction of flow rates. Model comparison is done using the mean absolute error, mean squared error and coefficient of determination. The results indicate that the CNN-LSTM network architecture is particularly effective in time series analysis for ESP sensor data, as the 1D-CNN layers are capable of extracting features and generating informative representations of time series data automatically. Subsequently, this study presents a methodology for implementing predictive maintenance on artificial lift systems, specifically regarding the maintenance of Electrical Submersible Pumps (ESPs). Conventional maintenance practices for ESPs require extensive resources and manpower and are often initiated through reactive monitoring of multivariate sensor data.
To address this issue, the study employs the use of principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to analyze real-time sensor data and predict potential failures in ESPs. PCA is utilized as an unsupervised technique and its output is further processed by the XGBoost model for prediction of system status. The resulting predictive model has been shown to provide signals of potential failures up to seven days in advance, with an F1 score greater than 0.71 on the test set. In addition to the data-driven modeling approach, the present study also incorporates model-free reinforcement learning (RL) algorithms to aid in decision-making in production optimization. The task of determining the optimal injection strategy poses challenges due to the complexity of the underlying dynamics, including nonlinear formulation, temporal variations, and reservoir heterogeneity. To tackle these challenges, the problem was reformulated as a Markov decision process and RL algorithms were employed to determine actions that maximize production yield. The results of the study demonstrate that the RL agent was able to significantly enhance the net present value (NPV) by continuously interacting with the environment and iteratively refining the dynamic process through multiple episodes. This showcases the potential for RL algorithms to provide effective and efficient solutions for complex optimization problems in the production domain. In conclusion, this study represents an original contribution to the field of data-driven applications in subsurface energy systems. It proposes a data-driven method for determining multi-phase flow rates in electrical submersible pumped (ESP) wells utilizing sensor measurements. The methodology includes conducting exploratory data analysis, conducting experiments to prioritize features, and evaluating models based on mean absolute error, mean squared error, and coefficient of determination.
The findings indicate that a convolutional neural network-long short-term memory (CNN-LSTM) network is an effective approach for time series analysis in ESPs. In addition, the study implements principal component analysis (PCA) and extreme gradient boosting trees (XGBoost) to perform predictive maintenance on ESPs and anticipate potential failures up to a seven-day horizon. Furthermore, the study applies model-free reinforcement learning (RL) algorithms to aid decision-making in production optimization and enhance net present value (NPV).
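The reformulation of injection-strategy selection as a Markov decision process can be illustrated with a deliberately tiny MDP and tabular Q-learning. The states, dynamics, and reward below are invented for the sketch and bear no relation to the actual reservoir model or RL algorithm used in the study; they only show the state-action-reward loop the abstract describes.

```python
import random

# Hypothetical toy MDP (illustration only, not the study's reservoir model):
# states are discrete "pressure" levels 0..4, actions are injection rates 0/1/2.
# Pressure decays by 1 each step and rises with injection (capped at 4); reward
# is production value (proportional to the resulting pressure) minus a cost
# per unit of injection, so the agent must balance the two.
N_STATES, ACTIONS = 5, (0, 1, 2)

def step(state, action):
    next_state = min(N_STATES - 1, max(0, state + action - 1))
    reward = float(next_state) - 0.4 * action
    return next_state, reward

def q_learning(episodes=500, alpha=0.1, gamma=0.9, eps=0.1, horizon=20):
    q = {(s, a): 0.0 for s in range(N_STATES) for a in ACTIONS}
    for _ in range(episodes):
        s = 0
        for _ in range(horizon):
            # epsilon-greedy action selection
            a = random.choice(ACTIONS) if random.random() < eps else max(ACTIONS, key=lambda a: q[(s, a)])
            s2, r = step(s, a)
            # standard Q-learning update toward the bootstrapped target
            q[(s, a)] += alpha * (r + gamma * max(q[(s2, b)] for b in ACTIONS) - q[(s, a)])
            s = s2
    return q

random.seed(0)
q = q_learning()
greedy = {s: max(ACTIONS, key=lambda a: q[(s, a)]) for s in range(N_STATES)}
```

In this toy setting the learned greedy policy injects aggressively at low pressure and eases off once the cap is reached, the same qualitative trade-off the study's RL agent makes when maximizing NPV.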

    Interactive optimisation for high-lift design.

    Interactivity always involves two entities, one of which by default is a human user. The specialised subject of human factors is introduced in the context of computational aerodynamics and optimisation, specifically for a high-lift aerofoil. The trial-and-error nature of a design process hinges on the designer's knowledge, skill and intuition. A basic, important assumption of a man-machine system is that, in solving a problem, there are some steps in which the computer has an advantageous edge while in others the human has dominance. Computational technologies are now an indispensable part of aerospace technology; algorithms involving significant user interaction, either during the process of generating solutions or as a component of post-optimisation evaluation where human decision making is involved, are increasingly popular, and multi-objective particle swarm optimisation is one such optimiser. Several design optimisation problems in engineering are by nature multi-objective; the interest of a designer lies in simultaneous optimisation against two or more objectives which are usually in conflict. Interactive optimisation allows the designer to understand trade-offs between various objectives, and is generally used as a tool for decision making. The solution to a multi-objective problem, in which improvement in one objective comes at the deterioration of at least one other objective, is called a Pareto set. There are multiple solutions to a problem and multiple betterment ideas for an already existing design. The final responsibility of identifying an optimal solution or idea rests on the design engineers, and decision making is done based on quantitative metrics, displayed as numbers or graphs. However, visualisation, ergonomics and human factors influence and impact this decision-making process.
A visual, graphical depiction of the Pareto front is oftentimes used as a design-aid tool for decision making, with chances of errors and fallacies fundamentally existing in engineering design. An effective visualisation tool benefits complex engineering analyses by providing the decision-maker with a good imagery of the most important information. Two high-lift aerofoil data-sets have been used as test-case examples; the module comprises a multi-element solver, an optimiser based on a swarm intelligence technique, and visual techniques which include parallel co-ordinates, heat map, scatter plot, self-organising map and radial coordinate visualisation. Factors that affect optima and various evaluation criteria have been studied in light of the human user. This research enquires into interactive optimisation by adapting three interactive approaches: information trade-off, reference point and classification, and investigates selected visualisation techniques which act as chief aids in the context of high-lift design trade studies. Human-in-the-loop engineering, man-machine interaction and interface, along with influencing factors, reliability, validation and verification in the presence of design uncertainty, are considered. The research structure, choice of optimiser and visual aids adopted in this work are influenced by and streamlined to fit with the parallel on-going development work on Airbus' Python-based tool. Results and analysis, together with a literature survey, are presented in this report. The words human, user, engineer, aerodynamicist, designer, analyst and decision-maker/DM are synonymous, and are used interchangeably in this research. In a virtual engineering setting, a suitable visualisation tool is a crucial prerequisite for an efficient interactive optimisation task.
The underlying premise of this work is that various optimisation design tools and methods are most useful when combined with a human engineer's insight; questions such as why, what and how might help aid aeronautical technical innovation.
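A Pareto set as defined above (no candidate can improve in one objective without worsening another) can be extracted from a finite set of candidate designs with a short non-dominated filter. This is a generic sketch of the concept, not code from the thesis or from Airbus' tool; the sample objective values are made up.

```python
def pareto_front(points):
    """Return the non-dominated subset of a list of objective vectors,
    assuming every objective is to be minimised."""
    def dominates(a, b):
        # a dominates b: no worse in every objective, strictly better in one
        return all(x <= y for x, y in zip(a, b)) and any(x < y for x, y in zip(a, b))
    return [p for p in points if not any(dominates(q, p) for q in points)]

# usage: two conflicting objectives per design (both minimised),
# e.g. drag and a second aerodynamic penalty -- values are illustrative
designs = [(1.0, 5.0), (2.0, 3.0), (3.0, 4.0), (4.0, 1.0)]
front = pareto_front(designs)
```

The visual techniques the thesis studies (parallel co-ordinates, heat maps, scatter plots) are precisely ways of presenting such a front so the designer can pick a trade-off.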

    Cultural particle swarm optimization


    Parametric Optimization of Taper Cutting Process using Wire Electrical Discharge Machining (WEDM)

    Significant technological advancement of the wire electrical discharge machining (WEDM) process has been observed in recent times in order to meet the requirements of various manufacturing fields, especially the production of parts with complex geometry in the precision die industry. Taper cutting is an important application of the WEDM process, aiming at generating complex parts with tapered profiles. Wire deformation and breakage are more pronounced in taper cutting than in straight cutting, with adverse effects on the desired taper angle and surface integrity. These problems may be attributed to the stiffness of the wire; however, controlling the process parameters can somewhat reduce them. An extensive literature review reveals that the effect of process parameters on various performance measures in taper cutting using WEDM is not adequately addressed. Hence, the study of the effect of process parameters on performance measures using various advanced metals and metal matrix composites (MMCs) has become a predominant research area in this field. In this context, the present work experimentally investigates the machining performance of various alloys, super alloys and metal matrix composites during taper cutting using the WEDM process. The effects of process parameters such as part thickness, taper angle, pulse duration, discharge current, wire speed and wire tension on performance measures such as angular error, surface roughness, cutting rate and white layer thickness are studied using Taguchi's analysis. The functional relationship between the input parameters and performance measures has been developed using non-linear regression analysis. Simultaneous optimization of the performance measures has been carried out using recent nature-inspired algorithms such as multi-objective particle swarm optimization (MOPSO) and the bat algorithm.
Although MOPSO develops a set of non-dominated solutions, the best-ranked solution is identified from a large number of solutions through application of the maximum deviation method rather than resorting to human judgement. Deep cryogenic treatment of both the wire and the work material has been carried out to enhance the machining efficiency of low-conductivity work materials such as Inconel 718. Finally, artificial intelligence models are proposed to predict the various performance measures prior to machining. The study offers useful insight into controlling the parameters to improve machining efficiency.
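The maximum deviation method mentioned above has several formulations in the literature; the sketch below uses a simple pairwise-deviation weighting over min-max-normalised criteria, and should be read as an assumption about the general technique rather than the thesis's exact procedure. The candidate values in the usage example are invented.

```python
def max_deviation_rank(solutions):
    """Rank non-dominated solutions by a maximum-deviation-style weighting:
    criteria that discriminate more between alternatives get larger weights.
    solutions[i][j] is the value of criterion j for alternative i, with every
    criterion to be minimised."""
    n, m = len(solutions), len(solutions[0])
    cols = list(zip(*solutions))
    # normalise each criterion to [0, 1], flipped so that larger means better
    norm = []
    for j in range(m):
        lo, hi = min(cols[j]), max(cols[j])
        span = (hi - lo) or 1.0
        norm.append([(hi - v) / span for v in cols[j]])
    # weight_j proportional to the total pairwise deviation within criterion j
    # (pairs with i == k contribute zero and are harmless)
    dev = [sum(abs(norm[j][i] - norm[j][k]) for i in range(n) for k in range(n))
           for j in range(m)]
    total = sum(dev) or 1.0
    w = [d / total for d in dev]
    scores = [sum(w[j] * norm[j][i] for j in range(m)) for i in range(n)]
    best = max(range(n), key=lambda i: scores[i])
    return best, scores

# usage: three non-dominated WEDM settings scored on (angular error, roughness)
cands = [[0.10, 2.0], [0.20, 1.5], [0.30, 1.2]]
best, scores = max_deviation_rank(cands)
```

The middle candidate wins here because it balances both criteria, which is the kind of automatic tie-breaking the thesis uses instead of human judgement.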
