9,814 research outputs found
Identifying vulnerabilities of industrial control systems using evolutionary multiobjective optimisation
In this paper, we propose a novel methodology to assist in identifying vulnerabilities in real-world complex heterogeneous industrial control systems (ICS) using two Evolutionary Multiobjective Optimisation (EMO) algorithms, NSGA-II and SPEA2. Our approach is evaluated on a well-known benchmark chemical plant simulator, the Tennessee Eastman (TE) process model. We identified vulnerabilities in individual components of the TE model and then made use of these vulnerabilities to generate combinatorial attacks. The generated attacks were aimed at compromising the safety of the system and inflicting economic loss. Results were compared against random attacks, and the performance of the EMO algorithms was evaluated using hypervolume, spread, and inverted generational distance (IGD) metrics. A defence against these attacks, in the form of a novel intrusion detection system, was developed using machine learning algorithms. The designed approach was further tested against the developed detection methods. The obtained results demonstrate that the developed EMO approach is a promising tool for identifying vulnerable ICS components and weaknesses in any existing detection systems protecting them. The proposed approach can serve as a proactive defence tool for control and security engineers to identify and prioritise vulnerabilities in the system. The approach can be employed to design resilient control strategies and to test the effectiveness of security mechanisms, both at the design stage and during the operational phase of the system.
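As a rough illustration of the search machinery involved, the following minimal sketch uses the open-source pymoo library to run NSGA-II on a toy two-objective attack model and score the resulting front with the hypervolume indicator. The `AttackProblem` class and its objective functions are placeholders, not the paper's TE-model formulation.

```python
# Minimal sketch: searching attack parameters with NSGA-II (pymoo).
# The two objectives stand in for the paper's safety and economic-loss
# criteria; their analytical forms below are illustrative only.
import numpy as np
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.core.problem import ElementwiseProblem
from pymoo.optimize import minimize
from pymoo.indicators.hv import HV

class AttackProblem(ElementwiseProblem):
    """Toy stand-in for a plant model: x encodes attack magnitudes."""
    def __init__(self):
        super().__init__(n_var=4, n_obj=2, xl=0.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        f1 = -np.sum(x ** 2)             # placeholder safety impact (negated)
        f2 = -np.sum(np.sin(np.pi * x))  # placeholder economic loss (negated)
        out["F"] = [f1, f2]

res = minimize(AttackProblem(), NSGA2(pop_size=50),
               ("n_gen", 100), seed=1, verbose=False)

# Hypervolume of the final front, one of the metrics used to compare
# NSGA-II and SPEA2 in the paper
hv = HV(ref_point=np.array([0.0, 0.0]))
print("hypervolume:", hv(res.F))
```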
On the Generation of Realistic and Robust Counterfactual Explanations for Algorithmic Recourse
The recent widespread deployment of machine learning algorithms presents many new challenges. Machine learning algorithms are usually opaque and can be particularly difficult to interpret. When humans are involved, algorithmic and automated decisions can negatively impact people's lives. Therefore, end users would like to be insured against potential harm. One popular way to achieve this is to provide end users access to algorithmic recourse, which gives end users negatively affected by algorithmic decisions the opportunity to reverse unfavorable decisions, e.g., from a loan denial to a loan acceptance. In this thesis, we design recourse algorithms to meet various end user needs. First, we propose methods for the generation of realistic recourses. We use generative models to suggest recourses likely to occur under the data distribution. To this end, we shift the recourse action from the input space to the generative model's latent space, allowing us to generate counterfactuals that lie in regions with data support. Second, we observe that small changes applied to the recourses prescribed to end users are likely to invalidate the suggested recourse once it is noisily implemented in practice. Motivated by this observation, we design methods for the generation of robust recourses and for assessing the robustness of recourse algorithms to data deletion requests. Third, the lack of a commonly used code base for counterfactual explanation and algorithmic recourse algorithms, together with the vast array of evaluation measures in the literature, makes it difficult to compare the performance of different algorithms. To solve this problem, we provide an open-source benchmarking library that streamlines the evaluation process and can be used for benchmarking, rapidly developing new methods, and setting up new experiments. In summary, our work contributes to a more reliable interaction between end users and machine-learned models by covering fundamental aspects of the recourse process, and suggests new solutions towards generating realistic and robust counterfactual explanations for algorithmic recourse.
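To make the latent-space idea concrete, here is a minimal PyTorch sketch of counterfactual search in a generative model's latent space, assuming a pretrained encoder/decoder and a differentiable classifier. All three modules below are single-layer placeholders, not the thesis's actual models.

```python
# Sketch of latent-space counterfactual search. Assumes a pretrained
# autoencoder (enc/dec) and a differentiable classifier (clf); the
# single-layer modules below are placeholders only.
import torch
import torch.nn as nn

enc = nn.Linear(10, 4)     # placeholder encoder
dec = nn.Linear(4, 10)     # placeholder decoder
clf = nn.Linear(10, 1)     # placeholder classifier (logit output)

def latent_counterfactual(x, target=1.0, steps=200, lr=0.05, lam=0.1):
    """Optimise a latent code so the decoded point flips the decision
    while staying close to the factual input x."""
    z = enc(x).detach().requires_grad_(True)
    opt = torch.optim.Adam([z], lr=lr)
    bce = nn.BCEWithLogitsLoss()
    for _ in range(steps):
        x_cf = dec(z)
        # classification loss towards the desired label + proximity term
        loss = bce(clf(x_cf).squeeze(), torch.tensor(target)) \
               + lam * torch.norm(x_cf - x)
        opt.zero_grad()
        loss.backward()
        opt.step()
    return dec(z).detach()

x = torch.randn(10)              # factual instance, e.g. a denied applicant
x_cf = latent_counterfactual(x)  # counterfactual with data support
```

Because every candidate is produced by the decoder, the search never leaves the region the generative model has learned to represent, which is what keeps the suggested recourse realistic.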
CFD Modelling of the Mixture Preparation in a Modern Gasoline Direct Injection Engine and Correlations with Experimental PN Emissions
A detailed 3D CFD analysis of a modern gasoline direct injection (GDI) engine is carried out to reveal the connections between pre-combustion mixture indicators and PN emissions. Firstly, a novel calibration methodology is introduced to accurately predict the widely used characteristics of the high-pressure fuel spray. The methodology utilised the Siemens STAR-CD 3D CFD software environment and employed a combination of statistical and optimisation methods supported by experimental data. The calibration process identified dominant factors influencing spray properties and established their optimal levels. The two most widely used models for fuel atomisation were investigated: the Kelvin–Helmholtz/Rayleigh–Taylor (KH–RT) and Reitz–Diwakar (RD) break-up models were calibrated in conjunction with the Rosin–Rammler (RR) mono-modal droplet size distribution. RD outperformed KH–RT in predicting spray tip penetration and droplet size characteristics against their experimental counterparts. The modelling protocol then incorporated droplet-wall interaction models and a multi-component surrogate fuel blend model. The comprehensive digital model was validated using published data and applied to a modern small-capacity GDI engine. The study explored various engine operating conditions and highlighted the contribution of fuel mal-distribution and liquid film retention at spark timing to Particle Number (PN) emissions. Finally, a novel surrogate model was developed to predict the engine-out PN. An extensive CFD analysis was conducted considering part-load operating conditions and variations of engine control variables. The PN surrogate model was developed using an Elastic Net (EN) regression technique, establishing relationships between experimental PN emission levels and modelled, pre-combustion, air-fuel mixture quality indicators. The approach enabled the reliable prediction of engine sooting tendencies without relying on complex measurements of combustion characteristics. These research efforts aim to enhance engine efficiency, reduce emissions, and contribute to the development of a reliable and cost-effective digital toolset for engine development and diagnostics.
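The surrogate-modelling step can be illustrated with a short scikit-learn sketch: an Elastic Net regressor fitted on mixture-quality indicators against PN levels. Both the indicator names and the data below are synthetic stand-ins for the thesis's CFD outputs and test-bed measurements.

```python
# Sketch of the PN surrogate idea: Elastic Net regression mapping
# pre-combustion mixture-quality indicators to measured PN levels.
import numpy as np
from sklearn.linear_model import ElasticNetCV
from sklearn.pipeline import make_pipeline
from sklearn.preprocessing import StandardScaler

rng = np.random.default_rng(0)
# columns: e.g. mixture-fraction variance, liquid film mass at spark,
# fraction of over-rich cells (hypothetical indicator names)
X = rng.normal(size=(60, 3))
y = 2.0 * X[:, 1] + 0.5 * X[:, 0] + rng.normal(scale=0.1, size=60)

# Elastic Net combines L1 and L2 penalties; CV picks the mixing ratio
model = make_pipeline(StandardScaler(),
                      ElasticNetCV(l1_ratio=[0.2, 0.5, 0.8], cv=5))
model.fit(X, y)
print("in-sample R^2:", round(model.score(X, y), 3))
```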
A novel optimal allocation of STATCOM to enhance voltage stability in power networks
Utilizing a static synchronous compensator (STATCOM) in the electrical power grid greatly improves the grid's voltage profile by enhancing voltage stability. This article proposes a novel approach based on Mixed Integer Distributed Ant Colony Optimization (MIDACO) to determine the optimal STATCOM installation in the electrical power grid. The approach optimizes two control variables: the STATCOM size and its location. The optimization aims to enhance voltage stability at minimum cost by minimizing two objectives: the voltage deviation index and the STATCOM cost. This article also presents a sensitivity analysis to show the stochastic nature of MIDACO and to explain the effect of MIDACO parameters on the optimization approach and the process of reaching the optimal solution. The proposed method has been evaluated on three standard test systems: IEEE 14-bus, IEEE 57-bus, and IEEE 118-bus. In addition, the MIDACO results are compared to those of the artificial bee colony algorithm, the genetic algorithm, and particle swarm optimization. This research did not receive any specific grant from funding agencies in the public, commercial, or not-for-profit sectors.
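The decision problem itself is compact enough to sketch: pick a bus (integer) and a STATCOM size (continuous) to minimise voltage deviation and device cost. The toy Python below collapses the two objectives to a weighted sum and exhaustively searches a discretised grid; `voltage_profile` is a hypothetical stand-in for a real load-flow solver, and MIDACO's ant-colony mechanics are not reproduced here.

```python
# Toy sketch of the STATCOM siting/sizing problem: two decision
# variables (bus index, size in MVAr), two criteria (voltage deviation
# index and cost) collapsed to a weighted sum for brevity.
import itertools

def voltage_profile(bus, size_mvar, n_bus=14):
    """Hypothetical stand-in for a load-flow solver: voltage support
    decays with distance from the installed bus."""
    return [min(1.0, 0.95 + 0.002 * size_mvar / (1 + abs(i - bus)))
            for i in range(n_bus)]

def voltage_deviation_index(voltages):
    # sum of squared deviations from the 1.0 p.u. target
    return sum((v - 1.0) ** 2 for v in voltages)

def statcom_cost(size_mvar):
    # placeholder linear cost model; real studies use vendor cost curves
    return 50.0 * size_mvar

def best_placement(buses, sizes, w=0.5):
    def objective(bus_size):
        bus, size = bus_size
        vdi = voltage_deviation_index(voltage_profile(bus, size))
        # in practice the two terms must be scaled to comparable ranges
        return w * vdi + (1.0 - w) * statcom_cost(size)
    return min(itertools.product(buses, sizes), key=objective)

print(best_placement(range(14), [10, 25, 50]))
```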
jMetal and MFHS Collaboration for Task Scheduling Optimization in Heterogeneous Distributed System
Task scheduling in distributed computing architectures has attracted considerable research interest, leading to the development of numerous algorithms aiming to approach optimal solutions. However, most of these algorithms remain confined to simulation environments and are rarely applied in real-world settings. In a previous study, we introduced the MFHS framework, which facilitates the transition of scheduling algorithms from simulation to practical deployment. Unfortunately, MFHS currently offers a limited selection of scheduling heuristics. In this work, we address this limitation by presenting the MFHS_jMetal framework, which integrates the extensive set of task scheduling algorithms available in the well-established jMetal framework. Our implementation demonstrates the successful expansion of available scheduling algorithms while preserving the core characteristics of MFHS, bridging the gap between theoretical models and real-world deployment.
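For readers unfamiliar with how such scheduling problems are posed to an optimiser, the following Python sketch shows a common encoding: an assignment vector scored by makespan over a synthetic expected-time-to-compute (ETC) matrix. It mirrors the general shape of a jMetal-style problem definition, not MFHS_jMetal's actual code.

```python
# Sketch of a common task-scheduling encoding: assignment[i] is the
# machine that runs task i; fitness is the makespan computed from a
# synthetic ETC (expected time to compute) matrix.
import random

random.seed(1)
N_TASKS, N_MACHINES = 8, 3
ETC = [[random.uniform(1, 10) for _ in range(N_MACHINES)]
       for _ in range(N_TASKS)]

def makespan(assignment):
    """Finishing time of the most loaded machine."""
    load = [0.0] * N_MACHINES
    for task, machine in enumerate(assignment):
        load[machine] += ETC[task][machine]
    return max(load)

# Random-search baseline; a metaheuristic would evolve assignments instead
best = min(([random.randrange(N_MACHINES) for _ in range(N_TASKS)]
            for _ in range(500)), key=makespan)
print(round(makespan(best), 2), best)
```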
Resource-aware scheduling for 2D/3D multi-/many-core processor-memory systems
This dissertation addresses the complexities of 2D/3D multi-/many-core processor-memory systems, focusing on two key areas: enhancing timing predictability in real-time multi-core processors and optimizing performance within thermal constraints. The integration of an increasing number of transistors into compact chip designs, while boosting computational capacity, presents challenges in resource contention and thermal management. The first part of the thesis improves timing predictability. We enhance shared cache interference analysis for set-associative caches, advancing the calculation of Worst-Case Execution Time (WCET). This development enables accurate assessment of cache interference and the effectiveness of partitioned schedulers in real-world scenarios. We introduce TCPS, a novel task and cache-aware partitioned scheduler that optimizes cache partitioning based on task-specific WCET sensitivity, leading to improved schedulability and predictability. Our research explores various cache and scheduling configurations, providing insights into their performance trade-offs. The second part focuses on thermal management in 2D/3D many-core systems. Recognizing the limitations of Dynamic Voltage and Frequency Scaling (DVFS) in S-NUCA many-core processors, we propose synchronous thread migrations as a thermal management strategy. This approach culminates in the HotPotato scheduler, which balances performance and thermal safety. We also introduce 3D-TTP, a transient temperature-aware power budgeting strategy for 3D-stacked systems, reducing the need for Dynamic Thermal Management (DTM) activation. Finally, we present 3QUTM, a novel method for 3D-stacked systems that combines core DVFS and memory bank Low Power Modes with a learning algorithm, optimizing response times within thermal limits. This research contributes significantly to enhancing performance and thermal management in advanced processor-memory systems.
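The flavour of sensitivity-driven cache partitioning can be conveyed in a few lines: give each task one cache way, then greedily hand spare ways to whichever task's WCET drops the most. The Python below is a simplified illustration with synthetic WCET curves, not the TCPS analysis itself.

```python
# Simplified illustration of sensitivity-driven cache partitioning:
# every task gets one way, then spare ways go to the task whose WCET
# falls the most per extra way. WCET curves are synthetic.
def greedy_partition(wcet, total_ways):
    """wcet[t][w-1] = WCET of task t when granted w cache ways."""
    ways = {t: 1 for t in wcet}              # each task needs one way
    for _ in range(total_ways - len(wcet)):  # distribute spare ways
        def gain(t):                         # marginal WCET reduction
            if ways[t] >= len(wcet[t]):
                return 0.0
            return wcet[t][ways[t] - 1] - wcet[t][ways[t]]
        best = max(wcet, key=gain)
        ways[best] += 1
    return ways

# Two tasks with diminishing returns from additional ways
curves = {"A": [10.0, 7.0, 6.0, 5.8], "B": [8.0, 6.5, 6.4, 6.35]}
print(greedy_partition(curves, total_ways=5))   # -> {'A': 3, 'B': 2}
```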
Optimization of Construction Projects Time-Cost-Quality-Environment Trade-off Problem Using Adaptive Selection Slime Mold Algorithm
Artificial intelligence (AI) is increasingly employed to address optimization problems in the construction industry, which aids in the growth and popularization of AI. This study utilizes a hybrid algorithm, the Adaptive Selection Slime Mold Algorithm (ASSMA), which combines Tournament Selection (TS) and the Slime Mold Algorithm (SMA) to address a four-factor optimization problem in construction projects. The combination improves the original algorithm's performance, speeds up the search, and achieves good convergence towards the Pareto front. Efficient resource management is essential to optimize the time, cost, quality, and environmental impact trade-off (TCQE). Case studies are used to illustrate the capabilities of the new model, and ASSMA results are compared to those of the data envelopment analysis (DEA) method used in previous research. To demonstrate the proposed model's effectiveness, it is also compared to multi-objective particle swarm optimization (MOPSO), the multi-objective artificial bee colony (MOABC) algorithm, and the non-dominated sorting genetic algorithm (NSGA-II). Based on the overall results, it is clear that the ASSMA model maintains solution diversity and offers robust, convincing optimal solutions, demonstrating the potential of the proposed model.
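Two of ASSMA's building blocks, Pareto dominance over the four TCQE objectives and tournament selection, can be sketched in a few lines of Python; the objective vectors below are illustrative, not case-study data.

```python
# Sketch of Pareto dominance and binary tournament selection, the
# selection mechanism ASSMA adds to SMA. Objective vectors are
# (time, cost, quality_loss, env_impact), all minimised.
import random

def dominates(a, b):
    """a dominates b: no worse everywhere, strictly better somewhere."""
    return all(x <= y for x, y in zip(a, b)) and \
           any(x < y for x, y in zip(a, b))

def binary_tournament(population):
    a, b = random.sample(population, 2)
    if dominates(b, a):
        return b
    return a          # a dominates b, or the two are incomparable

pop = [(12, 900, 0.10, 40),
       (10, 950, 0.20, 35),
       (14, 800, 0.15, 50)]
print(binary_tournament(pop))
```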
Natural and Technological Hazards in Urban Areas
Natural hazard events and technological accidents are distinct causes of environmental impacts. Natural hazards are physical phenomena that have been active throughout geological time, whereas technological hazards result from actions or facilities created by humans. In our time, however, combined natural and man-made hazards have emerged. Overpopulation and urban development in areas prone to natural hazards increase the impact of natural disasters worldwide. Additionally, urban areas are frequently characterized by intense industrial activity and rapid, poorly planned growth that threatens the environment and degrades the quality of life. Therefore, proper urban planning is crucial to minimize fatalities and reduce the environmental and economic impacts that accompany both natural and technological hazardous events.
Two-Dimensional-Based Hybrid Shape Optimisation of a 5-Element Formula 1 Race Car Front Wing under FIA Regulations
Front wings are a key element in the aerodynamic performance of Formula 1 race cars. Thus, their optimisation makes an important contribution to the performance of cars in races. However, their design is constrained by regulations, which makes it more difficult to find good designs. The present work develops a hybrid shape optimisation approach to obtain an optimal five-element airfoil front wing under the FIA regulations and 17 design parameters. A first baseline design is obtained by parametric optimisation, on which the adjoint method is applied for shape optimisation via Mesh Morphing with Radial Basis Functions. The optimal front wing candidate obtained outperforms the parametric baseline by up to 25% at certain local positions. This shows that the proposed and tested hybrid approach can be a very efficient alternative. Although a direct 3D optimisation approach could be developed, the computational costs would increase dramatically (possibly unaffordably for such a complex, realistic five-element front wing shape with 17 design parameters and regulatory constraints). Thus, the present approach is of strong interest when the computational budget is low and/or a fast new front wing design is desired, which is a frequent scenario in Formula 1 race car design. The authors want to acknowledge the financial support from the Ramón y Cajal 2021 Excellence Research Grant action of the Spanish Ministry of Science and Innovation (FSE/AGENCIA ESTATAL DE INVESTIGACIÓN), the UMA18-FEDERJA-184 grant, and the Andalusian Research, Development and Innovation Plan (PAIDI, Junta de Andalucía) funding. Partial funding for open access charge: Universidad de Málaga.
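The mesh-morphing ingredient can be illustrated independently of any CFD solver: radial basis functions interpolate known control-point displacements smoothly onto all mesh nodes. The sketch below uses a Gaussian kernel on a random 2D toy mesh; it shows the mechanism only, not the tool chain used in the paper.

```python
# Sketch of RBF mesh morphing: displacements prescribed at a few
# control points are propagated smoothly to every mesh node.
import numpy as np

def rbf_morph(ctrl_pts, ctrl_disp, nodes, eps=0.5):
    """Interpolate control-point displacements onto all mesh nodes."""
    kernel = lambda r: np.exp(-(r / eps) ** 2)          # Gaussian RBF
    d = np.linalg.norm(ctrl_pts[:, None] - ctrl_pts[None, :], axis=-1)
    weights = np.linalg.solve(kernel(d), ctrl_disp)     # fit RBF weights
    dn = np.linalg.norm(nodes[:, None] - ctrl_pts[None, :], axis=-1)
    return nodes + kernel(dn) @ weights                 # morphed nodes

ctrl = np.array([[0.0, 0.0], [1.0, 0.0], [0.5, 0.2]])
disp = np.array([[0.0, 0.0], [0.0, 0.0], [0.0, 0.05]])  # lift mid-point
mesh = np.random.rand(100, 2)                           # toy 2D mesh
print(rbf_morph(ctrl, disp, mesh).shape)                # -> (100, 2)
```

In an adjoint-driven loop, the control-point displacements would come from the computed shape sensitivities, and the morphed mesh would be re-evaluated by the flow solver at each design iteration.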
Performance and Competitiveness of Tree-Based Pipeline Optimization Tool
Dissertation presented as the partial requirement for obtaining a Master's degree in Data Science and Advanced Analytics, specialization in Data Science. Automated machine learning (AutoML) is the process of automating the entire machine learning workflow when applied to real-world problems. AutoML can increase data science productivity while keeping the same performance and accuracy, allowing non-experts to use complex machine learning methods. The Tree-based Pipeline Optimization Tool (TPOT) was one of the first AutoML methods created by data scientists and is targeted at optimizing machine learning pipelines using genetic programming. While still under active development, TPOT is a very promising AutoML tool. This thesis aims to explore the algorithm and analyse its performance using real-world data. Results show that evolution-based optimization is at least as accurate as TPOT initialization. The effectiveness of genetic operators, however, depends on the nature of the test case.
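TPOT's public Python API makes the workflow easy to show; the sketch below runs a deliberately tiny evolutionary budget on a standard scikit-learn dataset (real experiments use far larger generation and population sizes).

```python
# Minimal TPOT usage sketch: genetic programming over scikit-learn
# pipelines. Tiny generations/population keep the run short.
from sklearn.datasets import load_digits
from sklearn.model_selection import train_test_split
from tpot import TPOTClassifier

X, y = load_digits(return_X_y=True)
X_tr, X_te, y_tr, y_te = train_test_split(X, y, random_state=42)

tpot = TPOTClassifier(generations=3, population_size=20,
                      random_state=42, verbosity=2)
tpot.fit(X_tr, y_tr)                 # evolves pipelines by GP
print(tpot.score(X_te, y_te))        # accuracy of the best pipeline
tpot.export("best_pipeline.py")      # emit the winner as plain code
```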