Volumetric Techniques for Product Routing and Loading Optimisation in Industry 4.0: A Review
Industry 4.0 has become a crucial part of the majority of processes, components, and related modelling, as well as of the predictive tools that allow a more efficient, automated, and sustainable approach to industry. The availability of large quantities of data, together with advances in IoT, AI, and data-driven frameworks, has led to enhanced data gathering, assessment, and extraction of actionable information, resulting in better decision-making. Product picking and its subsequent packing is an important area and has drawn increasing attention from the research community. However, depending on the context, the related approaches tend to be either highly mathematical or applied to a specific context. This article aims to provide a survey of the main methods, techniques, and frameworks relevant to product packing, and to highlight the main properties and features that should be further investigated to ensure a more efficient and optimised approach.
Identifying vulnerabilities of industrial control systems using evolutionary multiobjective optimisation
In this paper, we propose a novel methodology to assist in identifying vulnerabilities in real-world complex heterogeneous industrial control systems (ICS) using two Evolutionary Multiobjective Optimisation (EMO) algorithms, NSGA-II and SPEA2. Our approach is evaluated on a well-known benchmark chemical plant simulator, the Tennessee Eastman (TE) process model. We identified vulnerabilities in individual components of the TE model and then made use of these vulnerabilities to generate combinatorial attacks. The generated attacks were aimed at compromising the safety of the system and inflicting economic loss. Results were compared against random attacks, and the performance of the EMO algorithms was evaluated using hypervolume, spread, and inverted generational distance (IGD) metrics. A defence against these attacks, in the form of a novel intrusion detection system, was developed using machine learning algorithms. The designed approach was then tested against the developed detection methods. The obtained results demonstrate that the developed EMO approach is a promising tool for identifying the vulnerable components of ICS and the weaknesses of any existing detection systems in place to protect them. The proposed approach can serve as a proactive defence tool for control and security engineers to identify and prioritise vulnerabilities in the system, and can be employed to design resilient control strategies and to test the effectiveness of security mechanisms, both at the design stage and during the operational phase of the system.
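As a rough illustration of how such an EMO attack search can be structured, the sketch below sets up NSGA-II over a small attack parameterisation using the pymoo library. The two objectives, the variable bounds, and the toy surrogate standing in for the Tennessee Eastman simulator are assumptions for illustration only, not the paper's actual formulation.

```python
# Illustrative sketch: searching for damaging-yet-stealthy sensor attacks with NSGA-II.
# The Tennessee Eastman simulator is replaced by a toy surrogate; bounds, objectives,
# and the attack parameterisation are assumptions, not the paper's setup.
import numpy as np
from pymoo.core.problem import ElementwiseProblem
from pymoo.algorithms.moo.nsga2 import NSGA2
from pymoo.optimize import minimize


class AttackSearch(ElementwiseProblem):
    """Decision variables: bias injected on three sensor channels."""

    def __init__(self):
        super().__init__(n_var=3, n_obj=2, xl=-1.0, xu=1.0)

    def _evaluate(self, x, out, *args, **kwargs):
        # Objective 1: negated economic damage (the search minimises, so damage is maximised).
        damage = np.sum(np.abs(x)) ** 1.5          # toy surrogate for process upset
        # Objective 2: detectability (larger biases are easier to flag).
        detectability = np.max(np.abs(x))
        out["F"] = [-damage, detectability]


result = minimize(AttackSearch(), NSGA2(pop_size=50), ("n_gen", 100), seed=1)
print(result.F)   # Pareto front: damage vs. detectability trade-off
```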
A Survey on Forensics and Compliance Auditing for Critical Infrastructure Protection
The broadening dependency and reliance that modern societies have on essential services
provided by Critical Infrastructures are increasing the relevance of their trustworthiness. However, Critical
Infrastructures are attractive targets for cyberattacks, due to the potential for considerable impact, not just
at the economic level but also in terms of physical damage and even loss of human life. Complementing
traditional security mechanisms, forensics and compliance audit processes play an important role in ensuring
Critical Infrastructure trustworthiness. Compliance auditing contributes to checking whether security measures are
in place and compliant with standards and internal policies, while forensics assists in the investigation of past security
incidents. Since these two areas significantly overlap in terms of data sources, tools, and techniques, they can
be merged into unified Forensics and Compliance Auditing (FCA) frameworks. In this paper, we survey the
latest developments, methodologies, challenges, and solutions addressing forensics and compliance auditing
in the scope of Critical Infrastructure Protection. This survey focuses on relevant contributions, capable of
tackling the requirements imposed by massively distributed and complex Industrial Automation and Control
Systems, in terms of handling large volumes of heterogeneous data (that can be noisy, ambiguous, and
redundant) for analytic purposes, with adequate performance and reliability. The survey results are organised into
a taxonomy of the FCA field, whose key categories reflect the relevant topics in the literature. The
collected knowledge also led to the establishment of a reference FCA architecture, proposed as a generic
template for a converged platform. These results are intended to guide future research on forensics and
compliance auditing for Critical Infrastructure Protection.
A foundation for synthesising programming language semantics
Programming or scripting languages used in real-world systems are seldom designed
with a formal semantics in mind from the outset. Therefore, the first step for developing well-founded analysis tools for these systems is to reverse-engineer a formal
semantics. This can take months or years of effort.
Could we automate this process, at least partially? Though desirable, automatically reverse-engineering semantics rules from an implementation is very challenging,
as found by Krishnamurthi, Lerner and Elberty. They propose automatically learning
desugaring translation rules, mapping the language whose semantics we seek to a simplified, core version, whose semantics are much easier to write. The present thesis
contains an analysis of their challenge, as well as the first steps towards a solution.
Scaling methods with the size of the language is very difficult due to state space
explosion, so this thesis proposes an incremental approach to learning the translation
rules. I present a formalisation that both clarifies the informal description of the challenge by Krishnamurthi et al., and reformulates the problem, shifting the focus to the
conditions for incremental learning. The central definition of the new formalisation is
the desugaring extension problem, i.e. extending a set of established translation rules
by synthesising new ones.
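The sketch below illustrates the flavour of this extension problem on a hypothetical toy language: given established translation rules and behavioural examples for a new surface construct, a candidate rule is accepted when desugared programs evaluate to the expected values. The languages, rule format, and candidate pool are illustrative assumptions, not the thesis's formalisation.

```python
# Toy illustration of the "desugaring extension" idea: given established rules, search
# for a rule translating a new surface construct into an already-understood core language.
# Languages, rule format, and examples are hypothetical, not the thesis's benchmarks.

CORE_EVAL = {
    "num": lambda args, env: args[0],
    "add": lambda args, env: evaluate(args[0], env) + evaluate(args[1], env),
    "mul": lambda args, env: evaluate(args[0], env) * evaluate(args[1], env),
}

def evaluate(term, env=None):
    op, *args = term
    return CORE_EVAL[op](args, env)

def desugar(term, rules):
    """Rewrite a surface term into core form, applying one translation rule per surface operator."""
    if not isinstance(term, tuple):
        return term
    op, *args = term
    args = [desugar(a, rules) for a in args]
    if op in rules:                      # surface construct: apply its translation rule
        return rules[op](*args)
    return (op, *args)                   # already a core construct

# Established rules, plus behavioural examples for the new construct "double".
rules = {"neg": lambda e: ("mul", ("num", -1), e)}
examples = [(("double", ("num", 3)), 6), (("double", ("num", -2)), -4)]

# Tiny enumerative search over candidate templates for "double".
candidates = [lambda e: ("add", e, e), lambda e: ("mul", e, e), lambda e: ("mul", ("num", 2), e)]
for cand in candidates:
    trial = dict(rules, double=cand)
    if all(evaluate(desugar(s, trial)) == v for s, v in examples):
        print("found rule:", cand(("num", "x")))
        break
```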
In a synthesis algorithm, the choice of search space is important and non-trivial,
as it needs to strike a good balance between expressiveness and efficiency. The rest
of the thesis focuses on defining search spaces for translation rules via typing rules.
Two prerequisites are required for comparing search spaces. The first is a series of
benchmarks, a set of source and target languages equipped with intended translation
rules between them. The second is an enumerative synthesis algorithm for efficiently
enumerating typed programs. I show how algebraic enumeration techniques can be applied to enumerating well-typed translation rules, and discuss the properties expected
from a type system to ensure that typed programs can be enumerated efficiently.
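As a loose sketch of size-indexed ("algebraic") enumeration, the snippet below enumerates all well-typed terms of a toy two-type expression language by AST size, memoising each (type, size) cell; the grammar and types are placeholders rather than the rule language used in the thesis.

```python
# Sketch of size-indexed enumeration of well-typed terms for a toy language with types
# Int and Bool; the grammar is an assumed placeholder, not the thesis's rule language.
from functools import lru_cache

VARS = {"Int": ("n",), "Bool": ("b",)}

@lru_cache(maxsize=None)
def terms(ty, size):
    """All terms of type `ty` whose AST uses exactly `size` nodes."""
    if size == 1:
        return tuple(VARS[ty])
    out = []
    if ty == "Int":
        # plus : Int -> Int -> Int, splitting the remaining size between the operands
        for left in range(1, size - 1):
            for a in terms("Int", left):
                for b in terms("Int", size - 1 - left):
                    out.append(("plus", a, b))
    if ty == "Bool":
        # eq : Int -> Int -> Bool
        for left in range(1, size - 1):
            for a in terms("Int", left):
                for b in terms("Int", size - 1 - left):
                    out.append(("eq", a, b))
    return tuple(out)

for n in range(1, 6):
    print(n, len(terms("Int", n)))   # counts grow combinatorially with term size
```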
The thesis presents and empirically evaluates two search spaces. A baseline search
space yields the first practical solution to the challenge. The second search space is
based on a natural heuristic for translation rules, limiting the usage of variables so that
they are used exactly once. I present a linear type system designed to efficiently enumerate translation rules, where this heuristic is enforced. Through informal analysis
and empirical comparison to the baseline, I then show that using linear types can speed
up the synthesis of translation rules by an order of magnitude.
An Integrated Deep Learning Model with Genetic Algorithm (GA) for Optimal Syngas Production Using Dry Reforming of Methane (DRM)
The dry reforming of methane is a chemical process that transforms two primary greenhouse gases, carbon dioxide (CO2) and methane (CH4), into syngas, a versatile industrial precursor, and it has gained significant attention over the past decades. Nonetheless, commercial development of this eco-friendly process faces barriers such as catalyst deactivation and high energy demand. Artificial intelligence (AI), specifically deep learning, accelerates the development of this process by providing advanced analytics. However, deep learning requires substantial training samples, and collecting data at bench scale faces cost and physical constraints. This study fills this research gap by employing a pretraining approach, which is invaluable for small datasets. It introduces a software sensor for regression (SSR) powered by deep learning to estimate the quality parameters of the process. Moreover, combining the SSR with a genetic algorithm offers a prescriptive analysis, suggesting optimal thermodynamic parameters to improve process efficiency.
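The prescriptive step can be pictured as below: a simple genetic algorithm searches the input space of a trained surrogate for conditions with the best predicted output. The surrogate function, the choice of decision variables (temperature, pressure, CO2/CH4 feed ratio), and their bounds are illustrative assumptions standing in for the paper's SSR and process constraints.

```python
# Sketch of coupling a trained soft-sensor surrogate with a genetic algorithm to suggest
# operating conditions. `surrogate` stands in for the deep-learning SSR; the variable
# bounds (temperature K, pressure bar, CO2/CH4 feed ratio) are assumptions.
import numpy as np

rng = np.random.default_rng(0)
BOUNDS = np.array([[873.0, 1173.0], [1.0, 5.0], [0.5, 2.0]])   # T, P, CO2/CH4

def surrogate(x):
    T, P, ratio = x
    # placeholder for model.predict(...): favours high T, low P, feed ratio near 1
    return (T - 873) / 300 - 0.1 * (P - 1) - abs(ratio - 1.0)

def ga(pop_size=40, generations=60, mutation=0.1):
    pop = rng.uniform(BOUNDS[:, 0], BOUNDS[:, 1], size=(pop_size, 3))
    for _ in range(generations):
        fitness = np.array([surrogate(ind) for ind in pop])
        parents = pop[np.argsort(fitness)][pop_size // 2:]      # keep the better half
        cut = rng.integers(1, 3, size=pop_size)                 # one-point crossover
        mums = parents[rng.integers(len(parents), size=pop_size)]
        dads = parents[rng.integers(len(parents), size=pop_size)]
        pop = np.where(np.arange(3) < cut[:, None], mums, dads)
        pop += rng.normal(0, mutation, pop.shape) * (BOUNDS[:, 1] - BOUNDS[:, 0])
        pop = np.clip(pop, BOUNDS[:, 0], BOUNDS[:, 1])
    best = max(pop, key=surrogate)
    return best, surrogate(best)

print(ga())   # suggested (T, P, CO2/CH4) and its predicted quality score
```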
Multi-epoch machine learning for galaxy formation
In this thesis I utilise a range of machine learning techniques in conjunction with hydrodynamical cosmological simulations. In Chapter 2 I present a novel machine learning method for predicting the baryonic properties of dark matter only subhalos taken from N-body simulations. The model is built using a tree-based algorithm and incorporates subhalo properties over a wide range of redshifts as its input features. I train the model using a hydrodynamical simulation which enables it to predict black hole mass, gas mass, magnitudes, star formation rate, stellar mass, and metallicity. This new model surpasses the performance of previous models. Furthermore, I explore the predictive power of each input property by looking at feature importance scores from the tree-based model. By applying the method to the LEGACY N-body simulation I generate a large volume mock catalog of the quasar population at z=3. By comparing this mock catalog with observations, I demonstrate that the IllustrisTNG subgrid model for black holes is not accurately capturing the growth of the most massive objects. In Chapter 3 I apply my method to investigate the evolution of galaxy properties in different simulations, and in various environments within a single simulation. By comparing the Illustris, EAGLE, and TNG simulations I show that subgrid model physics plays a more significant role than the choice of hydrodynamics method. Using the CAMELS simulation suite I consider the impact of cosmological and astrophysical parameters on the buildup of stellar mass within the TNG and SIMBA models.
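A minimal sketch of the Chapter 2 pipeline shape is given below: a tree-based regressor maps dark-matter-only subhalo properties to a baryonic property, and its feature importances are then inspected. ExtraTreesRegressor and the synthetic mock data are stand-ins assumed for illustration, not the thesis's actual model or simulation data.

```python
# Sketch: train a tree-based regressor to map dark-matter subhalo properties to a
# baryonic property, then inspect feature importances. The regressor choice and the
# mock data are illustrative assumptions, not the thesis's setup.
import numpy as np
from sklearn.ensemble import ExtraTreesRegressor

rng = np.random.default_rng(1)
n = 5000
features = {
    "halo_mass": rng.normal(12, 1, n),       # log10 M_sun, mock values
    "vmax": rng.normal(2.2, 0.3, n),         # log10 km/s, mock values
    "formation_z": rng.uniform(0.5, 6.0, n),
}
X = np.column_stack(list(features.values()))
# mock "stellar mass": dominated by halo mass, weakly tied to the other inputs
y = 0.8 * features["halo_mass"] + 0.3 * features["vmax"] - 0.05 * features["formation_z"]
y += rng.normal(0, 0.2, n)

model = ExtraTreesRegressor(n_estimators=200, random_state=0).fit(X, y)
for name, score in zip(features, model.feature_importances_):
    print(f"{name:12s} importance = {score:.2f}")
```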
In the final chapter I apply a combination of neural networks and symbolic regression methods to construct a semi-analytic model which reproduces the galaxy population from a cosmological simulation. The neural-network-based approach is capable of producing a more accurate population than a previous method of binning based on halo mass. The equations resulting from symbolic regression are found to be a good approximation of the neural network.
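The final-chapter idea of distilling a network into closed-form expressions might be sketched as follows, assuming gplearn for symbolic regression and a mock one-dimensional halo-to-stellar-mass relation; neither reflects the thesis's actual data or tooling.

```python
# Sketch: fit a neural network to a halo-to-galaxy mapping, then distill it into a
# closed-form expression with symbolic regression (gplearn as an assumed stand-in).
import numpy as np
from sklearn.neural_network import MLPRegressor
from gplearn.genetic import SymbolicRegressor

rng = np.random.default_rng(2)
halo_mass = rng.uniform(10, 14, 3000).reshape(-1, 1)          # log10 M_sun, mock
stellar_mass = 0.6 * halo_mass.ravel() + 2.0 + rng.normal(0, 0.1, 3000)

nn = MLPRegressor(hidden_layer_sizes=(32, 32), max_iter=2000, random_state=0)
nn.fit(halo_mass, stellar_mass)

# Fit the symbolic regressor to the network's response rather than the noisy data,
# so the recovered formula summarises what the network learned.
sr = SymbolicRegressor(population_size=1000, generations=10,
                       function_set=("add", "sub", "mul"), random_state=0)
sr.fit(halo_mass, nn.predict(halo_mass))
print(sr._program)   # expected to be close to 0.6*X0 + 2.0
```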
Pattern recognition in the nucleation kinetics of non-equilibrium self-assembly
Inspired by biology’s most sophisticated computer, the brain, neural networks constitute a profound reformulation of computational principles. Analogous high-dimensional, highly interconnected computational architectures also arise within information-processing molecular systems inside living cells, such as signal transduction cascades and genetic regulatory networks. Might collective modes analogous to neural computation be found more broadly in other physical and chemical processes, even those that ostensibly play non-information-processing roles? Here we examine nucleation during self-assembly of multicomponent structures, showing that high-dimensional patterns of concentrations can be discriminated and classified in a manner similar to neural network computation. Specifically, we design a set of 917 DNA tiles that can self-assemble in three alternative ways such that competitive nucleation depends sensitively on the extent of colocalization of high-concentration tiles within the three structures. The system was trained in silico to classify a set of 18 grayscale 30 × 30 pixel images into three categories. Experimentally, fluorescence and atomic force microscopy measurements during and after a 150-hour anneal established that all trained images were correctly classified, whereas a test set of image variations probed the robustness of the results. Although slow compared to previous biochemical neural networks, our approach is compact, robust and scalable. Our findings suggest that ubiquitous physical phenomena, such as nucleation, may hold powerful information-processing capabilities when they occur within high-dimensional multicomponent systems.
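A loose computational analogy of the winner-take-all behaviour described here (the structure whose tiles best colocalise with the high-concentration pixels nucleates first) is sketched below; the stored patterns, threshold, and test image are toy assumptions and do not correspond to the paper's trained tile set or its thermodynamic model.

```python
# Loose in-silico analogy of the nucleation-based classifier: the class whose stored
# pattern overlaps most with the bright pixels "nucleates" first and wins.
# Patterns and the test image are random toys, not the paper's trained 30x30 set.
import numpy as np

rng = np.random.default_rng(3)
SIDE = 30
patterns = {name: rng.random((SIDE, SIDE)) < 0.3 for name in ("A", "B", "C")}

def classify(image, threshold=0.5):
    """Pick the class whose pattern colocalises best with the high-intensity pixels."""
    bright = image > threshold
    scores = {name: int(np.sum(bright & mask)) for name, mask in patterns.items()}
    return max(scores, key=scores.get), scores

image = np.where(patterns["B"], 0.9, 0.1) + rng.normal(0, 0.05, (SIDE, SIDE))
print(classify(image)[0])   # expected to recover "B" for this noisy pattern-B image
```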
The AddACO: A bio-inspired modified version of the ant colony optimization algorithm to solve travel salesman problems
The Travelling Salesman Problem (TSP) consists in finding the minimal-length closed tour that visits the entire group of nodes of a given graph. We propose to solve this combinatorial optimization problem with the AddACO algorithm: a version of the Ant Colony Optimization method characterized by a modified probabilistic law underlying the exploratory movement of the artificial insects. In particular, the ant decisional rule is here defined as a linear convex combination of competing behavioral stimuli and therefore has an additive form (hence the name of our algorithm), rather than the canonical multiplicative one. The AddACO intends to address two conceptual shortcomings that characterize classical ACO methods: (i) the population of artificial insects is in principle allowed to simultaneously minimize/maximize all migratory guidance cues (which is implausible from a biological/ecological point of view), and (ii) a given edge of the graph has a null probability of being explored if at least one of the movement traits is equal to zero there, regardless of the intensity of the others (which in principle reduces the exploratory potential of the ant colony). Three possible variants of our method are then specified: the AddACO-V1, which includes pheromone trail and visibility as insect decisional variables, and the AddACO-V2 and AddACO-V3, which in turn add random effects and inertia, respectively, to the two classical migratory stimuli. The three versions of our algorithm are tested on benchmark medium-scale TSP instances, in order to assess their performance and to find their optimal parameter setting. The best-performing variant is finally applied to large-scale TSPs, compared to the naive Ant-Cycle Ant System proposed by Dorigo and colleagues, and evaluated in terms of quality of the solutions, computational time, and convergence speed. The aim is to show that the proposed transition probability, alongside its conceptual advantages, is competitive from a performance perspective, i.e., that it does not reduce the exploratory capacity of the ant colony with respect to the canonical one (at least in the case of the selected TSPs). A theoretical study of the asymptotic behavior of the AddACO is given in the appendix of the work, whose concluding section contains some hints for further improvements of our algorithm, also in the perspective of its application to other optimization problems.
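The contrast between the canonical multiplicative transition rule and the additive convex-combination rule described above can be sketched as follows; the pheromone and visibility values, weights, and normalisation choice are illustrative assumptions.

```python
# Sketch of the two ant decisional rules on one node's outgoing edges:
#   canonical multiplicative rule   p_j ∝ tau_j**alpha * eta_j**beta
#   AddACO-style additive rule      p_j ∝ w * tau_hat_j + (1 - w) * eta_hat_j
# Values of tau, eta, alpha, beta, and w are illustrative assumptions.
import numpy as np

tau = np.array([0.8, 0.0, 0.4])    # pheromone on the three candidate edges
eta = np.array([0.2, 0.9, 0.5])    # visibility (1 / distance)

def multiplicative(tau, eta, alpha=1.0, beta=2.0):
    score = tau**alpha * eta**beta
    return score / score.sum()

def additive(tau, eta, w=0.5):
    # each stimulus is normalised over the feasible edges, then combined convexly
    tau_hat = tau / tau.sum() if tau.sum() > 0 else np.full_like(tau, 1 / len(tau))
    eta_hat = eta / eta.sum()
    score = w * tau_hat + (1 - w) * eta_hat
    return score / score.sum()

print(multiplicative(tau, eta))   # the zero-pheromone edge gets probability 0
print(additive(tau, eta))         # the same edge stays explorable thanks to its visibility
```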